WorldWideScience

Sample records for model key features

  1. Key Features of oxypnictides

    Indian Academy of Sciences (India)

    Key features of oxypnictides: metal–insulator boundary; quasi-2D structure; doping important; charge transfer (redox behaviour); normal resistivity of order mΩ (slightly higher in oxypnictides); near-antiferromagnetic ground state; superexchange through Fe-As-Fe (Cu-O-Cu) ...

  2. A study of key features of random atmospheric disturbance models for the approach flight phase

    Science.gov (United States)

    Heffley, R. K.

    1977-01-01

    An analysis and brief simulator experiment were performed to identify and classify important features of random turbulence for the landing approach flight phase. The analysis of various wind models was carried out within the context of the longitudinal closed-loop pilot/vehicle system. The analysis demonstrated the relative importance of atmospheric disturbance scale lengths, horizontal versus vertical gust components, decreasing altitude, and spectral forms of disturbances versus the pilot/vehicle system. Among certain competing wind models, the analysis predicted no significant difference in pilot performance. This was confirmed by a moving-base simulator experiment which evaluated the two most extreme models. A number of conclusions were reached: attitude-constrained equations provide a simple but effective approach to describing the closed-loop pilot/vehicle system, and at low altitudes the horizontal gust component dominates pilot/vehicle performance.

  3. The Progressive BSSG Rat Model of Parkinson's: Recapitulating Multiple Key Features of the Human Disease.

    Directory of Open Access Journals (Sweden)

    Jackalina M Van Kampen

    The development of effective neuroprotective therapies for Parkinson's disease (PD) has been severely hindered by the notable lack of an appropriate animal model for preclinical screening. Indeed, most models currently available are either acute in nature or fail to recapitulate all characteristic features of the disease. Here, we present a novel progressive model of PD, with behavioural and cellular features that closely approximate those observed in patients. Chronic exposure to dietary phytosterol glucosides has been found to be neurotoxic. When fed to rats, β-sitosterol β-d-glucoside (BSSG) triggers the progressive development of parkinsonism, with clinical signs and histopathology beginning to appear following cessation of exposure to the neurotoxic insult and continuing to develop over several months. Here, we characterize the progressive nature of this model, its non-motor features, the anatomical spread of synucleinopathy, and the response to levodopa administration. In Sprague Dawley rats, chronic BSSG feeding for 4 months triggered the progressive development of a parkinsonian phenotype and pathological events that evolved slowly over time, with neuronal loss beginning only after toxin exposure was terminated. At approximately 3 months following initiation of BSSG exposure, animals displayed the early emergence of an olfactory deficit, in the absence of significant dopaminergic nigral cell loss or locomotor deficits. Locomotor deficits developed gradually over time, initially appearing as locomotor asymmetry and developing into akinesia/bradykinesia, which was reversed by levodopa treatment. Late-stage cognitive impairment was observed in the form of spatial working memory deficits, as assessed by the radial arm maze. In addition to the progressive loss of TH+ cells in the substantia nigra, the appearance of proteinase K-resistant intracellular α-synuclein aggregates was also observed to develop progressively, appearing first in the ...

  4. A study of key features of the RAE atmospheric turbulence model

    Science.gov (United States)

    Jewell, W. F.; Heffley, R. K.

    1978-01-01

    A complex atmospheric turbulence model for use in aircraft simulation is analyzed in terms of its temporal, spectral, and statistical characteristics. First, a direct comparison was made between cases of the RAE model and the more conventional Dryden turbulence model. Next the control parameters of the RAE model were systematically varied and the effects noted. The RAE model was found to possess a high degree of flexibility in its characteristics, but the individual control parameters are cross-coupled in terms of their effect on various measures of intensity, bandwidth, and probability distribution.

  5. Key features of the IPSL ocean atmosphere model and its sensitivity to atmospheric resolution

    Energy Technology Data Exchange (ETDEWEB)

    Marti, Olivier; Braconnot, P.; Bellier, J.; Brockmann, P.; Caubel, A.; Noblet, N. de; Friedlingstein, P.; Idelkadi, A.; Kageyama, M. [Unite Mixte CEA-CNRS-UVSQ, IPSL/LSCE, Gif-sur-Yvette Cedex (France); Dufresne, J.L.; Bony, S.; Codron, F.; Fairhead, L.; Grandpeix, J.Y.; Hourdin, F.; Musat, I. [Unite Mixte CNRS-Ecole Polytechnique-ENS-UPMC, IPSL/LMD, Paris Cedex 05 (France); Benshila, R.; Guilyardi, E.; Levy, C.; Madec, G.; Mignot, J.; Talandier, C. [Unite Mixte CNRS-IRD-UPMC, IPSL/LOCEAN, Paris Cedex 05 (France); Cadule, P.; Denvil, S.; Foujols, M.A. [Institut Pierre Simon Laplace des Sciences de l'Environnement (IPSL), Paris Cedex 05 (France); Fichefet, T.; Goosse, H. [Universite Catholique de Louvain, Institut d'Astronomie et de Geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Krinner, G. [Unite Mixte CNRS-UJF Grenoble, LGGE, BP96, Saint-Martin-d'Heres (France); Swingedouw, D. [CNRS/CERFACS, Toulouse (France)

    2010-01-15

    This paper presents the major characteristics of the Institut Pierre Simon Laplace (IPSL) coupled ocean-atmosphere general circulation model. The model components and the coupling methodology are described, as well as the main characteristics of the climatology and interannual variability. The model results of the standard version used for IPCC climate projections and for intercomparison projects like the Paleoclimate Modeling Intercomparison Project (PMIP 2) are compared to those of a version with a higher resolution in the atmosphere. A focus on the North Atlantic and on the tropics is used to address the impact of the atmosphere resolution on processes and feedbacks. In the North Atlantic, the resolution change leads to an improved representation of the storm-tracks and the North Atlantic oscillation. The better representation of the wind structure increases the northward salt transports, the deep-water formation and the Atlantic meridional overturning circulation. In the tropics, the ocean-atmosphere dynamical coupling, or Bjerknes feedback, improves with the resolution. The amplitude of ENSO (El Niño-Southern Oscillation) consequently increases, as the damping processes are left unchanged. (orig.)

  6. Several Key Features of Marriage in Kosovo

    OpenAIRE

    Dr.Sc. Hamdi Podvorica

    2014-01-01

    In this paper titled “Several key features of marriage in Kosovo”, I have made efforts to address the matrimony, as an important societal and legal concept, in the light of positive law in Kosovo. In short terms, I have addressed the historical development of marriage in general, from the period of promiscuity until today, and I have emphasized key features of marriage in various time periods, only to comprehend better the ways of development of marriage in time and space. A special empha...

  7. Several Key Features of Marriage in Kosovo

    Directory of Open Access Journals (Sweden)

    Dr.Sc. Hamdi Podvorica

    2014-02-01

    In this paper, titled “Several key features of marriage in Kosovo”, I have made efforts to address matrimony, as an important societal and legal concept, in the light of positive law in Kosovo. In short, I have addressed the historical development of marriage in general, from the period of promiscuity until today, and I have emphasized key features of marriage in various time periods, the better to comprehend the ways marriage has developed in time and space. Special emphasis is put on the essential (material) conditions of marital union. The paper provides sufficient reasons for which the positive law in Kosovo has provided for the free expression of the will of spouses, opposite sexes, the age threshold, and entry into matrimony before a competent state authority under a procedure provided by law, as the substantial conditions for entering a valid matrimony. Sufficient room is also allowed for the treatment of the consequences and responsibilities of various entities if a marriage is contracted without obeying the substantial conditions provided by law. Due to the nature of the paper, formal conditions for entering matrimony are not addressed. The right to enter marriage and establish a family under the provided legal conditions is guaranteed to every Kosovo citizen, as a substantial right. Marriage is a basic cell of the family and, as such, is protected by the state and society. Apart from normative and sociological methods, I have also used the historical method in developing this paper. The purpose was to discover several marriage features which used to exist and do not anymore, and also to underline some new features, which nowadays form the pillars of marriage.

  8. Chow-Liu trees are sufficient predictive models for reproducing key features of functional networks of periictal EEG time-series.

    Science.gov (United States)

    Steimer, Andreas; Zubler, Frédéric; Schindler, Kaspar

    2015-09-01

    Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20-30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutical brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so-called Chow-Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks thus obtained are then compared to those which were derived in the classical way from the empirical CC-matrix. In the high threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL-trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL-approach for modeling also the temporal features of iEEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
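
    The core of the Chow-Liu construction mentioned in this abstract is a maximum-weight spanning tree over pairwise dependencies. The sketch below is a generic illustration on small discrete sequences, not the authors' iEEG pipeline: the edge weight here is empirical mutual information, the data are invented, and the correlation-thresholding step of the paper is omitted.

```python
import itertools
import math
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in nats) of two discrete sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    # Sum p(a,b) * log( p(a,b) / (p(a) p(b)) ) over observed pairs.
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def chow_liu_tree(data):
    """data: one discrete sequence per variable.
    Returns the edge list of the maximum-weight spanning tree under
    pairwise mutual information (Kruskal with union-find)."""
    n_vars = len(data)
    edges = sorted(
        ((mutual_information(data[i], data[j]), i, j)
         for i, j in itertools.combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Invented toy data: channel 1 duplicates channel 0, channel 2 is only
# weakly related, so the tree should link channels 0 and 1 directly.
x0 = [0, 0, 1, 1, 0, 1, 0, 1]
tree = chow_liu_tree([x0, list(x0), [0, 1, 0, 1, 0, 1, 0, 1]])
```

    A tree over n variables always has n-1 edges, which is what makes the resulting dependency model sparse compared to the full correlation matrix.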

  9. Intersection of Feature Models

    NARCIS (Netherlands)

    van den Broek, P.M.

    In this paper, we present an algorithm for the construction of the intersection of two feature models. The feature models are allowed to have "requires" and "excludes" constraints, and should be parent-compatible. The algorithm is applied to the problem of combining feature models from stakeholders

  10. Merging Feature Models

    NARCIS (Netherlands)

    van den Broek, P.M.; Galvao, I.; Noppen, J.A.R.

    2010-01-01

    In this paper, we consider the problem of merging feature models which consist of trees with "requires" and "excludes" constraints. For any two such feature models which are parent-compatible, their merge is defined to be the smallest parent-compatible feature model which has all products of the
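
    Semantically, both this merge and the intersection of the previous record are operations on the models' product sets. The hypothetical sketch below shows only that product-set view with invented toy models; the cited papers' actual contribution, computing a parent-compatible feature *tree* that realizes these sets under "requires"/"excludes" constraints, is not attempted here.

```python
# A feature model is abstracted by its set of products,
# where each product is a frozenset of selected features.

def products_intersection(fm_a, fm_b):
    """Products admitted by both models (intersection of feature models)."""
    return fm_a & fm_b

def products_merge(fm_a, fm_b):
    """Smallest model admitting every product of either model (merge).
    At the product-set level this is simply the union."""
    return fm_a | fm_b

# Invented toy models: a car with mandatory 'engine' plus an optional
# 'gps' (model A) versus an optional 'radio' (model B).
fm_a = {frozenset({"engine"}), frozenset({"engine", "gps"})}
fm_b = {frozenset({"engine"}), frozenset({"engine", "radio"})}
```

    The intersection keeps only the bare `{engine}` product common to both stakeholders' models, while the merge admits all three products.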

  11. A zebrafish model of X-linked adrenoleukodystrophy recapitulates key disease features and demonstrates a developmental requirement for abcd1 in oligodendrocyte patterning and myelination.

    Science.gov (United States)

    Strachan, Lauren R; Stevenson, Tamara J; Freshner, Briana; Keefe, Matthew D; Miranda Bowles, D; Bonkowsky, Joshua L

    2017-09-15

    X-linked adrenoleukodystrophy (ALD) is a devastating inherited neurodegenerative disease caused by defects in the ABCD1 gene and affecting peripheral and central nervous system myelin. ABCD1 encodes a peroxisomal transmembrane protein required for very long chain fatty acid (VLCFA) metabolism. We show that zebrafish (Danio rerio) Abcd1 is highly conserved at the amino acid level with human ABCD1, and during development is expressed in homologous regions including the central nervous system and adrenal glands. We used TALENs to generate five zebrafish abcd1 mutant allele lines introducing premature stop codons in exon 1, and also obtained an abcd1 allele from the Zebrafish Mutation Project carrying a point mutation in a splice donor site. Similar to patients with ALD, zebrafish abcd1 mutants have elevated VLCFA levels. Interestingly, we found that CNS development of the abcd1 mutants is disrupted, with hypomyelination in the spinal cord, abnormal patterning and decreased numbers of oligodendrocytes, and increased cell death. By day of life five, abcd1 mutants demonstrate impaired motor function, and the overall survival to adulthood of heterozygous and homozygous mutants is decreased. Expression of human ABCD1 in oligodendrocytes rescued apoptosis in the abcd1 mutant. In summary, we have established a zebrafish model of ALD that recapitulates key features of human disease pathology and reveals novel features of the underlying disease pathogenesis. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Key Features of the Manufacturing Vision Development Process

    DEFF Research Database (Denmark)

    Dukovska-Popovska, Iskra; Riis, Jens Ove; Boer, Harry

    2005-01-01

    of companies going through the initial phases of the methodology, this research identified the key features of the Manufacturing Vision Development process. The paper elaborates the key features by defining them, discussing how and when they can appear, and how they influence the process....

  13. Hemodialysis Key Features Mining and Patients Clustering Technologies

    Directory of Open Access Journals (Sweden)

    Tzu-Chuen Lu

    2012-01-01

    The kidneys are vital organs. Failing kidneys lose their ability to filter out waste products, resulting in kidney disease. To extend or save the lives of patients with impaired kidney function, kidney replacement therapy such as hemodialysis is typically used. This work uses an entropy function to identify key features related to hemodialysis; by identifying these key features, one can determine whether a patient requires hemodialysis. These key features are then used as dimensions in cluster analysis. The proposed data mining scheme finds the association rules of each cluster, so hidden rules for causing kidney disease can be identified. The contributions and key points of this paper are as follows. (1) The paper identifies key features that can be used to predict which patients have a high probability of requiring hemodialysis. (2) The proposed scheme applies the k-means clustering algorithm with the key features to categorize the patients. (3) A data mining technique is used to find the association rules of each cluster. (4) The mined rules can be used to determine whether a patient requires hemodialysis.
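
    The abstract's entropy-based notion of a "key feature" can be sketched with a standard information-gain score. This is one plausible reading of such a method, not the paper's exact formulation, and the patient data below are invented for illustration.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete attribute."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def information_gain(feature, labels):
    """Label entropy minus the entropy remaining after splitting on the
    feature: a high gain marks the feature as 'key' for the outcome."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Invented toy records: 1 = patient on hemodialysis, 0 = not.
needs_hd   = [1, 1, 1, 1, 0, 0, 0, 0]
creatinine = ["high", "high", "high", "high", "low", "low", "low", "low"]
gender     = ["m", "f", "m", "f", "m", "f", "m", "f"]
```

    Here `creatinine` perfectly separates the outcome (gain of 1 bit) while `gender` carries no information (gain 0); the high-gain features would then serve as the dimensions fed to k-means clustering.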

  14. Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas

    2015-01-01

    This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.

  15. Clafer: Unifying Class and Feature Modeling

    DEFF Research Database (Denmark)

    Bąk, Kacper; Diskin, Zinovy; Antkiewicz, Michal

    2015-01-01

    We present Clafer (class, feature, reference), a class modeling language with first-class support for feature modeling. We designed Clafer as a concise notation for meta-models, feature models, mixtures of meta- and feature models (such as components with options), and models that couple feature models and meta-models via constraints (such as mapping feature configurations to component configurations or model templates). Clafer allows arranging models into multiple specialization and extension layers via constraints and inheritance. We identify several key mechanisms allowing a meta-modeling language to express feature models concisely. Clafer unifies basic modeling constructs, such as class, association, and property, into a single construct, called clafer. We provide the language with a formal semantics built in a structurally explicit way. The resulting semantics explains the meaning ...

  16. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  17. A preclinical orthotopic model for glioblastoma recapitulates key features of human tumors and demonstrates sensitivity to a combination of MEK and PI3K pathway inhibitors

    Directory of Open Access Journals (Sweden)

    Rajaa El Meskini

    2015-01-01

    Current therapies for glioblastoma multiforme (GBM), the highest grade malignant brain tumor, are mostly ineffective, and better preclinical model systems are needed to increase the successful translation of drug discovery efforts into the clinic. Previous work describes a genetically engineered mouse (GEM) model that contains perturbations in the most frequently dysregulated networks in GBM (driven by RB, KRAS and/or PI3K signaling and PTEN) that induce development of Grade IV astrocytoma with properties of the human disease. Here, we developed and characterized an orthotopic mouse model derived from the GEM that retains the features of the GEM model in an immunocompetent background; this model is also tractable and efficient for preclinical evaluation of candidate therapeutic regimens. Orthotopic brain tumors are highly proliferative, invasive and vascular, and express histology markers characteristic of human GBM. Primary tumor cells were examined for sensitivity to chemotherapeutics and targeted drugs. PI3K and MAPK pathway inhibitors, when used as single agents, inhibited cell proliferation but did not result in significant apoptosis. However, in combination, these inhibitors resulted in a substantial increase in cell death. Moreover, these findings translated into the in vivo orthotopic model: PI3K or MAPK inhibitor treatment regimens resulted in incomplete pathway suppression and feedback loops, whereas dual treatment delayed tumor growth through increased apoptosis and decreased tumor cell proliferation. Analysis of downstream pathway components revealed a cooperative effect on target downregulation. These concordant results, together with the model's morphologic similarities to human GBM, validate it as a new platform for the evaluation of GBM treatment.

  18. Model plant key measurement points

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The key measurement points for the model low enriched fuel fabrication plant are described, as well as the measurement methods. These are the measurement points and methods that are used to complete the plant's formal material balance. The purpose of the session is to enable participants to: (1) understand the basis for each key measurement; and (2) understand the importance of each measurement to the overall plant material balance. The feed to the model low enriched uranium fuel fabrication plant is UF6 and the product is finished light water reactor fuel assemblies. The waste discards are solid and liquid wastes. The plant inventory consists of unopened UF6 cylinders, UF6 heels, fuel assemblies, fuel rods, fuel pellets, UO2 powder, U3O8 powder, and various scrap materials. At the key measurement points the total plant material balance (flow and inventory) is measured. The two types of key measurement points, flow and inventory, are described.
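
    The material balance these key measurement points close can be written as a one-line check: flow KMPs measure receipts and shipments over the balance period, and inventory KMPs measure the beginning and ending physical inventories. A minimal sketch with invented figures (kg U, not data from the model plant):

```python
def material_unaccounted_for(beginning_inventory, receipts,
                             shipments, ending_inventory):
    """Material-balance closure over one balance period:
    MUF = beginning inventory + receipts - shipments - ending inventory."""
    return beginning_inventory + sum(receipts) - sum(shipments) - ending_inventory

# Invented example: UF6 cylinder receipts in; fuel assemblies and
# measured waste discards out (all values hypothetical, kg U).
muf = material_unaccounted_for(
    beginning_inventory=1200.0,
    receipts=[500.0, 498.5],   # two UF6 cylinders measured at the flow KMP
    shipments=[950.0, 30.2],   # fuel assemblies shipped + waste discards
    ending_inventory=1216.8,   # physical inventory at the inventory KMPs
)
```

    A MUF significantly different from zero, relative to the combined measurement uncertainty of the key measurements, flags material that the balance cannot account for.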

  19. NAD+ supplementation normalizes key Alzheimer's features and DNA damage responses in a new AD mouse model with introduced DNA repair deficiency

    DEFF Research Database (Denmark)

    Hou, Yujun; Lautrup, Sofie; Cordonnier, Stephanie

    2018-01-01

    Emerging findings suggest that compromised cellular bioenergetics and DNA repair contribute to the pathogenesis of Alzheimer’s disease (AD), but their role in disease-defining pathology is unclear. We developed a DNA repair-deficient 3xTgAD/Polβ+/− mouse that exacerbates major features of human AD...

  20. On some key features of Ada : Language and programming environment

    Science.gov (United States)

    Wehrum, R. P.; Hoyer, W.; Dießl, G.

    1986-08-01

    The present paper focuses upon those aspects of the Ada language whose purpose is to support the discipline of software engineering. It illustrates the use of Ada features for various forms of abstraction, separate compilation, exception handling and tasking and highlights the importance of separating the definition of a module interface from its implementation. It demonstrates the use of the package concept to realize information hiding, data encapsulation and abstract data types. Some key aspects of Ada numerics are dealt with briefly. The paper continues by providing an overview of the Ada programming environments, their history and their relationship to the CAIS interface. Finally, the special importance of the interactive debugger within such an environment is presented.

  1. Effects of a Video Model to Teach Students with Moderate Intellectual Disability to Use Key Features of an iPhone

    Science.gov (United States)

    Walser, Kathryn; Ayres, Kevin; Foote, Erika

    2012-01-01

    This study evaluated the effects of video modeling on teaching three high school students with moderate intellectual disability to perform three activities on an iPhone 3GS. This study is a replication and extension of the Hammond, Whatley, Ayres, and Gast (2010) study in which researchers taught this same set of skills using a slightly different…

  2. Puzzling features of EPMA radial fission gas release profiles: The key to realistic modeling of fission gas release up to ultra high burnup

    International Nuclear Information System (INIS)

    Sontheimer, F.; Landskron, H.

    1999-01-01

    Radial matrix fission gas release (fgr) profiles of UO2 fuel measured by electron probe microanalysis usually have the shape of a bowler hat: high release in the central part of the fuel, low release in the rim, and a continuous transition zone in between; this holds for both steady-state irradiated fuel and ramped fuel. Good fission gas release models based mainly on diffusional processes are capable of describing such bowler-hat-shaped radial fgr profiles. Occasionally, the bowler becomes battered: the formerly smooth transition zone between rim and center has pronounced steps, and the height of the bowler increases (continued fgr in the central part) despite decreasing temperatures at high burnup. Additionally, the rim of the bowler swings up at high burnup due to the rim effect, which transports gas from the matrix to the rim bubbles. Standard diffusional fgr models are unable to describe 'battered bowlers', and especially the steps in the transition zone, which also show up in the etched cross-sections of the fuel as dark double rings or even multiple rings instead of the usual single dark ring, still await theoretical explanation. For the rim, it is meanwhile well known that saturation processes are responsible for the redistribution of the fission gas from the matrix to the rim bubbles; empirical models such as that published by Lassmann of ITU/Karlsruhe do a good job in this regard. In this paper, it is shown that saturation processes are also responsible for the steps in the transition zone sometimes seen in radial matrix fission gas release profiles of both steady-state irradiated and ramped UO2 fuel rods. The steadily increasing height of the bowler at high burnups of steady-state irradiated rods, where temperatures fell so low that diffusional fission gas release in the central parts of the fuel stopped long before the end of irradiation, is also due to such saturation processes. These saturation processes are modeled with a concept based on Lassmann ...

  3. Towards mastering CRISPR-induced gene knock-in in plants: Survey of key features and focus on the model Physcomitrella patens.

    Science.gov (United States)

    Collonnier, Cécile; Guyon-Debast, Anouchka; Maclot, François; Mara, Kostlend; Charlot, Florence; Nogué, Fabien

    2017-05-15

    Beyond its predominant role in human and animal therapy, the CRISPR-Cas9 system has also become an essential tool for plant research and plant breeding. Agronomic applications rely on the mastery of gene inactivation and gene modification. However, while the knock-out of genes by non-homologous end-joining (NHEJ)-mediated repair of the targeted double-strand breaks (DSBs) induced by the CRISPR-Cas9 system is rather well mastered, the knock-in of genes by homology-driven repair or end-joining remains difficult to perform efficiently in higher plants. In this review, we describe the different approaches that can be tested to improve the efficiency of CRISPR-induced gene modification in plants, which include the use of optimal transformation and regeneration protocols, the design of appropriate guide RNAs and donor templates, and the choice of nucleases and means of delivery. We also present what can be done to orient DNA repair pathways in the target cells, and we show how the moss Physcomitrella patens can be used as a model plant to better understand which DNA repair mechanisms are involved, and how this knowledge could eventually be used to define better-performing strategies of CRISPR-induced gene knock-in. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Genomic Feature Models

    DEFF Research Database (Denmark)

    Sørensen, Peter; Edwards, Stefan McKinnon; Rohde, Palle Duun

    Whole-genome sequences and multiple trait phenotypes from large numbers of individuals will soon be available in many populations. Well established statistical modeling approaches enable the genetic analyses of complex trait phenotypes while accounting for a variety of additive and non-additive genetic mechanisms. These modeling approaches have proven to be highly useful to determine population genetic parameters as well as prediction of genetic risk or value. We present a series of statistical modelling approaches that use prior biological information for evaluating the collective action ...

  5. Cycling hypoxia: A key feature of the tumor microenvironment.

    Science.gov (United States)

    Michiels, Carine; Tellier, Céline; Feron, Olivier

    2016-08-01

    A compelling body of evidence indicates that most human solid tumors contain hypoxic areas. Hypoxia is the consequence not only of the chaotic proliferation of cancer cells that places them at a distance from the nearest capillary but also of the abnormal structure of the new vasculature network, resulting in transient blood flow. Hence, two types of hypoxia are observed in tumors: chronic and cycling (intermittent) hypoxia. Most of the current work aims at understanding the role of chronic hypoxia in tumor growth, response to treatment and metastasis. Only recently has cycling hypoxia, with spatial and temporal fluctuations in oxygen levels, emerged as another key feature of the tumor environment that triggers different responses in comparison to chronic hypoxia. Either type of hypoxia is associated with distinct effects not only in cancer cells but also in stromal cells. In particular, cycling hypoxia has been demonstrated to favor, to a greater extent than chronic hypoxia, angiogenesis, resistance to anti-cancer treatments, intratumoral inflammation and tumor metastasis. This review details these effects as well as the signaling pathways cycling hypoxia triggers to switch on specific transcriptomic programs. Understanding the signaling pathways through which cycling hypoxia induces these processes that support the development of an aggressive cancer could lead to the emergence of promising new cancer treatments. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. The idiopathic interstitial pneumonias: understanding key radiological features

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, S. [Department of Radiology, Churchill Hospital, Old Road, Oxford OX3 7LJ (United Kingdom); Benamore, R., E-mail: Rachel.Benamore@orh.nhs.u [Department of Radiology, Churchill Hospital, Old Road, Oxford OX3 7LJ (United Kingdom)

    2010-10-15

    Many radiologists find it challenging to distinguish between the different interstitial idiopathic pneumonias (IIPs). The British Thoracic Society guidelines on interstitial lung disease (2008) recommend the formation of multidisciplinary meetings, with diagnoses made by combined radiological, pathological, and clinical findings. This review focuses on understanding typical and atypical radiological features on high-resolution computed tomography between the different IIPs, to help the radiologist determine when a confident diagnosis can be made and how to deal with uncertainty.

  7. Key features and progress of the KSTAR tokamak engineering

    International Nuclear Information System (INIS)

    Bak, J.S.; Choi, C.H.; Oh, Y.K.

    2003-01-01

    Substantial progress of the KSTAR tokamak engineering has been made on major tokamak structures, superconducting magnets, in-vessel components, the diagnostic system, the heating system, and power supplies. The engineering design has been elaborated to the extent necessary to allow a realistic assessment of its feasibility, performance, and cost. Prototype fabrication has been carried out to establish reliable fabrication technologies and to confirm the validity of the analyses employed for the KSTAR design. The experimental building was completed, with beneficial occupancy for machine assembly, in September 2002. The construction of special utilities such as the cryo-plant, the de-ionized water-cooling system, and the main power station will begin upon completion of building construction. The commissioning, construction, fabrication, and assembly of the whole facility will continue through the end of 2005. This paper describes the main design features and engineering progress of the KSTAR tokamak, and elaborates the work currently underway. (author)

  8. Key features of Ebola hemorrhagic fever: a review

    Directory of Open Access Journals (Sweden)

    Zulane Lima Sousa

    2014-11-01

    Full Text Available The current outbreak of Ebola virus in West Africa has become a devastating problem, with a mortality rate around 51%; over 3,132 deaths have been confirmed, and more are expected. The virus causes a characteristic disease known as hemorrhagic fever. Its symptoms range from nonspecific signs such as fever to more specific problems such as serious bleeding. Transmission occurs easily when a person comes into contact with contaminated fluids. Treatment is supportive, as no specific drugs are yet available. The present review focuses on the main features of the Ebola virus, its transmission, pathogenesis, treatment, and forms of control. In-depth knowledge about this disease is still limited, but its severity demands attention and information to prevent a scenario worse than the current one.

  9. Key Features of Intertidal Food Webs That Support Migratory Shorebirds

    Science.gov (United States)

    Saint-Béat, Blanche; Dupuy, Christine; Bocher, Pierrick; Chalumeau, Julien; De Crignis, Margot; Fontaine, Camille; Guizien, Katell; Lavaud, Johann; Lefebvre, Sébastien; Montanié, Hélène; Mouget, Jean-Luc; Orvain, Francis; Pascal, Pierre-Yves; Quaintenne, Gwenaël; Radenac, Gilles; Richard, Pierre; Robin, Frédéric; Vézina, Alain F.; Niquil, Nathalie

    2013-01-01

    The migratory shorebirds of the East Atlantic flyway land in huge numbers during a migratory stopover or wintering on the French Atlantic coast. The Brouage bare mudflat (Marennes-Oléron Bay, NE Atlantic) is one of the major stopover sites in France. The particular structure and function of a food web affects the efficiency of carbon transfer. The structure and functioning of the Brouage food web is crucial for the conservation of species landing within this area because it provides sufficient food, which allows shorebirds to reach the north of Europe where they nest. The aim of this study was to describe and understand which food web characteristics support the nutritional needs of birds. Two food-web models were constructed, based on in situ measurements that were made in February 2008 (the presence of birds) and July 2008 (absence of birds). To complete the models, allometric relationships and additional data from the literature were used. The missing flow values of the food web models were estimated by Monte Carlo Markov Chain – Linear Inverse Modelling. The flow solutions obtained were used to calculate the ecological network analysis indices, which estimate the emergent properties of the functioning of a food web. The total activities of the Brouage ecosystem in February and July are significantly different. The specialisation of the trophic links within the ecosystem does not appear to differ between the two models. In spite of a large export of carbon from the primary producer and detritus in winter, the higher recycling leads to a similar retention of carbon for the two seasons. It can be concluded that in February, the higher activity of the ecosystem, coupled with higher cycling and a mean internal organization, ensures sufficient feeding of the migratory shorebirds. PMID:24204666
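    The core numerical step in the abstract above, estimating missing flows from mass-balance constraints, can be sketched in miniature. The toy network, flow labels, and measured values below are invented for illustration; the real study samples the whole constrained solution space with MCMC-LIM rather than taking the single minimum-norm solution used here.

```python
import numpy as np

# Hypothetical mini food web with 4 unknown carbon flows (mg C m^-2 d^-1):
# x = [P->B, P->Birds, B->Birds, B->Respiration]
# Measured: primary production into P = 100, total bird intake = 30.
# Steady-state mass balance gives one linear equation per compartment:
A = np.array([
    [1.0, 1.0,  0.0,  0.0],  # P: outflows sum to measured production
    [1.0, 0.0, -1.0, -1.0],  # B: inflow minus outflows is zero
    [0.0, 1.0,  1.0,  0.0],  # Birds: inflows sum to measured intake
])
b = np.array([100.0, 0.0, 30.0])

# 3 equations, 4 unknowns: the system is underdetermined, which is exactly
# the situation MCMC-LIM addresses by sampling all feasible flow vectors.
# Here the minimum-norm solution stands in as a single representative.
x = np.linalg.pinv(A) @ b
print(np.round(x, 2))
```

    Network indices such as total system activity and cycling are then computed from the sampled flow vectors, not from a single solution.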

  10. Key features of intertidal food webs that support migratory shorebirds.

    Directory of Open Access Journals (Sweden)

    Blanche Saint-Béat

    Full Text Available The migratory shorebirds of the East Atlantic flyway land in huge numbers during a migratory stopover or wintering on the French Atlantic coast. The Brouage bare mudflat (Marennes-Oléron Bay, NE Atlantic) is one of the major stopover sites in France. The particular structure and function of a food web affects the efficiency of carbon transfer. The structure and functioning of the Brouage food web is crucial for the conservation of species landing within this area because it provides sufficient food, which allows shorebirds to reach the north of Europe where they nest. The aim of this study was to describe and understand which food web characteristics support the nutritional needs of birds. Two food-web models were constructed, based on in situ measurements that were made in February 2008 (the presence of birds) and July 2008 (absence of birds). To complete the models, allometric relationships and additional data from the literature were used. The missing flow values of the food web models were estimated by Monte Carlo Markov Chain--Linear Inverse Modelling. The flow solutions obtained were used to calculate the ecological network analysis indices, which estimate the emergent properties of the functioning of a food web. The total activities of the Brouage ecosystem in February and July are significantly different. The specialisation of the trophic links within the ecosystem does not appear to differ between the two models. In spite of a large export of carbon from the primary producer and detritus in winter, the higher recycling leads to a similar retention of carbon for the two seasons. It can be concluded that in February, the higher activity of the ecosystem, coupled with higher cycling and a mean internal organization, ensures sufficient feeding of the migratory shorebirds.

  11. Key Features of Political Advertising as an Independent Type of Advertising Communication

    OpenAIRE

    Svetlana Anatolyevna Chubay

    2015-01-01

    To obtain the most complete understanding of the features of political advertising, the author characterizes its specific features allocated by modern researchers. The problem of defining the notion of political advertising is studied in detail. The analysis of definitions available in professional literature has allowed the author to identify a number of key features that characterize political advertising as an independent type of promotional activity. These features include belonging to th...

  12. Soil fauna: key to new carbon models

    Science.gov (United States)

    Filser, Juliane; Faber, Jack H.; Tiunov, Alexei V.; Brussaard, Lijbert; Frouz, Jan; De Deyn, Gerlinde; Uvarov, Alexei V.; Berg, Matty P.; Lavelle, Patrick; Loreau, Michel; Wall, Diana H.; Querner, Pascal; Eijsackers, Herman; José Jiménez, Juan

    2016-11-01

    Soil organic matter (SOM) is key to maintaining soil fertility, mitigating climate change, combatting land degradation, and conserving above- and below-ground biodiversity and associated soil processes and ecosystem services. In order to derive management options for maintaining these essential services provided by soils, policy makers depend on robust, predictive models identifying key drivers of SOM dynamics. Existing SOM models and suggested guidelines for future SOM modelling are defined mostly in terms of plant residue quality and input and microbial decomposition, overlooking the significant regulation provided by soil fauna. The fauna controls almost any aspect of organic matter turnover, foremost by regulating the activity and functional composition of soil microorganisms and their physical-chemical connectivity with soil organic matter. We demonstrate a very strong impact of soil animals on carbon turnover, increasing or decreasing it by several dozen percent, sometimes even turning C sinks into C sources or vice versa. This is demonstrated not only for earthworms and other larger invertebrates but also for smaller fauna such as Collembola. We suggest that inclusion of soil animal activities (plant residue consumption and bioturbation altering the formation, depth, hydraulic properties and physical heterogeneity of soils) can fundamentally affect the predictive outcome of SOM models. Understanding direct and indirect impacts of soil fauna on nutrient availability, carbon sequestration, greenhouse gas emissions and plant growth is key to the understanding of SOM dynamics in the context of global carbon cycling models. We argue that explicit consideration of soil fauna is essential to make realistic modelling predictions on SOM dynamics and to detect expected non-linear responses of SOM dynamics to global change. 
We present a decision framework, to be further developed through the activities of KEYSOM, a European COST Action, for when mechanistic SOM models
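    The argument above, that faunal activity changes predicted carbon turnover, can be illustrated with a deliberately minimal two-pool model. The pools, rate constants, and the faunal multiplier `f_fauna` on litter fragmentation are all invented stand-ins; real SOM models (and the KEYSOM framework) are far richer.

```python
# Two-pool soil carbon sketch: litter -> SOM -> CO2, forward-Euler in time.
# f_fauna scales litter fragmentation; every rate here is hypothetical.
def som_stocks(f_fauna, years=200, dt=0.1, litter_in=2.0,
               k_litter=0.3, k_som=0.02, transfer=0.4):
    litter, som = 5.0, 50.0  # initial stocks, kg C m^-2 (assumed)
    for _ in range(int(years / dt)):
        d_litter = litter_in - f_fauna * k_litter * litter
        d_som = transfer * f_fauna * k_litter * litter - k_som * som
        litter += d_litter * dt
        som += d_som * dt
    return litter, som

base = som_stocks(1.0)
with_fauna = som_stocks(1.5)  # fauna speed up fragmentation by 50%
print(base, with_fauna)
```

    At these invented parameter values the faunal multiplier mainly thins the litter pool while the equilibrium SOM stock barely moves, one illustration of why faunal effects can stay invisible to models calibrated only on bulk SOM.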

  13. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, including spectral, image, and motion characteristics, and feature extraction is addressed. To improve the speed of object recognition, a feature database is used to simplify the data in the source database; the feature-to-object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is explained.

  14. Key features of an EU health information system: a concept mapping study.

    Science.gov (United States)

    Rosenkötter, Nicole; Achterberg, Peter W; van Bon-Martens, Marja J H; Michelsen, Kai; van Oers, Hans A M; Brand, Helmut

    2016-02-01

    Despite the acknowledged value of an EU health information system (EU-HISys) and the many achievements in this field, the landscape is still heavily fragmented and incomplete. Through a systematic analysis of the opinions and valuations of public health stakeholders, this study aims to conceptualize key features of an EU-HISys. Public health professionals and policymakers were invited to participate in a concept mapping procedure. First, participants (N = 34) formulated statements that reflected their vision of an EU-HISys. Second, participants (N = 28) rated the relative importance of each statement and grouped conceptually similar ones. Principal Component and cluster analyses were used to condense these results to EU-HISys key features in a concept map. The number of key features and the labelling of the concept map were determined by expert consensus. The concept map contains 10 key features that summarize 93 statements. The map consists of a horizontal axis that represents the relevance of an 'organizational strategy', which deals with the 'efforts' to design and develop an EU-HISys and the 'achievements' gained by a functioning EU-HISys. The vertical axis represents the 'professional orientation' of the EU-HISys, ranging from the 'scientific' through to the 'policy' perspective. The top ranking statement expressed the need to establish a system that is permanent and sustainable. The top ranking key feature focuses on data and information quality. This study provides insights into key features of an EU-HISys. The results can be used to guide future planning and to support the development of a health information system for Europe. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
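    The concept-mapping computation described above (sorting data, a similarity matrix, then a two-dimensional map) can be sketched as follows. The statements, raters, and pile assignments are invented; classical MDS stands in for the principal component step, and the cluster analysis that yields the 10 key features is only noted in a comment.

```python
import numpy as np

# Invented sorting data: 3 participants each sort 6 statements into piles.
sorts = [
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 2],
]
n = 6
S = np.zeros((n, n))
for piles in sorts:
    p = np.array(piles)
    S += (p[:, None] == p[None, :]).astype(float)  # co-sorting counts

# Dissimilarity = number of raters who split the pair; classical MDS
# projects it to the 2-D "concept map" (cluster analysis would follow).
D = len(sorts) - S
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J                 # double-centred squared distances
w, V = np.linalg.eigh(B)                    # eigenvalues ascending
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))
print(coords.round(2))
```

    Statements that raters always sorted together land at the same point of the map; clusters of nearby points become the candidate key features.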

  15. Salient Key Features of Actual English Instructional Practices in Saudi Arabia

    Science.gov (United States)

    Al-Seghayer, Khalid

    2015-01-01

    This is a comprehensive review of the salient key features of the actual English instructional practices in Saudi Arabia. The goal of this work is to gain insights into the practices and pedagogic approaches to English as a foreign language (EFL) teaching currently employed in this country. In particular, we identify the following central features…

  16. Identifying Key Features of Student Performance in Educational Video Games and Simulations through Cluster Analysis

    Science.gov (United States)

    Kerr, Deirdre; Chung, Gregory K. W. K.

    2012-01-01

    The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…

  17. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
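    A genetic search over feature subsets of the kind described above can be sketched like this. The synthetic data, the nearest-centroid "model" used as the fitness function, and all GA settings are illustrative, not those of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 10 candidate features, only 0 and 3 carry signal.
X = rng.normal(size=(120, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

def fitness(mask):
    # training accuracy of a nearest-centroid classifier on the subset
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

pop = rng.random((30, 10)) < 0.5          # population of boolean masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]   # rank by fitness
    elite = pop[:10]
    # uniform crossover between random elite parents + bit-flip mutation
    pa = elite[rng.integers(0, 10, 20)]
    pb = elite[rng.integers(0, 10, 20)]
    children = np.where(rng.random((20, 10)) < 0.5, pa, pb)
    children ^= rng.random((20, 10)) < 0.05
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.nonzero()[0], fitness(best))
```

    Sequential forward selection, the baseline mentioned in the abstract, would instead grow one subset greedily; the GA's population lets it escape locally optimal subsets.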

  18. Cardiac masses, part 2: key imaging features for diagnosis and surgical planning.

    Science.gov (United States)

    Buckley, Orla; Madan, Rachna; Kwong, Raymond; Rybicki, Frank J; Hunsaker, Andetta

    2011-11-01

    The objectives of this article are to discuss key radiologic features that differentiate primary and secondary cardiac masses. Clinical scenarios are included to highlight stepwise radiologic workup of tumors of the pericardium, epicardium, myocardium, valves, and chambers. The involvement of key cardiac anatomic structures will also be emphasized to determine resectability and guide surgical planning. Multimodality imaging plays a pivotal role in diagnosis and surgical planning of cardiac masses. Clinical features, such as patient age, location, and imaging characteristics of the mass, will determine the likely differential diagnosis. In addition to radiologic evaluation of the mass itself, assessment of involvement of the valvular apparatus, extent of myocardial involvement, or presence of associated coronary artery involvement is necessary to determine resectability and surgical technique.

  19. The Research of the Facial Expression Recognition Method for Human-Computer Interaction Based on the Gabor Features of the Key Regions

    Directory of Open Access Journals (Sweden)

    Zhan Qun

    2014-08-01

    Full Text Available Because Gabor features of the global face image are easily interfered with, a method of facial expression recognition based on Gabor transforms of key areas of the face image is discussed. Facial feature locations were obtained with an active shape model, and Gabor features were extracted from the local areas around the key points related to expression. On this basis, PCA was used to reduce the dimensionality of the Gabor features. Finally, facial expression recognition was performed with a support vector machine. Compared with Gabor features of the global face image, experimental results demonstrate that Gabor features of key areas of the face image can effectively increase the accuracy of facial expression recognition.
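    The pipeline above (Gabor features at key regions, then PCA, then an SVM) hinges on Gabor filters responding selectively to local orientation. The sketch below shows just that selectivity on synthetic patches; the ASM landmarking, PCA, and SVM stages are only indicated in comments, and all patch contents are invented.

```python
import numpy as np

def gabor(theta, sigma=6.0, lam=8.0, n=32):
    # real-valued Gabor kernel: Gaussian envelope times an oriented cosine
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# Synthetic 32x32 "key region" patches with different dominant orientation:
yy, xx = np.mgrid[0:32, 0:32]
patch_h = np.cos(2 * np.pi * yy / 8)  # horizontal stripes (vary along y)
patch_v = np.cos(2 * np.pi * xx / 8)  # vertical stripes (vary along x)

def gabor_features(patch):
    # response magnitudes for two orientations; a real system would apply
    # a larger bank at ASM landmarks, then PCA and an SVM on the vectors
    return np.array([abs((patch * gabor(t)).sum()) for t in (0.0, np.pi / 2)])

f_h = gabor_features(patch_h)
f_v = gabor_features(patch_v)
print(f_h.round(1), f_v.round(1))
```

    Each patch responds strongly only to the filter whose carrier matches its orientation, which is why key-region Gabor vectors carry expression-relevant texture while staying insensitive to unrelated image regions.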

  20. Discrete Feature Model (DFM) User Documentation

    International Nuclear Information System (INIS)

    Geier, Joel

    2008-06-01

    This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features.
With this software, the
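    The discrete-feature idea, flow confined to a network of conductors rather than a continuum, reduces in its simplest steady-state form to solving a network Laplacian for hydraulic heads. The geometry and conductances below are invented for illustration; DFM itself works with piecewise-planar features, not point nodes.

```python
import numpy as np

# Miniature conductor network: nodes are feature intersections, edges are
# water-conducting features with hypothetical conductances (i, j, c).
edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 0.5), (2, 3, 2.0)]
n = 4
L = np.zeros((n, n))
for i, j, c in edges:          # assemble the weighted graph Laplacian
    L[i, i] += c; L[j, j] += c
    L[i, j] -= c; L[j, i] -= c

fixed_idx, fixed_val = [0, 3], np.array([10.0, 0.0])  # boundary heads (m)
free = [1, 2]
# Interior nodes satisfy L_ff h_f = -L_fc h_c (mass balance):
h_free = np.linalg.solve(L[np.ix_(free, free)],
                         -L[np.ix_(free, fixed_idx)] @ fixed_val)
heads = np.zeros(n)
heads[fixed_idx] = fixed_val
heads[free] = h_free
print(heads.round(3))
```

    With the heads known, the flow through each feature is simply its conductance times the head difference across it; connectivity, not just properties, decides which paths carry flow.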

  1. Discrete Feature Model (DFM) User Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2008-06-15

    This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this

  2. Puzzling features of EPMA radial fission gas release profiles: The key to realistic modelling of fission gas release up to ultra high burnup of 100 MWD/kg M with CARO-E

    International Nuclear Information System (INIS)

    Sontheimer, F.; Landskron, H.

    2001-01-01

    Radial matrix fission gas release (FGR) profiles of UO2 fuel measured by electron probe microanalysis (EPMA) usually have the shape of a bowler hat: high release in the fuel central part, low release in the rim and a continuous transition zone in between; this holds for both steady state irradiated fuel and ramped fuel. Good fission gas release models based mainly on diffusional processes are capable of describing such radial FGR profiles with the shape of a bowler. Occasionally, the bowler becomes battered: the formerly smooth transition zone between rim and center has pronounced steps, and the height and width of the bowler increase (continued FGR in the central part) despite decreasing temperatures at high burnup. Additionally, the rim of the bowler swings up at high burnup due to the rim effect, which transports gas from the matrix to the rim bubbles. Standard diffusional FGR models are unable to describe 'battered bowlers', and especially the steps in the transition zone, which also show up in the etched cross-sections of the fuel as dark double rings or even multiple rings instead of the usual single dark ring, still await theoretical explanation. For the rim, it is now well known that saturation processes are responsible for the redistribution of the fission gas from the matrix to the rim bubbles; empirical models, such as those published by Lassmann of ITU/Karlsruhe, do a good job in this regard. In this paper, it is shown that saturation processes are also responsible for the steps in the transition zone sometimes seen in radial matrix fission gas release profiles of both steady state irradiated and ramped UO2 fuel rods. Also, the steadily increasing height and width of the bowler at high burnups of steady state irradiated rods, where temperatures fell so low that diffusional fission gas release in the central parts of the fuel stopped long before end of irradiation, is due to such saturation processes. These saturation processes are modeled with a concept
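    The diffusional baseline that these saturation effects modify is classically the Booth equivalent-sphere model, whose fractional release for dimensionless time tau = D·t/a² is a standard series. The sketch below just evaluates that series; the chosen times are arbitrary, and the saturation modelling discussed above is not included.

```python
import numpy as np

# Booth equivalent-sphere fractional fission gas release:
# f(tau) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 tau) / n^2,  tau = D*t/a^2
def booth_release(tau, n_terms=2000):
    n = np.arange(1, n_terms + 1)
    return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-(n * np.pi) ** 2 * tau) / n**2)

for tau in (1e-4, 1e-3, 1e-2):  # illustrative dimensionless times
    print(tau, round(booth_release(tau), 4))
```

    At short times the series matches the familiar approximation f ≈ 6·sqrt(tau/pi) − 3·tau; a purely diffusional model of this kind is what fails to reproduce the "battered bowler" profiles the paper analyses.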

  3. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.
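    One concrete example of the non-modal features mentioned above is fitting autoregressive (AR) coefficients to response time series and tracking coefficient shifts as damage indicators. Everything below, the AR(2) "structure", the hypothetical cubic nonlinearity standing in for damage, and the excitation, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit AR coefficients by least squares: x[t] ~ sum_k a_k * x[t-k]
def ar_coeffs(x, p=4):
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

# The same random excitation drives a baseline linear system and a variant
# with a small cubic term representing hypothetical damage-induced nonlinearity.
e = rng.normal(scale=0.5, size=2000)
healthy = np.zeros(2000)
damaged = np.zeros(2000)
for t in range(2, 2000):
    healthy[t] = 1.6 * healthy[t-1] - 0.8 * healthy[t-2] + e[t]
    damaged[t] = (1.6 * damaged[t-1] - 0.8 * damaged[t-2]
                  - 0.02 * damaged[t-1] ** 3 + e[t])

f_base, f_dmg = ar_coeffs(healthy), ar_coeffs(damaged)
print(f_base.round(3), f_dmg.round(3))
```

    The fitted coefficient vectors separate the two conditions even though no modal property was estimated, which is the sense in which such features support validation of nonlinear, non-modal models.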

  4. Soil fauna: key to new carbon models

    NARCIS (Netherlands)

    Filser, Juliane; Faber, J.H.; Tiunov, Alexei V.; Brussaard, L.; Frouz, J.; Deyn, de G.B.; Uvarov, Alexei V.; Berg, Matty P.; Lavelle, Patrick; Loreau, M.; Wall, D.H.; Querner, Pascal; Eijsackers, Herman; Jimenez, Juan Jose

    2016-01-01

    Soil organic matter (SOM) is key to maintaining soil fertility, mitigating climate change, combatting land degradation, and conserving above- and below-ground biodiversity and associated soil processes and ecosystem services. In order to derive management options for maintaining these essential

  5. Key Lake, a model of Canadian development

    International Nuclear Information System (INIS)

    Runnalls, O.J.C.

    1987-01-01

    Canada ranks among the world's top four countries in terms of measured, indicated, and inferred uranium resources. Since 1984, Canada has been the world's largest uranium producer providing some 30% of the world's total. An important reason for this strong position is related to the discovery of high-grade near-surface uranium deposits in northern Saskatchewan in 1968 and subsequently. The history of the discovery of one such deposit near Key Lake made by the German-controlled Uranerz Exploration and Mining Limited is recounted briefly. The Key Lake mine became operational in 1983 and currently is the largest uranium-producing facility in the world. At present, less than 20% of the country's annual uranium output of approximately 11,000 tonnes U is required to provide fuel for the domestic nuclear power program. The excess, more than 9000 tonnes U annually, is planned to be exported abroad, primarily to customers in Western Europe, Eastern Asia and the United States. Given its strong resource base, large-scale exports from Canada should continue well into the next century. (orig.)

  6. Use of key feature questions in summative assessment of veterinary medicine students.

    Science.gov (United States)

    Schaper, Elisabeth; Tipold, Andrea; Ehlers, Jan P

    2013-03-07

    To prove the hypothesis that procedural knowledge might be tested using Key Feature (KF) questions in written exams, the University of Veterinary Medicine Hannover Foundation (TiHo) pioneered this format in summative assessment of veterinary medicine students. Exams in veterinary medicine are held orally, practically, in written form, or digitally in written form. The only question formats previously used in written e-exams were Type A single-choice questions, image analysis, and short-answer questions. E-exams are held at the TiHo using the electronic exam system Q [kju:] by CODIPLAN GmbH. In order to examine less factual knowledge and more procedural knowledge, and thus the decision-making skills of the students, a new question format was integrated into the exam regulations by the TiHo, and some examiners used it for the first time in computer-based assessments. Following a successful pilot phase in formative e-exams for students, KF questions were also introduced in summative exams. A number of multiple-choice questions were replaced by KF questions in four computer-based assessments in veterinary medicine. The subjects were internal medicine, surgery, reproductive medicine, and dairy science. The integration and linking of KF questions into the computer-based assessment system Q [kju:] went without any complications. The new question format was well received both by the students and by the teaching staff who formulated the questions. The hypothesis was confirmed that Key Feature questions represent a practicable addition to the existing e-exam question formats for testing procedural knowledge. The number of KF questions will therefore be further increased in examinations in veterinary medicine at the TiHo.

  7. Key-feature questions for assessment of clinical reasoning: a literature review.

    Science.gov (United States)

    Hrynchak, Patricia; Takahashi, Susan Glover; Nayer, Marla

    2014-09-01

    Key-feature questions (KFQs) have been developed to assess clinical reasoning skills. The purpose of this paper is to review the published evidence on the reliability and validity of KFQs to assess clinical reasoning. A literature review was conducted by searching MEDLINE (1946-2012) and EMBASE (1980-2012) via OVID and ERIC. The following search terms were used: key feature; question or test or tests or testing or tested or exam; assess or evaluation, and case-based or case-specific. Articles not in English were eliminated. The literature search resulted in 560 articles. Duplicates were eliminated, as were articles that were not relevant; nine articles that contained reliability or validity data remained. A review of the references and of citations of these articles resulted in an additional 12 articles to give a total of 21 for this review. Format, language and scoring of KFQ examinations have been studied and modified to maximise reliability. Internal consistency reliability has been reported as being between 0.49 and 0.95. Face and content validity have been shown to be moderate to high. Construct validity has been shown to be good using vector thinking processes and novice versus expert paradigms, and to discriminate between teaching methods. The very modest correlations between KFQ examinations and more general knowledge-based examinations point to differing roles for each. Importantly, the results of KFQ examinations have been shown to successfully predict future physician performance, including patient outcomes. Although it is inaccurate to conclude that any testing format is universally reliable or valid, published research supports the use of examinations using KFQs to assess clinical reasoning. The review identifies areas of further study, including all categories of evidence. Investigation into how examinations using KFQs integrate with other methods in a system of assessment is needed. © 2014 John Wiley & Sons Ltd.
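    The internal-consistency figures quoted above (0.49 to 0.95) are typically Cronbach's alpha, which is straightforward to compute from an examinees-by-items score matrix. The simulated exam data below are invented, driven by a single latent ability so the items correlate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated exam: 200 examinees, 12 dichotomously scored key-feature items,
# all loading on one latent "clinical reasoning" ability (data invented).
ability = rng.normal(size=(200, 1))
items = (ability + rng.normal(size=(200, 12)) > 0).astype(float)

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

    Alpha rises with the number of items and with inter-item correlation, which is one reason reported reliabilities for KFQ examinations vary so widely with test length.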

  8. Safety analysis for key design features of KALIMER-600 design concept

    International Nuclear Information System (INIS)

    Lee, Yong-Bum; Kwon, Y. M.; Kim, E. K.; Suk, S. D.; Chang, W. P.; Joeng, H. Y.; Ha, K. S.; Heo, S.

    2005-03-01

    KAERI is developing the conceptual design of a liquid metal reactor, KALIMER-600 (Korea Advanced LIquid MEtal Reactor), under the Long-term Nuclear R and D Program. KALIMER-600 addresses key issues regarding future nuclear power plants, such as plant safety, economics, proliferation, and waste. In this report, key safety design features are described, and safety analysis results for typical ATWS accidents, containment design basis accidents, and flow blockages in the KALIMER design are presented. First, the basic approach to achieving the safety goal and the main design features of KALIMER-600 are introduced in Chapter 1, and the event categorization and acceptance criteria for the KALIMER-600 safety analysis are described in Chapter 2. In Chapter 3, results of inherent safety evaluations for the KALIMER-600 conceptual design are presented. The KALIMER-600 core and plant system are designed to assure benign performance during a selected set of events without either reactor control or protection system intervention. Safety analyses for the postulated anticipated transient without scram (ATWS) have been performed using the SSC-K code to investigate the KALIMER-600 system response to the events. The objectives of Chapter 4 are to assess the response of the KALIMER-600 containment to the design basis accidents and to evaluate whether the consequences are acceptable in terms of structural integrity and exposure dose rate. In Chapter 5, the analysis of flow blockage for KALIMER-600 with the MATRA-LMR-FB code, which has been developed for internal flow blockage in an LMR subassembly, is described. Cases with blockages of 6, 24, and 54 subchannels are analyzed

  9. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.
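    The essence of the interaction check, composing independently developed features and searching the joint state space for property violations, can be shown on a toy example (a home-automation lamp, in the spirit of the paper's domain, though this specific scenario is invented). A real analysis would model the features as timed automata and let UPPAAL do the search.

```python
from itertools import product

# Two independently developed "features" controlling one lamp:
#   A: the lamp must be on whenever motion is detected
#   B: night mode forces the lamp off
def lamp(motion, night):
    on = motion        # feature A's rule
    if night:          # feature B's rule overrides it
        on = False
    return on

# Exhaustively explore the composed state space, as a model checker would,
# and collect states where feature A's property is violated:
violations = [(m, n) for m, n in product([False, True], repeat=2)
              if m and not lamp(m, n)]
print(violations)
```

    Each feature is correct in isolation; the violation only appears in the composed state (motion detected during night mode), which is exactly the kind of interaction the method aims to surface before deployment.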

  10. Key West, Florida Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Key West, Florida Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST...

  11. Feature and Meta-Models in Clafer: Mixed, Specialized, and Coupled

    DEFF Research Database (Denmark)

    Bąk, Kacper; Czarnecki, Krzysztof; Wasowski, Andrzej

    2011-01-01

    constraints (such as mapping feature configurations to component configurations or model templates). Clafer also allows arranging models into multiple specialization and extension layers via constraints and inheritance. We identify four key mechanisms allowing a meta-modeling language to express feature...

  12. Key Features of Academic Detailing: Development of an Expert Consensus Using the Delphi Method.

    Science.gov (United States)

    Yeh, James S; Van Hoof, Thomas J; Fischer, Michael A

    2016-02-01

    Academic detailing is an outreach education technique that combines the direct social marketing traditionally used by pharmaceutical representatives with unbiased content summarizing the best evidence for a given clinical issue. Academic detailing is conducted with clinicians to encourage evidence-based practice in order to improve the quality of care and patient outcomes. The adoption of academic detailing has increased substantially since the original studies in the 1980s. However, the lack of standard agreement on its implementation makes the evaluation of academic detailing outcomes challenging. The objective was to identify consensus on the key elements of academic detailing among a group of experts with varying experience in academic detailing. This study is based on an online survey of 20 experts with experience in academic detailing. We used the Delphi process, an iterative and systematic method of developing consensus within a group. We conducted 3 rounds of online surveys, which addressed 72 individual items derived from a previous literature review of 5 features of academic detailing: (1) content, (2) communication process, (3) clinicians targeted, (4) change agents delivering the intervention, and (5) context for the intervention. Nonrespondents were removed from later rounds of the surveys. For most questions, a 4-point ordinal scale was used for responses. We defined consensus agreement as 70% of respondents for a single rating category or 80% for dichotomized ratings. The overall survey response rate was 95% (54 of 57 surveys), and consensus agreement was reached on nearly 92% of the survey items (66 of 72) by the end of the Delphi exercise. The experts' responses suggested that (1) focused clinician education offering support for clinical decision-making is a key component of academic detailing, (2) detailing messages need to be tailored and provide feasible strategies and solutions to challenging cases, and (3) academic detailers need to develop specific skill sets
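
    The consensus thresholds described above (a single rating category reaching 70%, or dichotomized ratings reaching 80%) can be checked mechanically. A minimal sketch with invented ratings and illustrative names, not the study's actual data or code:

```python
from collections import Counter

def consensus(ratings, single_cutoff=0.70, dichot_cutoff=0.80):
    """Delphi consensus check for one survey item on a 4-point ordinal scale.

    Consensus is declared if one rating category holds >= 70% of responses,
    or if one of the dichotomized halves (ratings 1-2 vs. 3-4) holds >= 80%.
    """
    n = len(ratings)
    counts = Counter(ratings)
    # Single-category agreement: one of the four ratings dominates.
    if max(counts.values()) / n >= single_cutoff:
        return True
    # Dichotomized agreement: collapse the scale into low (1-2) and high (3-4).
    high = sum(1 for r in ratings if r >= 3)
    return max(high, n - high) / n >= dichot_cutoff

# 20 hypothetical experts: 15 rate the item 4, three rate it 3, two rate it 2.
item_a = [4] * 15 + [3] * 3 + [2] * 2
# Split panel: no single category and neither half dominates.
item_b = [1] * 7 + [2] * 5 + [3] * 4 + [4] * 4
print(consensus(item_a), consensus(item_b))  # True False
```

    Such a helper makes the round-by-round Delphi bookkeeping reproducible: items failing both thresholds are the ones carried into the next survey round.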

  13. Chimeric Mice with Competent Hematopoietic Immunity Reproduce Key Features of Severe Lassa Fever.

    Directory of Open Access Journals (Sweden)

    Lisa Oestereich

    2016-05-01

    Full Text Available Lassa fever (LASF) is a highly severe viral syndrome endemic to West African countries. Despite the high annual morbidity and mortality caused by LASF, very little is known about the pathophysiology of the disease. Basic research on LASF has been precluded by the lack of relevant small animal models that reproduce the human disease. Immunocompetent laboratory mice are resistant to infection with Lassa virus (LASV) and, to date, only immunodeficient mice, or mice expressing human HLA, have shown some degree of susceptibility to experimental infection. Here, transplantation of wild-type bone marrow cells into irradiated type I interferon receptor knockout mice (IFNAR-/-) was used to generate chimeric mice that reproduced important features of severe LASF in humans. This included high lethality, liver damage, vascular leakage and systemic virus dissemination. In addition, this model indicated that T cell-mediated immunopathology was an important component of LASF pathogenesis that was directly correlated with vascular leakage. Our strategy allows easy generation of a suitable small animal model to test new vaccines and antivirals and to dissect the basic components of LASF pathophysiology.

  14. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, and yet the details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically
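
    The kind of change classification FMDiff performs can be pictured as a structured diff between two revisions of a feature model. A toy sketch; the attribute schema and feature names below are invented for illustration and are not FMDiff's actual data model:

```python
def diff_feature_models(old, new):
    """Classify changes between two feature-model revisions.

    Each model maps a feature name to a dict of its attributes
    (e.g. type, default). Returns added, removed, and modified features.
    """
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    # A feature counts as modified if any of its attributes changed.
    modified = sorted(f for f in set(old) & set(new) if old[f] != new[f])
    return {"added": added, "removed": removed, "modified": modified}

old = {"CONFIG_SMP": {"type": "bool", "default": "y"},
       "CONFIG_NUMA": {"type": "bool", "default": "n"}}
new = {"CONFIG_SMP": {"type": "bool", "default": "n"},      # default flipped
       "CONFIG_PREEMPT": {"type": "bool", "default": "n"}}  # new feature
print(diff_feature_models(old, new))
```

    Classifying each modified feature further (which attribute changed, and how) is what turns a raw diff like this into an evolution study.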

  15. Feature-driven model-based segmentation

    Science.gov (United States)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

    The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data by a high-level probabilistic map which guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of the volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries from the original image data.
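
    The volume-overlap evaluation can be sketched with a Dice-style coefficient on voxel sets. This is an assumption for illustration: the abstract reports "volume overlap" without specifying the exact formula, and Dice is a common choice:

```python
def dice_overlap(auto_voxels, manual_voxels):
    """Dice coefficient between two segmentations given as voxel-index sets."""
    inter = len(auto_voxels & manual_voxels)
    return 2 * inter / (len(auto_voxels) + len(manual_voxels))

# Toy 1-D "volumes": indices of voxels labelled as the organ by the
# automated method and by the expert, offset by a small boundary error.
auto = set(range(10, 90))     # 80 voxels
manual = set(range(20, 100))  # 80 voxels, shifted by 10
print(round(dice_overlap(auto, manual), 3))  # 0.875
```

    In 3-D the sets would hold (x, y, z) index tuples, but the computation is identical.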

  16. Numerical rigid plastic modelling of shear capacity of keyed joints

    DEFF Research Database (Denmark)

    Herfelt, Morten Andersen; Poulsen, Peter Noe; Hoang, Linh Cao

    2015-01-01

    Keyed shear joints are currently designed using simple and conservative design formulas, yet these formulas do not take the local mechanisms in the concrete core of the joint into account. To investigate this phenomenon a rigid, perfectly plastic finite element model of keyed joints is used...

  17. Key synoptic-scale features influencing the high-impact heavy rainfall in Beijing, China, on 21 July 2012

    Directory of Open Access Journals (Sweden)

    Huizhen Yu

    2016-06-01

    Full Text Available This work examined quantitatively the key synoptic features influencing the high-impact heavy rainfall event in Beijing, China, on 21 July 2012, using both correlation analysis based on global ensemble forecasts (from TIGGE) and a method previously used for observation targeting. The global models were able to capture the domain-averaged rainfall of >100 mm but underestimated rainfall beyond 200 mm, with an apparent time lag. In this particular case, the ensemble forecasts of the National Centers for Environmental Prediction (NCEP) had apparently better performance than those of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the China Meteorological Administration (CMA), likely because of their high accuracy in capturing the key synoptic features influencing the rainfall event. Linear correlation coefficients between the 24-h domain-averaged precipitation in Beijing and various variables during the rainfall were calculated based on the grand ensemble forecasts from ECMWF, NCEP and CMA. The results showed that the distribution of the precipitation was associated with the strength and the location of a mid-level trough in the westerly flow and the associated low-level low. The dominant system was the low-level low, and a stronger low with a location closer to the Beijing area was associated with heavier rainfall, likely caused by stronger low-level lifting. These relationships can be clearly seen by comparing a good member with a bad member of the grand ensemble. The importance of the trough in the westerly flow and the low-level low was further confirmed by the sensitive area identified through sensitivity analyses with the conditional nonlinear optimal perturbation method.
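
    The correlation analysis pairs, across ensemble members, a forecast synoptic quantity with the forecast domain-averaged rainfall. A minimal Pearson-correlation sketch with invented member values (a stronger, i.e. more negative, low-pressure anomaly accompanying heavier rain):

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Six hypothetical ensemble members: central pressure anomaly of the
# low-level low (hPa) vs. 24-h domain-averaged rainfall (mm).
pressure_anomaly = [-8.0, -6.0, -5.0, -3.0, -2.0, -1.0]
rainfall = [210.0, 180.0, 170.0, 140.0, 120.0, 110.0]
print(round(pearson(pressure_anomaly, rainfall), 2))  # strongly negative
```

    A strongly negative coefficient here is what singles out the low-level low as the dominant system for this event.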

  18. E-referral Solutions: Successful Experiences, Key Features and Challenges- a Systematic Review.

    Science.gov (United States)

    Naseriasl, Mansour; Adham, Davoud; Janati, Ali

    2015-06-01

    Around the world, health systems constantly face increasing pressures which arise from many factors, such as an ageing population and patients' and providers' demands for equipment and services. In order to respond to these challenges and reduce health systems' transactional costs, referral solutions are considered a key factor. This study was carried out to identify referral solutions that have had success. Relevant studies were identified using the keywords referrals, consultation, referral system, referral model, referral project, electronic referral, electronic booking, health system, healthcare, health service and medical care. These searches were conducted using the PubMed, ProQuest, Google Scholar, Scopus, Emerald, Web of Knowledge, Springer, ScienceDirect, Mosby's Index, SID, Medlib and IranDoc databases. A total of 4,306 initial articles were obtained and refined step by step. Finally, 27 articles met the inclusion criteria. We identified seventeen e-referral systems developed in the UK, Norway, Finland, the Netherlands, Denmark, Scotland, New Zealand, Canada, Australia, and the U.S. The implemented solutions had varying degrees of success, such as improved access to specialist care, reduced wait times, improved timeliness and quality of referral communication, accurate health information transfer and integration of health centers and services. Each referral solution has both positive and changeable aspects that should be addressed according to sociotechnical conditions. These solutions are mainly formed in a small and localized manner.

  19. Practical Implementation of Various Public Key Infrastructure Models

    Directory of Open Access Journals (Sweden)

    Dmitriy Anatolievich Melnikov

    2016-03-01

    Full Text Available The paper proposes a short comparative analysis of contemporary models of public key infrastructure (PKI) and the issues of their real-world implementation. The Russian model of PKI is presented. Differences between the North American and West European models of PKI and the Russian model of PKI are described. The problems of creation and the main directions of further development and improvement of the Russian PKI and its integration into the global trust environment are defined.

  20. Improving Latino Children's Early Language and Literacy Development: Key Features of Early Childhood Education within Family Literacy Programmes

    Science.gov (United States)

    Jung, Youngok; Zuniga, Stephen; Howes, Carollee; Jeon, Hyun-Joo; Parrish, Deborah; Quick, Heather; Manship, Karen; Hauser, Alison

    2016-01-01

    Noting the lack of research on how early childhood education (ECE) programmes within family literacy programmes influence Latino children's early language and literacy development, this study examined key features of ECE programmes, specifically teacher-child interactions and child engagement in language and literacy activities and how these…

  1. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
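
    Of the techniques summarized, spatial interpolation is the most compact to illustrate. An inverse-distance-weighting sketch (IDW is one common interpolation choice, not necessarily the one used in any given modelling package; the borehole depths below are invented):

```python
import math

def idw(known_points, qx, qy, power=2):
    """Inverse-distance-weighted estimate of a surface value at (qx, qy).

    known_points: iterable of (x, y, z) samples, e.g. borehole depths
    of a geological interface.
    """
    num = den = 0.0
    for x, y, z in known_points:
        d = math.hypot(qx - x, qy - y)
        if d == 0:
            return z  # query coincides with a sample point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Toy borehole depths (m) of a geological interface at four corners.
samples = [(0, 0, 100.0), (10, 0, 110.0), (0, 10, 120.0), (10, 10, 130.0)]
print(idw(samples, 5, 5))  # symmetric point: the average of the four depths
```

    Interpolating such depths over a planar mesh produces the discrete surface that later participates in the surface-intersection step.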

  2. Model of key success factors for Business Intelligence implementation

    Directory of Open Access Journals (Sweden)

    Peter Mesaros

    2016-07-01

    Full Text Available New progressive technologies have recorded growth in every area. Information and communication technologies facilitate the exchange of information and the management of everyday activities in enterprises. Specific modules (such as Business Intelligence) facilitate decision-making. Several studies have demonstrated the positive impact of Business Intelligence on decision-making. The first step is to put such a system in place in the enterprise. The implementation process is influenced by many factors. This article discusses the issue of the key success factors affecting the successful implementation of Business Intelligence. The article describes the key success factors for successful implementation and use of Business Intelligence based on multiple studies. The main objective of this study is to verify the effects and dependence of selected factors and to propose a model of key success factors for the successful implementation of Business Intelligence. The key success factors and the proposed model are studied in Slovak enterprises.

  3. Enhancing Critical Infrastructure and Key Resources (CIKR) Level-0 Physical Process Security Using Field Device Distinct Native Attribute Features

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, Juan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Liefer, Nathan C. [Wright-Patterson AFB, Dayton, OH (United States); Busho, Colin R. [Wright-Patterson AFB, Dayton, OH (United States); Temple, Michael A. [Wright-Patterson AFB, Dayton, OH (United States)

    2017-12-04

    Here, the need for improved Critical Infrastructure and Key Resource (CIKR) security is unquestioned, yet there has been minimal emphasis on Level-0 (PHY Process) improvements. Wired Signal Distinct Native Attribute (WS-DNA) Fingerprinting is investigated here as a non-intrusive PHY-based security augmentation to support an envisioned layered security strategy. Results are based on experimental response collections from Highway Addressable Remote Transducer (HART) Differential Pressure Transmitter (DPT) devices from three manufacturers (Yokogawa, Honeywell, Endress+Hauser) installed in an automated process control system. Device discrimination is assessed using Time Domain (TD) and Slope-Based FSK (SB-FSK) fingerprints input to Multiple Discriminant Analysis, Maximum Likelihood (MDA/ML) and Random Forest (RndF) classifiers. For 12 different classes (two devices per manufacturer at two distinct set points), both classifiers performed reliably and achieved an arbitrary performance benchmark of average cross-class percent correct of %C > 90%. The least challenging cross-manufacturer results included near-perfect %C ≈ 100%, while the more challenging like-model (serial number) discrimination results included 90% < %C < 100%, with TD fingerprinting marginally outperforming SB-FSK fingerprinting; SB-FSK benefits from having less stringent response alignment and registration requirements. The RndF classifier was most beneficial and enabled reliable selection of dimensionally reduced fingerprint subsets that minimize data storage and computational requirements. The RndF-selected feature sets contained 15% of the full-dimensional feature sets and only suffered a worst-case %CΔ = 3% to 4% performance degradation.

  4. DISTANCE AS KEY FACTOR IN MODELLING STUDENTS’ RECRUITMENT BY UNIVERSITIES

    Directory of Open Access Journals (Sweden)

    SIMONA MĂLĂESCU

    2015-10-01

    Full Text Available Distance as Key Factor in Modelling Students' Recruitment by Universities. In a previous paper analysing the challenge of keeping up with current methodologies in the analysis and modelling of students' recruitment by universities, in the case of some ECE countries that still do not register or develop the key data needed to take advantage of state-of-the-art knowledge in the domain, we promised to approach the factor of distance in a future work due to the extent of the topic. This paper fulfils that promise, bringing a review of the literature dealing with modelling the geographical area from which a university recruits its students; combining distance with the proximate key factors previously reviewed completes the meta-analysis of the existing literature we started a year ago. Beyond the theoretical benefit, from a practical perspective the meta-analysis aimed at synthesizing elements of good practice that can be applied to the local university system.

  5. Hypertension Is a Key Feature of the Metabolic Syndrome in Subjects Aging with HIV

    DEFF Research Database (Denmark)

    Martin-Iguacel, Raquel; Negredo, Eugènia; Peck, Robert

    2016-01-01

    to predispose to these metabolic complications and to the excess risk of CVD observed in the HIV population. The metabolic syndrome (MS) represents a clustering of RF for CVD that includes abdominal obesity, hypertension, dyslipidemia and insulin resistance. Hypertension is a prevalent feature of the MS in HIV...

  6. Modeling, Simulation and Analysis of Public Key Infrastructure

    Science.gov (United States)

    Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)

    1998-01-01

    Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation spans from the top level all the way down to the computations needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed using queueing theory.
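
    The queueing-theory cross-check can be illustrated with the closed-form M/M/1 results. This is an assumption for illustration (the abstract does not state which queueing model was applied), here for a hypothetical single certificate-signing server:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: utilization, mean number in system, mean time.

    Rates are in requests per second; a stable queue requires
    arrival_rate < service_rate.
    """
    rho = arrival_rate / service_rate      # server utilization
    L = rho / (1 - rho)                    # mean number of requests in system
    W = 1 / (service_rate - arrival_rate)  # mean time in system (seconds)
    return rho, L, W

# Hypothetical signing server: 80 req/s arriving, capacity 100 req/s.
rho, L, W = mm1_metrics(80.0, 100.0)
print(rho, L, W)  # utilization 0.8, ~4 requests in system, 0.05 s in system
```

    Comparing such closed-form predictions against the simulated throughput and latency is a standard way to validate a discrete-event simulation model.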

  7. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    By building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance and human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why do we build models of human performance? 2) What are the expectations of a good human performance model? 3) What are the procedures and requirements in building and verifying a human performance model? 4) How do we integrate a human performance model with system design? and 5) What are the possible future directions of human performance modeling research? Recent and classic HPM findings are addressed in the five questions to provide new thinking on HPM's motivations, expectations, procedures, system integration and future directions.

  8. Cutaneous Manifestations in Dermatomyositis: Key Clinical and Serological Features-a Comprehensive Review.

    Science.gov (United States)

    Muro, Yoshinao; Sugiura, Kazumitsu; Akiyama, Masashi

    2016-12-01

    Dermatomyositis (DM) is a common idiopathic inflammatory myopathy. The pathogenesis is considered to be microangiopathy affecting skin and muscle. The cutaneous manifestations of DM are the most important aspect of this disease, and their correct evaluation is important for early diagnosis. The skin signs are various: Some are pathognomonic or highly characteristic, and others are compatible with DM. Recently, DM has been categorized into several disease subsets based on the various autoantibodies present in patients. Sometimes, characteristic cutaneous manifestations are strongly associated with the presence of specific autoantibodies. For example, anti-Mi-2 antibody is associated with the classic features of DM, including heliotrope rash, Gottron's papules, the V-neck sign, the shawl sign, cuticular overgrowth, and photosensitivity. Frequent cutaneous features in anti-transcriptional intermediary factor 1 gamma (TIF1γ)-positive patients are diffuse photoerythema, including "dusky red face," while skin ulcerations, palmar papules (inverse Gottron), diffuse hair loss, panniculitis, and oral pain and/or ulcers are sometimes associated with anti-melanoma differentiation-associated gene 5 product (MDA5) antibody. Here, we review important cutaneous manifestations seen in patients with DM, and we examine the relationship between the skin changes and myositis-associated autoantibodies. Correct evaluation of cutaneous manifestations and myositis-associated autoantibodies should help the clinician in the early diagnosis of DM, for a quick recognition of cutaneous signs that may be the symptom of onset before muscle inflammation.

  9. VISUAL CONCEPT LEARNING SYSTEM BASED ON LEXICAL ELEMENTS AND FEATURE KEY POINTS CONJUNCTION

    Directory of Open Access Journals (Sweden)

    V. I. Filatov

    2016-07-01

    Full Text Available Subject of Research. The paper deals with the process of visual concept building based on two unlabeled sources of information (visual and textual). Method. Visual concept-based learning is carried out through the simultaneous conjunction of image patterns and lexical elements. Concept-based learning consists of two basic stages: early learning acquisition (primary learning) and lexical-semantic learning (secondary learning). In the early learning acquisition stage, a visual concept dictionary is created, providing the background for the next stage. The lexical-semantic learning performs timeline analysis of the two sources and extracts features from both information channels. Feature vectors are formed by extraction of separate information units in both channels. Mutual information between the two sources provides the criterion for building visual concepts. Main Results. A visual concept-based learning system has been developed; it uses video data with subtitles. The results of the research have shown the principal ability of our system to build visual concepts. Practical Relevance. The recommended application areas of the described system are object detection, image retrieval and the automatic building of visual concept-based data.
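
    The mutual-information criterion between the two channels can be sketched for discrete labels. The toy aligned sequences below are invented; the system's real features are image patterns and lexical elements extracted from video and subtitles:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two aligned discrete sequences."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), with counts folded in.
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Perfectly aligned channels: each visual label always co-occurs with the
# same lexical element, so MI equals the 1-bit entropy of either channel.
visual = ["cat", "dog", "cat", "dog"]
text   = ["meow", "woof", "meow", "woof"]
print(mutual_information(visual, text))  # 1.0
```

    High mutual information between a visual pattern and a lexical element is exactly the signal that justifies binding them into one visual concept.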

  10. Key features of MIR.1200 (AES-2006) design and current stage of Leningrad NPP-2 construction

    International Nuclear Information System (INIS)

    Ivkov, Igor

    2010-01-01

    MIR.1200/AES-2006 is an abbreviated name of the evolving NPP design developed on the basis of the VVER-1000 Russian design, with a gross operation life of 480 reactor-years. This design is being implemented in four Units of Leningrad NPP-2 (LNPP-2). The AES-91/99 was used as the reference during development of the AES-2006 design for LNPP-2; this design was implemented in two Units of Tianwan NPP (China). The main technical features of the MIR.1200/AES-2006 design include a double containment, four trains of active safety systems (4x100%, 4x50%), and special engineering measures for BDBA management (core catcher, H2 PARs, PHRS) based mainly on passive principles. The containment is described in detail, the main features in comparison with the reference NPP are outlined, the design layout principles are highlighted, and the safety system structure and parameters are described. Attention is paid to the BDBA management system, the hydrogen removal system, the core catcher, and the PHRS-SG and C-PHRS. (P.A.)

  11. Neuroticism in Young Women with Fibromyalgia Links to Key Clinical Features

    Directory of Open Access Journals (Sweden)

    Katrina Malin

    2012-01-01

    Full Text Available Objective. We examined personality traits in young women with FM, in order to seek associations with key psychological processes and clinical symptoms. Methods. Twenty-seven women with FM and 29 age-matched female healthy controls (HC) completed a series of questionnaires examining FM symptoms, personality and psychological variables. Results. Significant differences between FM and HC were found for the characteristic FM symptoms (sleep, pain, fatigue, and confusion) as well as for the psychological variables of depression, anxiety, and stress (P<0.001). Neuroticism was the only subscale of the Big Five Inventory that showed a significant difference between the FM group and the HC group (P<0.05). Within the FM group, there was a significant association between the level of neuroticism and each of pain, sleep, fatigue, confusion, depression, anxiety, and stress (P<0.05–0.01). The association between the level of neuroticism and the level of stress was the strongest of all variables tested (P<0.001). Conclusion. The personality trait of neuroticism significantly associates with the key FM characteristics of pain, sleep, fatigue and confusion, as well as with the common comorbidities of depression, anxiety and stress. Personality appears to be an important modulator of FM clinical symptoms.

  13. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one person from another using only subsets of features will both decrease the computational cost and increase the generalization capacity of the face recognition algorithm. Moreover, identifying which are the features that better discriminate between persons will also provide a deeper understanding of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature...

  14. On the Use of Memory Models in Audio Features

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2011-01-01

    Audio feature estimation is potentially improved by including higher-level models. One such model is the Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual spectral flux..., and an initial experiment with sensory dissonance has been undertaken with good results.

  15. Feature recognition and clustering for urban modelling

    NARCIS (Netherlands)

    Chaszar, A.; Beirao, J.N.

    2013-01-01

    In urban planning, exploration and analysis assist the generation, measurement, interpretation and management of the modelled urban environments. This frequently involves categorisation of model elements and identification of element types. Such designation of elements can be achieved through

  16. Childhood Ataxia: Clinical Features, Pathogenesis, Key Unanswered Questions, and Future Directions

    Science.gov (United States)

    Ashley, Claire N.; Hoang, Kelly D.; Lynch, David R.; Perlman, Susan L.; Maria, Bernard L.

    2013-01-01

    Childhood ataxia is characterized by impaired balance and coordination primarily due to cerebellar dysfunction. Friedreich ataxia, a form of childhood ataxia, is the most common multisystem autosomal recessive disease. Most of these patients are homozygous for the GAA repeat expansion located on the first intron of the frataxin gene on chromosome 9. Mutations in the frataxin gene impair mitochondrial function, increase reactive oxygen species, and trigger redistribution of iron in the mitochondria and cytosol. Targeted therapies for Friedreich ataxia are undergoing testing. In addition, a centralized database, patient registry, and natural history study have been launched to support clinical trials in Friedreich ataxia. The 2011 Neurobiology of Disease in Children symposium, held in conjunction with the 40th annual Child Neurology Society meeting, aimed to (1) describe clinical features surrounding Friedreich ataxia, including cardiomyopathy and genetics; (2) discuss recent advances in the understanding of the pathogenesis of Friedreich ataxia and developments of clinical trials; (3) review new investigations of characteristic symptoms; (4) establish clinical and biochemical overlaps in neurodegenerative diseases and possible directions for future basic, translational, and clinical studies. PMID:22859693

  17. KEY FEATURES OF THE INTRAGRAFT MICROENVIRONMENT THAT DETERMINE LONG-TERM SURVIVAL FOLLOWING TRANSPLANTATION

    Directory of Open Access Journals (Sweden)

    Sarah eBruneau

    2012-04-01

    Full Text Available In this review, we discuss how changes in the intragraft microenvironment serve to promote or sustain the development of chronic allograft rejection. We propose two key elements within the microenvironment that contribute to the rejection process. The first is endothelial cell proliferation and angiogenesis, which serve to create abnormal microvascular blood flow patterns as well as local tissue hypoxia, and which precede endothelial-to-mesenchymal transition (EndMT). The second is the overexpression of local cytokines and growth factors that serve to sustain inflammation and, in turn, function to promote a leukocyte-induced angiogenesis reaction. Central to both events is overexpression of vascular endothelial growth factor (VEGF), which is both pro-inflammatory and pro-angiogenic, and thus drives progression of the chronic rejection microenvironment. In our discussion, we focus on how inflammation results in angiogenesis and how leukocyte-induced angiogenesis is pathological. We also discuss how VEGF is a master control factor that fosters the development of the chronic rejection microenvironment. Overall, this review provides insight into the intragraft microenvironment as an important paradigm for future direction in the field.

  18. Key metrics for HFIR HEU and LEU models

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Germina [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Betzler, Benjamin R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chandler, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Renfro, David G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Eva E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-25

    This report compares key metrics for two fuel design models of the High Flux Isotope Reactor (HFIR). The first model represents the highly enriched uranium (HEU) fuel currently in use at HFIR, and the second model considers a low-enriched uranium (LEU) interim design fuel. Except for the fuel region, the two models are consistent, and both include an experiment loading that is representative of HFIR's current operation. The considered key metrics are the neutron flux at the cold source moderator vessel, the mass of 252Cf produced in the flux trap target region as a function of cycle time, the fast neutron flux at locations of interest for material irradiation experiments, and the reactor cycle length. These key metrics are a small subset of the overall HFIR performance and safety metrics. They were defined as a means of capturing data essential for HFIR's primary missions, for use in optimization studies assessing the impact of HFIR's conversion from HEU fuel to different types of LEU fuel designs.

  19. Music Genre Classification using the multivariate AR feature integration model

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Meng, Anders

    2005-01-01

    Music genre classification systems are normally built as a feature extraction module followed by a classifier. The features are often short-time features with time frames of 10-30 ms, although several characteristics of music require larger time scales. Thus, larger time frames are needed to take informative decisions about musical genre. For the MIREX music genre contest, several authors derive long-time features based either on statistical moments and/or temporal structure in the short-time features. In our contribution, we model a segment (1.2 s) of short-time features (texture) using a multivariate...
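    As a rough illustration of the idea (not the authors' MIREX system), the sketch below fits a multivariate AR(1) model to a synthetic two-dimensional short-time feature sequence by least squares; the dynamics matrix, noise level, and series length are invented for the example.

    ```python
    import random

    random.seed(3)

    # Synthetic 2-D "short-time feature" sequence driven by a known AR(1) system
    # (A_true and the noise scale are made up for this sketch).
    A_true = [[0.8, 0.1], [-0.2, 0.7]]
    x = [1.0, -1.0]
    series = [x]
    for _ in range(500):
        x = [A_true[0][0] * x[0] + A_true[0][1] * x[1] + random.gauss(0, 0.05),
             A_true[1][0] * x[0] + A_true[1][1] * x[1] + random.gauss(0, 0.05)]
        series.append(x)

    # Least-squares fit: A_hat = (Y X^T)(X X^T)^{-1}, X = past frames, Y = next frames.
    X, Y = series[:-1], series[1:]
    Sxx = [[sum(a[i] * a[j] for a in X) for j in range(2)] for i in range(2)]
    Syx = [[sum(b[i] * a[j] for a, b in zip(X, Y)) for j in range(2)] for i in range(2)]
    det = Sxx[0][0] * Sxx[1][1] - Sxx[0][1] * Sxx[1][0]
    inv = [[Sxx[1][1] / det, -Sxx[0][1] / det],
           [-Sxx[1][0] / det, Sxx[0][0] / det]]
    A_hat = [[sum(Syx[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    ```

    The recovered `A_hat` approximates `A_true`; the fitted coefficients (rather than raw frames) then serve as the segment-level feature vector handed to a classifier.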

  20. A positive deviance approach to understanding key features to improving diabetes care in the medical home

    NARCIS (Netherlands)

    Gabbay, R.A.; Friedberg, M.W.; Miller-Day, M.; Cronholm, P.F.; Adelman, A.; Schneider, E.C.

    2013-01-01

    PURPOSE The medical home has gained national attention as a model to reorganize primary care to improve health outcomes. Pennsylvania has undertaken one of the largest state-based, multipayer medical home pilot projects. We used a positive deviance approach to identify and compare factors driving

  1. Predicting establishment of non-native fishes in Greece: identifying key features

    Directory of Open Access Journals (Sweden)

    Christos Gkenas

    2015-11-01

    Full Text Available Non-native fishes are known to cause economic damage to human society and are considered a major driver of biodiversity loss in freshwater ecosystems. The growing concern about these impacts has driven an investigation of the biological traits that facilitate the establishment of non-native fish. However, an ill-suited choice of statistical model can lead researchers to ambiguous conclusions. Here, we present a comprehensive comparison of traditional and alternative statistical methods for predicting fish invasions, using logistic regression, classification trees, multiple correspondence analysis, and random forest analysis to determine the characteristics that distinguish successfully established from failed non-native fishes in the Hellenic Peninsula. We defined fifteen categorical predictor variables with biological relevance and measures of human interest. Our study showed that accuracy differed according to the model and the number of factors considered. Among all the models tested, random forest and logistic regression performed best, although all approaches predicted non-native fish establishment with moderate to excellent results. Detailed evaluation among the models corresponded with differences in variable importance, with three biological variables (parental care, distance from nearest native source, and maximum size) and two variables of human interest (prior invasion success and propagule pressure) being important in predicting establishment. The statistical methods presented have high predictive power and can be used as a risk assessment tool to prevent future freshwater fish invasions in this region with an imperiled fish fauna.
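    As a hedged sketch of one of the compared approaches, the snippet below fits a plain logistic regression by gradient descent to invented species data with two of the named predictors (propagule pressure and prior invasion success). The data-generating rule and all numbers are fabricated for illustration; this is not the study's dataset or code.

    ```python
    import math
    import random

    random.seed(1)

    # Toy data: [propagule_pressure, prior_invasion_success] per species,
    # label 1 = established, 0 = failed (synthetic rule, not real data).
    X = [[random.random(), random.random()] for _ in range(200)]
    y = [1 if (2.0 * p + 1.5 * s > 1.8) else 0 for p, s in X]

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Full-batch gradient descent on the logistic loss (no external libraries).
    w, b, lr = [0.0, 0.0], 0.0, 0.5
    for _ in range(2000):
        gw, gb = [0.0, 0.0], 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) - yi
            gw[0] += err * xi[0]
            gw[1] += err * xi[1]
            gb += err
        n = len(X)
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n

    pred = [1 if sigmoid(w[0] * p + w[1] * s + b) > 0.5 else 0 for p, s in X]
    accuracy = sum(a == t for a, t in zip(pred, y)) / len(y)
    ```

    Both fitted weights come out positive, mirroring the abstract's finding that higher propagule pressure and prior invasion success favor establishment.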

  2. Key features of mcr-1-bearing plasmids from Escherichia coli isolated from humans and food

    Directory of Open Access Journals (Sweden)

    Katrin Zurfluh

    2017-09-01

    Full Text Available Abstract Background Mcr-1-harboring Enterobacteriaceae have been reported worldwide since their first discovery in 2015. However, only a limited number of studies have compared full-length plasmid sequences from human and animal sources. Methods In this study, mcr-1-bearing plasmids from seven Escherichia coli isolates recovered from patients (n = 3), poultry meat (n = 2), and turkey meat (n = 2) in Switzerland were further analyzed and compared. Isolates were characterized by multilocus sequence typing (MLST). The mcr-1-bearing plasmids were transferred by transformation into the reference strain E. coli DH5α, and MCR-1-producing transformants were selected on LB agar supplemented with 2 mg/L colistin. Purified plasmids were then sequenced and compared. Results MLST revealed six distinct STs, illustrating the high clonal diversity among mcr-1-positive E. coli isolates of different origins. Two different mcr-1-positive plasmids were identified in a single E. coli ST48 human isolate; all other isolates possessed a single mcr-1-harboring plasmid. Transferable IncI2 (ca. 60–61 kb) and IncX4 (ca. 33–35 kb) type plasmids, each bearing mcr-1, were found in both human and food isolates. None of the mcr-1-positive IncI2 and IncX4 plasmids possessed any additional resistance determinants. Surprisingly, all but one of the sequenced mcr-1-positive plasmids lacked the ISApl1 element, a key element mediating acquisition of mcr-1 into various plasmid backbones. Conclusions There is strong evidence that the food chain may be an important transmission route for mcr-1-bearing plasmids. Our data suggest that some "epidemic" plasmids, rather than specific E. coli clones, might be responsible for the spread of the mcr-1 gene along the food chain.

  3. Cytoplasmic CUG RNA foci are insufficient to elicit key DM1 features.

    Directory of Open Access Journals (Sweden)

    Warunee Dansithong

    Full Text Available The genetic basis of myotonic dystrophy type I (DM1) is the expansion of a CTG tract located in the 3' untranslated region of DMPK. Expression of mutant RNAs encoding expanded CUG repeats plays a central role in the development of cardiac disease in DM1. Expanded CUG tracts form both nuclear and cytoplasmic aggregates, yet the relative significance of such aggregates in eliciting DM1 pathology is unclear. To test the pathophysiology of CUG repeat encoding RNAs, we developed and analyzed mice with cardiac-specific expression of a beta-galactosidase cassette in which a (CTG)400 repeat tract was positioned 3' of the termination codon and 5' of the bovine growth hormone polyadenylation signal. In these animals CUG aggregates form exclusively in the cytoplasm of cardiac cells. A key pathological consequence of expanded CUG repeat RNA expression in DM1 is aberrant RNA splicing. Abnormal splicing results from the functional inactivation of MBNL1, which is hypothesized to occur due to MBNL1 sequestration in CUG foci or from elevated levels of CUG-BP1. We therefore tested the ability of cytoplasmic CUG foci to elicit these changes. Aggregation of CUG RNAs within the cytoplasm results both in Mbnl1 sequestration and in an approximately twofold increase in both nuclear and cytoplasmic Cug-bp1 levels. Significantly, despite these changes, RNA splice defects were not observed, and functional analysis revealed only subtle cardiac dysfunction, characterized by conduction defects that primarily manifest under anesthesia. Using a human myoblast culture system, we show that this transgene, when expressed at similar levels to a second transgene, which encodes expanded CTG tracts and facilitates both nuclear focus formation and aberrant splicing, does not elicit aberrant splicing. Thus the lack of toxicity of cytoplasmic CUG foci does not appear to be a consequence of low expression levels. Our results therefore demonstrate that the cellular location of CUG RNA

  5. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  6. Inverse Bayesian inference as a key of consciousness featuring a macroscopic quantum logical structure.

    Science.gov (United States)

    Gunji, Yukio-Pegio; Shinohara, Shuji; Haruna, Taichi; Basios, Vasileios

    2017-02-01

    To overcome the dualism between mind and matter and to implement consciousness in science, a physical entity has to be embedded with a measurement process. Although quantum mechanics has been regarded as a candidate for implementing consciousness, nature at its macroscopic level is inconsistent with quantum mechanics. We propose a measurement-oriented inference system comprising Bayesian and inverse Bayesian inferences. While Bayesian inference contracts probability space, the newly defined inverse one relaxes the space. These two inferences allow an agent to make a decision corresponding to an immediate change in their environment. They generate a particular pattern of joint probability for data and hypotheses, comprising multiple diagonal and noisy matrices. This is expressed as a nondistributive orthomodular lattice equivalent to quantum logic. We also show that an orthomodular lattice can reveal information generated by inverse syllogism as well as the solutions to the frame and symbol-grounding problems. Our model is the first to connect macroscopic cognitive processes with the mathematical structure of quantum mechanics with no additional assumptions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
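    The contraction/relaxation idea can be illustrated on a toy discrete hypothesis space: a standard Bayesian update lowers the entropy of the distribution, while an assumed "inverse" step mixes it back toward uniform. The mixing weight `alpha` and the numbers are illustrative knobs, not the authors' formulation.

    ```python
    import math

    def entropy(p):
        """Shannon entropy (nats) of a discrete distribution."""
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    def bayes_update(prior, likelihood):
        """Standard Bayesian inference: contracts the probability space."""
        post = [pr * li for pr, li in zip(prior, likelihood)]
        z = sum(post)
        return [pi / z for pi in post]

    def inverse_bayes_update(p, alpha=0.3):
        """Illustrative "inverse" step: relax back toward uniform,
        re-expanding the hypothesis space (alpha is an assumed knob)."""
        n = len(p)
        return [(1 - alpha) * pi + alpha / n for pi in p]

    prior = [0.25, 0.25, 0.25, 0.25]
    likelihood = [0.7, 0.2, 0.05, 0.05]

    posterior = bayes_update(prior, likelihood)       # entropy drops
    relaxed = inverse_bayes_update(posterior)         # entropy rises again
    ```

    Alternating the two steps keeps the agent responsive: contraction commits to a hypothesis, relaxation keeps competing hypotheses alive for sudden environmental change.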

  7. Negative symptoms as key features of depression among cannabis users: a preliminary report.

    Science.gov (United States)

    Bersani, G; Bersani, F S; Caroti, E; Russo, P; Albano, G; Valeriani, G; Imperatori, C; Minichino, A; Manuali, G; Corazza, O

    2016-01-01

    Cannabis use is frequent among depressed patients and may lead to the so-called "amotivational syndrome", which combines symptoms of affective flattening and loss of emotional reactivity (i.e., the so-called "negative" symptomatology). The aim of this study was to investigate negative symptomatology in depressed patients with concomitant cannabis use disorders (CUDs) in comparison with depressed patients without CUDs. Fifty-one patients with a diagnosis of Major Depressive Disorder (MDD) and concomitant CUD and fifty-one MDD patients without CUD were enrolled in the study. The 21-item Hamilton Depression Rating Scale (HDRS) and the negative symptoms subscales of the Positive and Negative Syndrome Scale (PANSS) were used to assess depressive and negative symptomatology. Patients with cannabis use disorders presented significantly more severe negative symptoms than patients without cannabis use (15.18 ± 2.25 vs. 13.75 ± 2.44; t(100) = 3.25, p = 0.002). A deeper knowledge of the "negative" psychopathological profile of MDD patients who use cannabis may lead to novel etiopathogenetic models of MDD and to more appropriate treatment approaches.
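    The reported group comparison can be approximately recomputed from the summary statistics alone using the pooled-variance two-sample t formula; the result will not land exactly on the published t = 3.25, since the printed means and SDs are rounded.

    ```python
    import math

    def pooled_t(m1, s1, n1, m2, s2, n2):
        """Student's two-sample t statistic with pooled variance."""
        sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
        return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

    # Values reported in the abstract: 15.18 +/- 2.25 (n=51) vs 13.75 +/- 2.44 (n=51)
    t = pooled_t(15.18, 2.25, 51, 13.75, 2.44, 51)
    df = 51 + 51 - 2   # 100 degrees of freedom, matching t(100) in the abstract
    ```

    The recomputed t is close to 3.1, comfortably past the usual significance threshold for 100 degrees of freedom, consistent with the reported p = 0.002.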

  8. Key management and encryption under the bounded storage model.

    Energy Technology Data Exchange (ETDEWEB)

    Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.; Anderson, William Erik

    2005-11-01

    There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work, we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.
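    A toy version of bounded-storage key expansion (not the project's scheme) can be sketched as follows: both parties use a short shared key to sample the same positions of a huge public random stream, while an adversary who cannot store the whole stream learns little. The stream size and derivation details here are purely illustrative.

    ```python
    import hashlib
    import random

    random.seed(7)

    # A long public random broadcast. In the real model it is far too large for
    # an adversary to store; 100 kB here is purely illustrative.
    STREAM = bytes(random.getrandbits(8) for _ in range(100_000))

    def expand_key(short_key: bytes, n_bits: int) -> str:
        """Derive sampling positions from the short shared key; each party reads
        only those positions of the stream while it is broadcast."""
        seed = int.from_bytes(hashlib.sha256(short_key).digest(), "big")
        rng = random.Random(seed)
        positions = [rng.randrange(len(STREAM)) for _ in range(n_bits)]
        return "".join(str(STREAM[p] & 1) for p in positions)

    alice = expand_key(b"shared-secret", 128)   # sender's expanded key
    bob = expand_key(b"shared-secret", 128)     # receiver derives the same bits
    eve = expand_key(b"wrong-guess", 128)       # wrong key samples other positions
    ```

    Note this sketch assumes perfect timing: the synchronization-error problem the report actually solves is what happens when the two parties disagree slightly about when each stream position was broadcast.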

  10. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow to identify additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three out of the methods are suitable to determine characteristic features for arbitrary sets of models: The selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map on concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.
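    A minimal sketch of annotation-based feature extraction, with invented model identifiers and ontology terms: characteristic features are terms shared by most models in the set but absent from a background set. This is one plausible reading of the approach, not the paper's exact algorithm.

    ```python
    from collections import Counter

    # Hypothetical annotation sets: each model maps to GO/ChEBI/SBO terms
    # (identifiers invented for illustration, not real BioModels entries).
    model_set = {
        "BIOMD001": {"GO:0006096", "CHEBI:15903", "SBO:0000176"},
        "BIOMD002": {"GO:0006096", "CHEBI:15903", "SBO:0000177"},
        "BIOMD003": {"GO:0006096", "SBO:0000176"},
    }
    background = {
        "BIOMD900": {"GO:0008283", "SBO:0000176"},
        "BIOMD901": {"GO:0008283", "CHEBI:4167"},
    }

    def characteristic_terms(target, other, min_frac=0.66):
        """Terms annotated in most target models but absent from the background."""
        counts = Counter(t for anns in target.values() for t in anns)
        bg_terms = set().union(*other.values())
        n = len(target)
        return {t for t, c in counts.items()
                if c / n >= min_frac and t not in bg_terms}

    features = characteristic_terms(model_set, background)
    ```

    With these toy sets, the glycolysis-like term and the shared ChEBI term survive, while the SBO term shared with the background is rejected: the selected features are specific to the chosen model set, as the abstract describes.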

  11. Feature Analysis for Modeling Game Content Quality

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    ... entertainment for individual game players is to tailor player experience in real-time via automatic game content generation. Modeling the relationship between game content and player preferences or affective states is an important step towards this type of game personalization. In this paper we analyse the relationship between level design parameters of platform games and player experience. We introduce a method to extract the most useful information about game content from short game sessions by investigating the size of game session that yields the highest accuracy in predicting players' preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified...

  12. Robust Models and Features for Speech Recognition.

    Science.gov (United States)

    1998-03-13

    and relevant Spokes of the Speaker Independent Wall Street Journal database in 1994, the Marketplace database in 1995, and the Broadcast News... also built a 64,000-word vocabulary. Language models for this vocabulary were built from a combination of Wall Street Journal data available from... was made from transcribing clean read speech (Wall Street Journal task in 1994) to real world speech (transcription of radio and TV broadcast news

  13. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, facial expression images can be re-generated as many times as desired to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.
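    The forward normal cloud generator used in cloud models is commonly defined by two nested Gaussian draws; a minimal sketch with assumed parameters (expectation Ex, entropy En, hyper-entropy He) is:

    ```python
    import math
    import random

    random.seed(42)

    def forward_cloud(Ex, En, He, n):
        """Forward normal cloud generator: returns (drop, membership) pairs."""
        drops = []
        for _ in range(n):
            En_i = abs(random.gauss(En, He))               # per-drop dispersion
            x = random.gauss(Ex, En_i)                     # cloud drop position
            mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))  # membership degree
            drops.append((x, mu))
        return drops

    # Assumed parameter values for a normalized facial-feature dimension.
    drops = forward_cloud(Ex=0.5, En=0.1, He=0.01, n=1000)
    xs = [x for x, _ in drops]
    mean = sum(xs) / len(xs)
    ```

    The second-order randomness (He perturbing En) is what lets the cloud model express uncertainty about the uncertainty itself, which is the motivation for applying it to variable facial expressions.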

  14. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, combining the flexibility of software with the real-time performance of hardware. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., an Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype by assembling virtual components. The framework not only provides a formal, precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  15. Key Considerations in the Modeling of Tropical Maritime Microwave Attenuations

    Directory of Open Access Journals (Sweden)

    Yee Hui Lee

    2015-01-01

    Full Text Available This paper presents some key considerations for the modeling of over-sea radio-wave propagation in the 5 GHz band. The summarized information is based on a series of measurement campaigns recently carried out in tropical maritime environments near Singapore. Multiray propagation and the ducting of radio waves are highlighted and considered in over-sea path loss modeling and prediction. It is noted that the sea-surface reflection is an important contribution to the received field, while duct layers can enhance radio-wave propagation. Our studies also show that the ray refracted inside the evaporation duct can be a strong ray for short-range, near-sea-surface applications and needs to be properly evaluated.
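    A flat-earth two-ray sketch illustrates why the sea-surface reflection matters: the direct and sea-reflected rays interfere according to their path-length difference. The frequency, antenna heights, and reflection coefficient below are assumed values for illustration, not the campaign's measured parameters, and the sketch ignores ducting and earth curvature.

    ```python
    import cmath
    import math

    def two_ray_gain_db(f_hz, d, h_tx, h_rx, gamma=-0.9):
        """Gain of direct + sea-reflected rays relative to the free-space direct
        ray (flat earth). gamma is an assumed sea-surface reflection coefficient."""
        lam = 3e8 / f_hz
        d_los = math.hypot(d, h_tx - h_rx)            # direct-ray path length
        d_ref = math.hypot(d, h_tx + h_rx)            # reflected-ray path length
        field = (cmath.exp(-2j * math.pi * d_los / lam) / d_los
                 + gamma * cmath.exp(-2j * math.pi * d_ref / lam) / d_ref)
        return 20 * math.log10(abs(field) * d_los)    # 0 dB = free space

    # Assumed 5 GHz-band link, 2 km over sea, 12 m / 8 m antenna heights.
    g = two_ray_gain_db(5.15e9, 2000.0, 12.0, 8.0)
    ```

    Sweeping `d` shows the characteristic multiray fading pattern: the received field oscillates several dB around free space as the two rays move in and out of phase.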

  16. Infinite Continuous Feature Model for Psychiatric Comorbidity Analysis.

    Science.gov (United States)

    Valera, Isabel; Ruiz, Francisco J R; Olmos, Pablo M; Blanco, Carlos; Perez-Cruz, Fernando

    2016-02-01

    We aim at finding the comorbidity patterns of substance abuse, mood and personality disorders using the diagnoses from the National Epidemiologic Survey on Alcohol and Related Conditions database. To this end, we propose a novel Bayesian nonparametric latent feature model for categorical observations, based on the Indian buffet process, in which the latent variables can take values between 0 and 1. The proposed model has several interesting features for modeling psychiatric disorders. First, the latent features might be off, which allows distinguishing between the subjects who suffer a condition and those who do not. Second, the active latent features take positive values, which allows modeling the extent to which the patient has that condition. We also develop a new Markov chain Monte Carlo inference algorithm for our model that makes use of a nested expectation propagation procedure.
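    A toy sampler conveys the flavor of the model: draw feature assignments from an Indian buffet process and give each active entry a value in (0, 1), representing the extent to which a subject has a condition. This is a prior sample only, not the paper's inference algorithm (which uses MCMC with a nested expectation propagation step).

    ```python
    import math
    import random

    random.seed(11)

    def poisson(lam):
        """Knuth's multiplicative method for a Poisson draw (stdlib only)."""
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:
            k += 1
            p *= random.random()
        return k - 1

    def sample_continuous_ibp(n_subjects, alpha=2.0):
        """Prior draw: IBP feature assignments whose active entries carry a
        value in (0, 1) - a toy stand-in for continuous latent features."""
        counts = []                                  # users per feature so far
        rows = []
        for i in range(1, n_subjects + 1):
            row = {}
            for k, c in enumerate(counts):           # join popular features
                if random.random() < c / i:
                    row[k] = random.random()         # extent of the condition
            for _ in range(poisson(alpha / i)):      # invent brand-new features
                counts.append(0)
                row[len(counts) - 1] = random.random()
            for k in row:
                counts[k] += 1
            rows.append(row)
        return rows, len(counts)

    rows, n_features = sample_continuous_ibp(100)
    ```

    Inactive entries stay absent (the feature is "off", distinguishing subjects without the condition), while active entries carry a graded positive value, matching the two properties the abstract highlights.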

  17. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    Science.gov (United States)

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and these models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. In terms of principles, the main contributions are that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. In terms of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming the general knowledge of a class of objects: the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations, and average semantic values, which is a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can achieve classification and subclass semantic descriptions, and the test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and a good performance is achieved in each layer of the DNN and the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.

  19. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
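    A brute-force version of the subset idea, with invented scalar model climatologies (real applications use gridded fields, multiple cost-function terms, and far larger ensembles, where exhaustive search gives way to the discrete optimisation the authors describe):

    ```python
    from itertools import combinations

    # Hypothetical "present-day climatology" value per model; the observed
    # value and all model values are invented for illustration.
    obs = 14.0
    models = {"A": 13.2, "B": 15.1, "C": 14.6, "D": 12.8, "E": 16.0}

    def best_subset(models, obs, size):
        """Subset whose equally weighted mean minimises absolute bias vs. obs."""
        best = min(combinations(sorted(models), size),
                   key=lambda s: abs(sum(models[m] for m in s) / size - obs))
        return set(best)

    subset = best_subset(models, obs, 3)
    full_bias = abs(sum(models.values()) / len(models) - obs)
    ```

    Here the size-3 subset mean is less biased than the full five-model mean, illustrating how a well-chosen subset can beat the equally weighted multi-model mean; a production cost function would also reward spread and penalise model interdependence.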

  20. Reliability and validity of key feature cases for the self-assessment of colon and rectal surgeons.

    Science.gov (United States)

    Trudel, Judith L; Bordage, Georges; Downing, Steven M

    2008-08-01

    The purpose of this study was to determine the reliability and validity of the scores from "key feature" cases in the self-assessment of colon and rectal surgeons. Key feature (KF) cases specifically focus on the assessment of the unique challenges, critical decisions, and difficult aspects of the identification and management of clinical problems in practice. KF cases have been used to assess medical students and residents, but rarely specialists. Responses from all 256 participants taking the American Society of Colon and Rectal Surgeons (ASCRS) Colon and Rectal Surgery Educational Program (CARSEP) V Self-assessment Examination (SAE) from 1997 to 2002 were scored and analyzed, including score reliability, item analysis for the factual (50 multiple-choice questions (MCQs)) and applied (9 KF cases) knowledge portions of the SAE, and the effect of examination preparation, examination setting, specialization, Board certification, and clinical experience on scores. The reliability (Cronbach alpha) of the scores for the MCQ and KF components was 0.97 and 0.95, respectively. The applied KF component of the SAE was more difficult than the factual MCQ component (0.52 versus 0.80, P < 0.001), and the test taken at the annual meeting was harder than the test taken at home (0.41 versus 0.81, P < 0.001). Content-related validity evidence for the KF cases was supported by mapping KF cases onto the examination blueprint and by judgments from expert colorectal surgeons about the challenging and critical nature of the KFs used. Construct validity of the KF cases was supported by incremental performance related to types of practice (general, anorectal, and colorectal), levels and types of Board certification, and years of clinical experience. The self-assessment of surgical specialists, in this case colorectal surgeons, using KF cases is possible and yielded reliable and valid scores.
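    The reported reliabilities are Cronbach alphas; the coefficient can be computed from an examinee-by-item score matrix as below. The scores here are hypothetical and the use of population variance is one common convention, not a detail taken from the study.

    ```python
    def variance(xs):
        """Population variance of a list of numbers."""
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def cronbach_alpha(scores):
        """Internal consistency of scores[person][item]:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
        k = len(scores[0])
        item_cols = list(zip(*scores))
        totals = [sum(row) for row in scores]
        return k / (k - 1) * (1 - sum(variance(c) for c in item_cols)
                              / variance(totals))

    # Hypothetical scores for 4 examinees on 3 key-feature cases (0-1 scale).
    scores = [[0.9, 0.8, 1.0], [0.4, 0.5, 0.3], [0.7, 0.6, 0.8], [0.2, 0.3, 0.2]]
    alpha = cronbach_alpha(scores)
    ```

    When items rank examinees consistently, as in this toy matrix, alpha approaches 1; the study's values of 0.97 and 0.95 indicate highly consistent scoring across the MCQ items and KF cases respectively.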

  1. Discrete-Feature Model Implementation of SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Geier, Joel

    2010-03-01

    A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as

  2. Key aspects of stratospheric tracer modeling using assimilated winds

    Directory of Open Access Journals (Sweden)

    B. Bregman

    2006-01-01

    Full Text Available This study describes key aspects of global chemistry-transport models and their impact on stratospheric tracer transport. We concentrate on global models that use assimilated winds from numerical weather predictions, but the results also apply to tracer transport in general circulation models. We examined grid resolution, numerical diffusion, air parcel dispersion, the wind or mass flux update frequency, and time interpolation. The evaluation is performed with assimilated meteorology from the "operational analyses or operational data" (OD) from the European Centre for Medium-Range Weather Forecasts (ECMWF). We also show the effect of the mass flux update frequency using the ECMWF 40-year re-analyses (ERA40). We applied the three-dimensional chemistry-transport Tracer Model version 5 (TM5) and a trajectory model and performed several diagnoses focusing on different transport regimes. Covering different time and spatial scales, we examined (1) polar vortex dynamics during the Arctic winter, (2) the large-scale stratospheric meridional circulation, and (3) air parcel dispersion in the tropical lower stratosphere. Tracer distributions inside the Arctic polar vortex show considerably worse agreement with observations when the model grid resolution in the polar region is reduced to avoid numerical instability. The results are sensitive to the diffusivity of the advection. Nevertheless, the use of a computationally cheaper but diffusive advection scheme is feasible for tracer transport when the horizontal grid resolution is equal to or smaller than 1 degree. The use of time-interpolated winds improves the tracer distributions, particularly in the middle and upper stratosphere. Considerable improvement is found both in the large-scale tracer distribution and in the polar regions when the update frequency of the assimilated winds is increased from 6 to 3 h. It considerably reduces the vertical dispersion of air parcels in the tropical lower stratosphere. Strong
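
The update-frequency and time-interpolation effects discussed above hinge on how wind fields are reconstructed between analysis times. A minimal sketch of linear time interpolation between assimilated wind fields (the `interp_wind` helper and the 6-hourly analysis values are illustrative assumptions, not TM5 code):

```python
import numpy as np

def interp_wind(t, t0, t1, u0, u1):
    """Linearly interpolate a wind field between two analysis times."""
    w = (t - t0) / (t1 - t0)        # fractional position in the interval
    return (1.0 - w) * u0 + w * u1

# Hypothetical 6-hourly analyses at two grid points; interpolate to t = 3 h.
u_00 = np.array([10.0, 12.0])       # zonal wind at t = 0 h
u_06 = np.array([14.0, 8.0])        # zonal wind at t = 6 h
u_03 = interp_wind(3.0, 0.0, 6.0, u_00, u_06)
print(u_03)                         # the 3 h midpoint is the simple average
```

Stepwise (non-interpolated) winds instead hold `u_00` fixed for the whole interval, which is what degrades the tracer distributions the abstract describes.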

  3. A Key Generation Model for Improving the Security of Cryptographic ...

    African Journals Online (AJOL)

    In public key cryptography, the security of private keys is very important, for if ever compromised, they can be used to decrypt secret messages. Conventional methods that use textual passwords, graphical passwords and single-modal biometric systems to encrypt and protect private keys do not provide ...

  4. Modeling crash injury severity by road feature to improve safety.

    Science.gov (United States)

    Penmetsa, Praveena; Pulugurtha, Srinivas S

    2018-01-02

    The objective of this research is 2-fold: to (a) model and identify critical road features (or locations) based on crash injury severity and compare it with crash frequency and (b) model and identify drivers who are more likely to contribute to crashes by road feature. Crash data from 2011 to 2013 were obtained from the Highway Safety Information System (HSIS) for the state of North Carolina. Twenty-three different road features were considered, analyzed, and compared with each other as well as no road feature. A multinomial logit (MNL) model was developed and odds ratios were estimated to investigate the effect of road features on crash injury severity. Among the many road features, underpass, end or beginning of a divided highway, and on-ramp terminal on crossroad are the top 3 critical road features. Intersection crashes are frequent but are not highly likely to result in severe injuries compared to critical road features. Roundabouts are least likely to result in both severe and moderate injuries. Female drivers are more likely to be involved in crashes at intersections (4-way and T) compared to male drivers. Adult drivers are more likely to be involved in crashes at underpasses. Older drivers are 1.6 times more likely to be involved in a crash at the end or beginning of a divided highway. The findings from this research help to identify critical road features that need to be given priority. As an example, additional advanced warning signs and providing enlarged or highly retroreflective signs that grab the attention of older drivers may help in making locations such as end or beginning of a divided highway much safer. Educating drivers about the necessary skill sets required at critical road features in addition to engineering solutions may further help them adopt safe driving behaviors on the road.
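
The record above estimates a multinomial logit (MNL) and reports odds ratios for road features. A minimal sketch of that idea on synthetic data (not HSIS data; the gradient-descent fit and the single binary "road feature" predictor are illustrative assumptions):

```python
import numpy as np

def fit_mnl(X, y, n_classes, lr=0.5, iters=2000):
    """Fit a multinomial logit by gradient ascent on the log-likelihood.

    X : (n, d) predictors (e.g. dummy-coded road features)
    y : (n,) integer injury-severity class labels
    Returns the (n_classes, d) coefficient matrix.
    """
    n, d = X.shape
    W = np.zeros((n_classes, d))
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(iters):
        Z = X @ W.T
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)         # softmax class probabilities
        W += lr * (Y - P).T @ X / n               # log-likelihood gradient step
    return W

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(400), rng.integers(0, 2, 400)])  # intercept + binary feature
true_logit = -1.0 + 1.2 * X[:, 1]                # class 1 = "severe injury"
y = (rng.random(400) < 1 / (1 + np.exp(-true_logit))).astype(int)
W = fit_mnl(X, y, 2)
odds_ratio = np.exp(W[1, 1] - W[0, 1])           # feature effect on severe vs. not
print(round(odds_ratio, 2))                      # > 1: feature raises severity risk
```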

  5. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    Directory of Open Access Journals (Sweden)

    Jae-Han Park

    2012-06-01

    Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
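
The covariance propagation described above, cov_xyz = J · cov_uvd · Jᵀ, can be sketched with a numerical Jacobian of the disparity-to-Cartesian mapping (the pinhole parameters and noise levels below are assumed for illustration, not calibrated Kinect™ values):

```python
import numpy as np

# Assumed pinhole/disparity parameters (illustrative, not calibrated values)
f, b, cx, cy = 580.0, 0.075, 320.0, 240.0

def to_cartesian(p):
    """Map a (u, v, d) disparity-space point to Cartesian (x, y, z)."""
    u, v, d = p
    z = f * b / d
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def propagate_cov(p, cov_uvd, eps=1e-5):
    """Propagate disparity-space covariance through a numerical Jacobian:
    cov_xyz = J @ cov_uvd @ J.T."""
    J = np.empty((3, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (to_cartesian(p + dp) - to_cartesian(p - dp)) / (2 * eps)
    return J @ cov_uvd @ J.T

cov_uvd = np.diag([0.5, 0.5, 0.8])   # assumed noise in pixel/disparity units
cov_xyz = propagate_cov(np.array([400.0, 300.0, 30.0]), cov_uvd)
print(cov_xyz[2, 2])                 # depth variance scales as z**4 / (f*b)**2
```

The resulting 3×3 covariance defines the uncertainty ellipsoid the abstract compares against scattered feature matches.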

  6. Spatial uncertainty model for visual features using a Kinect™ sensor.

    Science.gov (United States)

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.

  7. Towards the maturity model for feature oriented domain analysis

    Directory of Open Access Journals (Sweden)

    Muhammad Javed

    2014-09-01

    Full Text Available Assessing the quality of a model has always been a challenge for researchers in academia and industry. The quality of a feature model is a prime factor because it is used in the development of products. A degraded feature model leads to the development of low-quality products. Few efforts have been made to improve the quality of feature models. This paper presents our ongoing work, i.e., the development of a FODA (Feature Oriented Domain Analysis) maturity model that will help to evaluate the quality of a given feature model. In this paper, we provide the quality levels along with their descriptions. The proposed model consists of four levels, starting from level 0 to level 3. The design of each level is based on the severity of errors, which decreases from level 0 to level 3. We elaborate each level with the help of examples. All examples are borrowed from material published by the Software Product Lines (SPL) research community for the application of our framework.

  8. Hole Feature on Conical Face Recognition for Turning Part Model

    Science.gov (United States)

    Zubair, A. F.; Abu Mansor, M. S.

    2018-03-01

    Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For a CNC turning part, conical faces of the part model inevitably have to be recognised besides cylindrical and planar faces. As the sine and cosine structure of the cone radius differs between models, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via the .SAT file. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and differences between merged faces and unmerged faces were studied.

  9. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  10. BioModels: Content, Features, Functionality, and Use

    Science.gov (United States)

    Juty, N; Ali, R; Glont, M; Keating, S; Rodriguez, N; Swat, MJ; Wimalaratne, SM; Hermjakob, H; Le Novère, N; Laibe, C; Chelliah, V

    2015-01-01

    BioModels is a reference repository hosting mathematical models that describe the dynamic interactions of biological components at various scales. The resource provides access to over 1,200 models described in literature and over 140,000 models automatically generated from pathway resources. Most model components are cross-linked with external resources to facilitate interoperability. A large proportion of models are manually curated to ensure reproducibility of simulation results. This tutorial presents BioModels' content, features, functionality, and usage. PMID:26225232

  11. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one...... selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...

  12. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
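
The QP assignment above solves for reconstruction weights over neighboring visual words under simplex constraints. A toy sketch of that optimization (the two-dimensional, three-word vocabulary is illustrative; in the paper the vocabulary would come from clustering SIFT descriptors):

```python
import numpy as np
from scipy.optimize import minimize

def qp_assignment(x, words):
    """Soft-assign local descriptor x to visual words by solving
    min_w ||x - words.T @ w||^2  s.t.  w >= 0, sum(w) = 1
    (a sketch of the QP-assignment idea on toy data)."""
    k = words.shape[0]
    obj = lambda w: np.sum((x - words.T @ w) ** 2)
    res = minimize(obj, np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0, None)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x

words = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 visual words in 2-D
w = qp_assignment(np.array([0.6, 0.4]), words)
print(np.round(w, 2))                                    # contribution of each word
```

The solved weights then replace the hard 0/1 assignment of the standard bag-of-features histogram.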

  13. A Feature Fusion Based Forecasting Model for Financial Time Series

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the other two similar models. PMID:24971455

  14. Features of Functioning the Integrated Building Thermal Model

    Directory of Open Access Journals (Sweden)

    Morozov Maxim N.

    2017-01-01

    Full Text Available A model of a building heating system, consisting of the energy source, a distributed automatic control system, the elements of an individual heating unit, and the heating system itself, is designed. The Simulink application of the mathematical package Matlab is selected as the platform for the model. The specialized Simscape application libraries, together with a wide range of Matlab mathematical tools, allow the "acausal" modeling concept to be applied. Implementing the "physical" representation of the object model improved the accuracy of the models. The principle of operation and the features of the thermal model are described. Investigations of the building's cooling dynamics were carried out.
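
Building cooling dynamics of the kind investigated above are often illustrated with a lumped-parameter RC model; a minimal explicit-Euler sketch (the parameter values are assumed for illustration, not taken from the Simulink/Simscape model):

```python
import numpy as np

# Lumped RC model of a building zone: C * dT/dt = (T_out - T) / R + Q_heat
# (illustrative parameters, not the paper's values)
C = 5e7        # thermal capacitance, J/K
R = 1e-3       # envelope thermal resistance, K/W
T_out = -10.0  # outdoor temperature, degC

def simulate(T0, q_heat, hours, dt=60.0):
    """Explicit-Euler simulation of zone temperature; q_heat(t) in W."""
    steps = int(hours * 3600 / dt)
    T = np.empty(steps + 1)
    T[0] = T0
    for k in range(steps):
        dTdt = ((T_out - T[k]) / R + q_heat(k * dt)) / C
        T[k + 1] = T[k] + dt * dTdt
    return T

# Cooling dynamics: heating switched off from a 20 degC start
T = simulate(20.0, lambda t: 0.0, hours=24)
print(round(T[-1], 1))   # → -4.7: approaches T_out with a ~14 h time constant R*C
```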

  15. Learning optimized features for hierarchical models of invariant object recognition.

    Science.gov (United States)

    Wersing, Heiko; Körner, Edgar

    2003-07-01

    There is an ongoing debate over the capabilities of hierarchical neural feedforward architectures for performing real-world invariant object recognition. Although a variety of hierarchical models exists, appropriate supervised and unsupervised learning methods are still an issue of intense research. We propose a feedforward model for recognition that shares components like weight sharing, pooling stages, and competitive nonlinearities with earlier approaches but focuses on new methods for learning optimal feature-detecting cells in intermediate stages of the hierarchical network. We show that principles of sparse coding, which were previously mostly applied to the initial feature detection stages, can also be employed to obtain optimized intermediate complex features. We suggest a new approach to optimize the learning of sparse features under the constraints of a weight-sharing or convolutional architecture that uses pooling operations to achieve gradual invariance in the feature hierarchy. The approach explicitly enforces symmetry constraints like translation invariance on the feature set. This leads to a dimension reduction in the search space of optimal features and allows determining more efficiently the basis representatives, which achieve a sparse decomposition of the input. We analyze the quality of the learned feature representation by investigating the recognition performance of the resulting hierarchical network on object and face databases. We show that a hierarchy with features learned on a single object data set can also be applied to face recognition without parameter changes and is competitive with other recent machine learning recognition approaches. To investigate the effect of the interplay between sparse coding and processing nonlinearities, we also consider alternative feedforward pooling nonlinearities such as presynaptic maximum selection and sum-of-squares integration. The comparison shows that a combination of strong competitive

  16. Enhanced HMAX model with feedforward feature learning for multiclass categorization

    Directory of Open Access Journals (Sweden)

    Yinlin Li

    2015-10-01

    Full Text Available In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models have been proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, which can generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we still mimic the first 100-150 milliseconds of visual cognition to enhance the HMAX model, mainly focusing on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) to mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) to mimic the learning, clustering, and short-term to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle-level patches, which are taken as long-term memory; (3) inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and can also achieve better accuracy than other unsupervised feature learning methods in the multiclass categorization task.

  17. Discriminatively learning for representing local image features with quadruplet model

    Science.gov (United States)

    Zhang, Da-long; Zhao, Lei; Xu, Duan-qing; Lu, Dong-ming

    2017-11-01

    Traditional hand-crafted features for representing local image patches are evolving into data-driven, learning-based image features, but learning a robust and discriminative descriptor capable of supporting various patch-level computer vision tasks is still an open problem. In this work, we propose a novel deep convolutional neural network (CNN) to learn local feature descriptors. We utilize quadruplets with positive and negative training samples, together with a constraint to restrict the intra-class variance, to learn good discriminative CNN representations. Compared with previous works, our model reduces the overlap in feature space between corresponding and non-corresponding patch pairs, and mitigates the margin-varying problem caused by the commonly used triplet loss. We demonstrate that our method achieves better embedding results than some recent works, such as PN-Net and TN-TG, on a benchmark dataset.
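
A quadruplet objective of the kind described, with one margin for corresponding versus non-corresponding pairs and a second margin keeping unrelated negatives apart, can be sketched as follows (a generic formulation on plain vectors, not necessarily the paper's exact loss):

```python
import numpy as np

def quadruplet_loss(a, p, n1, n2, m1=1.0, m2=0.5):
    """Quadruplet margin loss on descriptor vectors (illustrative form):
      - pull anchor a toward the corresponding patch p,
      - push the non-corresponding patch n1 beyond margin m1,
      - keep the unrelated negatives n1, n2 apart by margin m2."""
    d = lambda x, y: np.sum((x - y) ** 2)     # squared Euclidean distance
    return (max(0.0, d(a, p) - d(a, n1) + m1) +
            max(0.0, d(a, p) - d(n1, n2) + m2))

a  = np.array([0.0, 0.0])
p  = np.array([0.1, 0.0])       # corresponding patch: close to the anchor
n1 = np.array([2.0, 0.0])       # non-corresponding patch: far away
n2 = np.array([0.0, 2.0])       # second, unrelated negative
print(quadruplet_loss(a, p, n1, n2))   # → 0.0: both margins already satisfied
```

Swapping the roles of `p` and `n1` violates both margins and yields a positive loss, which is the gradient signal a CNN would train on.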

  18. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    Science.gov (United States)

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to use these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information.
However, it was also shown that significant synergy existed between certain parameters, and as such it
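
Automatic relevance determination of the kind used above can be illustrated with a Gaussian process whose RBF kernel carries one length scale per descriptor; descriptors that do not influence the output are driven toward large length scales (synthetic data, not the skin-permeability dataset):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (60, 2))     # descriptor 0 is relevant, descriptor 1 is noise
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=60)

# ARD: one length scale per descriptor; a large learned scale marks irrelevance
kernel = RBF(length_scale=[1.0, 1.0]) + WhiteKernel(1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
scales = gp.kernel_.k1.length_scale
print(scales)   # the scale for the noise descriptor grows far larger
```

Ranking descriptors by inverse length scale is the feature-selection signal the abstract refers to.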

  19. A mouse model of alcoholic liver fibrosis-associated acute kidney injury identifies key molecular pathways

    International Nuclear Information System (INIS)

    Furuya, Shinji; Chappell, Grace A.; Iwata, Yasuhiro; Uehara, Takeki; Kato, Yuki; Kono, Hiroshi; Bataller, Ramon; Rusyn, Ivan

    2016-01-01

    Clinical data strongly indicate that acute kidney injury (AKI) is a critical complication in alcoholic hepatitis, an acute-on-chronic form of liver failure in patients with advanced alcoholic fibrosis. Development of targeted therapies for AKI in this setting is hampered by the lack of an animal model. To enable research into molecular drivers and novel therapies for fibrosis- and alcohol-associated AKI, we aimed to combine carbon tetrachloride (CCl4)-induced fibrosis with chronic intra-gastric alcohol feeding. Male C57BL/6J mice were administered a low dose of CCl4 (0.2 ml/kg, 2 × week/6 weeks) followed by alcohol intragastrically (up to 25 g/kg/day for 3 weeks) and with continued CCl4. We observed that combined treatment with CCl4 and alcohol resulted in severe liver injury, more pronounced than using each treatment alone. Importantly, severe kidney injury was evident only in the combined treatment group. This mouse model reproduced distinct pathological features consistent with AKI in human alcoholic hepatitis. Transcriptomic analysis of kidneys revealed profound effects in the combined treatment group, with enrichment for damage-associated pathways, such as apoptosis, inflammation, immune-response and hypoxia. Interestingly, Havcr1 and Lcn2, biomarkers of AKI, were markedly up-regulated. Overall, this study established a novel mouse model of fibrosis- and alcohol-associated AKI and identified key mechanistic pathways. - Highlights: • Acute kidney injury (AKI) is a critical complication in alcoholic hepatitis • We developed a novel mouse model of fibrosis- and alcohol-associated AKI • This model reproduces key molecular and pathological features of human AKI • This animal model can help identify new targeted therapies for alcoholic hepatitis

  20. Toward Designing a Quantum Key Distribution Network Simulation Model

    OpenAIRE

    Miralem Mehic; Peppino Fazio; Miroslav Voznak; Erik Chromy

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such simulation environment, we analyzed several ...

  1. Features of CRISPR-Cas Regulation Key to Highly Efficient and Temporally-Specific crRNA Production

    Directory of Open Access Journals (Sweden)

    Andjela Rodic

    2017-11-01

    Full Text Available Bacterial immune systems, such as CRISPR-Cas or restriction-modification (R-M) systems, affect bacterial pathogenicity and antibiotic resistance by modulating horizontal gene flow. A model system for CRISPR-Cas regulation, the Type I-E system from Escherichia coli, is silent under standard laboratory conditions and experimentally observing the dynamics of CRISPR-Cas activation is challenging. Two characteristic features of CRISPR-Cas regulation in E. coli are cooperative transcription repression of cas gene and CRISPR array promoters, and fast non-specific degradation of full-length CRISPR transcripts (pre-crRNA). In this work, we use computational modeling to understand how these features affect the system expression dynamics. Signaling which leads to CRISPR-Cas activation is currently unknown, so to bypass this step, we here propose a conceptual setup for cas expression activation, where cas genes are put under transcription control typical for a restriction-modification (R-M) system and then introduced into a cell. Known transcription regulation of an R-M system is used as a proxy for currently unknown CRISPR-Cas transcription control, as both systems are characterized by high cooperativity, which is likely related to similar dynamical constraints of their function. We find that the two characteristic CRISPR-Cas control features are responsible for its temporally-specific dynamical response, so that the system makes a steep (switch-like) transition from OFF to ON state with a time-delay controlled by pre-crRNA degradation rate. We furthermore find that cooperative transcription regulation qualitatively leads to a cross-over to a regime where, at higher pre-crRNA processing rates, crRNA generation approaches the limit of an infinitely abrupt system induction. We propose that these dynamical properties are associated with rapid expression of CRISPR-Cas components and efficient protection of bacterial cells against foreign DNA.
In terms of synthetic
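
The interplay of fast pre-crRNA degradation and processing modeled above can be sketched as a pair of ODEs (the rate constants are hypothetical, chosen only to show that pre-crRNA equilibrates quickly at the level k_tx / (k_deg + k_proc)):

```python
from scipy.integrate import solve_ivp

# Hypothetical rate constants (1/min), not fitted values from the paper
k_tx   = 10.0   # pre-crRNA transcription after derepression
k_deg  = 5.0    # fast non-specific pre-crRNA degradation
k_proc = 0.5    # processing of pre-crRNA into mature crRNA

def rhs(t, s):
    pre, cr = s
    d_pre = k_tx - (k_deg + k_proc) * pre   # production vs. fast turnover
    d_cr  = k_proc * pre                    # mature crRNA accumulates
    return [d_pre, d_cr]

sol = solve_ivp(rhs, (0, 30), [0.0, 0.0])
pre_ss = k_tx / (k_deg + k_proc)   # steady-state pre-crRNA level
print(round(sol.y[0, -1], 2), round(pre_ss, 2))  # pre-crRNA pins to steady state
```

Because the equilibration time is ~1/(k_deg + k_proc), raising the degradation rate shortens the delay before crRNA production ramps up, which is the time-delay control the abstract describes.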

  2. 3D facial geometric features for constrained local model

    NARCIS (Netherlands)

    Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Ashish; Asthana, Akshay; Pantic, Maja

    2014-01-01

    We propose a 3D Constrained Local Model framework for deformable face alignment in depth image. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the

  3. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In this simulation environment, we analyzed several routing protocols in terms of the number of sent routing packets, goodput, and Packet Delivery Ratio of the data traffic flow using the NS-3 simulator.

  4. Chronologic model and transgressive-regressive signatures in the late neocene siliciclastic foundation (long key formation) of the Florida keys

    Science.gov (United States)

    Guertin, L.A.; McNeill, D.F.

    1999-01-01

    Recent drilling of continuous cores in southernmost Florida has documented a thick unit of upper Neogene siliciclastics subjacent to surficial shallow-water Quaternary carbonates exposed on islands of the Florida Keys. The siliciclastics comprise the Long Key Formation and were identified in two cores collected from the middle and upper Florida Keys. A chronologic model based on new planktic foraminiferal biochronology and strontium-isotope chronology suggests the timing of siliciclastic deposition and provides a basis for regional correlation. The chronologic model, supplemented by vertical trends in quartz grain size, pattern of planktic menardiiform coiling direction, and paleoenvironmental interpretations of benthic foraminiferal assemblages, shows that the Long Key Formation contains three intervals (I-III) of varying thickness, grain-size composition, and paleo-water depth. Interval I is uppermost Miocene. The quartz grains in Interval I fine upward from basal very coarse sand to fine and very fine sand. Benthic foraminifera indicate an upward shift from an outer-shelf to inner-shelf depositional environment. Interval II, deposited during the late early to early late Pliocene, contains reworked upper Miocene siliciclastics and faunas. In the upper Keys, quartz grains in Interval II range from very coarse sand that fines upward to very fine sand and then coarsens to very coarse and medium sand. In situ benthic faunas indicate an upward shift from outer-shelf to inner-shelf deposition. In the middle Keys, Interval II is different, with the quartz grains ranging primarily from medium to very fine sand. In situ benthic taxa indicate deposition on an inner shelf. In both the middle and upper Keys, the upper Pliocene siliciclastics of Interval III contain quartz grains ranging from very coarse to very fine sands that were deposited on an inner shelf. A sequence boundary between Interval I and Interval II is suggested by: an abrupt shift in the strontium

  5. Key competences in the new ventures: a model for evaluating

    NARCIS (Netherlands)

    Castillo, S.M.; Hormiga-Pérez, E.; Coromina Soler, L.; Valls Pasola, J.

    2010-01-01

    This research studies, from an internal view based on the Competency-Based Perspective (CBP), the key organizational competencies developed by small new businesses. The CBP is chosen in an attempt to explain the differences that distinguish companies that have closed from those that have become consolidated. The main contribution

  6. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.
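The geometry-matching idea described above (keep the feature dimensions whose Euclidean distances best mirror distances in a perceptual domain) can be sketched as a greedy search. Everything below — the function name, the correlation-based similarity score, and the synthetic "perceptual" coordinates — is an illustrative assumption, not the authors' actual criterion or code.

```python
import numpy as np

def select_features(X_feat, X_percept, k):
    """Greedily pick k feature dimensions whose pairwise Euclidean
    distances correlate best with distances in a perceptual space.
    (Illustrative criterion only; the paper's similarity measure differs.)"""
    n, d = X_feat.shape
    target = np.linalg.norm(
        X_percept[:, None, :] - X_percept[None, :, :], axis=-1).ravel()
    selected = []
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in range(d):
            if j in selected:
                continue
            cols = selected + [j]
            dist = np.linalg.norm(
                X_feat[:, None, cols] - X_feat[None, :, cols], axis=-1).ravel()
            score = np.corrcoef(dist, target)[0, 1]  # geometry similarity
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Toy check: the "perceptual" space is built from feature columns 0 and 3,
# so the greedy search should recover exactly those two columns.
np.random.seed(0)
X = np.random.randn(20, 5)
print(select_features(X, X[:, [0, 3]], 2))
```

Note that this is an unlabeled (wrapper-free) selection, consistent with the abstract's point that no labeled training data is required.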

  7. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    In this article, we present a method and an associated toolchain for the formal verification of the new Danish railway interlocking systems that are compatible with the European Train Control System (ETCS) Level 2. We have made a generic and reconfigurable model of the system behaviour and generic...... safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. To verify the safety of an interlocking system, first a domain-specific description of interlocking configuration data is constructed and validated. Then the generic model and safety...

  8. A System-Level Throughput Model for Quantum Key Distribution

    Science.gov (United States)

    2015-09-17

    credence is given to the monumental task of classical information processing or the time it takes to accomplish relative to quantum transmission. The...object. In quantum entanglement , the physical properties of particle pairs or groups of particles are correlated – the quantum state of each particle...Weihs, ’ Entangled quantum key distribution over two free-space optical links’, Opt. Express, vol. 16, no. 21, p. 16840, 2008. [14] C. Fung, X. Ma and

  9. The Main Shear Zone in Sør Rondane: A key feature for reconstructing the geodynamic evolution of East Antarctica

    Science.gov (United States)

    Ruppel, Antonia; Läufer, Andreas; Lisker, Frank; Jacobs, Joachim; Elburg, Marlina; Damaske, Detlef; Lucka, Nicole

    2013-04-01

    Structural investigations were carried out along the Main Shear Zone (MSZ) of western Sør Rondane (22°-25°E, 71.5°-72.5°S) to gain new information about the position of the East-/West-Gondwana suture and the ancient plate tectonic configuration during Gondwana amalgamation. The WSW-ENE striking MSZ divides south-western Sør Rondane into a northern amphibolite-facies terrane and a southern tonalite-trondhjemite-granodiorite (TTG) terrane. The structure can be traced over a distance of ca. 100 km and reaches several hundred meters in width. It is characterized by a right-lateral sense of movement and marked by both transpressional and transtensional regimes. Ductilely deformed granitoids (ca. 560 Ma: SHRIMP U-Pb of zircon) and ductile-brittle structures, which evolved in a transitional ductile to brittle regime in an undeformed syenite (ca. 499-459 Ma, Ar-Ar mica), provide a late Proterozoic/early Paleozoic time limit for the activity of the shear zone (Shiraishi et al., 2008; Shiraishi et al., 1997). Documentation of ductile and brittle deformation allows reconstructing up to eight deformation stages. Cross-cutting relationships of structural features mapped in the field, complemented by published kinematic data, reveal the following relative age succession: [i] Dn+1 - formation of the main foliation during peak metamorphism, [ii] Dn+2 - isoclinal, intrafolial folding of the main foliation, mostly foliation-parallel mylonitic shear zones (1-2 meters thick), [iii] Dn+3 - formation of tight to closed folds, [iv] Dn+4 - formation of relatively upright, large-scale open folds, [v] Dn+5 - granitoid intrusion (e.g. Vengen granite), [vi] Dn+6 - dextral shearing between the amphibolite and TTG terranes, formation of the MSZ, [vii] Dn+7 - intrusion of late- to post-tectonic granitoids, first stage of brittle deformation (late shearing along the MSZ), intrusion of post-kinematic mafic dykes, [viii] Dn+8 - second stage of brittle deformation including formation of conjugate fault

  10. Exploring key factors in online shopping with a hybrid model.

    Science.gov (United States)

    Chen, Hsiao-Ming; Wu, Chia-Huei; Tsai, Sang-Bing; Yu, Jian; Wang, Jiangtao; Zheng, Yuxiang

    2016-01-01

    Nowadays, the web increasingly influences retail sales. An in-depth analysis of consumer decision-making in the context of e-business has become an important issue for internet vendors. However, the factors affecting e-business are complicated and intertwined. To stimulate online sales, understanding the key influential factors and the causal relationships among them is important. To gain more insight into this issue, this paper introduces a hybrid method, which combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) with the analytic network process, called the DANP method, to find the factors that most strongly drive online business. In the causal graph produced by the DEMATEL approach, the "online service" dimension has the highest degree of direct impact on the other dimensions; thus, internet vendors are advised to make strong efforts on service quality throughout the online shopping process. In addition, the study adopted DANP to measure the importance of the key factors, among which "transaction security" proves to be the most important criterion. Hence, transaction security should be treated with top priority to boost online business. With the DANP approach, this comprehensive information can be visually detected, so that decision makers can focus on the root causes and develop effective actions.
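The DEMATEL step referenced above can be sketched numerically: a direct-influence matrix is normalized and expanded into a total-relation matrix that captures indirect effects, from which prominence and cause/effect indicators are read off. The 3x3 matrix below is purely illustrative, not data from the study.

```python
import numpy as np

# Hypothetical 3x3 direct-influence matrix among three dimensions
# (rows influence columns, rated 0-4); values are illustrative only.
A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

N = A / A.sum(axis=1).max()           # normalize by the largest row sum
T = N @ np.linalg.inv(np.eye(3) - N)  # total-relation matrix: direct + indirect effects

D = T.sum(axis=1)   # influence dispatched by each dimension
R = T.sum(axis=0)   # influence received by each dimension
prominence = D + R  # overall importance of a dimension
relation = D - R    # > 0: net cause; < 0: net effect

print(prominence, relation)
```

Dimensions with high `prominence` matter most overall, while the sign of `relation` separates driving (cause) dimensions from driven (effect) ones — the basis of the causal graph mentioned in the abstract.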

  11. Our energy-Ca2+ signaling deficits hypothesis and its explanatory potential for key features of Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Ming eChen

    2014-12-01

    Alzheimer’s disease (AD) has not been explained by any current theories, so new hypotheses are urgently needed. We proposed that energy and Ca2+ signaling deficits are perhaps the earliest modifiable defects in brain aging underlying memory decline and tau deposits (by means of inactivating the Ca2+-dependent protease calpain). Consistent with this hypothesis, we now notice that at least eight other known calpain substrates have also been reported to accumulate in aging and AD. Thus, protein accumulation or aggregation is not an accidental or random event, but occurs naturally and selectively to a peculiar family of proteins, corroborating the proposed changes of calpain. Why are only calpain substrates accumulated, and how can they stay for decades in the brain without being attacked by the many other non-specific proteases there? We believe that these long-lasting puzzles can be explained by calpain’s unique properties, especially its unusual specificity and exclusivity in substrate recognition, which can protect the substrates from attack by other proteases after calpain inactivation. Interestingly, the energy-Ca2+ deficits model, in essence, may also explain tau phosphorylation (by calcineurin inactivation) and the formation of amyloid plaques. Our studies suggest that α-secretase is an energy-/Ca2+-dual dependent protease and is also the primary determinant of Aβ levels. Finally, we discuss why β- and γ-secretases, the current foci of enthusiastic study, are unlikely to be responsible for Aβ genesis or be positively identified by biological laws. Overall, the study suggests that our hypothesis can coherently explain several basic AD features, thus pointing to a new strategy for AD prevention.

  12. Electronic assessment of clinical reasoning in clerkships: A mixed-methods comparison of long-menu key-feature problems with context-rich single best answer questions

    NARCIS (Netherlands)

    Huwendiek, S.; Reichert, F.; Duncker, C.; Leng, B.A. De; Vleuten, C.P.M. van der; Muijtjens, A.M.; Bosse, H.M.; Haag, M.; Hoffmann, G.F.; Tonshoff, B.; Dolmans, D.

    2017-01-01

    BACKGROUND: It remains unclear which item format would best suit the assessment of clinical reasoning: context-rich single best answer questions (crSBAs) or key-feature problems (KFPs). This study compared KFPs and crSBAs with respect to students' acceptance, their educational impact, and

  13. Key Challenges and Potential Urban Modelling Opportunities in ...

    African Journals Online (AJOL)

    Urban growth and land use change models, supported by Geographic Information Systems (GIS) software and increased digital data availability, have the ... and opportunities for modelling urban spatial change, with specific reference to the Gauteng City-Region – the heartland of the South African economy and the ...

  14. Key Elements of the Tutorial Support Management Model

    Science.gov (United States)

    Lynch, Grace; Paasuke, Philip

    2011-01-01

    In response to an exponential growth in enrolments the "Tutorial Support Management" (TSM) model has been adopted by Open Universities Australia (OUA) after a two-year project on the provision of online tutor support in first year, online undergraduate units. The essential focus of the TSM model was the development of a systemic approach…

  15. Crossing the dividing surface of transition state theory. IV. Dynamical regularity and dimensionality reduction as key features of reactive trajectories.

    Science.gov (United States)

    Lorquet, J C

    2017-04-07

    The atom-diatom interaction is studied by classical mechanics using Jacobi coordinates (R, r, θ). Reactivity criteria that go beyond the simple requirement of transition state theory (i.e., PR* > 0) are derived in terms of specific initial conditions. Trajectories that exactly fulfill these conditions cross the conventional dividing surface used in transition state theory (i.e., the plane in configuration space passing through a saddle point of the potential energy surface and perpendicular to the reaction coordinate) only once. Furthermore, they are observed to be strikingly similar and to form a tightly packed bundle of perfectly collimated trajectories in the two-dimensional (R, r) configuration space, although their angular motion is highly specific for each one. Particular attention is paid to symmetrical transition states (i.e., either collinear or T-shaped with C2v symmetry) for which decoupling between angular and radial coordinates is observed, as a result of selection rules that reduce to zero Coriolis couplings between modes that belong to different irreducible representations. Liapunov exponents are equal to zero and Hamilton's characteristic function is planar in that part of configuration space that is visited by reactive trajectories. Detailed consideration is given to the concept of average reactive trajectory, which starts right from the saddle point and which is shown to be free of curvature-induced Coriolis coupling. The reaction path Hamiltonian model, together with a symmetry-based separation of the angular degree of freedom, provides an appropriate framework that leads to the formulation of an effective two-dimensional Hamiltonian. The success of the adiabatic approximation in this model is due to the symmetry of the transition state, not to a separation of time scales. Adjacent trajectories, i.e., those that do not exactly fulfill the reactivity conditions, have similar characteristics, but the quality of the approximation is lower. At

  17. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody-dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
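The cross-validated feature-to-function regression described here can be sketched generically: fit a regularized linear model and score held-out folds. The "antibody feature" matrix and "effector function" readout below are synthetic stand-ins, not RV144 data, and Ridge regression is just one plausible choice of model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 8 "antibody feature" columns and a quantitative
# "effector function" readout that is a noisy linear function of them.
X = rng.normal(size=(120, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=120)

# Cross-validation asks: do the features robustly predict the function
# on data the model has never seen?
r2_scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(r2_scores.mean())
```

A consistently high out-of-fold R² — rather than training-set fit — is what justifies calling a feature set predictive of functional outcomes.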

  18. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated into a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
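A minimal sketch of the two-stage idea — per-feature generative models yielding soft likelihood scores, then fusion into a hard decision — assuming scikit-learn GMMs and synthetic feature streams in place of real timbre/rhythm descriptors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def make_data(center, n=100, d=4):
    """Synthetic stand-in for one feature stream of one genre."""
    return rng.normal(center, 1.0, size=(n, d))

# Training features per genre; a real system would extract timbre and
# rhythm descriptors from audio instead.
train = {
    "genreA": {"timbre": make_data(0.0), "rhythm": make_data(2.0)},
    "genreB": {"timbre": make_data(3.0), "rhythm": make_data(-1.0)},
}

# Stage 1: one GMM per (genre, feature) pair yields soft scores.
models = {
    g: {f: GaussianMixture(n_components=2, random_state=0).fit(X)
        for f, X in feats.items()}
    for g, feats in train.items()
}

def classify(sample):
    # Stage 2: fuse per-feature average log-likelihoods into a hard decision.
    scores = {g: sum(m[f].score(sample[f]) for f in sample)
              for g, m in models.items()}
    return max(scores, key=scores.get)

song = {"timbre": make_data(0.0, n=5), "rhythm": make_data(2.0, n=5)}
print(classify(song))  # → genreA
```

Summing log-likelihoods at stage 2 treats the feature streams as independent evidence; weighted fusion schemes are an obvious refinement.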

  19. The 2013 European Seismic Hazard Model: key components and results

    OpenAIRE

    Jochen Woessner; Danciu Laurentiu; Domenico Giardini; Helen Crowley; Fabrice Cotton; G. Grünthal; Gianluca Valensise; Ronald Arvidsson; Roberto Basili; Mine Betül Demircioglu; Stefan Hiemer; Carlo Meletti; Roger W. Musson; Andrea N. Rovida; Karin Sesetyan

    2015-01-01

    The 2013 European Seismic Hazard Model (ESHM13) results from a community-based probabilistic seismic hazard assessment supported by the EU-FP7 project “Seismic Hazard Harmonization in Europe” (SHARE, 2009–2013). The ESHM13 is a consistent seismic hazard model for Europe and Turkey which overcomes the limitation of national borders and includes a thorough quantification of the uncertainties. It is the first completed regional effort contributing to the “Global Earthquake Model” initiative. It m...

  20. Modeling photoacoustic spectral features of micron-sized particles.

    Science.gov (United States)

    Strohm, Eric M; Gorelikov, Ivan; Matsuura, Naomi; Kolios, Michael C

    2014-10-07

    The photoacoustic signal generated from particles when irradiated by light is determined by attributes of the particle such as the size, speed of sound, morphology and the optical absorption coefficient. Unique features such as periodically varying minima and maxima are observed throughout the photoacoustic signal power spectrum, where the periodicity depends on these physical attributes. The frequency content of the photoacoustic signals can be used to obtain the physical attributes of unknown particles by comparison to analytical solutions of homogeneous symmetric geometric structures, such as spheres. However, analytical solutions do not exist for irregularly shaped particles, inhomogeneous particles or particles near structures. A finite element model (FEM) was used to simulate photoacoustic wave propagation from four different particle configurations: a homogeneous particle suspended in water, a homogeneous particle on a reflecting boundary, an inhomogeneous particle with an absorbing shell and non-absorbing core, and an irregularly shaped particle such as a red blood cell. Biocompatible perfluorocarbon droplets, 3-5 μm in diameter containing optically absorbing nanoparticles were used as the representative ideal particles, as they are spherical, homogeneous, optically translucent, and have known physical properties. The photoacoustic spectrum of micron-sized single droplets in suspension and on a reflecting boundary were measured over the frequency range of 100-500 MHz and compared directly to analytical models and the FEM. Good agreement between the analytical model, FEM and measured values were observed for a droplet in suspension, where the spectral minima agreed to within a 3.3 MHz standard deviation. For a droplet on a reflecting boundary, spectral features were correctly reproduced using the FEM but not the analytical model. The photoacoustic spectra from other common particle configurations such as particle with an absorbing shell and a

  1. EMF 7 model comparisons: key relationships and parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hickman, B.G.

    1983-12-01

    A simplified textbook model of aggregate demand and supply interprets the similarities and differences in the price and income responses of the various EMF 7 models to oil and policy shocks. The simplified model is a marriage of Hicks' classic IS-LM formulation of the Keynesian theory of effective demand with a rudimentary model of aggregate supply, combining a structural Phillips curve for wage determination and a markup theory of price determination. The reduced-form income equation from the fix-price IS-LM model is used to define an aggregate demand (AD) locus in P-Y space, showing alternative pairs of the implicit GNP deflator and real GNP which would simultaneously satisfy the saving-investment identity and the condition for money market equilibrium. An aggregate supply (AS) schedule is derived by a similar reduction of relations between output and labor demand, unemployment and wage inflation, and the wage-price-productivity nexus governing markup pricing. Given a particular econometric model it is possible to derive IS and LM curves algebraically. The resulting loci would show alternative combinations of interest rate and real income which equilibrate the real income identity on the IS side and the demand and supply of money on the LM side. By further substitution the reduced-form fix-price income relation could be obtained for direct quantification of the AD locus. The AS schedule is obtainable by algebraic reduction of the structural supply-side equations.

  2. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
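Two of the compared importance measures (IG and RF) can be sketched with scikit-learn on synthetic data; here `mutual_info_classif` stands in for Information Gain, and the generated dataset is a toy stand-in for the permafrost predictors, with only its first three features carrying signal.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Toy stand-in for a presence/absence dataset: 10 predictors, of which
# only the first 3 are informative (shuffle=False keeps them in front).
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

# Filter approach: information gain, via mutual information with the label.
ig = mutual_info_classif(X, y, random_state=0)

# Embedded approach: Random Forest impurity-based importances.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

top_ig = set(np.argsort(ig)[-3:])
top_rf = set(np.argsort(rf.feature_importances_)[-3:])
print(sorted(top_ig), sorted(top_rf))
```

Both rankings should concentrate on the informative predictors, illustrating how FS flags the irrelevant ones for removal before fitting the final distribution model.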

  3. Culture Models to Define Key Mediators of Cancer Matrix Remodeling

    Directory of Open Access Journals (Sweden)

    Emily Suzanne Fuller

    2014-03-01

    High-grade serous epithelial ovarian cancer (HG-SOC) is one of the most devastating gynecological cancers affecting women worldwide, with a poor survival rate despite clinical treatment advances. HG-SOC commonly metastasizes within the peritoneal cavity, primarily to the mesothelial cells of the omentum, which regulate an extracellular matrix (ECM) rich in collagens type I, III and IV along with laminin, vitronectin and fibronectin. Cancer cells depend on their ability to penetrate and invade secondary tissue sites to spread; however, a detailed understanding of the molecular mechanisms underlying these processes remains largely unknown. Given the high metastatic potential of HG-SOC and the associated poor clinical outcome, it is extremely important to identify the pathways and components responsible for the progression of this disease. In-vitro methods of recapitulating human disease processes are the critical first step in such investigations. In this context, establishment of an in-vitro ‘tumor-like’ microenvironment, such as 3D culture, to study early disease and metastasis of human HG-SOC is an important and highly insightful method. In recent years many such methods have been established to investigate the adhesion and invasion of human ovarian cancer cell lines. The aim of this review is to summarize recent developments in ovarian cancer culture systems and their use to investigate clinically relevant findings concerning the key players driving human HG-SOC.

  4. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool provides basic information for global warming research and is needed for the sustainable use of land resources. Digital terrain attributes are often use...... was selected, a total of 2,514,820 data mining models were constructed from 71 different grid sizes, ranging from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute

  6. Feature and Statistical Model Development in Structural Health Monitoring

    Science.gov (United States)

    Kim, Inho

    All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects introduced during their manufacturing processes and their exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have revealed extensive prior knowledge on the parts of SHM processes, such as operational evaluation, data processing, and feature extraction, few studies have approached the process systematically from the perspective of statistical model development. The first part of this dissertation reviews ultrasonic guided-wave-based structural health monitoring problems in terms of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the domain analysis are investigated by analytically deriving the conditions under which solutions are unique despite the ill-posedness, and are validated experimentally. Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations, with a spatially dense distribution along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depends on the phase velocity, are selected as the primary features that carry information regarding the wave propagation and scattering. The following part of this dissertation presents a novel damage-localization framework using a fully automated process. In order to construct the statistical model for autonomous damage localization, deep-learning techniques such as the restricted Boltzmann machine and deep belief network

  7. An Appraisal Model Based on a Synthetic Feature Selection Approach for Students’ Academic Achievement

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2017-11-01

    Obtaining necessary information (and even extracting hidden messages) from existing big data, and then transforming it into knowledge, is an important skill. Data mining technology has received increased attention in various fields in recent years because it can be used to find historical patterns and employ machine learning to aid in decision-making. When we find unexpected rules or patterns in the data, they are likely to be of high value. This paper proposes a synthetic feature selection approach (SFSA), which is combined with a support vector machine (SVM) to extract patterns and find the key features that influence students’ academic achievement. To verify the proposed model, two databases, namely “Student Profile” and “Tutorship Record”, were collected from an elementary school in Taiwan and concatenated into an integrated dataset, matched on students’ names, as the research dataset. The results indicate the following: (1) the accuracy of the proposed feature selection approach is better than that of the Minimum-Redundancy-Maximum-Relevance (mRMR) approach; (2) the proposed model outperforms the compared methods when the six least influential features have been deleted; and (3) the proposed model can enhance accuracy and facilitate the interpretation of patterns from a hybrid-type dataset of students’ academic achievement.

  8. Modelling energy demand of developing countries: Are the specific features adequately captured?

    International Nuclear Information System (INIS)

    Bhattacharyya, Subhes C.; Timilsina, Govinda R.

    2010-01-01

    This paper critically reviews existing energy demand forecasting methodologies, highlighting the methodological diversity and developments over the past four decades, in order to investigate whether the existing energy demand models are appropriate for capturing the specific features of developing countries. The study finds that two types of approaches, econometric and end-use accounting, are commonly used in the existing energy demand models. Although energy demand models have greatly evolved since the early seventies, key issues such as the poor-rich and urban-rural divides, traditional energy resources and differentiation between commercial and non-commercial energy commodities are often poorly reflected in these models. While the end-use energy accounting models with detailed sectoral representations produce more realistic projections as compared to the econometric models, they still suffer from huge data deficiencies, especially in developing countries. Development and maintenance of more detailed energy databases, further development of models to better reflect the developing country context and institutionalizing the modelling capacity in developing countries are the key requirements for energy demand modelling to deliver richer and more reliable input to policy formulation in developing countries.
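The end-use accounting approach mentioned above reduces, at its simplest, to summing activity level times energy intensity over sectors. The sector names and figures below are invented for illustration; real models disaggregate much further (fuels, technologies, urban/rural splits).

```python
# Minimal end-use accounting sketch: total demand is the sum over sectors
# of activity level x energy intensity. All figures are invented.
sectors = {
    "residential": {"activity": 1.2e6, "intensity": 0.8},    # households; toe per household
    "industry":    {"activity": 4.5e4, "intensity": 35.0},   # output units; toe per unit
    "transport":   {"activity": 2.0e9, "intensity": 3.0e-5}, # passenger-km; toe per p-km
}

demand = sum(s["activity"] * s["intensity"] for s in sectors.values())
print(round(demand))  # total demand in toe
```

The developing-country issues the paper raises (e.g. the commercial/non-commercial fuel split) would enter this structure as additional sectors or intensity terms — provided the underlying data exist.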

  9. Duplication Detection When Evolving Feature Models of Software Product Lines

    Directory of Open Access Journals (Sweden)

    Amal Khtira

    2015-10-01

    Full Text Available After the derivation of specific applications from a software product line, the applications keep evolving with respect to new customers’ requirements. In general, evolutions in most industrial projects are expressed using natural language, because it is the easiest and most flexible way for customers to express their needs. However, this means of communication has shown its limits in detecting defects, such as inconsistency and duplication, when evolving the existing models of the software product line. The aim of this paper is to transform the natural language specifications of new evolutions into a more formal representation using natural language processing. Then, an algorithm is proposed to automatically detect duplication between these specifications and the existing product line feature models. To instantiate the proposed solution, a tool is developed to automate the two operations.
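A minimal illustration of the general idea of duplication detection between textual specifications: compare each new natural-language requirement against existing feature descriptions with a token-overlap (Jaccard) score. This is only a sketch, not the paper's algorithm; the threshold and example sentences are assumptions:

```python
def tokens(text):
    """Lowercased word set, stripped of simple punctuation."""
    return {w.lower().strip(".,;:") for w in text.split()}

def jaccard(a, b):
    """Jaccard similarity between the token sets of two sentences."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def find_duplicates(new_specs, existing_features, threshold=0.5):
    """Flag new specification sentences that overlap heavily with an existing feature."""
    return [(spec, feat)
            for spec in new_specs
            for feat in existing_features
            if jaccard(spec, feat) >= threshold]

existing = ["The user can reset the account password by email"]
new_specs = ["The user can reset the password by email",
             "Generate a monthly sales report"]
```

Here only the first new specification is flagged as a likely duplicate; real approaches would normalize synonyms and sentence structure before comparing.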

  10. Unsupervised Posture Modeling Based on Spatial-Temporal Movement Features

    Science.gov (United States)

    Yan, Chunjuan

    Traditional posture modeling for human action recognition is based on silhouette segmentation, which is susceptible to noise from illumination variation, posture occlusions, and shadow interruptions. In this paper, we extract spatial-temporal movement features from human actions and adopt an unsupervised clustering method for salient posture learning. First, spatial-temporal interest points (STIPs) are extracted according to the properties of human movement; then a histogram of gradients is built to describe the distribution of STIPs in each frame for a single pose. The training samples are clustered with an unsupervised classification method, and the salient postures are modeled with a Gaussian mixture model (GMM) estimated by Expectation Maximization (EM). The experimental results show that our method can effectively and accurately recognize human action postures.
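The GMM/EM modeling step the abstract refers to can be sketched in one dimension: fit a two-component Gaussian mixture to feature values with alternating E- and M-steps. The data and initialization below are invented toy values, not the paper's posture features:

```python
import math

def em_gmm_1d(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture by Expectation-Maximization."""
    mu = [min(data), max(data)]          # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi
```

On data clustered around 0 and 5, the two recovered means converge near those cluster centers; each mixture component plays the role of one "salient posture".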

  11. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training strategies (error functions and training algorithms) for artificial neural networks is examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen’s method. Results reveal the best training strategies for neural networks and suggest their superiority over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...

  12. Analysis of Feature Models Using Alloy: A Survey

    Directory of Open Access Journals (Sweden)

    Anjali Sree-Kumar

    2016-03-01

    Full Text Available Feature Models (FMs) are a mechanism to model variability among a family of closely related software products, i.e. a software product line (SPL). Analysis of FMs using formal methods can reveal defects in the specification, such as inconsistencies that cause the product line to have no valid products. A popular framework used in research for FM analysis is Alloy, a lightweight formal modeling notation equipped with an efficient model finder. Several works in the literature have proposed different strategies to encode and analyze FMs using Alloy. However, there is little discussion on the relative merits of each proposal, making it difficult to select the most suitable encoding for a specific analysis need. In this paper, we describe and compare those strategies according to various criteria, such as the expressivity of the FM notation and the efficiency of the analysis. This survey is the first comparative study of research targeted towards using Alloy for FM analysis. The review aims to identify best practices on the use of Alloy as part of a framework for the automated extraction and analysis of rich FMs from natural language requirement specifications.
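The core analysis task, checking whether a feature model admits any valid products, can be illustrated by brute-force enumeration over a tiny hypothetical feature model. Alloy's model finder does this symbolically and far more efficiently; the 'editor' example and its constraints below are invented:

```python
from itertools import product

def valid_products(features, constraints):
    """Enumerate all configurations that satisfy every constraint
    (brute-force model finding over the feature model)."""
    out = []
    for bits in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        if all(c(cfg) for c in constraints):
            out.append({f for f, on in cfg.items() if on})
    return out

# Hypothetical feature model (names invented): mandatory root 'editor',
# 'spellcheck' requires 'dictionary', and 'lite' excludes 'spellcheck'.
features = ["editor", "spellcheck", "dictionary", "lite"]
constraints = [
    lambda c: c["editor"],                             # mandatory root
    lambda c: not c["spellcheck"] or c["dictionary"],  # requires
    lambda c: not (c["lite"] and c["spellcheck"]),     # excludes
]
```

An empty result would mean the FM is void (no valid products), which is exactly the kind of specification defect the surveyed Alloy encodings are used to detect.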

  13. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    Full Text Available The data presented here are related to the article titled “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017 [1]). This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian writers and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31). The construction characteristics were extracted from only seven characters: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded format of the writer, spatial and construction characteristics is made available in this Data in Brief to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between handwriting features and the nationality of the writer, whether the two nationalities can be differentiated, and to employ mathematical techniques capable of characterising the extracted features from each participant.

  14. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method for extracting multi-scale salient features on meshes, based on robust estimation of curvature at multiple scales. The correspondence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear at small scales, while features carrying more global shape information show up at large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for applications such as feature classification and viewpoint selection. Experiments show that our method is a helpful multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  15. Predicting Spatial Distribution of Key Honeybee Pests in Kenya Using Remotely Sensed and Bioclimatic Variables: Key Honeybee Pests Distribution Models

    Directory of Open Access Journals (Sweden)

    David M. Makori

    2017-02-01

    Full Text Available Beekeeping is indispensable to global food production. It is an alternative income source, especially in rural underdeveloped African settlements, and an important forest conservation incentive. However, dwindling honeybee colonies around the world are attributed to pests and diseases whose spatial distributions and influences are not well established. In this study, we used remotely sensed data to improve the reliability of pest ecological niche (EN) models and attain reliable pest distribution maps. Occurrence data on four pests (Aethina tumida, Galleria mellonella, Oplostomus haroldi and Varroa destructor) were collected from apiaries within four main agro-ecological regions responsible for over 80% of Kenya’s beekeeping. Africlim bioclimatic variables and derived normalized difference vegetation index (NDVI) variables were used to model their ecological niches using Maximum Entropy (MaxEnt). Combined precipitation variables had a high positive logit influence on the performance of all remotely sensed and biotic models. Remotely sensed vegetation variables had a substantial effect on the model, contributing up to 40.8% for G. mellonella, and regions with high rainfall seasonality were predicted to be high-risk areas. Projections (to 2055) indicated that, with the current climate change trend, these regions will experience increased honeybee pest risk. We conclude that honeybee pests can be modelled using bioclimatic data and remotely sensed variables in MaxEnt. Although the bioclimatic data were most relevant in all model results, incorporating vegetation seasonality variables to improve mapping of the ‘actual’ habitat of key honeybee pests and to identify risk and containment zones needs to be further investigated.

  16. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    Science.gov (United States)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km2 mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) on 6 hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the

  17. Keys to the House: Unlocking Residential Savings With Program Models for Home Energy Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Grevatt, Jim [Energy Futures Group (United States); Hoffman, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hoffmeyer, Dale [US Department of Energy, Washington, DC (United States)

    2017-07-05

    After more than 40 years of effort, energy efficiency program administrators and associated contractors still find it challenging to penetrate the home retrofit market, especially at levels commensurate with state and federal goals for energy savings and emissions reductions. Residential retrofit programs further have not coalesced around a reliably successful model. They still vary in design, implementation and performance, and they remain among the more difficult and costly options for acquiring savings in the residential sector. If programs are to contribute fully to meeting resource and policy objectives, administrators need to understand what program elements are key to acquiring residential savings as cost effectively as possible. To that end, the U.S. Department of Energy (DOE) sponsored a comprehensive review and analysis of home energy upgrade programs with proven track records, focusing on those with robustly verified savings and constituting good examples for replication. The study team reviewed evaluations for the period 2010 to 2014 for 134 programs that are funded by customers of investor-owned utilities. All are programs that promote multi-measure retrofits or major system upgrades. We paid particular attention to useful design and implementation features, costs, and savings for nearly 30 programs with rigorous evaluations of performance. This meta-analysis describes program models and implementation strategies for (1) direct install retrofits; (2) heating, ventilating and air-conditioning (HVAC) replacement and early retirement; and (3) comprehensive, whole-home retrofits. We analyze costs and impacts of these program models, in terms of both energy savings and emissions avoided. These program models can be useful guides as states consider expanding their strategies for acquiring energy savings as a resource and for emissions reductions. 
We also discuss the challenges of using evaluations to create program models that can be confidently applied in

  18. Fibroblast activation protein-α, a stromal cell surface protease, shapes key features of cancer associated fibroblasts through proteome and degradome alterations.

    Science.gov (United States)

    Koczorowska, M M; Tholen, S; Bucher, F; Lutz, L; Kizhakkedathu, J N; De Wever, O; Wellner, U F; Biniossek, M L; Stahl, A; Lassmann, S; Schilling, O

    2016-01-01

    Cancer associated fibroblasts (CAFs) constitute an abundant stromal component of most solid tumors. Fibroblast activation protein (FAP) α is a cell surface protease that is expressed by CAFs. We corroborate this expression profile by immunohistochemical analysis of colorectal cancer specimens. To better understand the tumor-contextual role of FAPα, we investigate how FAPα shapes functional and proteomic features of CAFs using loss- and gain-of-function cellular model systems. FAPα activity has a strong impact on the secreted CAF proteome ("secretome"), including reduced levels of anti-angiogenic factors, elevated levels of transforming growth factor (TGF) β, and an impact on matrix processing enzymes. Functionally, FAPα mildly induces sprout formation by human umbilical vein endothelial cells. Moreover, loss of FAPα leads to a more epithelial cellular phenotype, and this effect was rescued by exogenous application of TGFβ. In collagen contraction assays, FAPα induced a more contractile cellular phenotype. To characterize the proteolytic profile of FAPα, we investigated its specificity with proteome-derived peptide libraries and corroborated its preference for cleavage carboxy-terminal to proline residues. By "terminal amine labeling of substrates" (TAILS) we explored FAPα-dependent cleavage events. Although FAPα acts predominantly as an amino-dipeptidase, putative FAPα cleavage sites in collagens are present throughout the entire protein length. In contrast, putative FAPα cleavage sites in non-collagenous proteins cluster at the amino-terminus. The degradomic study highlights cell-contextual proteolysis by FAPα with distinct positional profiles. Generally, our findings link FAPα to key aspects of CAF biology and attribute an important role in tumor-stroma interaction to FAPα. Copyright © 2015 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  19. An Efficient Modeling and Simulation of Quantum Key Distribution Protocols Using OptiSystem™

    OpenAIRE

    Abudhahir Buhari,; Zuriati Ahmad Zukarnain; Shamla K. Subramaniam,; Hishamuddin Zainuddin; Suhairi Saharudin

    2012-01-01

    In this paper, we propose a modeling and simulation framework for quantum key distribution protocols using the commercial photonic simulator OptiSystem™. The simulation framework emphasizes the experimental components of quantum key distribution. We simulate BB84 operation with several security attack scenarios and noise-immune key distribution in this work. We also investigate the efficiency of the simulator’s built-in photonic components in terms of experimental configuration. This simulation provid...
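The sifting step of the BB84 protocol mentioned above can be sketched without any photonic simulation: Alice and Bob pick random bases, and only bits measured in matching bases are kept (roughly half). This is a classical stand-in for intuition, not OptiSystem code; names and parameters are invented:

```python
import random

def bb84_sift(n, seed=0):
    """Classical toy of BB84 key sifting (no eavesdropper, no channel noise):
    bits measured in matching bases are kept, the rest are discarded."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("XZ") for _ in range(n)]
    bob_bases = [rng.choice("XZ") for _ in range(n)]
    # matching basis -> Bob reads Alice's bit; mismatched -> random outcome (discarded anyway)
    bob_bits = [a if ab == bb else rng.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    return [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
            if ab == bb]
```

In a full simulation, an eavesdropper measuring in random bases would introduce errors into the sifted key, which is what the security-attack scenarios in the paper probe.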

  20. Structure-function analyses of a PL24 family ulvan lyase reveal key features and suggest its catalytic mechanism.

    Science.gov (United States)

    Ulaganathan, ThirumalaiSelvi; Helbert, William; Kopel, Moran; Banin, Ehud; Cygler, Miroslaw

    2018-01-30

    Ulvan is a major cell wall component of green algae of the genus Ulva and some marine bacteria encode enzymes that can degrade this polysaccharide. The first ulvan degrading lyases have been recently characterized and several putative ulvan lyases have been recombinantly expressed, confirmed as ulvan lyases and partially characterized. Two families of ulvan degrading lyases, PL24 and PL25, have recently been established. The PL24 lyase LOR_107 from the bacterial Alteromonadales sp. strain LOR degrades ulvan endolytically, cleaving the bond at the C4 of a glucuronic acid. However, the mechanism and LOR_107 structural features involved are unknown. We present here the crystal structure of LOR_107, representing the first PL24 family structure. We found that LOR_107 adopts a seven-bladed β-propeller fold with a deep canyon on one side of the protein. Comparative sequence analysis revealed a cluster of conserved residues within this canyon, and site-directed mutagenesis disclosed several residues essential for catalysis. We also found that LOR_107 uses the His/Tyr catalytic mechanism, common to several PL families. We captured a tetrasaccharide substrate in the structures of two inactive mutants, which indicated a two-step binding event, with the first substrate interaction near the top of the canyon coordinated by Arg-320, followed by sliding of the substrate into the canyon toward the active-site residues. Surprisingly, the LOR_107 structure was very similar to that of PL25 family PLSV_3936, despite only ~14% sequence identity between the two enzymes. On the basis of our structural and mutational analyses, we propose a catalytic mechanism for LOR_107 that differs from the typical His/Tyr mechanism. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  1. A signal-detection-based diagnostic-feature-detection model of eyewitness identification.

    Science.gov (United States)

    Wixted, John T; Mickes, Laura

    2014-04-01

    The theoretical understanding of eyewitness identifications made from a police lineup has long been guided by the distinction between absolute and relative decision strategies. In addition, the accuracy of identifications associated with different eyewitness memory procedures has long been evaluated using measures like the diagnosticity ratio (the correct identification rate divided by the false identification rate). Framed in terms of signal-detection theory, both the absolute/relative distinction and the diagnosticity ratio are mainly relevant to response bias while remaining silent about the key issue of diagnostic accuracy, or discriminability (i.e., the ability to tell the difference between innocent and guilty suspects in a lineup). Here, we propose a signal-detection-based model of eyewitness identification, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability. Recent ROC analyses indicate that the simultaneous presentation of faces in a lineup yields higher discriminability than the presentation of faces in isolation, and we propose a diagnostic feature-detection hypothesis to account for that result. According to this hypothesis, the simultaneous presentation of faces allows the eyewitness to appreciate that certain facial features (viz., those that are shared by everyone in the lineup) are non-diagnostic of guilt. To the extent that those non-diagnostic features are discounted in favor of potentially more diagnostic features, the ability to discriminate innocent from guilty suspects will be enhanced.
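The distinction the authors draw between discriminability and response bias can be made concrete with the standard equal-variance signal-detection formulas, d' = z(H) − z(F) and c = −(z(H) + z(F))/2. A small sketch using Python's statistics.NormalDist (the example rates are illustrative, not data from the paper):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def d_prime(hit_rate, fa_rate):
    """Equal-variance discriminability: d' = z(H) - z(F)."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias: c = -(z(H) + z(F)) / 2."""
    return -(z(hit_rate) + z(fa_rate)) / 2
```

Two lineup procedures can share the same d' yet yield very different diagnosticity ratios H/F as the criterion shifts, which is why the authors argue for ROC analysis rather than the ratio when comparing procedures.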

  2. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    by the notion of community structure such that the edge density within groups is higher than between groups. Our model further assumes that entities can have different propensities of generating links in one of the modes. The proposed framework is contrasted on both synthetic and real bi-partite networks to the infinite relational model and the infinite Bernoulli mixture model. We find that the model provides a new latent feature representation of structure while performing close to existing models in link prediction. Our extension of the notion of communities and collapsed inference to binary latent feature representations in bipartite networks provides a new framework for accounting for structure in bi-partite networks, yielding interpretable representations that characterize structure well, as quantified by link prediction.

  3. Chromatin extrusion explains key features of loop and domain formation in wild-type and engineered genomes.

    Science.gov (United States)

    Sanborn, Adrian L; Rao, Suhas S P; Huang, Su-Chen; Durand, Neva C; Huntley, Miriam H; Jewett, Andrew I; Bochkov, Ivan D; Chinnappan, Dharmaraj; Cutkosky, Ashok; Li, Jian; Geeting, Kristopher P; Gnirke, Andreas; Melnikov, Alexandre; McKenna, Doug; Stamenova, Elena K; Lander, Eric S; Aiden, Erez Lieberman

    2015-11-24

    We recently used in situ Hi-C to create kilobase-resolution 3D maps of mammalian genomes. Here, we combine these maps with new Hi-C, microscopy, and genome-editing experiments to study the physical structure of chromatin fibers, domains, and loops. We find that the observed contact domains are inconsistent with the equilibrium state for an ordinary condensed polymer. Combining Hi-C data and novel mathematical theorems, we show that contact domains are also not consistent with a fractal globule. Instead, we use physical simulations to study two models of genome folding. In one, intermonomer attraction during polymer condensation leads to formation of an anisotropic "tension globule." In the other, CCCTC-binding factor (CTCF) and cohesin act together to extrude unknotted loops during interphase. Both models are consistent with the observed contact domains and with the observation that contact domains tend to form inside loops. However, the extrusion model explains a far wider array of observations, such as why loops tend not to overlap and why the CTCF-binding motifs at pairs of loop anchors lie in the convergent orientation. Finally, we perform 13 genome-editing experiments examining the effect of altering CTCF-binding sites on chromatin folding. The convergent rule correctly predicts the affected loops in every case. Moreover, the extrusion model accurately predicts in silico the 3D maps resulting from each experiment using only the location of CTCF-binding sites in the WT. Thus, we show that it is possible to disrupt, restore, and move loops and domains using targeted mutations as small as a single base pair.
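The convergent-rule behavior of the extrusion model can be caricatured on a one-dimensional lattice: the two ends of an extruder slide outward from a loading position and each halts at the first CTCF motif pointing into the loop. The positions and motif layout below are invented for illustration, not genomic coordinates:

```python
def extrude(start, ctcf, n=100):
    """Toy 1-D loop extrusion: both ends slide outward from `start` and each halts
    at the first CTCF motif oriented into the loop (the convergent rule)."""
    left = right = start
    while left > 0 and ctcf.get(left) != ">":
        left -= 1
    while right < n and ctcf.get(right) != "<":
        right += 1
    return left, right

# Invented motif layout: forward motifs ('>') at 10 and 40, a reverse motif ('<') at 70.
ctcf = {10: ">", 40: ">", 70: "<"}
```

Starting at position 50 the loop anchors at (40, 70), while starting at 20 it anchors at (10, 70): loops formed this way share anchors and nest rather than overlap, echoing the model's explanation of convergent anchor motifs.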

  4. Self-interacting fields - key feature of the Standard Model of physics Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    The messengers of the weak interaction - W and Z particles - were discovered at CERN in 1983. After this breakthrough, LEP mass-produced W and Z particles so physicists could study them and make careful measurements. These precision studies have shown that W and Z particles behave very differently from photons, the messengers of the electromagnetic interaction. Once emitted by an electrically charged particle, a photon has to terminate its mission at another electrically charged particle. Photons do not mingle with each other. W and Z particles, on the other hand, do. The LEP experiments were the first to see this intermingling of messenger particles.

  5. On-line computer system to minimize laser injuries during surgery: preliminary system layout and proposal of the key features.

    Science.gov (United States)

    Canestri, F

    1999-01-01

    The aim of this paper is to investigate some new user interface ideas and related application packages which aim to improve the degree of safety in an operating room during surgical operations in which an invasive laser beam is deployed. The overall value of the proposition is that a means is provided which ensures the successful completion of the surgical case while minimizing the risk of thermal and mechanical injuries to healthy tissues adjacent to the surgical field. According to surgeons operating with a variety of CO2 lasers available at both the National Cancer Institute in Milan, Italy, and the Sackler School of Medicine, Tel Aviv University, Israel, each laser device presents different cutting and coagulation properties. In order to identify which 'ideal' procedure might corroborate the subjective impression of each surgeon and also to provide one common tool to ensure procedures with a high level of safety, the author has worked for several months with surgeons and technicians of both Institutions to define the general design of a new on-line surgical operation planning and design system to be used during the pre-operative briefing activities and also as a consultation tool during operation. This software package will be developed and tested on both 'C' and FORTRAN compilers running on a commercially available PC which is driving a continuous wave (CW) CO2 laser device via its Instrument Bus interface. The present proposal describes the details of a software package called LCA (Laser-beam Controller and Adviser) which performs several controls in parallel on the key output parameters of a laser beam device during its utilization in delicate surgical operations. The required performances of this device needed during a given surgical operation are pre-simulated and compared against the well-known safety limits, which are stored in the computer's mass storage. If the surgeon's decision about the laser device set-up are considered to be too close to the

  6. Microalgal biohydrogen production considering light energy and mixing time as the two key features for scale-up.

    Science.gov (United States)

    Oncel, S; Sabankay, M

    2012-10-01

    This study focuses on a scale-up procedure considering two vital parameters for microalgae cultivation, light energy and mixing, taking Chlamydomonas reinhardtii as the model microorganism. Applying a two-stage hydrogen production protocol to 1 L flat-type and 2.5 L tank-type photobioreactors, hydrogen production was investigated at constant light energy and mixing time. The conditions that provided the shortest transfer time to an anaerobic culture (light energy 2.96 kJ s(-1) m(-3), mixing time 1 min) and the highest hydrogen production rate (light energy 1.22 kJ s(-1) m(-3), mixing time 2.5 min) were applied to a 5 L photobioreactor. The final hydrogen production for the 5 L system after 192 h was measured as 195 ± 10 mL, which is comparable with the other systems and validates the scale-up procedure. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Edge and line feature extraction based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges from noisy and/or blurred images.

  8. Modelling Imperfect Product Line Requirements with Fuzzy Feature Diagrams

    NARCIS (Netherlands)

    Noppen, J.A.R.; van den Broek, P.M.; Weston, Nathan; Rashid, Awais

    In this article, we identify that partial, vague and conflicting information can severely limit the effectiveness of approaches that derive feature trees from textual requirement specifications. We examine the impact such imperfect information has on feature tree extraction and we propose the use of

  9. Riparian erosion vulnerability model based on environmental features.

    Science.gov (United States)

    Botero-Acosta, Alejandra; Chu, Maria L; Guzman, Jorge A; Starks, Patrick J; Moriasi, Daniel N

    2017-12-01

    Riparian erosion is one of the major causes of sediment and contaminant load to streams, degradation of riparian wildlife habitats, and land loss hazards. Land and soil management practices are implemented as conservation and restoration measures to mitigate the environmental problems brought about by riparian erosion. This, however, requires the identification of areas vulnerable to soil erosion. Because of the complex interactions between the different mechanisms that govern soil erosion and the inherent uncertainties involved in quantifying these processes, assessing erosion vulnerability at the watershed scale is challenging. The main objective of this study was to develop a methodology to identify areas along the riparian zone that are susceptible to erosion. The methodology was developed by integrating the physically based watershed model MIKE-SHE, to simulate water movement, and a habitat suitability model, MaxEnt, to quantify the probability of presence of elevation changes (i.e., erosion) across the watershed. The presence of elevation changes was estimated from two LiDAR-based elevation datasets acquired in 2009 and 2012. The changes in elevation were grouped into four categories: low (0.5 - 0.7 m), medium (0.7 - 1.0 m), high (1.0 - 1.7 m) and very high (1.7 - 5.9 m), with each category treated as a studied "species". The categories' locations were then used as the "species location" map in MaxEnt. The environmental features used as constraints on the presence of erosion were land cover, soil, stream power index, overland flow, lateral inflow, and discharge. The modeling framework was evaluated in the Fort Cobb Reservoir Experimental watershed in south-central Oklahoma. Results showed that the most vulnerable areas for erosion were located in the upper riparian zones of the Cobb and Lake sub-watersheds. The main waterways of these sub-watersheds were also found to be prone to streambank erosion. Approximately 80% of the riparian zone (streambank

  10. Projecting biodiversity and wood production in future forest landscapes: 15 key modeling considerations.

    Science.gov (United States)

    Felton, Adam; Ranius, Thomas; Roberge, Jean-Michel; Öhman, Karin; Lämås, Tomas; Hynynen, Jari; Juutinen, Artti; Mönkkönen, Mikko; Nilsson, Urban; Lundmark, Tomas; Nordin, Annika

    2017-07-15

    A variety of modeling approaches can be used to project the future development of forest systems and help assess the implications of different management alternatives for biodiversity and ecosystem services. This diversity of approaches does, however, present both an opportunity and an obstacle for those trying to decide which modeling technique to apply and interpreting the management implications of model output. Furthermore, the breadth of issues relevant to addressing key questions related to forest ecology, conservation biology, silviculture, and economics requires insights stemming from a number of distinct scientific disciplines. As forest planners, conservation ecologists, ecological economists and silviculturalists experienced with modeling trade-offs and synergies between biodiversity and wood biomass production, we identified fifteen key considerations relevant to assessing the pros and cons of alternative modeling approaches. Specifically, we identified key considerations linked to study question formulation, modeling forest dynamics, forest processes, study landscapes, spatial and temporal aspects, and the key response metrics - biodiversity and wood biomass production - as well as dealing with trade-offs and uncertainties. We also provide illustrative examples from the modeling literature stemming from the key considerations assessed. We use our findings to reiterate the need for explicitly addressing and conveying the limitations and uncertainties of any modeling approach taken, and the need for interdisciplinary research efforts when addressing the conservation of biodiversity and sustainable use of environmental resources. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Replication of surface features from a master model to an amorphous metallic article

    Science.gov (United States)

    Johnson, William L.; Bakke, Eric; Peker, Atakan

    1999-01-01

    The surface features of an article are replicated by preparing a master model having a preselected surface feature thereon which is to be replicated, and replicating the preselected surface feature of the master model. The replication is accomplished by providing a piece of a bulk-solidifying amorphous metallic alloy, contacting the piece of the bulk-solidifying amorphous metallic alloy to the surface of the master model at an elevated replication temperature to transfer a negative copy of the preselected surface feature of the master model to the piece, and separating the piece having the negative copy of the preselected surface feature from the master model.

  12. Key features for more successful place-based sustainability research on social-ecological systems: a Programme on Ecosystem Change and Society (PECS) perspective

    Directory of Open Access Journals (Sweden)

    Patricia Balvanera

    2017-03-01

    Full Text Available The emerging discipline of sustainability science is focused explicitly on the dynamic interactions between nature and society and is committed to research that spans multiple scales and can support transitions toward greater sustainability. Because a growing body of place-based social-ecological sustainability research (PBSESR) has emerged in recent decades, there is a growing need to understand better how to maximize the effectiveness of this work. The Programme on Ecosystem Change and Society (PECS) provides a unique opportunity for synthesizing insights gained from this research community on key features that may contribute to the relative success of PBSESR. We surveyed the leaders of PECS-affiliated projects using a combination of open, closed, and semistructured questions to identify which features of a research project are perceived to contribute to successful research design and implementation. We assessed six types of research features: problem orientation, research team, and contextual, conceptual, methodological, and evaluative features. We examined the desirable and undesirable aspects of each feature, the enabling factors and obstacles associated with project implementation, and asked respondents to assess the performance of their own projects in relation to these features. Responses were obtained from 25 projects working in 42 social-ecological study cases within 25 countries. Factors that contribute to the overall success of PBSESR included: explicitly addressing integrated social-ecological systems; a focus on solution- and transformation-oriented research; adaptation of studies to their local context; trusted, long-term, and frequent engagement with stakeholders and partners; and an early definition of the purpose and scope of research. Factors that hindered the success of PBSESR included: the complexities inherent to social-ecological systems, the imposition of particular epistemologies and methods on the wider research group

  13. The SSB-positive/SSA-negative antibody profile is not associated with key phenotypic features of Sjögren's syndrome

    DEFF Research Database (Denmark)

    Baer, Alan N; McAdams DeMarco, Mara; Shiboski, Stephen C

    2015-01-01

    phenotypic features. Among SICCA participants classified with SS on the basis of the American-European Consensus Group or American College of Rheumatology criteria, only 2% required the anti-SSB-alone test result to meet these criteria. CONCLUSIONS: The presence of anti-SSB, without anti-SSA antibodies, had… participants, 2061 (63%) had negative anti-SSA/anti-SSB, 1162 (35%) had anti-SSA with or without anti-SSB, and 74 (2%) anti-SSB alone. Key SS phenotypic features were more prevalent and had measures indicative of greater disease activity in those participants with anti-SSA, either alone or with anti-SSB, than… in those with anti-SSB alone or negative SSA/SSB serology. These between-group differences were highly significant and not explained by confounding by age, race/ethnicity or gender. Participants with anti-SSB alone were comparable to those with negative SSA/SSB serology in their association with these key…

  14. A Novel DBN Feature Fusion Model for Cross-Corpus Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Zou Cairong

    2016-01-01

    Full Text Available Feature fusion from separate sources is a current technical difficulty in cross-corpus speech emotion recognition. The purpose of this paper is, based on Deep Belief Nets (DBN) in deep learning, to use the emotional information hidden in the speech spectrum diagram (spectrogram) as image features and then to fuse them with the traditional emotion features. First, based on spectrogram analysis with the STB/Itti model, new spectrogram features are extracted from the color, the brightness, and the orientation, respectively; then two alternative DBN models fuse the traditional and spectrogram features, which increases the scale of the feature subset and its ability to characterize emotion. In experiments on the ABC database and Chinese corpora, the new feature subset improves cross-corpus recognition accuracy by 8.8% compared with traditional speech emotion features. The proposed method provides a new idea for feature fusion in emotion recognition.
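The fusion step described in this record can be illustrated with a minimal sketch. The DBN fusion stage is replaced here by simple normalise-and-concatenate early fusion plus a perceptron; the feature counts, data, and labels are synthetic stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 10 "traditional" acoustic features and 15
# spectrogram-derived features (colour/brightness/orientation statistics).
traditional = rng.normal(size=(200, 10))
spectrogram = rng.normal(size=(200, 15))
labels = (traditional[:, 0] + spectrogram[:, 0] > 0).astype(int)

def zscore(x):
    # Normalise each column so neither feature set dominates the other.
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Early fusion: normalise each feature set, then concatenate into one subset.
fused = np.hstack([zscore(traditional), zscore(spectrogram)])

# A trivial linear classifier (perceptron) on the fused features, as a
# placeholder for the paper's DBN-based fusion stage.
w = np.zeros(fused.shape[1])
b = 0.0
for _ in range(20):
    for x, y in zip(fused, labels):
        pred = 1 if x @ w + b > 0 else 0
        w += (y - pred) * x
        b += (y - pred)

accuracy = np.mean(((fused @ w + b) > 0).astype(int) == labels)
```

The point of the sketch is only the shape of the pipeline: two feature views, one fused subset, one classifier trained on the enlarged representation.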

  15. Key features of wave energy.

    Science.gov (United States)

    Rainey, R C T

    2012-01-28

    For a weak point source or dipole, or a small body operating as either, we show that the power from a wave energy converter (WEC) is the product of the particle velocity in the waves, and the wave force (suitably defined). There is thus a strong analogy with a wind or tidal turbine, where the power is the product of the fluid velocity through the turbine, and the force on it. As a first approximation, the cost of a structure is controlled by the force it has to carry, which governs its strength, and the distance it has to be carried, which governs its size. Thus, WECs are at a disadvantage compared with wind and tidal turbines because the fluid velocities are lower, and hence the forces are higher. On the other hand, the distances involved are lower. As with turbines, the implication is also that a WEC must make the most of its force-carrying ability-ideally, to carry its maximum force all the time, the '100% sweating WEC'. It must be able to limit the wave force on it in larger waves, ultimately becoming near-transparent to them in the survival condition-just like a turbine in extreme conditions, which can stop and feather its blades. A turbine of any force rating can achieve its maximum force in low wind speeds, if its diameter is sufficiently large. This is not possible with a simple monopole or dipole WEC, however, because of the 'nλ/2π' capture width limits. To achieve reasonable 'sweating' in typical wave climates, the force is limited to about 1 MN for a monopole device, or 2 MN for a dipole. The conclusion is that the future of wave energy is in devices that are not simple monopoles or dipoles, but multi-body devices or other shapes equivalent to arrays.
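The quantities discussed in this abstract can be made concrete with a small sketch, assuming deep-water linear wave theory for the wavelength; the 8 s period and 1 MN force are illustrative numbers only, and the λ/2π figure is the monopole (n = 1) capture width limit mentioned above.

```python
import math

def deepwater_wavelength(period_s: float, g: float = 9.81) -> float:
    """Deep-water dispersion relation: wavelength = g * T^2 / (2*pi)."""
    return g * period_s**2 / (2 * math.pi)

def monopole_capture_width(wavelength_m: float) -> float:
    """Theoretical maximum capture width of a monopole (point-source) WEC:
    lambda / (2*pi), i.e. the n = 1 case of the n*lambda/(2*pi) limits."""
    return wavelength_m / (2 * math.pi)

def wec_power(force_n: float, particle_velocity_ms: float) -> float:
    """Power as the product of wave force and particle velocity
    (the turbine analogy drawn in the abstract)."""
    return force_n * particle_velocity_ms

# Example: an 8 s swell.
lam = deepwater_wavelength(8.0)    # roughly 100 m
cw = monopole_capture_width(lam)   # roughly 16 m
p = wec_power(1.0e6, 1.0)          # a 1 MN device at 1 m/s particle velocity
```

Note how the capture width, and hence the absorbable power, is fixed by the wavelength rather than by device diameter, which is the asymmetry with turbines the abstract highlights.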

  16. Characteristics of evolving models of care for arthritis: A key informant study

    Directory of Open Access Journals (Sweden)

    Veinot Paula

    2008-07-01

    Full Text Available Abstract Background The burden of arthritis is increasing in the face of diminishing health human resources to deliver care. In response, innovative models of care delivery are developing to facilitate access to quality care. Most models have developed in response to local needs with limited evaluation. The primary objective of this study is to (a) examine the range of models of care that deliver specialist services using a medical/surgical specialist and at least one other health care provider and (b) document the strengths and challenges of the identified models. A secondary objective is to identify key elements of best practice models of care for arthritis. Methods Semi-structured interviews were conducted with a sample of key informants with expertise in arthritis from jurisdictions with primarily publicly-funded health care systems. Qualitative data were analyzed using a constant comparative approach to identify common types of models of care, strengths and challenges of models, and key components of arthritis care. Results Seventy-four key informants were interviewed from six countries. Five main types of models of care emerged: (1) specialized arthritis programs deliver comprehensive, multidisciplinary team care for arthritis. Two models were identified using health care providers (e.g. nurses or physiotherapists) in expanded clinical roles: (2) triage of patients with musculoskeletal conditions to the appropriate services including specialists; and (3) ongoing management in collaboration with a specialist. Two models promoting rural access were (4) rural consultation support and (5) telemedicine. Key informants described important components of models of care including knowledgeable health professionals and patients. Conclusion A range of models of care for arthritis have been developed. This classification can be used as a framework for discussing care delivery. 
Areas for development include integration of care across the continuum, including primary

  17. Local and regional energy companies offering energy services: Key activities and implications for the business model

    International Nuclear Information System (INIS)

    Kindström, Daniel; Ottosson, Mikael

    2016-01-01

    Highlights: • Many companies providing energy services are experiencing difficulties. • This research identifies key activities for the provision of energy services. • Findings are aggregated to the business-model level providing managerial insights. • This research identifies two different business model innovation paths. • Energy companies may need to renew parts of, or the entire, business model. - Abstract: Energy services play a key role in increasing energy efficiency in the industry. The key actors in these services are the local and regional energy companies that are increasingly implementing energy services as part of their market offering and developing service portfolios. Although expectations for energy services have been high, progress has so far been limited, and many companies offering energy services, including energy companies, are experiencing difficulties in implementing energy services and providing them to the market. Overall, this research examines what is needed for local and regional energy companies to successfully implement energy services (and consequently provide them to the market). In doing this, a two-stage process is used: first, we identify key activities for the successful implementation of energy services, and second, we aggregate the findings to the business model level. This research demonstrates that to succeed in implementing energy services, an energy company may need to renew parts or all of its existing product-based business model, formulate a new business model, or develop coexisting multiple business models. By discussing two distinct business model innovation processes, this research demonstrates that there can be different paths to success.

  18. A New Key Predistribution Scheme for Multiphase Sensor Networks Using a New Deployment Model

    Directory of Open Access Journals (Sweden)

    Boqing Zhou

    2014-01-01

    Full Text Available During the lifecycle of sensor networks, when the existing key predistribution schemes that use deployment knowledge are applied to pairwise key establishment and authentication between nodes, a new challenge arises: either the resilience against node capture attacks or the global connectivity will significantly decrease with time. In this paper, a new deployment model is developed for multiphase deployment sensor networks, and a new key management scheme is then proposed. Compared with the existing schemes using deployment knowledge, our scheme has better performance in global connectivity and in resilience against node capture attacks throughout the network lifecycle.
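For context, the baseline random key predistribution that such schemes build on can be sketched as follows. This is the classic analysis (probability that two nodes share at least one preloaded key); the pool and ring sizes are illustrative assumptions, and the sketch omits the paper's deployment-knowledge and multiphase aspects entirely.

```python
from math import comb

def share_probability(pool_size: int, ring_size: int) -> float:
    """Probability that two nodes, each holding ring_size keys drawn without
    replacement from a common pool of pool_size keys, share at least one key.
    Complement of the hypergeometric 'no overlap' case."""
    return 1.0 - comb(pool_size - ring_size, ring_size) / comb(pool_size, ring_size)

# Illustrative numbers: a 10,000-key pool with 100 keys per node gives each
# pair of neighbours a roughly 60% chance of a direct shared key.
p = share_probability(10_000, 100)
```

The connectivity/resilience trade-off the abstract describes lives in these two parameters: larger rings raise connectivity but expose more of the pool when a node is captured.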

  19. Feature selection and classification model construction on type 2 diabetic patients' data.

    Science.gov (United States)

    Huang, Yue; McCullagh, Paul; Black, Norman; Harper, Roy

    2007-11-01

    Diabetes affects between 2% and 4% of the global population (up to 10% in the over 65 age group), and its avoidance and effective treatment are undoubtedly crucial public health and health economics issues in the 21st century. The aim of this research was to identify significant factors influencing diabetes control, by applying feature selection to a working patient management system to assist with ranking, classification and knowledge discovery. The classification models can be used to determine individuals in the population with poor diabetes control status based on physiological and examination factors. The diabetic patients' information was collected by Ulster Community and Hospitals Trust (UCHT) from 2000 to 2004 as part of clinical management. In order to discover key predictors and latent knowledge, data mining techniques were applied. To improve computational efficiency, a feature selection technique, feature selection via supervised model construction (FSSMC), an optimisation of ReliefF, was used to rank the important attributes affecting diabetic control. After selecting suitable features, three complementary classification techniques (Naïve Bayes, IB1 and C4.5) were applied to the data to predict how well the patients' condition was controlled. FSSMC identified patients' 'age', 'diagnosis duration', the need for 'insulin treatment', 'random blood glucose' measurement and 'diet treatment' as the most important factors influencing blood glucose control. Using the reduced features, a best predictive accuracy of 95% and sensitivity of 98% was achieved. The influence of factors, such as 'type of care' delivered, the use of 'home monitoring', and the importance of 'smoking' on outcome can contribute to domain knowledge in diabetes control. In the care of patients with diabetes, the most important factors identified, patients' 'age', 'diagnosis duration' and 'family history', are beyond the control of physicians. 
Treatment methods such as 'insulin', 'diet
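A filter-style feature ranking of the kind FSSMC performs can be sketched in a few lines. Absolute correlation with the class label stands in here for the ReliefF-derived relevance scores, and the data, feature count, and signal-carrying columns are synthetic assumptions, not the UCHT data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical patient data: 6 candidate predictors, binary control status.
# Only columns 0 and 2 actually carry signal in this toy set-up.
n = 500
X = rng.normal(size=(n, 6))
y = (0.9 * X[:, 0] - 0.8 * X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(int)

def rank_features(X, y):
    """Filter-style ranking by absolute correlation with the class label,
    a simple stand-in for FSSMC/ReliefF relevance scores."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]

ranking = rank_features(X, y)
top2 = set(int(j) for j in ranking[:2])
```

After a ranking like this, the reduced feature subset would be handed to the classifiers (Naïve Bayes, IB1, C4.5 in the paper) for the actual prediction step.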

  20. Chemical Structure Identification in Metabolomics: Computational Modeling of Experimental Features

    Directory of Open Access Journals (Sweden)

    Lochana C Menikarachchi

    2013-02-01

    Full Text Available The identification of compounds in complex mixtures remains challenging despite recent advances in analytical techniques. At present, no single method can detect and quantify the vast array of compounds that might be of potential interest in metabolomics studies. High performance liquid chromatography/mass spectrometry (HPLC/MS) is often considered the analytical method of choice for analysis of biofluids. The positive identification of an unknown involves matching at least two orthogonal HPLC/MS measurements (exact mass, retention index, drift time, etc.) against an authentic standard. However, due to the limited availability of authentic standards, an alternative approach involves matching known and measured features of the unknown compound with computationally predicted features for a set of candidate compounds downloaded from a chemical database. Computationally predicted features include retention index, ECOM50 (energy required to decompose 50% of a selected precursor ion in a collision induced dissociation cell), drift time, whether the unknown compound is biological or synthetic, and a collision induced dissociation (CID) spectrum. Computational predictions are used to filter the initial “bin” of candidate compounds. The final output is a ranked list of candidates that best match the known and measured features. In this mini review, we discuss cheminformatics methods underlying this database search-filter identification approach.
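The search-filter-rank workflow can be sketched as below. The candidate records, tolerances, and the combined-deviation ranking are illustrative assumptions for two of the features mentioned (exact mass and predicted retention index), not the authors' actual scoring scheme.

```python
# Hypothetical candidate records: (name, exact_mass, predicted_retention_index).
candidates = [
    ("cand_A", 180.0634, 1520.0),
    ("cand_B", 180.0641, 2210.0),
    ("cand_C", 180.1201, 1505.0),
    ("cand_D", 179.9120, 1500.0),
]

def filter_candidates(cands, measured_mass, measured_ri,
                      mass_tol_ppm=10.0, ri_tol=100.0):
    """Keep candidates whose exact mass matches within a ppm tolerance and
    whose computationally predicted retention index falls inside a window
    around the measured value, then rank by combined normalised deviation."""
    kept = []
    for name, mass, ri in cands:
        ppm = abs(mass - measured_mass) / measured_mass * 1e6
        dri = abs(ri - measured_ri)
        if ppm <= mass_tol_ppm and dri <= ri_tol:
            kept.append((ppm / mass_tol_ppm + dri / ri_tol, name))
    return [name for _, name in sorted(kept)]

ranked = filter_candidates(candidates, measured_mass=180.0634, measured_ri=1510.0)
```

Each additional orthogonal feature (drift time, ECOM50, predicted CID spectrum) would simply add another filter and another term to the deviation score, shrinking the candidate "bin" further.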

  1. Quantification of key parameters for treating contrails in a large scale climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ponater, M.; Gierens, K. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere

    1997-12-01

    The general objective of this project, to determine contrail key parameters with respect to their climate effect, has been approached by three tasks: (1) quantification of microphysical key parameters, (2) development of a contrail coverage parametrization for climate models, and (3) determination of the worldwide coverage with persistent contrails due to present day air traffic. The microphysical key parameters are determined using microphysical box model simulations. The contrail parametrization was achieved by deriving (from aircraft measurements) the instantaneous fluctuations of temperature and relative humidity that occur on spatial scales beyond the resolution of climate models. The global and annual mean coverage by persistent contrails was calculated from ECMWF numerical analyses and from actual air traffic density. It was found to be currently about 0.1%, though the atmosphere has the potential to form persistent contrails over a much larger area. (orig.) 144 figs., 42 tabs., 497 refs.

  2. Key-Aspects of Scientific Modeling Exemplified by School Science Models: Some Units for Teaching Contextualized Scientific Methodology

    Science.gov (United States)

    Develaki, Maria

    2016-01-01

    Models and modeling are core elements of scientific methods and consequently also are of key importance for the conception and teaching of scientific methodology. The epistemology of models and its transfer and adaption to nature of science education are not, however, simple themes. We present some conceptual units in which school science models…

  3. Key factors regulating the mass delivery of macromolecules to model cell membranes

    DEFF Research Database (Denmark)

    Campbell, Richard A.; Watkins, Erik B.; Jagalski, Vivien

    2014-01-01

    We show that both gravity and electrostatics are key factors regulating interactions between model cell membranes and self-assembled liquid crystalline aggregates of dendrimers and phospholipids. The system is a proxy for the trafficking of reservoirs of therapeutic drugs to cell membranes for slow… …of the aggregates to activate endocytosis pathways on specific cell types is discussed in the context of targeted drug delivery applications.

  4. Feature and Model Selection in Feedforward Neural Networks

    Science.gov (United States)

    1994-06-01

    …output of middle node j - M is the number of feature inputs - w_ij is the weight from input node i to middle node j - θ_0 is the input layer bias term - w_ij(t+1) is the updated weight from input i to middle node j - w_ij(t) is the old weight from input i to middle node j - η is the step size - δ_j = (d
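A minimal sketch of the weight update these definitions describe, assuming a single sigmoid node and the standard delta rule (the thesis's exact update and error term are truncated above, so this is a generic reconstruction, not its verbatim algorithm):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def update_weights(w, x, d, eta=0.5):
    """One delta-rule step for a single sigmoid node:
    w_i(t+1) = w_i(t) + eta * delta * x_i, with
    delta = (d - z) * z * (1 - z).
    w[0] plays the role of the bias term (theta_0); x is the feature input."""
    z = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
    delta = (d - z) * z * (1.0 - z)
    new_w = [w[0] + eta * delta] + [wi + eta * delta * xi
                                    for wi, xi in zip(w[1:], x)]
    return new_w, z

# Train on two toy patterns: x=(0,1) -> 1 and x=(1,0) -> 0.
w = [0.0, 0.1, -0.2]
for _ in range(2000):
    for x, d in [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]:
        w, _ = update_weights(w, x, d)
_, z1 = update_weights(w, [0.0, 1.0], 1.0)
_, z0 = update_weights(w, [1.0, 0.0], 0.0)
```

Here η is the step size, δ the error signal, and the bias enters as a weight on a constant input, matching the notation fragments above.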

  5. Operational Details of the Five Domains Model and Its Key Applications to the Assessment and Management of Animal Welfare

    Science.gov (United States)

    Mellor, David J.

    2017-01-01

    Simple Summary The Five Domains Model is a focusing device to facilitate systematic, structured, comprehensive and coherent assessment of animal welfare; it is not a definition of animal welfare, nor is it intended to be an accurate representation of body structure and function. The purpose of each of the five domains is to draw attention to areas that are relevant to both animal welfare assessment and management. This paper begins by briefly describing the major features of the Model and the operational interactions between the five domains, and then it details seven interacting applications of the Model. These underlie its utility and increasing application to welfare assessment and management in diverse animal use sectors. Abstract In accord with contemporary animal welfare science understanding, the Five Domains Model has a significant focus on subjective experiences, known as affects, which collectively contribute to an animal’s overall welfare state. Operationally, the focus of the Model is on the presence or absence of various internal physical/functional states and external circumstances that give rise to welfare-relevant negative and/or positive mental experiences, i.e., affects. The internal states and external circumstances of animals are evaluated systematically by referring to each of the first four domains of the Model, designated “Nutrition”, “Environment”, “Health” and “Behaviour”. Then affects, considered carefully and cautiously to be generated by factors in these domains, are accumulated into the fifth domain, designated “Mental State”. The scientific foundations of this operational procedure, published in detail elsewhere, are described briefly here, and then seven key ways the Model may be applied to the assessment and management of animal welfare are considered. These applications have the following beneficial objectives—they (1) specify key general foci for animal welfare management; (2) highlight the foundations of

  6. Password-only authenticated three-party key exchange with provable security in the standard model.

    Science.gov (United States)

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho

    2014-01-01

    Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.

  7. Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model

    Directory of Open Access Journals (Sweden)

    Junghyun Nam

    2014-01-01

    Full Text Available Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.

  8. Salient Features of the Harnischfeger-Wiley Model

    Science.gov (United States)

    Hallinan, Maureen T.

    1976-01-01

    Explicates the Harnischfeger-Wiley model and points out its properties, underlying assumptions, and location in the literature on achievement. It also describes and critiques an empirical test by Harnischfeger and Wiley of their model. (Author/IRT)

  9. Cluster regression model and level fluctuation features of Van Lake, Turkey

    Directory of Open Access Journals (Sweden)

    Z. Şen

    Full Text Available Lake water levels change under the influences of natural and/or anthropogenic environmental conditions. Among these influences are climate change, greenhouse effects, and ozone layer depletion, which are reflected in the hydrological cycle features over the lake drainage basins. Lake levels are among the most significant hydrological variables that are influenced by different atmospheric and environmental conditions. Consequently, lake level time series in many parts of the world include nonstationarity components such as shifts in the mean value and apparent or hidden periodicities. On the other hand, many lake level modeling techniques have a stationarity assumption. The main purpose of this work is to develop a cluster regression model for dealing with nonstationarity, especially in the form of shifting means. The basis of this model is the combination of transition probabilities and a classical regression technique. Both parts of the model are applied to monthly level fluctuations of Lake Van in eastern Turkey. It is observed that the cluster regression procedure preserves the statistical properties and transition probabilities, which are indistinguishable from those of the original data.

    Key words. Hydrology (hydrologic budget; stochastic processes) · Meteorology and atmospheric dynamics (ocean-atmosphere interactions)
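The two ingredients of such a cluster regression model, a transition matrix between level clusters and a classical regression fitted within each cluster, can be sketched as follows. The synthetic record with a mid-series shift in the mean stands in for the nonstationary Lake Van series; the two-cluster median split is an illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly lake levels with a shift in the mean halfway through.
n = 240
levels = np.concatenate([10.0 + rng.normal(0.0, 0.3, n // 2),
                         11.5 + rng.normal(0.0, 0.3, n // 2)])

# Step 1: assign each month to a cluster by its level relative to the median.
clusters = (levels > np.median(levels)).astype(int)

# Step 2: empirical month-to-month transition probabilities between clusters.
trans = np.zeros((2, 2))
for a, b in zip(clusters[:-1], clusters[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)

# Step 3: a classical lag-1 regression fitted separately within each cluster.
coeffs = {}
for c in (0, 1):
    idx = np.where(clusters[:-1] == c)[0]
    slope, intercept = np.polyfit(levels[idx], levels[idx + 1], 1)
    coeffs[c] = (slope, intercept)
```

Simulation then alternates the two parts: draw the next cluster from the transition matrix, then draw the next level from that cluster's regression, which is how the shifting mean survives in the generated series.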

  10. Tracing the source of numerical climate model uncertainties in precipitation simulations using a feature-oriented statistical model

    Science.gov (United States)

    Xu, Y.; Jones, A. D.; Rhoades, A.

    2017-12-01

    Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. 
By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for
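The perturbation idea can be sketched with a toy stand-in: fit a statistical model on "true world" predictors, perturb one predictor by an assumed input-data error, and measure how much the model's output shifts. Ordinary least squares stands in for the paper's neural network, and the data, coefficients, and 0.2 error magnitude are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "true world" predictors: three climate feature metrics plus a
# topography term, with precipitation a noisy linear response to them.
n = 300
X = rng.normal(size=(n, 4))
precip = X @ np.array([1.2, -0.5, 0.8, 0.3]) + 0.1 * rng.normal(size=n)

# Fit the statistical model (OLS as a stand-in for the neural network).
A = np.hstack([X, np.ones((n, 1))])
beta, *_ = np.linalg.lstsq(A, precip, rcond=None)

def predict(Xnew):
    return np.hstack([Xnew, np.ones((len(Xnew), 1))]) @ beta

# Perturb one predictor by an assumed "modeled world" error to see how much
# of the final-outcome uncertainty that input error accounts for.
input_error = 0.2  # assumed bias in the first climate metric
perturbed = X.copy()
perturbed[:, 0] += input_error
uncertainty = np.mean(np.abs(predict(perturbed) - predict(X)))
```

Repeating the perturbation predictor-by-predictor apportions the output uncertainty among the input errors, which is the attribution step the abstract describes.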

  11. Valuing snorkeling visits to the Florida Keys with stated and revealed preference models

    Science.gov (United States)

    Timothy Park; J. Michael Bowker; Vernon R. Leeworthy

    2002-01-01

    Coastal coral reefs, especially in the Florida Keys, are declining at a disturbing rate. Marine ecologists and reef scientists have emphasized the importance of establishing nonmarket values of coral reefs to assess the cost effectiveness of coral reef management and remediation programs. The purpose of this paper is to develop a travel cost--contingent valuation model...

  12. Features of Balance Model Development of Exclave Region

    Directory of Open Access Journals (Sweden)

    Timur Rustamovich Gareev

    2015-06-01

    Full Text Available In this article, the authors build a balance model for an exclave region. The aim of the work is to explore the unique properties of exclaves in order to evaluate the possibility of developing a more complex model of a regional economy. Exclaves are unusual phenomena in both theoretical and applied regional economics. There is a lack of comparative models, so studying exclaves is typically quite challenging. At the same time, exclaves produce better statistics, which allows more careful consideration of cross-regional economic flows. The authors discuss methodologies of model-based regional development forecasting. They analyze the balance approach both at the more general level of regional governance and individually, using the example of specific territories. Thus, they identify and explain the need to develop balance-approach models fitted to the special needs of certain territories. By combining regional modeling for an exclave with traditional balance and simulation-based methods and an event-based approach, they arrive at a more detailed model of the regional economy. Taking one Russian exclave as an example, the authors have developed a simulation-based, event-driven long-term sustainability model. In the article, they provide the general characteristics of the model, describe its components, and outline the simulation algorithm. The approach introduced in this article combines traditional balance models with the peculiarities of an exclave region to develop a holistic regional economy model (with the Kaliningrad region serving as an example). It is important to underline that the resulting model helps to evaluate the degree of influence of preferential economic regimes (such as a Free Customs Zone, for example) on the economy of a region.

  13. Nine key principles to guide youth mental health: development of service models in New South Wales.

    Science.gov (United States)

    Howe, Deborah; Batchelor, Samantha; Coates, Dominiek; Cashman, Emma

    2014-05-01

    Historically, the Australian health system has failed to meet the needs of young people with mental health problems and mental illness. In 2006, New South Wales (NSW) Health allocated considerable funds to the reform agenda of mental health services in NSW to address this inadequacy. Children and Young People's Mental Health (CYPMH), a service that provides mental health care for young people aged 12-24 years, with moderate to severe mental health problems, was chosen to establish a prototype Youth Mental Health (YMH) Service Model for NSW. This paper describes nine key principles developed by CYPMH to guide the development of YMH Service Models in NSW. A literature review, numerous stakeholder consultations and consideration of clinical best practice were utilized to inform the development of the key principles. Subsequent to their development, the nine key principles were formally endorsed by the Mental Health Program Council to ensure consistency and monitor the progress of YMH services across NSW. As a result, between 2008 and 2012 YMH Services across NSW regularly reported on their activities against each of the nine key principles demonstrating how each principle was addressed within their service. The nine key principles provide mental health services a framework for how to reorient services to accommodate YMH and provide a high-quality model of care. [Corrections added on 29 November 2013, after first online publication: The last two sentences of the Results section have been replaced with "As a result, between 2008 and 2012 YMH Services across NSW regularly reported on their activities against each of the nine key principles demonstrating how each principle was addressed within their service."]. © 2013 Wiley Publishing Asia Pty Ltd.

  14. Program Manager: Modeling and Simulation Feature Issue, September - October 1997

    Science.gov (United States)

    1997-10-01

emerge for representing virtual prototypes. For example, modelers explicitly designed Virtual Reality Modeling Language (VRML) to produce virtual... VRML viewers to display a wide spectrum of standard data types. This particular user interface prototype lacks the elements for controlling...actuation devices, and advanced technology insertion. In this example, engineers used VRML to produce the visual appearance of the satellite, with the

  15. Extraction of terrain features from digital elevation models

    Science.gov (United States)

    Price, Curtis V.; Wolock, David M.; Ayers, Mark A.

    1989-01-01

    Digital elevation models (DEMs) are being used to determine variable inputs for hydrologic models in the Delaware River basin. Recently developed software for analysis of DEMs has been applied to watershed and streamline delineation. The results compare favorably with similar delineations taken from topographic maps. Additionally, output from this software has been used to extract other hydrologic information from the DEM, including flow direction, channel location, and an index describing the slope and shape of a watershed.
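Flow direction of the kind this record describes is commonly derived by a steepest-descent ("D8") scan of each cell's eight neighbours. The sketch below illustrates that idea in plain Python; it is a toy under stated assumptions (invented DEM values, unit cell size), not the USGS software the record refers to.

```python
# Minimal D8 flow-direction sketch: each interior DEM cell points toward
# the steepest-descending of its eight neighbours (diagonal distances
# are longer, so slopes are distance-weighted).
import math

def d8_flow_directions(dem, cellsize=1.0):
    """Return a grid of (dr, dc) steepest-descent offsets (None = pit/flat/edge)."""
    rows, cols = len(dem), len(dem[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    dirs = [[None] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, best_slope = None, 0.0
            for dr, dc in offsets:
                dist = cellsize * math.hypot(dr, dc)
                slope = (dem[r][c] - dem[r + dr][c + dc]) / dist
                if slope > best_slope:
                    best, best_slope = (dr, dc), slope
            dirs[r][c] = best  # None means no lower neighbour (pit or flat)
    return dirs

# Tiny synthetic DEM sloping down toward the south-east corner:
dem = [[9, 8, 7],
       [8, 6, 4],
       [7, 4, 1]]
print(d8_flow_directions(dem)[1][1])  # centre cell drains toward (1, 1)
```

Channel location and watershed delineation then follow by accumulating these pointers downslope.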

  16. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

Prostate cancer is the most common form of cancer in men and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of the abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer via utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a
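The leave-one-patient-out cross-validation described above groups samples by patient, so that no patient's tissue samples leak between training and test folds. A minimal sketch of generating such splits (the sample data and tuple layout are invented for illustration; the study's SVM classifier is not reproduced):

```python
# Sketch of leave-one-patient-out (LOPO) splitting: all samples from one
# patient form the test fold while the remaining patients' samples train.
def leave_one_patient_out(samples):
    """samples: list of (patient_id, feature_vector, label) tuples.
    Yields (train, test) lists, one split per distinct patient."""
    patients = sorted({pid for pid, _, _ in samples})
    for held_out in patients:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield train, test

samples = [("p1", [0.2], 0), ("p1", [0.9], 1),
           ("p2", [0.3], 0), ("p3", [0.8], 1)]
splits = list(leave_one_patient_out(samples))
print(len(splits))  # one split per patient -> 3
```

Each split would then train the classifier on `train` and score it on `test`, averaging over all patients.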

  17. Correlation between clinical and histological features in a pig model of choroidal neovascularization

    DEFF Research Database (Denmark)

    Lassota, Nathan; Kiilgaard, Jens Folke; Prause, Jan Ulrik

    2006-01-01

To analyse the histological changes in the retina and the choroid in a pig model of choroidal neovascularization (CNV) and to correlate these findings with fundus photographic and fluorescein angiographic features.

  18. Design models as emergent features: An empirical study in communication and shared mental models in instructional

    Directory of Open Access Journals (Sweden)

    Lucca Botturi

    2006-06-01

Full Text Available This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs). Results indicate that the design and development model actually informs the activities of the group, but that it is interpreted and adapted by the team for the specific project. Thus, the actual practice model of each team can be regarded as an emergent feature. This analysis delivers insights concerning team communication, shared understanding, individual perspectives and the implementation of prescriptive instructional design models.

  19. Structural and Molecular Modeling Features of P2X Receptors

    Directory of Open Access Journals (Sweden)

    Luiz Anastacio Alves

    2014-03-01

Full Text Available Currently, adenosine 5'-triphosphate (ATP) is recognized as the extracellular messenger that acts through P2 receptors. P2 receptors are divided into two subtypes: P2Y metabotropic receptors and P2X ionotropic receptors, both of which are found in virtually all mammalian cell types studied. Due to the difficulty in studying membrane protein structures by X-ray crystallography or NMR techniques, there is little information about these structures available in the literature. Two structures of the P2X4 receptor in truncated form have been solved by crystallography. Molecular modeling has proven to be an excellent tool for studying ionotropic receptors. Recently, modeling studies carried out on P2X receptors have advanced our knowledge of the P2X receptor structure-function relationships. This review presents a brief history of ion channel structural studies and shows how modeling approaches can be used to address relevant questions about P2X receptors.

  20. Improvements and new features in the IRI-2000 model

    International Nuclear Information System (INIS)

    Bilitza, D.

    2002-01-01

This paper describes the changes that were implemented in the new version of the COSPAR/URSI International Reference Ionosphere (IRI-2000). These changes are: (1) two new options for the electron density in the D-region, (2) a better functional description of the electron density in the E-F merging region, (3) inclusion of the F1 layer occurrence probability as a new parameter, (4) a new model for the bottomside parameters B0 and B1 that greatly improves the representation at low and equatorial latitudes during high solar activities, (5) inclusion of a model for foF2 storm-time updating, (6) a new option for the electron temperature in the topside ionosphere, and (7) inclusion of a model for the equatorial F region ion drift. The main purpose of this paper is to provide the IRI users with examples of the effects of these changes. (author)

  1. Mapping three-dimensional geological features from remotely-sensed images and digital elevation models

    Science.gov (United States)

    Morris, Kevin Peter

Accurate mapping of geological structures is important in numerous applications, ranging from mineral exploration through to hydrogeological modelling. Remotely sensed data can provide synoptic views of study areas enabling mapping of geological units within the area. Structural information may be derived from such data using standard manual photo-geologic interpretation techniques, although these are often inaccurate and incomplete. The aim of this thesis is, therefore, to compile a suite of automated and interactive computer-based analysis routines, designed to help the user map geological structure. These are examined and integrated in the context of an expert system. The data used in this study include Digital Elevation Model (DEM) and Airborne Thematic Mapper images, both with a spatial resolution of 5m, for a 5 x 5 km area surrounding Llyn Cowlyd, Snowdonia, North Wales. The geology of this area comprises folded and faulted Ordovician sediments intruded throughout by dolerite sills, providing a stringent test for the automated and semi-automated procedures. The DEM is used to highlight geomorphological features which may represent surface expressions of the sub-surface geology. The DEM is created from digitized contours, for which kriging is found to provide the best interpolation routine, based on a number of quantitative measures. Lambertian shading and the creation of slope and change of slope datasets are shown to provide the most successful enhancement of DEMs, in terms of highlighting a range of key geomorphological features. The digital image data are used to identify rock outcrops as well as lithologically controlled features in the land cover. To this end, a series of standard spectral enhancements of the images is examined. In this respect, the least correlated 3 band composite and a principal component composite are shown to give the best visual discrimination of geological and vegetation cover types. 
Automatic edge detection (followed by line

  2. THE FEATURES OF INNOVATIVE ACTIVITY UNDER THE OPEN INNOVATION MODEL

    Directory of Open Access Journals (Sweden)

    Julia P. Kulikova

    2014-01-01

Full Text Available The article discusses the distinctive characteristics of open and closed models of functioning of the innovation sphere. The use of an interaction-marketing approach to relationship management in the innovation sphere is justified. Two sets of marketing functions - network and process - are proposed for the effective functioning of innovation networks. A matrix scorecard of marketing functions in the innovation network is given.

  3. Features of optical modeling in educational and scientific activity ...

    African Journals Online (AJOL)

    The article discusses the functionality of existing software for the modeling, analysis and optimization of lighting systems and optical elements, through which the stage of their design can be automated completely. The use of these programs is shown using the example of scientific work and the educational activity of ...

  4. The features of modelling semiconductor lasers with a wide contact

    Directory of Open Access Journals (Sweden)

    Rzhanov Alexey

    2017-01-01

Full Text Available The aspects of calculating the dynamics and statics of high-power semiconductor laser diode radiation are investigated. The modelling takes into account the main physical mechanisms influencing the power, spectral composition, and far and near field of the laser radiation. The paper outlines a dynamic distributed model of a semiconductor laser with a wide contact and possible algorithms for its implementation.

  5. Energy Demand Modeling Methodology of Key State Transitions of Turning Processes

    Directory of Open Access Journals (Sweden)

    Shun Jia

    2017-04-01

Full Text Available Energy demand modeling of machining processes is the foundation of energy optimization. Energy demand of machining state transitions is integral to the energy requirements of the machining process. However, research focusing on energy modeling of state transitions is scarce. To fill this gap, an energy demand modeling methodology of key state transitions of the turning process is proposed. The establishment of an energy demand model of state transitions could improve the accuracy of the energy model of the machining process, which also provides an accurate model and reliable data for energy optimization of the machining process. Finally, case studies were conducted on a CK6153i CNC lathe, the results demonstrating that predictive accuracy with the proposed method is generally above 90% for the state transition cases.

  6. Feature-Enhanced, Model-Based Sparse Aperture Imaging

    Science.gov (United States)

    2008-03-01

information about angle-dependent scattering. Methods employing subaperture analysis and parametric models expect to find contiguous intervals in θ for...transform, which is not a transform in the strict sense, but a method in image analysis for detecting straight lines in binary images [12], uses a ρ-θ...We explore the application of a homotopy continuation-based method for sparse signal representation in overcomplete dictionaries. Our problem setup

  7. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  8. A method for automatic feature points extraction of human vertebrae three-dimensional model

    Science.gov (United States)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  9. A decision support model for identification and prioritization of key performance indicators in the logistics industry

    OpenAIRE

    Kucukaltan, Berk; Irani, Zahir; Aktas, Emel

    2016-01-01

Performance measurement of logistics companies is based upon various performance indicators. Yet, in the logistics industry, there are several ambiguities, such as deciding on key indicators and determining the interrelationships between performance indicators. In order to resolve these ambiguities, this paper first presents the stakeholder-informed Balanced Scorecard (BSC) model, by incorporating financial (e.g. cost) and non-financial (e.g. social media) performance indicators, with a comprehen...

  10. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
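The fitting idea above - an ideal gap model convolved with a point-spread function, fit to the averaged scan-line data by least squares - can be illustrated with a toy one-dimensional version. Here the PSF is assumed Gaussian (so the blurred boxcar has a closed form in terms of erf), the "data" are synthetic, and the least squares fit is a brute-force search over candidate widths rather than the general least squares program the report used.

```python
# Toy gap-width estimation: a unit boxcar of unknown width, blurred by a
# Gaussian PSF, is fit to a measured profile by minimizing the sum of
# squared errors over candidate widths.
import math

def blurred_gap(width, sigma, xs):
    # Convolving a unit boxcar [-width/2, width/2] with a Gaussian PSF
    # gives the closed form 0.5*(erf((x+w/2)/s) - erf((x-w/2)/s)).
    s = sigma * math.sqrt(2)
    return [0.5 * (math.erf((x + width / 2) / s)
                   - math.erf((x - width / 2) / s)) for x in xs]

def fit_width(profile, sigma, xs, candidates):
    def sse(w):
        model = blurred_gap(w, sigma, xs)
        return sum((d - m) ** 2 for d, m in zip(profile, model))
    return min(candidates, key=sse)  # brute-force least squares

xs = [i * 0.1 - 5.0 for i in range(101)]
profile = blurred_gap(2.0, 0.5, xs)  # synthetic "scan-line" data
est = fit_width(profile, 0.5, xs, [w * 0.1 for w in range(5, 40)])
print(round(est, 1))  # recovers the true width of 2.0
```

With noise added to the profile, the squared-error sum averages it out, which matches the report's observation that the method is remarkably insensitive to noise.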

  11. Transposition of the Tourist-MITE mPing in yeast: an assay that retains key features of catalysis by the class 2 PIF/Harbinger superfamily

    Directory of Open Access Journals (Sweden)

    Hancock C Nathan

    2010-02-01

    Full Text Available Abstract Background PIF/Harbinger is the most recently discovered DNA transposon superfamily and is now known to populate genomes from fungi to plants to animals. Mobilization of superfamily members requires two separate element-encoded proteins (ORF1 and TPase. Members of this superfamily also mobilize Tourist-like miniature inverted repeat transposable elements (MITEs, which are the most abundant transposable elements associated with the genes of plants, especially the cereal grasses. The phylogenetic analysis of many plant genomes indicates that MITEs can amplify rapidly from one or a few elements to hundreds or thousands. The most active DNA transposon identified to date in plants or animals is mPing, a rice Tourist-like MITE that is a deletion derivative of the autonomous Ping element. Ping and the closely related Pong are the only known naturally active PIF/Harbinger elements. Some rice strains accumulate ~40 new mPing insertions per plant per generation. In this study we report the development of a yeast transposition assay as a first step in deciphering the mechanism underlying the amplification of Tourist-MITEs. Results The ORF1 and TPase proteins encoded by Ping and Pong have been shown to mobilize mPing in rice and in transgenic Arabidopsis. Initial tests of the native proteins in a yeast assay resulted in very low transposition. Significantly higher activities were obtained by mutation of a putative nuclear export signal (NES in the TPase that increased the amount of TPase in the nucleus. When introduced into Arabidopsis, the NES mutant protein also catalyzed higher frequencies of mPing excision from the gfp reporter gene. Our yeast assay retains key features of excision and insertion of mPing including precise excision, extended insertion sequence preference, and a requirement for two proteins that can come from either Ping or Pong or both elements. 
Conclusions The yeast transposition assay provides a robust platform for analysis of

  12. PrEP for key populations in combination HIV prevention in Nairobi: a mathematical modelling study.

    Science.gov (United States)

    Cremin, Ide; McKinnon, Lyle; Kimani, Joshua; Cherutich, Peter; Gakii, Gloria; Muriuki, Festus; Kripke, Katharine; Hecht, Robert; Kiragu, Michael; Smith, Jennifer; Hinsley, Wes; Gelmon, Lawrence; Hallett, Timothy B

    2017-05-01

    The HIV epidemic in the population of Nairobi as a whole is in decline, but a concentrated sub-epidemic persists in key populations. We aimed to identify an optimal portfolio of interventions to reduce HIV incidence for a given budget and to identify the circumstances in which pre-exposure prophylaxis (PrEP) could be used in Nairobi, Kenya. A mathematical model was developed to represent HIV transmission in specific key populations (female sex workers, male sex workers, and men who have sex with men [MSM]) and among the wider population of Nairobi. The scale-up of existing interventions (condom promotion, antiretroviral therapy, and male circumcision) for key populations and the wider population as have occurred in Nairobi is represented. The model includes a detailed representation of a PrEP intervention and is calibrated to prevalence and incidence estimates specific to key populations and the wider population. In the context of a declining epidemic overall but with a large sub-epidemic in MSM and male sex workers, an optimal prevention portfolio for Nairobi should focus on condom promotion for male sex workers and MSM in particular, followed by improved antiretroviral therapy retention, earlier antiretroviral therapy, and male circumcision as the budget allows. PrEP for male sex workers could enter an optimal portfolio at similar levels of spending to when earlier antiretroviral therapy is included; however, PrEP for MSM and female sex workers would be included only at much higher budgets. If PrEP for male sex workers cost as much as US$500, average annual spending on the interventions modelled would need to be less than $3·27 million for PrEP for male sex workers to be excluded from an optimal portfolio. Estimated costs per infection averted when providing PrEP to all female sex workers regardless of their risk of infection, and to high-risk female sex workers only, are $65 160 (95% credible interval [CrI] $43 520-$90 250) and $10 920 (95% CrI $4700

  13. Impact of SLA assimilation in the Sicily Channel Regional Model: model skills and mesoscale features

    Directory of Open Access Journals (Sweden)

    A. Olita

    2012-07-01

Full Text Available The impact of the assimilation of MyOcean sea level anomalies along-track data on the analyses of the Sicily Channel Regional Model was studied. The numerical model has a resolution of 1/32° and is capable of reproducing mesoscale and sub-mesoscale features. The impact of the SLA assimilation is studied by comparing a simulation (SIM), which does not assimilate data, with an analysis (AN) assimilating SLA along-track multi-mission data produced in the framework of the MyOcean project. The quality of the analysis was evaluated by computing the RMSE of the misfits between the analysis background and the observations (sea level before assimilation). A qualitative evaluation of the ability of the analyses to reproduce mesoscale structures is accomplished by comparing model results with ocean colour and SST satellite data, which are able to detect such features on the ocean surface. CTD profiles allowed evaluation of the impact of the SLA assimilation along the water column. We found a significant improvement for the AN solution in terms of SLA RMSE with respect to SIM (the averaged RMSE of the AN SLA misfits over 2 years is about 0.5 cm smaller than that of SIM). Comparison with CTD data shows a questionable improvement produced by the assimilation process in terms of vertical features: AN is better for temperature, while for salinity it is worse than SIM at the surface. This suggests that a better a priori description of the vertical error covariances would be desirable. The qualitative comparison of the simulation and analyses with synoptic satellite independent data proves that SLA assimilation allows correct reproduction of some dynamical features (above all the circulation in the Ionian portion of the domain) and mesoscale structures otherwise misplaced or neglected by SIM. Such mesoscale changes also imply that the eddy momentum fluxes (i.e. Reynolds stresses) show major changes in the Ionian area. 
Changes in Reynolds stresses reflect a different pumping of eastward momentum from the eddy to

  14. Feature Classification for Robust Shape-Based Collaborative Tracking and Model Updating

    Directory of Open Access Journals (Sweden)

    C. S. Regazzoni

    2008-09-01

Full Text Available A new collaborative tracking approach is introduced which takes advantage of classified features. The core of this tracker is a single tracker that is able to detect occlusions and classify the features contributing to localizing the object. Features are classified into four classes: good, suspicious, malicious, and neutral. Good features are estimated to be parts of the object with a high degree of confidence. Suspicious ones have a lower, yet significantly high, degree of confidence to be a part of the object. Malicious features are estimated to be generated by clutter, while neutral features lack sufficient certainty to be assigned to the tracked object. When there is no occlusion, the single tracker acts alone, and the feature classification module helps it to overcome distracters such as still objects or little clutter in the scene. When the bounding boxes of two or more tracked moving objects are close enough, the collaborative tracker is activated; it exploits the classified features to localize each object precisely and to update the object shape models by reassigning the classified features to the objects. The experimental results show successful tracking compared with a collaborative tracker that does not use classified features, as well as more precisely updated object shape models.
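The four-way feature labelling described above can be sketched as a simple thresholding rule. The confidence scores and every threshold value below are illustrative assumptions; the paper's actual confidence measures are not given in this record.

```python
# Hedged sketch of the good/suspicious/malicious/neutral labelling.
# object_score: assumed confidence the feature lies on the tracked
# object; clutter_score: assumed confidence it was generated by clutter.
# All thresholds are invented for illustration.
def classify_feature(object_score, clutter_score,
                     good=0.8, suspicious=0.5, malicious=0.6):
    """good: confidently on the object; suspicious: probably on it;
    malicious: likely clutter; neutral: too uncertain to assign."""
    if object_score >= good:
        return "good"
    if object_score >= suspicious:
        return "suspicious"
    if clutter_score >= malicious:
        return "malicious"
    return "neutral"

print(classify_feature(0.9, 0.1))  # good
print(classify_feature(0.6, 0.2))  # suspicious
print(classify_feature(0.2, 0.8))  # malicious
print(classify_feature(0.3, 0.3))  # neutral
```

In the tracker, only good and suspicious features would contribute to localization, while the shape model update would reassign features per object.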

  15. UNIVERSITY INNOVATION INFRASTRUCTURE MODEL AS A KEY PART OF A TERRITORIAL CLUSTER

    Directory of Open Access Journals (Sweden)

    Nataliya P. Ivashchenko

    2015-01-01

Full Text Available Over the recent decades there have been increasing efforts by developing countries to reduce the economic gap between developed and developing countries. Asian and Northern European countries demonstrate good progress in these areas. Sweden, Denmark and China show stable, high economic indicators that have been achieved by targeted government programs. These programs were aimed at creating a new type of economy based on knowledge and new technologies. Given the success of these countries, a number of developing countries whose economies are dependent on resources - Russia, Indonesia, Brazil and Chile - are today looking to repeat their path. The modernization of the economy and the formation of an innovative economy are key objectives of the state policies of these countries. Research by leading economists and scientists has led to the conclusion that the regional level of the national economy plays a key role in the formation of a knowledge-based economy, which indicates the need to differentiate the innovation policy of the state depending on the economic parameters of each region. This paper presents a model of the first stage of the formation of the entrepreneurial university: the University innovation infrastructure model, which is a key part of a territorial cluster. The article consists of five parts. The first part covers the analysis of the two main models of regional development: clustering theory and the Triple Helix. This section describes the positive result that is achieved by using these models simultaneously. The second part of the article shows the importance and the role of the entrepreneurial university in the formation of innovative clusters, and explains how and under what conditions this formation is achieved. The third part presents the University innovation infrastructure model. The fourth part examines the practical first steps to create a cluster "Vorob’evi Gori" on the basis of the Moscow State University. The fifth

  16. Key role of local regulation in chemosensing revealed by a new molecular interaction-based modeling method.

    Directory of Open Access Journals (Sweden)

    Martin Meier-Schellersheim

    2006-07-01

Full Text Available The signaling network underlying eukaryotic chemosensing is a complex combination of receptor-mediated transmembrane signals, lipid modifications, protein translocations, and differential activation/deactivation of membrane-bound and cytosolic components. As such, it provides particularly interesting challenges for a combined computational and experimental analysis. We developed a novel detailed molecular signaling model that, when used to simulate the response to the attractant cyclic adenosine monophosphate (cAMP), made nontrivial predictions about Dictyostelium chemosensing. These predictions, including the unexpected existence of spatially asymmetrical, multiphasic, cyclic adenosine monophosphate-induced PTEN translocation and phosphatidylinositol-(3,4,5)P3 generation, were experimentally verified by quantitative single-cell microscopy leading us to propose significant modifications to the current standard model for chemoattractant-induced biochemical polarization in this organism. Key to this successful modeling effort was the use of "Simmune," a new software package that supports the facile development and testing of detailed computational representations of cellular behavior. An intuitive interface allows user definition of complex signaling networks based on the definition of specific molecular binding site interactions and the subcellular localization of molecules. It automatically translates such inputs into spatially resolved simulations and dynamic graphical representations of the resulting signaling network that can be explored in a manner that closely parallels wet lab experimental procedures. These features of Simmune were critical to the model development and analysis presented here and are likely to be useful in the computational investigation of many aspects of cell biology.

  17. Bankruptcy prediction using SVM models with a new approach to combine features selection and parameter optimisation

    Science.gov (United States)

    Zhou, Ligang; Keung Lai, Kin; Yen, Jerome

    2014-03-01

    Due to the economic significance of bankruptcy prediction of companies for financial institutions, investors and governments, many quantitative methods have been used to develop effective prediction models. Support vector machine (SVM), a powerful classification method, has been used for this task; however, the performance of SVM is sensitive to model form, parameter setting and features selection. In this study, a new approach based on direct search and features ranking technology is proposed to optimise features selection and parameter setting for 1-norm and least-squares SVM models for bankruptcy prediction. This approach is also compared to the SVM models with parameter optimisation and features selection by the popular genetic algorithm technique. The experimental results on a data set with 2010 instances show that the proposed models are good alternatives for bankruptcy prediction.
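The direct-search component of the approach above can be illustrated with a generic compass (pattern) search over a parameter space. The objective below is a stand-in quadratic; in the paper the objective would be the validation error of the 1-norm or least-squares SVM as a function of its parameters, which is not reproduced here.

```python
# Minimal compass/pattern search: probe each coordinate direction with
# the current step; if no move improves the objective, shrink the step.
def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-3, max_iter=200):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink  # no better neighbour: refine the mesh
            if step < tol:
                break
    return x, fx

# Stand-in for "CV error as a function of (log C, log gamma)":
def err(p):
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

best, val = compass_search(err, [0.0, 0.0])
print(round(best[0], 2), round(best[1], 2))  # converges near (2.0, -1.0)
```

Coupling this with a features-ranking pass (dropping the lowest-ranked features between searches) gives the combined selection/optimisation loop the abstract describes.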

  18. Identification and Development of Key Talents through Competency Modelling in Agriculture Companies

    Directory of Open Access Journals (Sweden)

    Lucie Vnoučková

    2016-01-01

Full Text Available The necessity of identifying key talents in a company is recognized in all sectors of the economy. The aim of the paper is therefore to use competency analysis to define the key factors leading to talent identification and internalization through competency modelling. The paper characterizes the areas of necessary competencies for specific job positions in companies, and reveals how these are targeted at employees and teams in talent management. The analysis is based on a primary survey of 101 agriculture companies. The data were obtained through manager surveys, in which a single manager represented the given company. One-dimensional and multi-dimensional statistics were used to evaluate the data. Based on statistical analyses of required competencies, five factors characterizing areas of key employee and team development were identified: inclusive approach, management support, strategic development, leadership development and integrity. The resultant factors create competency models usable for specific job positions. A limitation of the paper is its narrow focus on primary-sector companies. The results may help the surveyed primary-sector companies to set the required and necessary competencies for specific areas in order to identify and develop employees, talents and teams.

  19. Coupling process-based models and plant architectural models: A key issue for simulating crop production

    NARCIS (Netherlands)

    Reffye, de P.; Heuvelink, E.; Guo, Y.; Hu, B.G.; Zhang, B.G.

    2009-01-01

    Process-Based Models (PBMs) can successfully predict the impact of environmental factors (temperature, light, CO2, water and nutrients) on crop growth and yield. These models are used widely for yield prediction and optimization of water and nutrient supplies. Nevertheless, PBMs do not consider

  20. [A model for evaluation of key measures for control of chikungunya fever outbreak in China].

    Science.gov (United States)

    Zhao, Jin; Liu, Ruchun; Chen, Shuilian; Chen, Tianmu

    2015-11-01

    To analyze the transmission pattern of chikungunya (CHIK) fever in a community and evaluate the effectiveness of mosquito control, case isolation and other key control measures, an ordinary differential equation (ODE) model was used. Based on the natural history of CHIK, an ODE model for the epidemiological analysis of a CHIK outbreak was established. The key parameters of the model were obtained by fitting the model to reported data from the first CHIK outbreak in China. The outbreak characteristics without intervention and the effectiveness of mosquito control and case isolation were then simulated. Without intervention, one imported case would cause an outbreak in a community with a population of 11 000, and the cumulative case number would exceed 941, with a total attack rate of 8.55%. The simulation revealed that case isolation alone was not effective enough: although it could decrease the number of cases, it would not shorten the duration of the outbreak. In contrast, the effectiveness of mosquito control was remarkable, and the earlier the measure was implemented, the better the effectiveness. Mosquito control combined with case isolation was as effective as mosquito control alone. To control a CHIK outbreak, mosquito control is therefore the most strongly recommended measure; however, case isolation is also necessary as a supplement to mosquito control.
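    The kind of comparison described above can be sketched with a minimal human-vector compartmental model. The structure below (human SIR coupled to a susceptible-infectious vector population, with `mosq_control` scaling down the vector population) is generic; every rate constant, the vector population size and the control fraction are illustrative assumptions, not the fitted parameters of the study.

```python
def attack_rate(days=300, dt=0.1, mosq_control=0.0):
    """Toy human-vector outbreak model integrated with Euler steps.
    All parameter values are illustrative, not fitted to the Chinese outbreak."""
    Nh = 11000.0                              # community size from the abstract
    Nv = 55000.0 * (1.0 - mosq_control)       # vector population after control
    Sh, Ih, Rh = Nh - 1.0, 1.0, 0.0           # one imported human case
    Sv, Iv = Nv, 0.0
    bite = 0.1 / Nh                           # transmission rate per pair (assumed)
    gamma = 1.0 / 5.0                         # human recovery rate (assumed)
    mu_v = 1.0 / 20.0                         # vector birth/death rate (assumed)
    for _ in range(int(round(days / dt))):
        new_h = bite * Sh * Iv * dt           # vector -> human infections
        new_v = bite * Sv * Ih * dt           # human -> vector infections
        rec = gamma * Ih * dt
        Sh -= new_h
        Ih += new_h - rec
        Rh += rec
        Sv += mu_v * (Nv - Sv) * dt - new_v   # births keep vector total constant
        Iv += new_v - mu_v * Iv * dt
    return Rh / Nh                            # cumulative attack rate
```

    Reducing the vector population lowers both links of the human-vector-human transmission chain, which is why mosquito control dominates case isolation in models of this shape.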

  1. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    International Nuclear Information System (INIS)

    J. J. Tappen

    2003-01-01

    The purpose of this revision of ''Evaluation of the Applicability of Biosphere-Related Features, Events, and Processes (FEPs)'' (BSC 2001) is to document the screening analysis of biosphere-related primary FEPs, as identified in ''The Development of Information Catalogued in REV00 of the YMP FEP Database'' (Freeze et al. 2001), in accordance with the requirements of the final U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR Part 63. This database is referred to as the Yucca Mountain Project (YMP) FEP Database throughout this document. Those biosphere-related primary FEPs that are screened as applicable will be used to develop the conceptual model portion of the biosphere model, which will in turn be used to develop the mathematical model portion of the biosphere model. As part of this revision, any reference to the screening guidance or criteria provided either by Dyer (1999) or by the proposed NRC regulations at 64 FR 8640 has been removed. The title of this revision has been changed to more accurately reflect the purpose of the analyses. In addition, this revision will address Item Numbers 19, 20, 21, 25, and 26 from Attachment 2 of ''U.S. Nuclear Regulatory Commission/U.S. Department of Energy Technical Exchange and Management Meeting on Total System Performance Assessment and Integration (August 6 through 10, 2001)'' (Reamer 2001). This Scientific Analysis Report (SAR) does not support the current revision to the YMP FEP Database (Freeze et al. 2001). Subsequent to the release of the YMP FEP Database (Freeze et al. 2001), a series of reviews was conducted on both the FEP processes used to support Total System Performance Assessment for Site Recommendation and to develop the YMP FEP Database. In response to observations and comments from these reviews, particularly the NRC/DOE TSPA Technical Exchange in August 2001 (Reamer 2001), several Key Technical Issue (KTI) Agreements were developed. ''The Enhanced Plan for Features, Events and Processes

  2. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    J. J. Tappen

    2003-02-16

    The purpose of this revision of ''Evaluation of the Applicability of Biosphere-Related Features, Events, and Processes (FEPs)'' (BSC 2001) is to document the screening analysis of biosphere-related primary FEPs, as identified in ''The Development of Information Catalogued in REV00 of the YMP FEP Database'' (Freeze et al. 2001), in accordance with the requirements of the final U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR Part 63. This database is referred to as the Yucca Mountain Project (YMP) FEP Database throughout this document. Those biosphere-related primary FEPs that are screened as applicable will be used to develop the conceptual model portion of the biosphere model, which will in turn be used to develop the mathematical model portion of the biosphere model. As part of this revision, any reference to the screening guidance or criteria provided either by Dyer (1999) or by the proposed NRC regulations at 64 FR 8640 has been removed. The title of this revision has been changed to more accurately reflect the purpose of the analyses. In addition, this revision will address Item Numbers 19, 20, 21, 25, and 26 from Attachment 2 of ''U.S. Nuclear Regulatory Commission/U.S. Department of Energy Technical Exchange and Management Meeting on Total System Performance Assessment and Integration (August 6 through 10, 2001)'' (Reamer 2001). This Scientific Analysis Report (SAR) does not support the current revision to the YMP FEP Database (Freeze et al. 2001). Subsequent to the release of the YMP FEP Database (Freeze et al. 2001), a series of reviews was conducted on both the FEP processes used to support Total System Performance Assessment for Site Recommendation and to develop the YMP FEP Database. In response to observations and comments from these reviews, particularly the NRC/DOE TSPA Technical Exchange in August 2001 (Reamer 2001), several Key Technical Issue (KTI) Agreements were developed

  3. Modelling Feature Interaction Patterns in Nokia Mobile Phones using Coloured Petri Nets and Design/CPN

    DEFF Research Database (Denmark)

    Lorentsen, Louise; Tuovinen, Antti-Pekka; Xu, Jianli

    2002-01-01

    This paper describes the first results of a project on modelling of important feature interaction patterns of Nokia mobile phones using Coloured Petri Nets. A modern mobile phone supports many features: voice and data calls, text messaging, personal information management (phonebook and calendar......), WAP browsing, games, etc. All these features are packaged into a handset with a small screen and a special purpose keypad. The limited user interface and the seamless intertwining of logically separate features cause many problems in the software development of the user interface of mobile phones...

  4. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases and global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long-dependency feature extraction model based on a bidirectional long short-term memory (BiLSTM) neural network is proposed to capture sequential correlations from higher-level sequence representations. The HFEM algorithm and its hierarchical feature extraction architecture are then detailed. We establish training, validation and test datasets containing 72,532, 18,133 and 2,679 mechanical patent documents, respectively, and evaluate the performance of the HFEM. Finally, we compare the proposed HFEM with three single neural network models, namely CNN, long short-term memory (LSTM) and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the compared models in both precision and recall.
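    The data flow of such a hierarchy (local n-gram convolution, then a bidirectional sequence encoder, then per-label sigmoid scores) can be sketched with untrained, randomly initialised weights. This shows the shapes only, not a trained model: `conv_ngram`, `bi_rnn` and `hfem_scores` are hypothetical names, and a plain tanh RNN stands in for the BiLSTM.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_ngram(E, width=3, n_filters=8):
    """n-gram convolution: slide a window over token embeddings (random filters)."""
    W = rng.standard_normal((n_filters, width * E.shape[1])) * 0.1
    T = E.shape[0] - width + 1
    windows = np.stack([E[t:t + width].ravel() for t in range(T)])
    return np.maximum(windows @ W.T, 0.0)          # (T, n_filters), ReLU

def bi_rnn(H, hidden=6):
    """Simplified stand-in for the BiLSTM: a plain tanh RNN run in both directions."""
    d = H.shape[1]
    Wx = rng.standard_normal((hidden, d)) * 0.1
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    def run(seq):
        h = np.zeros(hidden)
        for x in seq:
            h = np.tanh(Wx @ x + Wh @ h)
        return h
    return np.concatenate([run(H), run(H[::-1])])  # forward + backward states

def hfem_scores(E, n_labels=5):
    """Local n-gram features -> sequence encoder -> per-label sigmoid scores."""
    feats = bi_rnn(conv_ngram(E))
    Wo = rng.standard_normal((n_labels, feats.shape[0])) * 0.1
    return 1.0 / (1.0 + np.exp(-(Wo @ feats)))     # independent multi-label scores
```

    Multi-label classification is why the output layer uses independent sigmoids rather than a softmax: each patent may belong to several classes at once.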

  5. Novel personalized pathway-based metabolomics models reveal key metabolic pathways for breast cancer diagnosis

    DEFF Research Database (Denmark)

    Huang, Sijia; Chong, Nicole; Lewis, Nathan

    2016-01-01

    diagnosis. We applied this method to predict breast cancer occurrence, in combination with correlation feature selection (CFS) and classification methods. Results: The resulting all-stage and early-stage diagnosis models are highly accurate in two sets of testing blood samples, with average AUCs (Area Under.......993. Moreover, important metabolic pathways, such as taurine and hypotaurine metabolism and the alanine, aspartate, and glutamate pathway, are revealed as critical biological pathways for early diagnosis of breast cancer. Conclusions: We have successfully developed a new type of pathway-based model to study...... metabolomics data for disease diagnosis. Applying this method to blood-based breast cancer metabolomics data, we have discovered crucial metabolic pathway signatures for breast cancer diagnosis, especially early diagnosis. Further, this modeling approach may be generalized to other omics data types for disease...

  6. Modeling Key Drivers of Cholera Transmission Dynamics Provides New Perspectives for Parasitology.

    Science.gov (United States)

    Rinaldo, Andrea; Bertuzzo, Enrico; Blokesch, Melanie; Mari, Lorenzo; Gatto, Marino

    2017-08-01

    Hydroclimatological and anthropogenic factors are key drivers of waterborne disease transmission. Information on human settlements and host mobility on waterways along which pathogens and hosts disperse, and relevant hydroclimatological processes, can be acquired remotely and included in spatially explicit mathematical models of disease transmission. In the case of epidemic cholera, such models allowed the description of complex disease patterns and provided insight into the course of ongoing epidemics. The inclusion of spatial information in models of disease transmission can aid in emergency management and the assessment of alternative interventions. Here, we review the study of drivers of transmission via spatially explicit approaches and argue that, because many parasitic waterborne diseases share the same drivers as cholera, similar principles may apply. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. The Assessment of Patient Clinical Outcome: Advantages, Models, Features of an Ideal Model

    Directory of Open Access Journals (Sweden)

    Mou’ath Hourani

    2016-06-01

    Full Text Available Background: The assessment of patient clinical outcome focuses on measuring various aspects of the health status of a patient who is under healthcare intervention. Patient clinical outcome assessment is a very significant process in the clinical field, as it allows health care professionals to better understand the effectiveness of their health care programs and thus to enhance health care quality in general. It is therefore vital that a high-quality, informative review of current issues regarding the assessment of patient clinical outcome be conducted. Aims & Objectives: This paper 1) summarizes the advantages of the assessment of patient clinical outcome; 2) reviews some of the existing patient clinical outcome assessment models, namely simulation, Markov, Bayesian belief networks, Bayesian statistics and conventional statistics, and Kaplan-Meier analysis models; and 3) demonstrates the desired features that should be fulfilled by a well-established ideal patient clinical outcome assessment model. Material & Methods: An integrative review of the literature was performed using Google Scholar to explore the field of patient clinical outcome assessment. Conclusion: This paper will directly support researchers, clinicians and health care professionals in their understanding of developments in the domain of the assessment of patient clinical outcome, thus enabling them to propose ideal assessment models.

  8. Choosing preclinical study models of diabetic retinopathy: key problems for consideration

    Science.gov (United States)

    Mi, Xue-Song; Yuan, Ti-Fei; Ding, Yong; Zhong, Jing-Xiang; So, Kwok-Fai

    2014-01-01

    Diabetic retinopathy (DR) is the most common complication of diabetes mellitus in the eye. Although the clinical treatment of DR has already developed to a relatively high level, many urgent problems remain to be investigated in clinical and basic science. Currently, many in vivo animal models and in vitro culture systems have been applied to solve these problems, and many approaches have been used to establish different DR models. However, to date, no single study model can clearly and exactly mimic the developmental process of human DR. Choosing a suitable model is important, not only for achieving research goals smoothly but also for matching the different experimental proposals of a study. In this review, key problems to consider in choosing study models of DR are discussed. These problems relate to clinical relevance, the different approaches for establishing models, and the choice of different animal species as well as of specific in vitro culture systems. Attending to these considerations will deepen understanding of current study models and optimize experimental design toward the final goal of preventing DR. PMID:25429204

  9. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    Science.gov (United States)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the rapid development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies; it is thus a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.

  10. Main modelling features of the ASTEC V2.1 major version

    International Nuclear Information System (INIS)

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC’s strong link with the ongoing EC CESAM FP7 project is emphasized. • The main remaining modelling issues (on which IRSN efforts are now directed) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the ASTEC worldwide community. The main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium-concrete interaction, and source term evaluation. Moreover, this V2.1 version constitutes the backbone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in Severe Accident Management analysis of the Gen. II–III nuclear power plants presently in operation or foreseen in the near future in Europe. As part of this European project, IRSN efforts to continuously improve both code numerical robustness and computing performance at plant scale, as well as users’ tools, are being intensified. Besides, ASTEC will continue capitalising the whole knowledge on severe accident phenomenology by progressively keeping physical models at the state of the art through regular feed-back from the interpretation of the current and

  11. Implementing the Five-A Model of Technical Refinement: Key Roles of the Sport Psychologist.

    Science.gov (United States)

    Carson, Howie J; Collins, Dave

    2016-10-01

    There is increasing evidence for the significant contribution provided by sport psychologists within applied coaching environments. However, this rarely considers their skills and knowledge being applied to refining athletes' already learned and well-established motor skills. Therefore, this article focuses on how a sport psychologist might assist a coach and athlete to implement long-term, permanent and pressure-proof refinements. It highlights key contributions at each stage of the Five-A model, which is designed to deliver these important outcomes, providing both psychomotor and psychosocial input to the support delivery. By employing these recommendations, sport psychologists can make multiple positive contributions to the completion of this challenging task.

  12. Backup key generation model for one-time password security protocol

    Science.gov (United States)

    Jeyanthi, N.; Kundu, Sourav

    2017-11-01

    The use of one-time passwords (OTPs) has ushered new life into the existing authentication protocols used by the software industry. It introduced a second layer of security to traditional username-password authentication, coining the term two-factor authentication. One of the drawbacks of this protocol is the unreliability of the hardware token at the time of authentication. This paper proposes a simple backup key model that can be associated with a real-world application's user database, allowing a user to circumvent the second authentication stage in the event of unavailability of the hardware token.
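    A common way to realise such a backup key scheme, sketched below under stated assumptions rather than as the paper's protocol, is to issue a small batch of random single-use keys, show them to the user once, and store only salted hashes server-side; redeeming a key removes its hash so it can never be replayed. `BackupKeyStore` and its methods are hypothetical names.

```python
import hashlib
import secrets

class BackupKeyStore:
    """Single-use backup keys stored only as salted hashes (illustrative scheme)."""

    def __init__(self, n_keys=5):
        self.salt = secrets.token_bytes(16)
        # Plaintext keys exist only to be displayed to the user once at setup.
        self.plain = [secrets.token_hex(8) for _ in range(n_keys)]
        self.hashes = {self._h(k) for k in self.plain}

    def _h(self, key):
        return hashlib.sha256(self.salt + key.encode()).hexdigest()

    def redeem(self, key):
        """Accept a backup key in place of the hardware-token OTP, then burn it."""
        h = self._h(key)
        if h in self.hashes:
            self.hashes.remove(h)   # each key is valid exactly once
            return True
        return False
```

    Storing hashes rather than plaintext keys means a database leak does not hand an attacker a working bypass of the second factor.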

  13. Key challenges and priorities for modelling European grasslands under climate change.

    Science.gov (United States)

    Kipling, Richard P; Virkajärvi, Perttu; Breitsameter, Laura; Curnel, Yannick; De Swaef, Tom; Gustavsson, Anne-Maj; Hennart, Sylvain; Höglind, Mats; Järvenranta, Kirsi; Minet, Julien; Nendel, Claas; Persson, Tomas; Picon-Cochard, Catherine; Rolinski, Susanne; Sandars, Daniel L; Scollan, Nigel D; Sebek, Leon; Seddaiu, Giovanna; Topp, Cairistiona F E; Twardy, Stanislaw; Van Middelkoop, Jantine; Wu, Lianhai; Bellocchi, Gianni

    2016-10-01

    Grassland-based ruminant production systems are integral to sustainable food production in Europe, converting plant materials indigestible to humans into nutritious food, while providing a range of environmental and cultural benefits. Climate change poses significant challenges for such systems, their productivity and the wider benefits they supply. In this context, grassland models have an important role in predicting and understanding the impacts of climate change on grassland systems, and in assessing the efficacy of potential adaptation and mitigation strategies. In order to identify the key challenges for European grassland modelling under climate change, modellers and researchers from across Europe were consulted via workshop and questionnaire. Participants identified fifteen challenges and considered the current state of modelling and priorities for future research in relation to each. A literature review was undertaken to corroborate and enrich the information provided during the horizon-scanning activities. The challenges fell into four categories, relating to: 1) the direct and indirect effects of climate change on the sward; 2) climate change effects on grassland-system outputs; 3) mediation of climate change impacts by site, system and management; and 4) cross-cutting methodological issues. While research priorities differed between challenges, an underlying theme was the need for accessible, shared inventories of models, approaches and data, as a resource for stakeholders and to stimulate new research. Developing grassland models to effectively support efforts to tackle climate change impacts, while increasing productivity and enhancing ecosystem services, will require engagement with stakeholders and policy-makers, as well as with modellers and experimental researchers across many disciplines. The challenges and priorities identified are intended to be a resource 1) for grassland modellers and experimental researchers, to stimulate the development of new research

  14. Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features

    Directory of Open Access Journals (Sweden)

    Sirinoot Boonsuk

    2014-01-01

    Full Text Available Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has attracted greater interest than the others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown little interest in applying linguistic knowledge, which was only used as supplementary features, while the current state-of-the-art systems assume independence among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF) model using phonological features (PFs). We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model is employed to capture the dynamics of the PF sequences for language classification. Baseline systems were built to evaluate the features and methods, including Gaussian mixture model (GMM)-based systems using PFs, GMM-based systems using cepstral features, and a CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems and a result comparable with that of an i-vector-based acoustic system. This research demonstrates that utilizing PFs can enhance the performance.

  15. An expression screen for aged-dependent microRNAs identifies miR-30a as a key regulator of aging features in human epidermis.

    Science.gov (United States)

    Muther, Charlotte; Jobeili, Lara; Garion, Maëlle; Heraud, Sandrine; Thepot, Amélie; Damour, Odile; Lamartine, Jérôme

    2017-11-19

    The mechanisms affecting epidermal homeostasis during aging remain poorly understood. To identify age-related microRNAs, a class of non-coding RNAs known to play a key role in the regulation of epidermal homeostasis, an exhaustive miRNA expression screen was performed in human keratinocytes from young or elderly subjects. Many microRNAs modulated by aging were identified, including miR-30a, both strands of which were overexpressed in aged cells and epidermal tissue. Stable miR-30a over-expression strongly impaired epidermal differentiation, inducing severe barrier-function defects in an organotypic culture model. A significant increase was also observed in the level of apoptotic cells in epidermis over-expressing miR-30a. Several gene targets of miR-30a were identified in keratinocytes, including LOX (encoding lysyl oxidase, a regulator of the proliferation/differentiation balance of keratinocytes), IDH1 (encoding isocitrate dehydrogenase, an enzyme of cellular metabolism) and AVEN (encoding a caspase inhibitor). Direct regulation of LOX, IDH1 and AVEN by miR-30a was confirmed in human keratinocytes. They were, moreover, observed to be repressed in aged skin, suggesting a possible link between miR-30a induction and the skin-aging phenotype. This study reveals a new miRNA actor and deciphers new molecular mechanisms to explain certain alterations observed in the epidermis during aging, especially those concerning keratinocyte differentiation and apoptosis.

  16. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  17. Development of generic key performance indicators for PMBOK® using a 3D project integration model

    Directory of Open Access Journals (Sweden)

    Craig Langston

    2013-12-01

    Full Text Available Since Martin Barnes’ so-called ‘iron triangle’ circa 1969, much debate has occurred over how best to describe the fundamental constraints that underpin project success. This paper develops a 3D project integration model for PMBOK® comprising the core constraints of scope, cost, time and risk as a basis for proposing six generic key performance indicators (KPIs) that articulate successful project delivery. These KPIs are defined as value, efficiency, speed, innovation, complexity and impact, and each can be measured objectively as a ratio of the core constraints. An overall KPI (denoted s3/ctr) is also derived. The aim of this paper is to set out the case for such a model and to demonstrate how it can be employed to assess the performance of project teams in delivering successful outcomes at various stages in the project life cycle. As part of the model’s development, a new PMBOK® knowledge area concerning environmental management is advanced.
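    The ratio idea can be made concrete with a small sketch. Only the overall composite s3/ctr is quoted in the abstract; the mapping s = scope, c = cost, t = time, r = risk and all six individual ratio definitions below are assumptions made for illustration, NOT the paper's actual KPI formulas.

```python
def project_kpis(s, c, t, r):
    """Ratio-style KPIs from the four core constraints.
    Assumed mapping: s = scope, c = cost, t = time, r = risk.
    The six individual ratios are illustrative guesses; only the overall
    composite s**3 / (c * t * r) appears in the abstract (as s3/ctr)."""
    return {
        "value": s / c,                   # scope delivered per unit cost
        "speed": s / t,                   # scope delivered per unit time
        "innovation": s / r,              # scope delivered per unit risk
        "efficiency": s / (c * t),
        "complexity": s / (t * r),
        "impact": s / (c * r),
        "overall": s ** 3 / (c * t * r),  # the s3/ctr composite
    }
```

    With these particular guesses the composite factorises neatly as value × speed × innovation, which is one plausible way a cubic-over-product form such as s3/ctr could arise from constraint ratios.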

  18. Key transmission parameters of an institutional outbreak during the 1918 influenza pandemic estimated by mathematical modelling

    Directory of Open Access Journals (Sweden)

    Nelson Peter

    2006-11-01

    Full Text Available Abstract Aim: To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand, 1918). Methods: Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered (SEIR) epidemiological model was solved numerically to find a range of best-fit estimates for key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by performing a convolution of the incidence distribution with a best-fit incidence-mortality lag distribution. Results: Basic reproduction number (R0) values for three possible scenarios ranged between 1.3 and 3.1, and the corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days, respectively. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days, respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population. Conclusion: These R0 estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to those of some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged, it could potentially be controlled through the prompt use of major public health measures.
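    The two modelling steps in this abstract, an SEIR incidence curve and a convolution with an incidence-mortality lag distribution, can be sketched as follows. The parameter values sit inside the ranges reported (R0, latent and infectious periods, the 6.9-day mean lag), but the Gaussian lag shape, the case-fatality ratio and the camp size are assumptions for illustration.

```python
import math

def seir_incidence(r0=2.0, latent=1.0, infectious=0.25, n=5000, days=60, dt=0.05):
    """SEIR model by Euler steps; returns daily incidence (new infections/day)."""
    beta = r0 / infectious / n
    sigma, gamma = 1.0 / latent, 1.0 / infectious
    S, E, I = n - 1.0, 0.0, 1.0
    daily, acc = [], 0.0
    steps_per_day = int(round(1.0 / dt))
    for step in range(int(round(days / dt))):
        new_inf = beta * S * I * dt       # S -> E flow
        new_sympt = sigma * E * dt        # E -> I flow
        rec = gamma * I * dt              # I -> R flow
        S -= new_inf
        E += new_inf - new_sympt
        I += new_sympt - rec
        acc += new_inf
        if (step + 1) % steps_per_day == 0:
            daily.append(acc)
            acc = 0.0
    return daily

def expected_deaths(incidence, cfr=0.05, lag_mean=6.9, lag_sd=2.0):
    """Convolve incidence with a discretised normal incidence-mortality lag."""
    lag = [math.exp(-0.5 * ((d - lag_mean) / lag_sd) ** 2) for d in range(15)]
    total = sum(lag)
    lag = [w / total for w in lag]
    deaths = [0.0] * (len(incidence) + len(lag))
    for t, inc in enumerate(incidence):
        for d, w in enumerate(lag):
            deaths[t + d] += cfr * inc * w
    return deaths
```

    Because the lag kernel is normalised, total deaths equal the case-fatality ratio times total infections, and the mortality curve peaks roughly one mean lag after the incidence curve, which is the feature the authors exploit when fitting the lag distribution.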

  19. Security of Device-Independent Quantum Key Distribution in the Bounded-Quantum-Storage Model

    Directory of Open Access Journals (Sweden)

    S. Pironio

    2013-08-01

    Full Text Available Device-independent quantum key distribution (DIQKD) is a formalism that supersedes traditional quantum key distribution, as its security does not rely on any detailed modeling of the internal working of the devices. This strong form of security is only possible using devices producing correlations that violate a Bell inequality. Full security proofs of DIQKD have recently been reported, but they tolerate zero or small amounts of noise and are restricted to protocols based on specific Bell inequalities. Here, we provide a security proof of DIQKD that is both more efficient and noise-resistant, and also more general, as it applies to protocols based on arbitrary Bell inequalities and can be adapted to cover supraquantum eavesdroppers limited only by the no-signaling principle. It is formulated, however, in the bounded-quantum-storage model, where an upper bound on the adversary’s quantum memory is known a priori. This condition is not a limitation at present, since the best existing quantum memories have very short coherence times.
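    The Bell-inequality requirement at the heart of DIQKD can be made concrete with the CHSH inequality: any local-hidden-variable model satisfies |S| ≤ 2, while measurements on a singlet state, whose correlations are E(x, y) = −cos(x − y), reach |S| = 2√2 at suitable angles. The short check below is a textbook computation, not part of the security proof itself.

```python
import math

def chsh(E, a0, a1, b0, b1):
    """CHSH combination S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1)."""
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

def singlet(x, y):
    """Correlation of spin measurements at angles x, y on a singlet state."""
    return -math.cos(x - y)

# Measurement angles known to maximise the violation for singlet correlations
S = chsh(singlet, 0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
```

    A device pair whose observed |S| exceeds 2 cannot be simulated classically, which is exactly the certificate DIQKD protocols rely on in place of trusting the devices' internals.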

  20. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables, such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent has prevented their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear-features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.
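    The principle being tested, that adding a fine-scale covariate such as hedgerow density reduces the error of an abundance model built only on coarse land cover, can be demonstrated on synthetic data. Everything below is simulated (variable names, coefficients and noise level are assumptions), not the study's bird or butterfly data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
landcover = rng.uniform(0.0, 1.0, n)   # coarse covariate (e.g. arable area)
hedgerow = rng.uniform(0.0, 1.0, n)    # fine-scale linear-feature density
# Simulated abundance depends on both covariates plus observation noise
abundance = 2.0 + 1.5 * landcover + 1.0 * hedgerow + rng.normal(0.0, 0.2, n)

def fit_rmse(X, y):
    """Ordinary least squares with intercept; returns root-mean-square residual."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(np.sqrt(np.mean(resid ** 2)))

rmse_coarse = fit_rmse(landcover[:, None], abundance)                      # land cover only
rmse_full = fit_rmse(np.column_stack([landcover, hedgerow]), abundance)    # + linear features
```

    When the omitted covariate genuinely drives abundance, the coarse model's residuals absorb its entire contribution, which is why the full model's RMSE drops toward the noise floor.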

  1. Research on Degeneration Model of Neural Network for Deep Groove Ball Bearing Based on Feature Fusion

    Directory of Open Access Journals (Sweden)

    Lijun Zhang

    2018-02-01

    Full Text Available Aiming at the pitting fault of deep groove ball bearing during service, this paper uses the vibration signal of five different states of deep groove ball bearing and extracts the relevant features, then uses a neural network to model the degradation for identifying and classifying the fault type. By comparing the effects of training samples with different capacities through performance indexes such as the accuracy and convergence speed, it is proven that an increase in the sample size can improve the performance of the model. Based on the polynomial fitting principle and Pearson correlation coefficient, fusion features based on the skewness index are proposed, and the performance improvement of the model after incorporating the fusion features is also validated. A comparison of the performance of the support vector machine (SVM model and the neural network model on this dataset is given. The research shows that neural networks have more potential for complex and high-volume datasets.
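
The Pearson-based feature screening step described above can be sketched as follows; the feature trajectories are made-up illustrative values, not the paper's bearing vibration data:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical feature values sampled over service time
time = [0, 1, 2, 3, 4, 5]
skewness = [0.1, 0.3, 0.35, 0.6, 0.8, 1.1]   # trends with degradation
kurtosis = [3.0, 2.9, 3.2, 3.0, 3.1, 2.95]   # nearly flat, weakly informative

r_skew = pearson(time, skewness)
r_kurt = pearson(time, kurtosis)
print(r_skew > r_kurt)  # True: the skewness index tracks degradation
```

Features whose correlation with the degradation trend exceeds a threshold would then be combined (e.g. via polynomial fitting) into a single fused health indicator, as in the skewness-based fusion feature the paper proposes.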

  2. Novel personalized pathway-based metabolomics models reveal key metabolic pathways for breast cancer diagnosis.

    Science.gov (United States)

    Huang, Sijia; Chong, Nicole; Lewis, Nathan E; Jia, Wei; Xie, Guoxiang; Garmire, Lana X

    2016-03-31

    More accurate diagnostic methods are pressingly needed to diagnose breast cancer, the most common malignant cancer in women worldwide. Blood-based metabolomics is a promising diagnostic method for breast cancer. However, many metabolic biomarkers are difficult to replicate among studies. We propose that higher-order functional representation of metabolomics data, such as pathway-based metabolomic features, can be used as robust biomarkers for breast cancer. Towards this, we have developed a new computational method that uses personalized pathway dysregulation scores for disease diagnosis. We applied this method to predict breast cancer occurrence, in combination with correlation feature selection (CFS) and classification methods. The resulting all-stage and early-stage diagnosis models are highly accurate in two sets of testing blood samples, with average AUCs (Area Under the receiver operating characteristic Curve) of 0.968 and 0.934, sensitivities of 0.946 and 0.954, and specificities of 0.934 and 0.918. These two metabolomics-based pathway models are further validated by RNA-Seq-based TCGA (The Cancer Genome Atlas) breast cancer data, with AUCs of 0.995 and 0.993. Moreover, important metabolic pathways, such as taurine and hypotaurine metabolism and the alanine, aspartate, and glutamate pathway, are revealed as critical biological pathways for early diagnosis of breast cancer. We have successfully developed a new type of pathway-based model to study metabolomics data for disease diagnosis. Applying this method to blood-based breast cancer metabolomics data, we have discovered crucial metabolic pathway signatures for breast cancer diagnosis, especially early diagnosis. Further, this modeling approach may be generalized to other omics data types for disease diagnosis.

  3. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
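
The weighted-vote combination of per-feature-set classifiers can be sketched as below. The word labels, posteriors, and weights are hypothetical, and the real system estimates the weights from a probabilistic mixture model rather than fixing them by hand:

```python
def weighted_vote(posteriors_per_model, weights):
    # Combine per-classifier posteriors P(label | x) with mixture weights
    combined = {}
    for w, posterior in zip(weights, posteriors_per_model):
        for label, p in posterior.items():
            combined[label] = combined.get(label, 0.0) + w * p
    return max(combined, key=combined.get)

# Two hypothetical RNNs trained on different feature sets disagree on a word
posteriors = [{"main": 0.6, "wain": 0.4},   # weaker feature set
              {"main": 0.2, "wain": 0.8}]   # stronger feature set
decision = weighted_vote(posteriors, [0.3, 0.7])
print(decision)  # wain: the higher-weighted feature set dominates
```

In the paper's framing, the estimated weights themselves are the point of interest: they quantify each feature set's strength and complementarity within the combination.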

  4. Use the predictive models to explore the key factors affecting phytoplankton succession in Lake Erhai, China.

    Science.gov (United States)

    Zhu, Rong; Wang, Huan; Chen, Jun; Shen, Hong; Deng, Xuwei

    2018-01-01

    Increasing algal growth in Lake Erhai has resulted in frequent blooms that have not only led to water ecosystem degeneration but also seriously influenced the quality of the water supply and caused extensive damage to the local people, as the lake is a water resource for Dali City. Exploring the key factors affecting phytoplankton succession and developing predictive models with easily detectable parameters for phytoplankton have been proven to be practical ways to improve water quality. To this end, a systematic survey focused on phytoplankton succession was conducted over 2 years in Lake Erhai. The data from the first study year were used to develop predictive models, and the data from the second year were used for model verification. The seasonal succession of phytoplankton in Lake Erhai was obvious. The dominant groups were Cyanobacteria in the summer, Chlorophyta in the autumn and Bacillariophyta in the winter. The development and verification of predictive models indicated that compared to phytoplankton biomass, phytoplankton density is more effective for estimating phytoplankton variation in Lake Erhai. CCA (canonical correspondence analysis) indicated that TN (total nitrogen), TP (total phosphorus), DO (dissolved oxygen), SD (Secchi depth), Cond (conductivity), T (water temperature), and ORP (oxidation reduction potential) had significant influences (p < 0.05) on the phytoplankton community. The CCA of the dominant species found that Microcystis was significantly influenced by T. The dominant Chlorophyta, Psephonema aenigmaticum and Mougeotia, were significantly influenced by TN. All results indicated that TN and T were the two key factors driving phytoplankton succession in Lake Erhai.

  5. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    The purpose of this report is to document the evaluation of biosphere features, events, and processes (FEPs) that relate to the license application (LA) process as required by the U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. The evaluation determines whether specific biosphere-related FEPs should be included or excluded from consideration in the Total System Performance Assessment (TSPA). This analysis documents the technical basis for screening decisions as required at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. For FEPs that are included in the TSPA, this analysis provides a TSPA disposition, which summarizes how the FEP has been included and addressed in the TSPA model, and cites the analysis reports and model reports that provide the technical basis and description of its disposition. For FEPs that are excluded from the TSPA, this analysis report provides a screening argument, which identifies the basis for the screening decision (i.e., low probability, low consequence, or by regulation) and discusses the technical basis that supports that decision. In cases where a FEP covers multiple technical areas and is shared with other FEP analysis reports, this analysis may provide only a partial technical basis for the screening of the FEP. The full technical basis for these shared FEPs is addressed collectively by all FEP analysis reports that cover technical disciplines sharing a FEP. FEPs must be included in the TSPA unless they can be excluded by low probability, low consequence, or regulation. A FEP can be excluded from the TSPA by low probability per 10 CFR 63.114(d) [DIRS 156605], by showing that it has less than one chance in 10,000 of occurring over 10,000 years (or an approximately equivalent annualized probability of 10⁻⁸). A FEP can be excluded from the TSPA by low consequence per 10 CFR 63.114 (e or f) [DIRS 156605], by showing that omitting the FEP would not significantly change the magnitude and

  6. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-09

    The purpose of this report is to document the evaluation of biosphere features, events, and processes (FEPs) that relate to the license application (LA) process as required by the U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. The evaluation determines whether specific biosphere-related FEPs should be included or excluded from consideration in the Total System Performance Assessment (TSPA). This analysis documents the technical basis for screening decisions as required at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. For FEPs that are included in the TSPA, this analysis provides a TSPA disposition, which summarizes how the FEP has been included and addressed in the TSPA model, and cites the analysis reports and model reports that provide the technical basis and description of its disposition. For FEPs that are excluded from the TSPA, this analysis report provides a screening argument, which identifies the basis for the screening decision (i.e., low probability, low consequence, or by regulation) and discusses the technical basis that supports that decision. In cases where a FEP covers multiple technical areas and is shared with other FEP analysis reports, this analysis may provide only a partial technical basis for the screening of the FEP. The full technical basis for these shared FEPs is addressed collectively by all FEP analysis reports that cover technical disciplines sharing a FEP. FEPs must be included in the TSPA unless they can be excluded by low probability, low consequence, or regulation. A FEP can be excluded from the TSPA by low probability per 10 CFR 63.114(d) [DIRS 156605], by showing that it has less than one chance in 10,000 of occurring over 10,000 years (or an approximately equivalent annualized probability of 10⁻⁸).
A FEP can be excluded from the TSPA by low consequence per 10 CFR 63.114 (e or f) [DIRS 156605], by showing that omitting the FEP would not significantly change the magnitude and

  7. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.
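
A minimal version of the cluster-by-lexical-similarity step might look like the sketch below; the words, co-occurrence vectors, and similarity threshold are invented for illustration and are far simpler than the corpus statistics used in the paper:

```python
import math

def cosine(u, v):
    # Cosine similarity between two co-occurrence vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical co-occurrence vectors for creativity-related words
vectors = {
    "novel":    [5, 1, 0, 2],
    "original": [4, 2, 0, 1],
    "useful":   [0, 1, 6, 3],
    "valuable": [1, 0, 5, 4],
}

def greedy_cluster(vectors, threshold=0.8):
    # Assign each word to the first cluster whose representative is similar
    # enough, otherwise start a new cluster (a crude stand-in for the
    # clustering used to surface the fourteen components).
    clusters = []
    for word, vec in vectors.items():
        for cluster in clusters:
            if cosine(vec, vectors[cluster[0]]) >= threshold:
                cluster.append(word)
                break
        else:
            clusters.append([word])
    return clusters

print(greedy_cluster(vectors))  # pairs novel/original and useful/valuable
```

Each resulting cluster of co-discussed words corresponds, in the paper's approach, to one candidate component of creativity.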

  8. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach

    Science.gov (United States)

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research. PMID:27706185

  9. Choosing preclinical study models of diabetic retinopathy: key problems for consideration

    Directory of Open Access Journals (Sweden)

    Mi XS

    2014-11-01

    Full Text Available Xue-Song Mi,1,2 Ti-Fei Yuan,3,4 Yong Ding,1 Jing-Xiang Zhong,1 Kwok-Fai So4,5 1Department of Ophthalmology, First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, People’s Republic of China; 2Department of Anatomy, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, People’s Republic of China; 3School of Psychology, Nanjing Normal University, Nanjing, People’s Republic of China; 4Department of Ophthalmology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong; 5Guangdong-Hongkong-Macau Institute of Central Nervous System, Jinan University, Guangzhou, People’s Republic of China Abstract: Diabetic retinopathy (DR) is the most common complication of diabetes mellitus in the eye. Although the clinical treatment for DR has already developed to a relatively high level, there are still many urgent problems that need to be investigated in clinical and basic science. Currently, many in vivo animal models and in vitro culture systems have been applied to solve these problems. Many approaches have also been used to establish different DR models. However, to date, there has not been a single study model that can clearly and exactly mimic the developmental process of human DR. Choosing a suitable model is important, not only for achieving our research goals smoothly, but also to better match different experimental purposes in the study. In this review, key problems for consideration in choosing study models of DR are discussed. These problems relate to clinical relevance, different approaches for establishing models, and the choice of different species of animals as well as of the specific in vitro culture systems. Attending to these considerations will deepen the understanding of current study models and optimize the experimental design for the final goal of preventing DR. Keywords: animal model, in vitro culture, ex vivo culture, neurovascular dysfunction

  10. The giant Jiaodong gold province: The key to a unified model for orogenic gold deposits?

    Directory of Open Access Journals (Sweden)

    David I. Groves

    2016-05-01

    Full Text Available Although the term orogenic gold deposit has been widely accepted for all gold-only lode-gold deposits, with the exception of Carlin-type deposits and rare intrusion-related gold systems, there has been continuing debate on their genesis. Early syngenetic models and hydrothermal models dominated by meteoric fluids are now clearly unacceptable. Magmatic-hydrothermal models fail to explain the genesis of orogenic gold deposits because of the lack of consistent spatially associated granitic intrusions and inconsistent temporal relationships. The most plausible, and widely accepted, models involve metamorphic fluids, but the source of these fluids is hotly debated. Sources within deeper segments of the supracrustal successions hosting the deposits, the underlying continental crust, and subducted oceanic lithosphere and its overlying sediment wedge all have their proponents. The orogenic gold deposits of the giant Jiaodong gold province of China, in the delaminated North China Craton, contain ca. 120 Ma gold deposits in Precambrian crust that was metamorphosed over 2000 million years prior to gold mineralization. The only realistic source of fluid and gold is a subducted oceanic slab with its overlying sulfide-rich sedimentary package, or the associated mantle wedge. This could be viewed as an exception to a general metamorphic model where orogenic gold has been derived during greenschist- to amphibolite-facies metamorphism of supracrustal rocks: basaltic rocks in the Precambrian and sedimentary rocks in the Phanerozoic. Alternatively, if a holistic view is taken, Jiaodong can be considered the key orogenic gold province for a unified model in which gold is derived from late-orogenic metamorphic devolatilization of stalled subduction slabs and oceanic sediments throughout Earth history.
The latter model satisfies all geological, geochronological, isotopic and geochemical constraints but the precise mechanisms of auriferous fluid release, like many

  11. Feature Extraction

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  12. Optimization of an individual re-identification modeling process using biometric features

    Energy Technology Data Exchange (ETDEWEB)

    Heredia-Langner, Alejandro; Amidan, Brett G.; Matzner, Shari; Jarman, Kristin H.

    2014-09-24

    We present results from the optimization of a re-identification process using two sets of biometric data obtained from the Civilian American and European Surface Anthropometry Resource Project (CAESAR) database. The datasets contain real measurements of features for 2378 individuals in a standing (43 features) and seated (16 features) position. A genetic algorithm (GA) was used to search a large combinatorial space where different features are available between the probe (seated) and gallery (standing) datasets. Results show that optimized model predictions obtained using less than half of the 43 gallery features and data from roughly 16% of the individuals available produce better re-identification rates than two other approaches that use all the information available.
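
The GA search over feature subsets can be sketched as below. The bitmask encoding, the toy fitness function, and the set of "informative" features are illustrative stand-ins for the actual re-identification accuracy optimized in the study:

```python
import random

random.seed(0)  # reproducible toy run
N_FEATURES = 12
INFORMATIVE = {0, 3, 5, 8}  # hypothetical features that truly aid matching

def fitness(mask):
    # Reward informative features, lightly penalize subset size
    # (a stand-in for re-identification rate minus measurement cost)
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    # Elitist GA: keep the best half, refill with crossover + bit-flip mutation
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(i for i, bit in enumerate(best) if bit))
```

In the paper's setting the fitness evaluation would instead train and score a re-identification model on the CAESAR probe and gallery measurements for each candidate subset.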

  13. How can selection of biologically inspired features improve the performance of a robust object recognition model?

    Directory of Open Access Journals (Sweden)

    Masoud Ghodrati

    Full Text Available Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system hierarchically recognizes objects in several processing stages. Along these stages, a set of features with increasing complexity is extracted by different parts of the visual system. Elementary features such as bars and edges are processed in the early levels of the visual pathway, and progressively more complex features are detected at higher levels. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and have an important role in object recognition. These patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches that eventually reduce performance. In the proposed model, we used an evolutionary algorithm approach to select a set of informative patches. Our results indicate that these patches are more informative than randomly selected patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features that are parts of target objects provide an efficient set for robust object recognition.

  14. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data these algorithms outputted many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  15. An Investigation of Feature Models for Music Genre Classification using the Support Vector Classifier

    DEFF Research Database (Denmark)

    Meng, Anders; Shawe-Taylor, John

    2005-01-01

    In music genre classification the decision time is typically of the order of several seconds however most automatic music genre classification systems focus on short time features derived from 10-50ms. This work investigates two models, the multivariate gaussian model and the multivariate...... probability kernel. In order to examine the different methods an 11 genre music setup was utilized. In this setup the Mel Frequency Cepstral Coefficients (MFCC) were used as short time features. The accuracy of the best performing model on this data set was 44% as compared to a human performance of 52...
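
A stripped-down version of the multivariate Gaussian model over short-time features can be sketched as follows. It uses a diagonal covariance and two-dimensional toy "MFCC" frames for two invented genres, standing in for the real 11-genre, full-covariance setup:

```python
import math

def fit_diag_gaussian(frames):
    # Fit per-dimension mean and variance to one genre's feature frames
    d = len(frames[0])
    mean = [sum(f[i] for f in frames) / len(frames) for i in range(d)]
    var = [sum((f[i] - mean[i]) ** 2 for f in frames) / len(frames) + 1e-6
           for i in range(d)]
    return mean, var

def log_likelihood(frame, model):
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(frame, mean, var))

def classify(frames, models):
    # Sum frame log-likelihoods over the decision window, pick the best genre
    scores = {g: sum(log_likelihood(f, m) for f in frames)
              for g, m in models.items()}
    return max(scores, key=scores.get)

# Toy 2-D "MFCC" frames for two hypothetical genres
rock = [[1.0, 0.2], [1.2, 0.1], [0.9, 0.3]]
jazz = [[-1.0, 0.8], [-1.1, 0.9], [-0.8, 0.7]]
models = {"rock": fit_diag_gaussian(rock), "jazz": fit_diag_gaussian(jazz)}
print(classify([[1.1, 0.2], [0.95, 0.25]], models))  # prints rock
```

Summing per-frame log-likelihoods over the multi-second decision window is what lets a model built from 10-50 ms features produce a single genre decision.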

  16. Sensitivity analysis of key components in large-scale hydroeconomic models

    Science.gov (United States)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  17. Interpretive Structural Model of Key Performance Indicators for Sustainable Maintenance Evaluation in the Rubber Industry

    Science.gov (United States)

    Amrina, E.; Yulianto, A.

    2018-03-01

    Sustainable maintenance is a new challenge for manufacturing companies seeking to realize sustainable development. In this paper, an interpretive structural model is developed to evaluate sustainable maintenance in the rubber industry. An initial set of key performance indicators (KPIs) is identified from the literature and then validated by academic and industry experts. As a result, three factors of economic, social, and environmental performance, divided into a total of thirteen indicators, are proposed as the KPIs for sustainable maintenance evaluation in the rubber industry. Interpretive structural modeling (ISM) methodology is applied to develop a network structure model of the KPIs consisting of three levels. The results show that the economic factor is regarded as the basic factor, the social factor as the intermediate factor, and the environmental factor as the leading factor. Two indicators of the social factor, i.e. labor relationship, and training and education, have both high driving and dependence power, and are thus categorized as unstable indicators that need further attention. All the indicators of the environmental factor and one indicator of the social factor are identified as the most influential indicators. The interpretive structural model is hoped to aid rubber companies in evaluating sustainable maintenance performance.
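
The driving- and dependence-power computation at the heart of ISM can be sketched with a toy reachability matrix; the four KPIs and their influence relations below are invented, not the thirteen indicators from the study:

```python
# Toy reachability matrix: R[i][j] = 1 if KPI i influences KPI j
# (including transitive influence), with 1s on the diagonal by convention.
R = [
    [1, 1, 1, 1],  # KPI0 influences everything: a likely "basic" driver
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],  # KPI3 is influenced by all others: highly dependent
]

n = len(R)
driving = [sum(row) for row in R]                                # row sums
dependence = [sum(R[i][j] for i in range(n)) for j in range(n)]  # column sums

for k in range(n):
    print(f"KPI{k}: driving={driving[k]}, dependence={dependence[k]}")
```

Plotting driving power against dependence power is what separates driver, linkage (unstable), dependent, and autonomous indicators in a MICMAC-style analysis, which is how the paper flags labor relationship and training/education as unstable.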

  18. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
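
The Viterbi decoding step can be illustrated with a two-state toy HMM; the states, quantized observations, and probabilities below are invented for illustration and are much simpler than the three recurrence-plot feature models in the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely hidden-state path given an observation sequence
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, prev = max(
                (V[-2][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states)
            V[-1][s] = (prob, prev)
    # Backtrack from the best final state through the stored predecessors
    state = max(V[-1], key=lambda s: V[-1][s][0])
    path = [state]
    for step in reversed(V[1:]):
        state = step[state][1]
        path.append(state)
    return path[::-1]

# Toy model: hidden states are sound types, observations are quantized features
states = ["swallow", "breath"]
start = {"swallow": 0.3, "breath": 0.7}
trans = {"swallow": {"swallow": 0.6, "breath": 0.4},
         "breath": {"swallow": 0.2, "breath": 0.8}}
emit = {"swallow": {"low": 0.1, "high": 0.9},
        "breath": {"low": 0.8, "high": 0.2}}
print(viterbi(["low", "high", "high"], states, start, trans, emit))
# prints ['breath', 'swallow', 'swallow']
```

Runs of the "swallow" state in the decoded path mark candidate swallowing segments, which is the detection output the paper validates against the airflow signal.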

  19. Re-orienting a remote acute care model towards a primary health care approach: key enablers.

    Science.gov (United States)

    Carroll, Vicki; Reeve, Carole A; Humphreys, John S; Wakerman, John; Carter, Maureen

    2015-01-01

    The objective of this study was to identify the key enablers of change in re-orienting a remote acute care model to comprehensive primary healthcare delivery. The setting of the study was a 12-bed hospital in Fitzroy Crossing, Western Australia. Individual key informant, in-depth interviews were completed with five of six identified senior leaders involved in the development of the Fitzroy Valley Health Partnership. Interviews were recorded and transcripts were thematically analysed by two investigators for shared views about the enabling factors strengthening primary healthcare delivery in a remote region of Australia. Participants described the establishment of a culturally relevant primary healthcare service, using a community-driven, 'bottom up' approach characterised by extensive community participation. The formal partnership across the government and community controlled health services was essential, both to enable change to occur and to provide sustainability in the longer term. A hierarchy of major themes emerged. These included community participation, community readiness and desire for self-determination; linkages in the form of a government community controlled health service partnership; leadership; adequate infrastructure; enhanced workforce supply; supportive policy; and primary healthcare funding. The strong united leadership shown by the community and the health service enabled barriers to be overcome and maximised the opportunities provided by government policy changes. The concurrent alignment around a common vision enabled implementation of change. The key principle learnt from this study is the importance of community and health service relationships and local leadership around a shared vision for the re-orientation of community health services.

  20. Featuring Multiple Local Optima to Assist the User in the Interpretation of Induced Bayesian Network Models

    DEFF Research Database (Denmark)

    Dalgaard, Jens; Pena, Jose; Kocka, Tomas

    2004-01-01

    We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...

  1. Identifying Key Features, Cutting Edge Cloud Resources, and Artificial Intelligence Tools to Achieve User-Friendly Water Science in the Cloud

    Science.gov (United States)

    Pierce, S. A.

    2017-12-01

    Decision making for groundwater systems is becoming increasingly important, as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and augment the actual timeframe for hydrological response. Yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management, or IWRM. IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately, cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure to enable interactive computing and data analysis resources on demand. The web-based interfaces allow researchers to rapidly customize virtual machines, modify computing architecture, and increase the usability of and access to advanced compute environments for broader audiences. The result enables dexterous configurations, opening up opportunities for IWRM modelers to expand the reach of analyses, the number of case studies, and the quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions, paired with advanced computational resources, refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management. 
This presentation will provide an overview of advanced computing services in the cloud using integrated groundwater management case

  2. Towards semantically sensitive text clustering: a feature space modeling technology based on dimension extension.

    Science.gov (United States)

    Liu, Yuanchao; Liu, Ming; Wang, Xin

    2015-01-01

    The objective of text clustering is to divide document collections into clusters based on the similarity between documents. In this paper, an extension-based feature modeling approach towards semantically sensitive text clustering is proposed along with the corresponding feature space construction and similarity computation method. By combining the similarity in traditional feature space and that in extension space, the adverse effects of the complexity and diversity of natural language can be addressed and clustering semantic sensitivity can be improved correspondingly. The generated clusters can be organized using different granularities. The experimental evaluations on well-known clustering algorithms and datasets have verified the effectiveness of our approach.
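
    A minimal sketch of the combined-similarity idea described in this abstract, assuming a simple weighted sum of cosine similarities in the traditional term space and in a hypothetical extension space (the weight `alpha` and all vectors are illustrative, not the paper's actual construction):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors (0 if either is all-zero)."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 0.0 if nu == 0 or nv == 0 else float(u @ v / (nu * nv))

def combined_similarity(doc_a, doc_b, ext_a, ext_b, alpha=0.6):
    """Weighted combination of similarity in the traditional term space
    and in the extension space (alpha weights the traditional space)."""
    return alpha * cosine(doc_a, doc_b) + (1 - alpha) * cosine(ext_a, ext_b)

# Toy example: two documents with no term overlap but closely related
# extension features (e.g. expanded/semantic dimensions).
a  = np.array([1.0, 0.0, 0.0]);  b  = np.array([0.0, 1.0, 0.0])
ea = np.array([0.8, 0.2]);       eb = np.array([0.7, 0.3])
print(round(combined_similarity(a, b, ea, eb), 3))  # prints: 0.395
```

    With pure term-space cosine these two documents would score 0.0; the extension space contributes the nonzero similarity, which is the effect the abstract describes.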

  3. Towards semantically sensitive text clustering: a feature space modeling technology based on dimension extension.

    Directory of Open Access Journals (Sweden)

    Yuanchao Liu

    Full Text Available The objective of text clustering is to divide document collections into clusters based on the similarity between documents. In this paper, an extension-based feature modeling approach towards semantically sensitive text clustering is proposed along with the corresponding feature space construction and similarity computation method. By combining the similarity in traditional feature space and that in extension space, the adverse effects of the complexity and diversity of natural language can be addressed and clustering semantic sensitivity can be improved correspondingly. The generated clusters can be organized using different granularities. The experimental evaluations on well-known clustering algorithms and datasets have verified the effectiveness of our approach.

  4. Systems analysis of eleven rodent disease models reveals an inflammatome signature and key drivers.

    Science.gov (United States)

    Wang, I-Ming; Zhang, Bin; Yang, Xia; Zhu, Jun; Stepaniants, Serguei; Zhang, Chunsheng; Meng, Qingying; Peters, Mette; He, Yudong; Ni, Chester; Slipetz, Deborah; Crackower, Michael A; Houshyar, Hani; Tan, Christopher M; Asante-Appiah, Ernest; O'Neill, Gary; Luo, Mingjuan Jane; Thieringer, Rolf; Yuan, Jeffrey; Chiu, Chi-Sung; Lum, Pek Yee; Lamb, John; Boie, Yves; Wilkinson, Hilary A; Schadt, Eric E; Dai, Hongyue; Roberts, Christopher

    2012-07-17

    Common inflammatome gene signatures as well as disease-specific signatures were identified by analyzing 12 expression profiling data sets derived from 9 different tissues isolated from 11 rodent inflammatory disease models. The inflammatome signature significantly overlaps with known drug targets and co-expressed gene modules linked to metabolic disorders and cancer. A large proportion of genes in this signature are tightly connected in tissue-specific Bayesian networks (BNs) built from multiple independent mouse and human cohorts. Both the inflammatome signature and the corresponding consensus BNs are highly enriched for immune response-related genes supported as causal for adiposity, adipokine, diabetes, aortic lesion, bone, muscle, and cholesterol traits, suggesting the causal nature of the inflammatome for a variety of diseases. Integration of this inflammatome signature with the BNs uncovered 151 key drivers that appeared to be more biologically important than the non-drivers in terms of their impact on disease phenotypes. The identification of this inflammatome signature, its network architecture, and key drivers not only highlights the shared etiology but also pinpoints potential targets for intervention of various common diseases.

  5. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    Science.gov (United States)

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with principal component feature vectors capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
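
    The PCA step described above, retaining just enough principal components to reach a cumulative-variance threshold, can be sketched as follows; the synthetic "sensor array" data and the 90% threshold are illustrative assumptions, not the paper's data:

```python
import numpy as np

def pca_reduce(X, var_threshold=0.90):
    """Project X onto the fewest principal components whose cumulative
    explained variance reaches var_threshold (e.g. 90%)."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S**2 / np.sum(S**2)                    # explained variance ratios
    k = int(np.searchsorted(np.cumsum(var), var_threshold) + 1)
    return Xc @ Vt[:k].T, k                      # scores and #components

rng = np.random.default_rng(0)
# 60 synthetic samples on 8 correlated channels driven by 2 latent factors
base = rng.normal(size=(60, 2))
X = base @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(60, 8))
scores, k = pca_reduce(X)
print(k)  # only a few components carry >= 90% of the variance
```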

  6. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Directory of Open Access Journals (Sweden)

    Junbao Zheng

    2012-03-01

    Full Text Available Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with principal component feature vectors capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.

  7. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
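
    The contrast between a PSSM (independent positions) and a log-linear model whose features span multiple positions can be illustrated with a toy scorer; the pair-feature encoding below is a simplification for illustration, not the actual FMM formulation:

```python
import numpy as np

BASES = "ACGT"

def pssm_score(site, pssm):
    """Sum of per-position log-odds scores; assumes independence
    between binding-site positions (the PSSM assumption)."""
    return sum(pssm[i][BASES.index(b)] for i, b in enumerate(site))

def fmm_like_score(site, pssm, pair_features):
    """Sketch of a log-linear extension: add weights for features that
    span multiple positions, e.g. {(0, 1, 'AC'): 1.0} rewards 'A' at
    position 0 together with 'C' at position 1. The feature keys are
    illustrative, not the paper's representation."""
    score = pssm_score(site, pssm)
    for (i, j, pair), w in pair_features.items():
        if site[i] + site[j] == pair:
            score += w
    return score

# Log-odds matrix for a 3-bp site against a uniform (0.25) background
pssm = np.log2(np.array([[.7, .1, .1, .1],
                         [.1, .7, .1, .1],
                         [.1, .1, .7, .1]]) / 0.25)
pairs = {(0, 1, "AC"): 1.0}        # hypothetical multi-position feature
print(pssm_score("ACG", pssm), fmm_like_score("ACG", pssm, pairs))
```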

  8. Robustness of digitally modulated signal features against variation in HF noise model

    Directory of Open Access Journals (Sweden)

    Shoaib Mobien

    2011-01-01

    Full Text Available Abstract The high frequency (HF) band has both military and civilian uses. It can be used either as a primary or backup communication link. Automatic modulation classification (AMC) is of utmost importance in this band for the purpose of communications monitoring; e.g., signal intelligence and spectrum management. A widely used method for AMC is based on pattern recognition (PR). Such a method has two main steps: feature extraction and classification. The first step is generally performed in the presence of channel noise. Recent studies show that HF noise could be modeled by Gaussian or bi-kappa distributions, depending on day-time. Therefore, it is anticipated that a change in the noise model will have an impact on the feature extraction stage. In this article, we investigate the robustness of well-known digitally modulated signal features against variation in HF noise. Specifically, we consider temporal time domain (TTD) features, higher order cumulants (HOC), and wavelet based features. In addition, we propose new features extracted from the constellation diagram and evaluate their robustness against the change in noise model. This study targets 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, and 64QAM modulations, as they are commonly used in HF communications.
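
    One of the feature families mentioned above, higher order cumulants (HOC), can be computed directly from the symbol stream. The sketch below evaluates the classic fourth-order cumulant C40, which separates BPSK from QPSK in the noise-free case; the synthetic symbol streams are illustrative:

```python
import numpy as np

def c40(x):
    """Fourth-order cumulant C40 = E[x^4] - 3*E[x^2]^2 of a zero-mean,
    unit-power complex symbol stream; a classic AMC feature."""
    return np.mean(x**4) - 3 * np.mean(x**2)**2

rng = np.random.default_rng(1)
bpsk = rng.choice([-1.0, 1.0], size=20000).astype(complex)
qpsk = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=20000) / np.sqrt(2)
# Theoretical values for unit power: C40 = -2 for BPSK, -1 for QPSK.
print(round(abs(c40(bpsk)), 2), round(abs(c40(qpsk)), 2))  # prints: 2.0 1.0
```

    Channel noise shifts these estimates, which is exactly why the abstract studies how robust such features remain under Gaussian versus bi-kappa HF noise models.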

  9. GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis

    Science.gov (United States)

    Ma, Jiaxin; Xu, Feiyun; Huang, Kai; Huang, Ren

    2017-09-01

    Given its simplicity of modeling and sensitivity to condition variations, the time series model is widely used in feature extraction to realize fault classification and diagnosis. However, the nonlinear and nonstationary characteristics common in fault signals of rolling bearings bring challenges to diagnosis. In this paper, a hybrid model, the combination of a general expression for linear and nonlinear autoregressive (GNAR) model and a generalized autoregressive conditional heteroscedasticity (GARCH) model, (i.e., GNAR-GARCH), is proposed and applied to rolling bearing fault diagnosis. An exact expression of the GNAR-GARCH model is given. The maximum likelihood method is used for parameter estimation and a modified Akaike Information Criterion is adopted for structure identification of the GNAR-GARCH model. The main advantage of this novel model over other models is that the combination makes the model suitable for nonlinear and nonstationary signals. It is verified with statistical tests that contain comparisons among the different time series models. Finally, the GNAR-GARCH model is applied to fault diagnosis by modeling mechanical vibration signals including simulation and real data. With the parameters estimated and taken as feature vectors, the k-nearest neighbor algorithm is utilized to realize the classification of fault status. The results show that the GNAR-GARCH model exhibits higher accuracy and better performance than do other models.
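
    The general recipe here (fit a time series model, use its estimated parameters as a feature vector, classify with k-nearest neighbours) can be sketched with a plain AR(2) model standing in for the far richer GNAR-GARCH model; all signals, coefficients, and labels below are synthetic illustrations:

```python
import numpy as np

def ar2_features(x):
    """Least-squares AR(2) coefficients used as a feature vector;
    a simplified stand-in for the GNAR-GARCH parameter features."""
    X = np.column_stack([x[1:-1], x[:-2]])   # lagged regressors
    y = x[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(2)
def simulate(a1, a2, n=2000):
    """Generate an AR(2) process with small Gaussian innovations."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.normal(scale=0.1)
    return x

healthy = ar2_features(simulate(0.6, -0.2))     # reference signatures
faulty  = ar2_features(simulate(0.2,  0.5))
test    = ar2_features(simulate(0.58, -0.21))   # unknown condition
# 1-nearest-neighbour classification on the AR coefficient vectors
label = min([("healthy", healthy), ("faulty", faulty)],
            key=lambda c: np.linalg.norm(test - c[1]))[0]
print(label)  # prints: healthy
```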

  10. Antimicrobial Nanoplexes meet Model Bacterial Membranes: the key role of Cardiolipin

    Science.gov (United States)

    Marín-Menéndez, Alejandro; Montis, Costanza; Díaz-Calvo, Teresa; Carta, Davide; Hatzixanthis, Kostas; Morris, Christopher J.; McArthur, Michael; Berti, Debora

    2017-01-01

    Antimicrobial resistance to traditional antibiotics is a crucial challenge of medical research. Oligonucleotide therapeutics, such as antisense or Transcription Factor Decoys (TFDs), have the potential to circumvent current resistance mechanisms by acting on novel targets. However, their full translation into clinical application requires efficient delivery strategies and fundamental comprehension of their interaction with target bacterial cells. To address these points, we employed a novel cationic bolaamphiphile that binds TFDs with high affinity to form self-assembled complexes (nanoplexes). Confocal microscopy revealed that nanoplexes efficiently transfect bacterial cells, consistent with biological efficacy in animal models. To understand the factors affecting the delivery process, liposomes with varying compositions, taken as model synthetic bilayers, were challenged with nanoplexes and investigated with scattering and fluorescence techniques. Thanks to the combination of results on bacteria and synthetic membrane models, we demonstrate for the first time that the prokaryotic-enriched anionic lipid Cardiolipin (CL) plays a key role in TFD delivery to bacteria. Moreover, we can hypothesize an overall TFD delivery mechanism, where bacterial membrane reorganization with permeability increase and release of the TFD from the nanoplexes are the main factors. These results will be of great benefit to boost the development of oligonucleotide-based antimicrobials of superior efficacy.

  11. Transition and the community college: a Career Keys model for students with disabilities.

    Science.gov (United States)

    Roessler, Richard T.; Brown, Patricia L.

    2000-01-01

    Transition models are needed that address multiple phases in the postsecondary education of students with disabilities. These models must first address the recruitment of high school students with disabilities for community colleges through career exploration experiences that help students clarify their educational and vocational interests and relate those interests to a two-year postsecondary program. Students with disabilities then need a comprehensive service program while attending community college to help them identify accommodation needs in classroom and workplace environments and develop the skills to request such accommodations from their instructors and employers. With this skill base, they are well prepared to initiate the next transition in their lives, that is, the movement from the community college to a four-year educational institution or to employment. Programs are needed to facilitate this transition, such as a placement planning seminar involving rehabilitation professionals and employers and an accommodation follow-up assessment with students in their new educational and employment settings. The "Career Keys" model describes how to deliver the services needed in each of these critical transition phases.

  12. Feature selection, statistical modeling and its applications to universal JPEG steganalyzer

    Energy Technology Data Exchange (ETDEWEB)

    Jalan, Jaikishan [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Steganalysis deals with identifying the instances of medium(s) which carry a message for communication by concealing their existence. This research focuses on steganalysis of JPEG images, because of their ubiquitous nature and low bandwidth requirement for storage and transmission. JPEG image steganalysis is generally addressed by representing an image with lower-dimensional features such as statistical properties, and then training a classifier on the feature set to differentiate between an innocent and stego image. Our approach is two fold: first, we propose a new feature reduction technique by applying Mahalanobis distance to rank the features for steganalysis. Many successful steganalysis algorithms use a large number of features relative to the size of the training set and suffer from a "curse of dimensionality": a large number of feature values relative to the training data size. We apply this technique to the state-of-the-art steganalyzer proposed by Tomás Pevný (54) to understand the feature space complexity and the effectiveness of features for steganalysis. We show that using our approach, reduced-feature steganalyzers can be obtained that perform as well as the original steganalyzer. Based on our experimental observation, we then propose a new modeling technique for steganalysis by developing a Partially Ordered Markov Model (POMM) (23) for JPEG images and use its properties to train a Support Vector Machine. POMM generalizes the concept of local neighborhood directionality by using a partial order underlying the pixel locations. We show that the proposed steganalyzer outperforms a state-of-the-art steganalyzer by testing our approach with many different image databases, having a total of 20000 images. Finally, we provide a software package with a Graphical User Interface that has been developed to make this research accessible to local state forensic departments.
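
    The feature-ranking idea can be sketched with a diagonal-covariance simplification: score each feature by the squared distance between class means scaled by pooled variance, then keep the top-ranked features. This illustrates the ranking principle on synthetic data; it is not the thesis' actual Mahalanobis computation over the full covariance matrix:

```python
import numpy as np

def rank_features(X, y):
    """Rank features by a per-feature Mahalanobis-style separation:
    squared distance between the two class means, scaled by the
    pooled variance. Diagonal-covariance sketch for illustration."""
    A, B = X[y == 0], X[y == 1]
    pooled = (A.var(axis=0) + B.var(axis=0)) / 2 + 1e-12
    score = (A.mean(axis=0) - B.mean(axis=0))**2 / pooled
    return np.argsort(score)[::-1]           # best-separating first

rng = np.random.default_rng(3)
n = 200
y = np.repeat([0, 1], n)                     # cover vs. stego labels
X = rng.normal(size=(2 * n, 4))              # 4 candidate features
X[y == 1, 2] += 2.0                          # only feature 2 is informative
print(rank_features(X, y)[0])                # prints: 2
```

    A reduced-feature classifier would then be trained only on the top-ranked columns, which is how the dimensionality problem described above is mitigated.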

  13. Feature Compensation Employing Multiple Environmental Models for Robust In-Vehicle Speech Recognition

    Science.gov (United States)

    Kim, Wooil; Hansen, John H. L.

    An effective feature compensation method is developed for reliable speech recognition in real-life in-vehicle environments. The CU-Move corpus, used for evaluation, contains a range of speech and noise signals collected for a number of speakers under actual driving conditions. PCGMM-based feature compensation, considered in this paper, utilizes parallel model combination to generate a noise-corrupted speech model by combining the clean speech and noise models. In order to address unknown time-varying background noise, an interpolation method over multiple environmental models is employed. To alleviate the computational expense due to multiple models, an Environment Transition Model is employed, motivated by the Noise Language Model used in Environmental Sniffing. An environment dependent scheme of the mixture sharing technique is proposed and shown to be more effective in reducing the computational complexity. A smaller environmental model set is determined by the environment transition model for mixture sharing. The proposed scheme is evaluated on the connected single digits portion of the CU-Move database using the Aurora2 evaluation toolkit. Experimental results indicate that our feature compensation method is effective for improving speech recognition in real-life in-vehicle conditions. A reduction of 73.10% of the computational requirements was obtained by employing the environment dependent mixture sharing scheme with only a slight change in recognition performance. This demonstrates that the proposed method is effective in maintaining the distinctive characteristics among the different environmental models, even when selecting a large number of Gaussian components for mixture sharing.

  14. Cadmium-induced immune abnormality is a key pathogenic event in human and rat models of preeclampsia.

    Science.gov (United States)

    Zhang, Qiong; Huang, Yinping; Zhang, Keke; Huang, Yanjun; Yan, Yan; Wang, Fan; Wu, Jie; Wang, Xiao; Xu, Zhangye; Chen, Yongtao; Cheng, Xue; Li, Yong; Jiao, Jinyu; Ye, Duyun

    2016-11-01

    With increased industrial development, cadmium is an increasingly important environmental pollutant. Studies have identified various adverse effects of cadmium on human beings. However, the relationships between cadmium pollution and the pathogenesis of preeclampsia remain elusive. The objective of this study is to explore the effects of cadmium on the immune system in preeclamptic patients and rats. The results showed that the cadmium levels in the peripheral blood of preeclamptic patients were significantly higher than those observed in normal pregnancy. Based on this finding, a novel rat model of preeclampsia was established by the intraperitoneal administration of cadmium chloride (CdCl2) (0.125 mg of Cd/kg body weight) on gestational days 9-14. Key features of preeclampsia, including hypertension, proteinuria, placental abnormalities and small foetal size, appeared in pregnant rats after the administration of low-dose CdCl2. Cadmium increased immunoglobulin production, mainly angiotensin II type 1-receptor-agonistic autoantibodies (AT1-AA), by increasing the expression of activation-induced cytosine deaminase (AID) in B cells. AID is critical for the maturation of antibody and autoantibody responses. In addition, AT1-AA, which emerged recently as a potential pathogenic contributor to preeclampsia, was responsible for the deposition of complement component 5 (C5) in kidneys of pregnant rats via angiotensin II type 1 receptor (AT1R) activation. C5a is a fragment of C5 that is released during C5 activation. Selectively interfering with C5a signalling by a complement C5a receptor-specific antagonist significantly attenuated hypertension and proteinuria in Cd-injected pregnant rats. Our results suggest that cadmium induces immune abnormalities that may be a key pathogenic contributor to preeclampsia and provide new insights into treatment strategies of preeclampsia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model of ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various kinds of trade-off for the basic plasma parameters. Effective methods for extending the operation space are also presented. As shown by this study for the EAST operation space, it is evident that the C-S-D model is a useful tool to understand qualitatively the overall features of the plasma operation space. (author)

  16. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in OXY space based on the reference length. The x and y coordinates of the feature point are then changed according to the changed vector length that has been embedded with a watermark. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm improves on conventional approaches, and its accuracy is much higher than that of previous methods.
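
    The embedding step (changing a feature point's vector length in the XY plane to carry a bit) can be sketched as a quantisation scheme; the step size and the even/odd convention are illustrative assumptions rather than the paper's actual reference-length method:

```python
import math

def embed_bit(x, y, bit, step=0.01):
    """Embed one watermark bit into the XY vector length of a feature
    point: snap the length to an even (bit 0) or odd (bit 1) multiple
    of step, then rescale x and y accordingly."""
    r = math.hypot(x, y)
    q = round(r / step)
    if q % 2 != bit:
        q += 1                       # move to the nearest matching parity
    scale = (q * step) / r
    return x * scale, y * scale

def extract_bit(x, y, step=0.01):
    """Recover the bit from the parity of the quantised vector length."""
    return round(math.hypot(x, y) / step) % 2

x, y = embed_bit(12.345, 7.89, 1)
print(extract_bit(x, y))   # prints: 1
```

    Because the length perturbation is at most one quantisation step, the change to the feature point stays small, which is the property behind the imperceptibility claim above.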

  17. A parametric texture model based on deep convolutional features closely matches texture appearance for humans.

    Science.gov (United States)

    Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias

    2017-10-01

    Our visual environment is full of texture-"stuff" like cloth, bark, or gravel as distinct from "things" like dresses, trees, or paths-and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.

  18. Optimization of a 3D Dynamic Culturing System for In Vitro Modeling of Frontotemporal Neurodegeneration-Relevant Pathologic Features.

    Science.gov (United States)

    Tunesi, Marta; Fusco, Federica; Fiordaliso, Fabio; Corbelli, Alessandro; Biella, Gloria; Raimondi, Manuela T

    2016-01-01

    Frontotemporal lobar degeneration (FTLD) is a severe neurodegenerative disorder that is diagnosed with increasing frequency in the clinical setting. Currently, no therapy is available and, in addition, the molecular basis of the disease is far from being elucidated. Consequently, it is of pivotal importance to develop reliable and cost-effective in vitro models for basic research purposes and drug screening. In this respect, recent results in the field of Alzheimer's disease have suggested that a tridimensional (3D) environment is an added value to better model key pathologic features of the disease. Here, we have tried to add complexity to the 3D cell culturing concept by using a microfluidic bioreactor, where cells are cultured under a continuous flow of medium, thus mimicking the interstitial fluid movement that actually perfuses the body tissues, including the brain. We have implemented this model using a neuronal-like cell line (SH-SY5Y), a widely exploited cell model for neurodegenerative disorders that shows some basic features relevant for FTLD modeling, such as the release of the FTLD-related protein progranulin (PRGN) in specific vesicles (exosomes). We have efficiently seeded the cells on 3D scaffolds, optimized a disease-relevant oxidative stress experiment (by targeting mitochondrial function, one of the possible FTLD-involved pathological mechanisms) and evaluated cell metabolic activity in dynamic culture in comparison to static conditions, finding that SH-SY5Y cells cultured in a 3D scaffold are susceptible to the oxidative damage triggered by a mitochondrial-targeting toxin (6-OHDA) and that the same cells cultured in dynamic conditions kept their basic capacity to secrete PRGN in exosomes once recovered from the bioreactor and plated in standard 2D conditions. 
We think that a further improvement of our microfluidic system may help in providing a full device where assessing basic FTLD-related features (including PRGN dynamic secretion) that may be

  19. Test and lower bound modeling of keyed shear connections in RC shear walls

    DEFF Research Database (Denmark)

    Sørensen, Jesper Harrild; Herfelt, Morten Andersen; Hoang, Linh Cao

    2018-01-01

    This paper presents an investigation into the ultimate behavior of a recently developed design for keyed shear connections. The influence of the key depth on the failure mode and ductility of the connection has been studied by push-off tests. The tests showed that connections with larger key inde...

  20. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the "relevant features" they produce are attracting attention from the community. In this article, we present an empirical study on the relevant features produced by two recently developed discriminative learning algorithms: neighborhood approximation forests (NAF) and the relevance voxel machine (RVoxM). Specifically, we examine whether the sets of features these methods produce are exhaustive; that is, whether the features that are not marked as relevant carry disease-related information. We perform experiments on three different problems: image-based regression on a synthetic dataset for which the set of relevant features is known, regression of subject age as well...

  1. Deep Learning Based Regression and Multiclass Models for Acute Oral Toxicity Prediction with Automatic Chemical Feature Extraction.

    Science.gov (United States)

    Xu, Youjun; Pei, Jianfeng; Lai, Luhua

    2017-11-27

    Median lethal dose, LD50, is a general indicator of compound acute oral toxicity (AOT). Various in silico methods were developed for AOT prediction to reduce costs and time. In this study, we developed an improved molecular graph encoding convolutional neural networks (MGE-CNN) architecture to construct three types of high-quality AOT models: a regression model (deepAOT-R), a multiclassification model (deepAOT-C), and a multitask model (deepAOT-CR). These predictive models highly outperformed previously reported models. For the two external data sets containing 1673 (test set I) and 375 (test set II) compounds, the R2 and mean absolute errors (MAEs) of deepAOT-R on test set I were 0.864 and 0.195, and the prediction accuracies of deepAOT-C were 95.5% and 96.3% on test sets I and II, respectively. The two external prediction accuracies of deepAOT-CR are 95.0% and 94.1%, while the R2 and MAE are 0.861 and 0.204 for test set I, respectively. We then performed forward and backward exploration of deepAOT models for deep fingerprints, which could support shallow machine learning methods more efficiently than traditional fingerprints or descriptors. We further performed automatic feature learning, a key essence of deep learning, to map the corresponding activation values into fragment space and derive AOT-related chemical substructures by reverse mining of the features. Our deep learning architecture for AOT is generally applicable in predicting and exploring other toxicity or property end points of chemical compounds. The two deepAOT models are freely available at http://repharma.pku.edu.cn/DLAOT/DLAOThome.php or http://www.pkumdl.cn/DLAOT/DLAOThome.php.

  2. A new discrete dynamic model of ABA-induced stomatal closure predicts key feedback loops.

    Directory of Open Access Journals (Sweden)

    Réka Albert

    2017-09-01

    Full Text Available Stomata, microscopic pores in leaf surfaces through which water loss and carbon dioxide uptake occur, are closed in response to drought by the phytohormone abscisic acid (ABA). This process is vital for drought tolerance and has been the topic of extensive experimental investigation in the last decades. Although a core signaling chain has been elucidated, consisting of ABA binding to receptors, which alleviates negative regulation by protein phosphatases 2C (PP2Cs) of the protein kinase OPEN STOMATA 1 (OST1) and ultimately results in activation of anion channels, osmotic water loss, and stomatal closure, over 70 additional components have been identified, yet their relationships with each other and the core components are poorly elucidated. We integrated and processed hundreds of disparate observations regarding ABA signal transduction responses underlying stomatal closure into a network of 84 nodes and 156 edges and, as a result, established those relationships, including identification of a 36-node, strongly connected (feedback-rich) component as well as its in- and out-components. The network's domination by a feedback-rich component may reflect a general feature of rapid signaling events. We developed a discrete dynamic model of this network and elucidated the effects of ABA plus knockout or constitutive activity of 79 nodes on both the outcome of the system (closure) and the status of all internal nodes. The model, with more than 10^24 system states, is far from fully determined by the available data, yet model results agree with existing experiments in 82 cases and disagree in only 17 cases, a validation rate of roughly 83%. Our results reveal nodes that could be engineered to impact stomatal closure in a controlled fashion and also provide over 140 novel predictions for which experimental data are currently lacking. Noting the paucity of wet-bench data regarding combinatorial effects of ABA and internal node activation, we experimentally confirmed
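    For intuition, the core signaling backbone described above can be written as a tiny synchronous Boolean model. The sketch below is a drastic four-rule simplification assumed for illustration (it is not the paper's 84-node network): ABA inhibits PP2C, PP2C inhibits OST1, OST1 activates anion channels, and anion efflux drives closure.

```python
# Toy synchronous Boolean sketch of the core ABA -> closure chain.
# Node names follow the abstract; the rules are an illustrative
# simplification, not the published 84-node model.

RULES = {
    "PP2C":    lambda s: not s["ABA"],   # ABA alleviates PP2C activity
    "OST1":    lambda s: not s["PP2C"],  # PP2C negatively regulates OST1
    "Anion":   lambda s: s["OST1"],      # OST1 activates anion channels
    "Closure": lambda s: s["Anion"],     # anion efflux -> stomatal closure
}

def simulate(aba, steps=10):
    """Iterate synchronous updates until a fixed point (or step limit)."""
    state = {"ABA": aba, "PP2C": True, "OST1": False,
             "Anion": False, "Closure": False}
    for _ in range(steps):
        new = dict(state)
        for node, rule in RULES.items():
            new[node] = rule(state)
        if new == state:          # absorbing state reached
            break
        state = new
    return state

print(simulate(True)["Closure"])   # ABA present -> True (closed)
print(simulate(False)["Closure"])  # no ABA -> False (stays open)
```

    In the full model, knockouts and constitutive activity are simulated the same way: fix a node's value and re-run the update scheme.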

  3. Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies (Final Report)

    Science.gov (United States)

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...

  4. The Importance of Representing Certain Key Vegetation Canopy Processes Explicitly in a Land Surface Model

    Science.gov (United States)

    Napoly, A.; Boone, A. A.; Martin, E.; Samuelsson, P.

    2015-12-01

    Land surface models are moving to more detailed vegetation canopy descriptions in order to better represent certain key processes, such as carbon dynamics and snowpack evolution. Since such models are usually applied within coupled numerical weather prediction or spatially distributed hydrological models, these improvements must strike a balance between computational cost and complexity. The consequences of simplified or composite canopy approaches can be manifested in terms of increased errors with respect to soil temperatures, estimates of the diurnal cycle of the turbulent fluxes, or snow canopy interception and melt. Vegetated areas, and particularly forests, are modeled in a quite simplified manner in the ISBA land surface model. However, continuous developments of surface processes now require a more accurate description of the canopy. A new version of the model now includes a multi energy balance (MEB) option to explicitly represent the canopy and the forest floor. It will be shown that certain newly included processes, such as the shading effect of the vegetation, the explicit heat capacity of the canopy, and the insulating effect of the forest floor, turn out to be essential. A detailed study has been done for four French forested sites. It was found that the MEB option significantly improves the ground heat flux (RMSE decreased from 50 W/m2 to 10 W/m2 on average) and soil temperatures when compared against measurements. The sensible heat flux calculation was also improved, primarily owing to better phasing with the solar insolation due to the lower vegetation heat capacity. However, the total latent heat flux is less modified compared to the classical ISBA simulation, since it is more related to water uptake and the formulation of the stomatal resistance (which are unchanged). Next, a benchmark over 40 Fluxnet sites (116 cumulated years) was performed and compared with results from the default composite soil-vegetation version of ISBA. The results show

  5. Feature selection approaches for predictive modelling of groundwater nitrate pollution: An evaluation of filters, embedded and wrapper methods.

    Science.gov (United States)

    Rodriguez-Galiano, V F; Luque-Espinar, J A; Chica-Olmo, M; Mendes, M P

    2018-05-15

    Recognising the various sources of nitrate pollution and understanding system dynamics are fundamental to tackling groundwater quality problems. A comprehensive GIS database of twenty parameters regarding hydrogeological and hydrological features and driving forces was used as input for predictive models of nitrate pollution. Additionally, key variables extracted from remotely sensed Normalised Difference Vegetation Index (NDVI) time-series were included in the database to provide indications of agroecosystem dynamics. Many approaches can be used to evaluate feature importance related to groundwater pollution caused by nitrates. Filters, wrappers and embedded methods are used to rank feature importance according to the probability of occurrence of nitrates above a threshold value in groundwater. Machine learning algorithms (MLAs) such as Classification and Regression Trees (CART), Random Forest (RF) and Support Vector Machines (SVM) are used as wrappers considering four different sequential search approaches: sequential backward selection (SBS), sequential forward selection (SFS), sequential forward floating selection (SFFS) and sequential backward floating selection (SBFS). Feature importance obtained from RF and CART was used as an embedded approach. RF with SFFS had the best performance (mmce=0.12 and AUC=0.92) and good interpretability; three features related to groundwater-polluted areas were selected: i) industries and facilities rated according to their production capacity and total nitrogen emissions to water within a 3 km buffer; ii) livestock farms rated by manure production within a 5 km buffer; and iii) cumulated NDVI for the post-maximum month, used as a proxy of vegetation productivity and crop yield. Copyright © 2017 Elsevier B.V. All rights reserved.
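    The wrapper idea above (a search strategy repeatedly re-scoring feature subsets with a learner) can be sketched minimally. In the study the wrapper was an MLA such as RF scored by mmce/AUC; here a toy additive scorer stands in for the fitted model, and the feature names are illustrative stand-ins for the study's variables.

```python
# Minimal wrapper-style sequential forward selection (SFS) sketch.
# `score` stands in for "fit the model on this subset and evaluate it".

def sfs(features, score, k):
    """Greedily add, at each round, the feature that most improves the score."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy scorer: each candidate feature contributes a fixed amount of "signal".
WEIGHTS = {"industry_N": 0.5, "livestock": 0.3, "ndvi_post_max": 0.15,
           "elevation": 0.02, "rainfall": 0.03}

def toy_score(subset):
    return sum(WEIGHTS[f] for f in subset)

print(sfs(list(WEIGHTS), toy_score, 3))
# -> ['industry_N', 'livestock', 'ndvi_post_max']
```

    The floating variants (SFFS/SBFS) add a backtracking step after each inclusion, dropping previously selected features if that improves the score.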

  6. A Model for the Growth of Localized Shell Features in Inertial Confinement Fusion Implosions

    Science.gov (United States)

    Goncharov, V. N.

    2017-10-01

    Engineering features and target debris on inertial confinement fusion capsules play a detrimental role in target performance. The contact points of such features with the target surface, as well as shadowing effects, produce localized shell nonuniformities that grow in time because of the Rayleigh-Taylor instability that develops during shell acceleration. Such growth leads to significant mass modulation in the shell and injection of ablator and cold fuel material into the target vapor region. These effects are commonly modeled using 2-D and 3-D hydrodynamic codes that take into account multiple physics effects. Such simulations, however, are very challenging, since in many cases they are inherently three dimensional (as in the case of fill tube or stalk shadowing) and require very high grid resolution to accurately model short-scale features. To gain physics insight, an analytic model describing the growth of these features has been developed. The model is based on a Layzer-type approach. The talk will discuss the results of the model used to study perturbation growth seeded by localized target debris, glue spots, fill tubes, and stalks. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
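    For a rough sense of scale, the linear-regime growth of such a localized perturbation is often estimated with the classical Rayleigh-Taylor rate γ = √(A·k·g). The sketch below uses this textbook estimate with made-up numbers; it is not the Layzer-type model of the talk, which also handles the nonlinear regime.

```python
# Classical linear Rayleigh-Taylor growth estimate for a short-scale
# surface feature (illustrative parameter values, units noted inline).
import math

def rt_linear_growth(a0_um, wavelength_um, g_um_ns2, atwood, t_ns):
    """Amplitude after time t under exponential linear RT growth."""
    k = 2.0 * math.pi / wavelength_um          # perturbation wavenumber (1/um)
    gamma = math.sqrt(atwood * k * g_um_ns2)   # growth rate (1/ns)
    return a0_um * math.exp(gamma * t_ns)

# 0.1 um seed, 30 um wavelength, 100 um/ns^2 acceleration, Atwood ~ 1, 1 ns:
print(rt_linear_growth(0.1, 30.0, 100.0, 1.0, 1.0))  # amplitude in um
```

    Even this crude estimate shows why localized seeds matter: short wavelengths (large k) grow fastest until nonlinear saturation sets in.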

  7. Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki

    2013-01-01

    Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes

  8. Key parameters of the sediment surface morphodynamics in an estuary - An assessment of model solutions

    Science.gov (United States)

    Sampath, D. M. R.; Boski, T.

    2018-05-01

    Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), the current velocity power of the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion: erosion was especially marked in the mudflats between mean sea level and low-tide level, while accretion occurred mainly in a subtidal channel. The average SSE change depended jointly on the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the intertidal and subtidal compartments of the estuarine system evolve differently according to the dominant process (erosion or accretion). As the Guadiana estuarine system shows dominant erosional behaviour in the context of sea-level rise and sediment supply reduction after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of simulation cells in the studied area. Such a limit of accuracy can be acceptable for an idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost

  9. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    In data sets with many more features than observations, independent screening based on all univariate regression models leads to a computationally convenient variable selection method. Recent efforts have shown that, in the case of generalized linear models, independent screening may suffice to capture all relevant features with high probability, even in ultrahigh dimension. It is unclear whether this formal sure screening property is attainable when the response is a right-censored survival time. We propose a computationally very efficient independent screening method for survival data which ... on performance. An iterative variant of the method is also described which combines screening with penalized regression to handle more complex feature covariance structures. The methodology is evaluated through simulation studies and through application to a real gene expression data set.
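    The basic screening idea (rank features by the strength of their marginal association with the response, keep the top d) can be illustrated without the survival machinery. The sketch below uses absolute marginal covariance as a stand-in association measure; the paper's method is tailored to right-censored single-index hazard models.

```python
# Minimal univariate independence-screening sketch (illustrative statistic).
from statistics import mean

def screen(X, y, d):
    """X: list of feature columns; y: response values.
    Return the indices of the d columns with the strongest
    absolute marginal covariance with y."""
    my = mean(y)
    def assoc(col):
        mc = mean(col)
        return abs(sum((c - mc) * (v - my) for c, v in zip(col, y)))
    ranked = sorted(range(len(X)), key=lambda j: assoc(X[j]), reverse=True)
    return sorted(ranked[:d])

y = [1.0, 2.1, 2.9, 4.2, 5.0]
X = [[1, 2, 3, 4, 5],    # strongly (positively) related to y
     [5, 4, 3, 2, 1],    # strongly (negatively) related to y
     [0, 1, 0, 1, 0]]    # essentially noise
print(screen(X, y, 2))   # -> [0, 1]
```

    The iterative variant mentioned in the abstract would alternate such screening with a penalized fit, re-screening residual signal to catch features masked by correlation.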

  10. Short-Term Solar Irradiance Forecasting Model Based on Artificial Neural Network Using Statistical Feature Parameters

    Directory of Open Access Journals (Sweden)

    Hongshan Zhao

    2012-05-01

    Full Text Available Short-term solar irradiance forecasting (STSIF) is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. Artificial Neural Networks (ANNs) are suitable for STSIF modeling and many research works on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are figured out. A novel ANN model using statistical feature parameters (ANN-SFP) for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and ambient temperature. Thus sufficient information can be effectively extracted from relatively few inputs and the model complexity is reduced. The model structure is determined by cross-validation (CV), and the Levenberg-Marquardt algorithm (LMA) is used for the network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS), and the results indicated that the forecast accuracy is obviously improved under variable weather conditions.
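    The "statistical feature parameters" idea is to compress a raw irradiance series into a few summary inputs before it reaches the network. The sketch below computes three plausible such features; the exact feature set is an assumption for illustration, not the paper's definition.

```python
# Compress an irradiance series into a few statistical feature parameters
# (illustrative choices: level, variability, and sharpest ramp).
from statistics import mean, pstdev

def feature_params(irradiance):
    ramps = [b - a for a, b in zip(irradiance, irradiance[1:])]
    return {
        "mean": mean(irradiance),                 # overall irradiance level
        "std": pstdev(irradiance),                # variability (cloudiness proxy)
        "max_ramp": max(abs(r) for r in ramps),   # sharpest fluctuation
    }

clear_day  = [0, 200, 450, 640, 700, 640, 450, 200, 0]
cloudy_day = [0, 180, 120, 500,  90, 430,  60, 150, 0]
print(feature_params(clear_day)["max_ramp"])   # 250
print(feature_params(cloudy_day)["max_ramp"])  # 410
```

    A small feed-forward network fed with such summaries sees far fewer inputs than one fed the full historical series, which is the conciseness gain claimed over ANN-HDS.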

  11. Finite element modeling of small-scale tapered wood-laminated composite poles with biomimicry features

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; R.C. Tang; Chung Y. Hse

    2008-01-01

    Tapered composite poles with biomimicry features as in bamboo are a new generation of wood laminated composite poles that may some day be considered as an alternative to solid wood poles that are widely used in the transmission and telecommunication fields. Five finite element models were developed with ANSYS to predict and assess the performance of five types of...

  12. FEATURES OF THE SUBSONIC WIND TUNNEL EXPERIMENT WITH ROTATING MODELS OF AIRCRAFT

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available The paper describes the construction of an aircraft model for aerodynamic experiments in a subsonic wind tunnel during model rotation. The experimental technique, and the features associated with rotation, are described. Results of Magnus force measurements are provided and compared with an analytical methodology.

  13. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    Science.gov (United States)

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  14. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    Science.gov (United States)

    Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.

    2015-04-01

    The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
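    A minimal Monte Carlo sketch of the F = q = 2 chain makes the finite-size behaviour easy to probe (finite chain, free boundaries, illustrative parameter values; this is the standard Axelrod update rule, not the paper's full analysis).

```python
# 1-D Axelrod model, F = q = 2: a site interacts with a random neighbor
# with probability equal to their cultural overlap, copying one
# differing feature on success.
import random

def axelrod_1d(n=50, F=2, q=2, sweeps=20000, seed=1):
    """Return the number of domain walls (neighbouring sites whose
    cultures differ) after a fixed number of update attempts."""
    rng = random.Random(seed)
    sites = [[rng.randrange(q) for _ in range(F)] for _ in range(n)]
    for _ in range(sweeps):
        i = rng.randrange(n)
        j = i + rng.choice((-1, 1))
        if not 0 <= j < n:
            continue                      # free boundaries
        differ = [k for k in range(F) if sites[i][k] != sites[j][k]]
        overlap = (F - len(differ)) / F
        if differ and rng.random() < overlap:
            k = rng.choice(differ)
            sites[i][k] = sites[j][k]     # copy one differing feature
    return sum(sites[i] != sites[i + 1] for i in range(n - 1))

print(axelrod_1d())  # domain walls left in one finite-chain realization
```

    Sweeping the chain length n at fixed simulation time, versus letting each chain run to absorption, is exactly the order-of-limits issue the abstract resolves.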

  15. Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data

    Science.gov (United States)

    Abdolghafoorian, A.; Farhadi, L.

    2017-12-01

    Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply the variational data assimilation approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) observations (hereafter the VDA model). The VDA model is focused on the estimation of three key parameters: 1- the neutral bulk heat transfer coefficient (CHN), 2- the evaporative fraction from soil and canopy (EF), and 3- the saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profile of SM (LST) at multiple depths is estimated using the moisture (heat) diffusion equation. The uncertainties of the retrieved parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. Analysis of uncertainty provides valuable information about the accuracy of the estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and

  16. Comparison of prediction-based fusion and feature-level fusion across different learning models

    NARCIS (Netherlands)

    Petridis, Stavros; Bilakhia, Sanjay; Pantic, Maja

    2012-01-01

    There is evidence in neuroscience indicating that prediction of spatial and temporal patterns in the brain plays a key role in perception. This has given rise to prediction-based fusion as a method of combining information from audio and visual modalities. Models are trained on a per-class basis, to

  17. Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition

    National Research Council Canada - National Science Library

    Keller, Martin G

    1999-01-01

    In the fields of computational vision and image understanding, the object recognition problem can be formulated as a problem of matching a collection of model features to features extracted from an observed scene...

  18. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on modeling the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  19. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on modeling the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
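    The "vectorial composition of length error by axis" can be illustrated with a simple per-axis scale-error model: each axis displacement is corrected by its own relative length error before the 3-D length is composed. The coefficients below are hypothetical, not the paper's calibrated parameters.

```python
# Illustrative vectorial composition of per-axis length errors.
import math

def corrected_length(dx, dy, dz, ex, ey, ez):
    """Compose per-axis relative length errors e_i into a corrected
    3-D displacement length (simple scale-error model)."""
    return math.sqrt((dx * (1 + ex)) ** 2 +
                     (dy * (1 + ey)) ** 2 +
                     (dz * (1 + ez)) ** 2)

raw = math.sqrt(100 ** 2 + 40 ** 2 + 30 ** 2)
corr = corrected_length(100.0, 40.0, 30.0, 2e-5, -1e-5, 3e-5)
print(corr - raw)  # net length correction, same units as the inputs
```

    In the paper's scheme, the residual variability that such a model cannot explain is then carried into the uncertainty budget rather than discarded.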

  20. Modelling management process of key drivers for economic sustainability in the modern conditions of economic development

    Directory of Open Access Journals (Sweden)

    Pishchulina E.S.

    2017-01-01

    Full Text Available The text addresses the management of drivers of manufacturing enterprise economic sustainability, and sustainability assessment as a key aspect of that management. These issues are topical because the modern market economy places new requirements on the methods of manufacturing enterprise management. The economic sustainability model considered in the article integrates enterprise economic growth, the economic balance of the external and internal environment, and economic sustainability. The assessment method proposed in the study makes it possible to reveal weaknesses in enterprise performance, as well as untapped reserves that can then be used to improve the economic sustainability and efficiency of the enterprise. The management of manufacturing enterprise economic sustainability is one of the most important factors of business functioning and development in the modern market economy. The relevance of this trend is increasing in line with the objective requirements of growing volumes of production and sales, the increasing complexity of economic relations, and the changing external environment of an enterprise.

  1. iPSC-Based Models to Unravel Key Pathogenetic Processes Underlying Motor Neuron Disease Development

    Directory of Open Access Journals (Sweden)

    Irene Faravelli

    2014-10-01

    Full Text Available Motor neuron diseases (MNDs) are neuromuscular disorders affecting rather exclusively upper motor neurons (UMNs) and/or lower motor neurons (LMNs). The clinical phenotype is characterized by muscular weakness and atrophy leading to paralysis and almost invariably death due to respiratory failure. Adult MNDs include sporadic and familial amyotrophic lateral sclerosis (sALS-fALS), while the most common infantile MND is spinal muscular atrophy (SMA). No effective treatment is currently available for MNDs, as for the vast majority of neurodegenerative disorders, and care is limited to supportive measures and symptom relief. The lack of a deep understanding of MND pathogenesis accounts for the difficulties in finding a cure, together with the scarcity of reliable in vitro models. Recent progress in the stem cell field, in particular the generation of induced pluripotent stem cells (iPSCs), has made it possible for the first time to obtain substantial amounts of human cells to recapitulate in vitro some of the key pathogenetic processes underlying MNDs. In the present review, recently published studies involving the use of iPSCs to unravel aspects of ALS and SMA pathogenesis are discussed, with an overview of their implications in the process of finding a cure for these still orphan disorders.

  2. Analytical template protection performance and maximum key size given a Gaussian-modeled biometric source

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Breebaart, Jeroen; Buhan, I.R.; Veldhuis, Raymond N.J.; Vijaya Kumar, B.V.K.; Prabhakar, Salil; Ross, Arun A.

    2010-01-01

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from or binding a key to a biometric sample. The achieved

  3. Maximum Key Size and Classification Performance of Fuzzy Commitment for Gaussian Modeled Biometric Sources

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Breebaart, J.; Buhan, I.R.; Veldhuis, Raymond N.J.

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from, or binding a key to the binary vector derived from the

  4. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Richard Washington

    2008-11-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
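    The paper uses second-order HMMs (transitions conditioned on the two previous states); the first-order Viterbi decoding below conveys the core idea of detecting a feature such as an open door from a stream of sensor symbols. All states, symbols, and probabilities here are invented for illustration, not taken from the paper.

```python
# First-order Viterbi decoding on a toy two-state robot HMM.
import math

STATES = ["corridor", "open_door"]
START = {"corridor": 0.9, "open_door": 0.1}
TRANS = {"corridor":  {"corridor": 0.8, "open_door": 0.2},
         "open_door": {"corridor": 0.3, "open_door": 0.7}}
# Sensor symbols: 'near' (close wall echo) vs 'far' (deep echo).
EMIT = {"corridor":  {"near": 0.8, "far": 0.2},
        "open_door": {"near": 0.1, "far": 0.9}}

def viterbi(obs):
    """Return the most probable state sequence for the observed symbols."""
    V = [{s: math.log(START[s]) + math.log(EMIT[s][obs[0]]) for s in STATES}]
    path = {s: [s] for s in STATES}
    for o in obs[1:]:
        V.append({})
        newpath = {}
        for s in STATES:
            prev = max(STATES, key=lambda p: V[-2][p] + math.log(TRANS[p][s]))
            V[-1][s] = V[-2][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][o])
            newpath[s] = path[prev] + [s]
        path = newpath
    return path[max(STATES, key=lambda s: V[-1][s])]

print(viterbi(["near", "near", "far", "far", "near"]))
# -> ['corridor', 'corridor', 'open_door', 'open_door', 'corridor']
```

    A second-order model replaces TRANS[p][s] with TRANS[(p2, p1)][s], which captures longer sensor-signal context at the cost of a larger transition table.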

  5. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Olivier Aycard

    2004-12-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.

  6. Facial Feature Tracking Using Efficient Particle Filter and Active Appearance Model

    Directory of Open Access Journals (Sweden)

    Durkhyun Cho

    2014-09-01

    Full Text Available For natural human-robot interaction, the location and shape of facial features in a real environment must be identified. One robust method to track facial features is by using a particle filter and the active appearance model. However, the processing speed of this method is too slow for utilization in practice. In order to improve the efficiency of the method, we propose two ideas: (1) changing the number of particles situationally, and (2) switching the prediction model depending upon the degree of importance of each particle, using a combination strategy and a clustering strategy. Experimental results show that the proposed method is about four times faster than the conventional method using a particle filter and the active appearance model, without any loss of performance.

  7. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI). In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences), and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  8. Features of microscopic pedestrian movement in a panic situation based on cellular automata model

    Science.gov (United States)

    Ibrahim, Najihah; Hassan, Fadratul Hafinaz

    2017-10-01

    Pedestrian movement is one of the key elements of crowd management studied through simulation. During a panic situation, pedestrians typically exhibit microscopic movements that lead towards self-organization, during which behavioural and physical factors cause a mass effect on pedestrian movement. The basic CA model creates a movement path for each pedestrian over a time step; however, because of these emerging factors, the CA model needs enhancement to establish a realistic simulation state. Hence, this concept paper discusses enhanced features of the CA model for microscopic pedestrian movement during a panic situation, towards a better pedestrian simulation.
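
    A minimal version of the basic CA update the record builds on: each pedestrian greedily steps one cell closer to the exit, and blocked moves are simply skipped. This sequential update is only the baseline; the panic-specific behaviours (pushing, friction, herding) the paper discusses are not modelled here.

```python
# One cellular-automaton time step on a character grid:
# "P" = pedestrian, "." = empty floor. Pedestrians are processed in
# row-major order, so a cell vacated earlier in the step can be reused.

def ca_step(grid, exit_pos):
    """Advance every pedestrian ('P') one cell toward exit_pos if free."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    ex, ey = exit_pos
    for x in range(rows):
        for y in range(cols):
            if grid[x][y] != "P":
                continue
            # Greedy neighbour choice: reduce distance to the exit,
            # preferring the row direction when both differ.
            step_x = x + (1 if ex > x else -1 if ex < x else 0)
            step_y = y + (1 if ey > y else -1 if ey < y else 0)
            tx, ty = (step_x, y) if step_x != x else (x, step_y)
            if new[tx][ty] == ".":       # move only into an empty cell
                new[tx][ty], new[x][y] = "P", "."
    return new

grid = [list("P.."),
        list("..."),
        list("..P")]
for row in ca_step(grid, exit_pos=(2, 0)):
    print("".join(row))
```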

  9. The Key Lake project

    International Nuclear Information System (INIS)

    1991-01-01

    Key Lake is located in the Athabasca sandstone basin, 640 kilometers north of Saskatoon, Saskatchewan, Canada. The three sources of ore at Key Lake contain 70,100 tonnes of uranium. Features of the Key Lake Project were described under the key headings: work force, mining, mill process, tailings storage, permanent camp, environmental features, worker health and safety, and economic benefits. Appendices covering the historical background, construction projects, comparisons of western world mines, mining statistics, the Northern Saskatchewan surface lease, and Key Lake development and regulatory agencies were included

  10. AUTOMATED FEATURE BASED TLS DATA REGISTRATION FOR 3D BUILDING MODELING

    Directory of Open Access Journals (Sweden)

    K. Kitamura

    2012-07-01

    Full Text Available In this paper we present a novel method for the registration of point cloud data obtained using a terrestrial laser scanner (TLS). The final goal of our investigation is the automated reconstruction of CAD drawings and the 3D modeling of objects surveyed by TLS. Because objects are scanned from multiple positions, the individual point clouds need to be registered to the same coordinate system. We propose in this paper an automated feature-based registration procedure. Our proposed method does not require the definition of initial values or the placement of targets and is robust against noise and background elements. A feature extraction procedure is performed for each point cloud as pre-processing, and the registration of the point clouds from different viewpoints is then performed by utilizing the extracted features. The feature extraction method which we had developed previously (Kitamura, 2010) is used: planes and edges are extracted from the point cloud. By utilizing these features, the amount of information to process is reduced and the efficiency of the whole registration procedure is increased. In this paper, we describe the proposed algorithm and, in order to demonstrate its effectiveness, we show the results obtained by using real data.
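
    Once corresponding features have been matched between two scans, the rigid transform is typically recovered by a closed-form least-squares fit. The Kabsch/Procrustes sketch below works on matched point pairs and is only a stand-in for the authors' plane- and edge-based procedure.

```python
# Closed-form rigid alignment of two matched (N, 3) point sets.
import numpy as np

def kabsch(src, dst):
    """Return R, t minimizing ||R @ p + t - q|| over matched rows of src, dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate and translate random points, then recover the pose.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))
```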

  11. Dome effect of black carbon and its key influencing factors: a one-dimensional modelling study

    Science.gov (United States)

    Wang, Zilin; Huang, Xin; Ding, Aijun

    2018-02-01

    Black carbon (BC) has been identified to play a critical role in aerosol-planetary boundary layer (PBL) interaction and the further deterioration of near-surface air pollution in megacities, which has been referred to as the dome effect. However, the impacts of key factors that influence this effect, such as the vertical distribution and aging processes of BC, as well as the underlying land surface, have not yet been quantitatively explored. Here, based on available in situ measurements of meteorology and atmospheric aerosols together with the meteorology-chemistry online coupled model WRF-Chem, we conduct a set of parallel simulations to quantify the roles of these factors in influencing the BC dome effect and surface haze pollution, and we discuss the main implications of the results for air pollution mitigation in China. We find that the impact of BC on the PBL is very sensitive to the altitude of the aerosol layer. Upper-level BC, especially that near the capping inversion, is more essential in suppressing the PBL height and weakening turbulent mixing. The dome effect of BC tends to be significantly intensified as BC mixes with scattering aerosols during winter haze events, resulting in a decrease in PBL height of more than 15 %. In addition, the dome effect is more substantial (up to 15 %) in rural areas than in urban areas with the same BC loading, indicating an unexpected regional impact of this effect on air quality in the countryside. This study indicates that China's regional air pollution would greatly benefit from BC emission reductions, especially from elevated sources such as chimneys and from domestic combustion in rural areas, through weakening the aerosol-boundary layer interactions that are triggered by BC.

  12. Solid images for geostructural mapping and key block modeling of rock discontinuities

    Science.gov (United States)

    Assali, Pierre; Grussenmeyer, Pierre; Villemin, Thierry; Pollet, Nicolas; Viguier, Flavien

    2016-04-01

    Rock mass characterization is obviously a key element in rock fall hazard analysis. Managing risk and determining the most adapted reinforcement method require a proper understanding of the considered rock mass. Description of discontinuity sets is therefore a crucial first step in the reinforcement work design process. The on-field survey is then followed by structural modeling in order to extrapolate the data collected at the rock surface to the inner part of the massif. Traditional compass surveys and manual observations can undoubtedly be surpassed by dense 3D data such as LiDAR or photogrammetric point clouds. However, although the acquisition phase is fast and highly automated, managing, handling and exploiting such a great amount of collected data is an arduous task, especially for non-specialist users. In this study, we propose a combined approach using both 3D point clouds (from LiDAR or image matching) and 2D digital images, gathered into the concept of the "solid image". This product connects the advantages of classical true-color 2D digital images, accessibility and interpretability, with the particular strengths of dense 3D point clouds, i.e. geometrical completeness and accuracy. The solid image can be considered the information support for carrying out a digital survey at the surface of the outcrop without being affected by traditional deficiencies (lack of data and sampling difficulties due to inaccessible areas, safety risks in steep sectors, etc.). The computational tools presented in this paper have been implemented in one standalone software package with a graphical user interface helping operators complete a digital geostructural survey and analysis. 3D coordinate extraction, 3D distance and area measurement, planar best-fit for discontinuity orientation, directional roughness profiles, block size estimation, and other tools have been tested on a calcareous quarry in the French Alps.
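
    The planar best-fit mentioned above reduces to a small singular value decomposition. A minimal sketch of fitting a plane to a patch of points, illustrative only and not the software's implementation:

```python
# Least-squares plane fit: the singular vector of the smallest singular
# value of the centered point matrix is the plane normal.
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) array; return (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Four coplanar points in the z = 0 plane.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.0]])
c, n = fit_plane(pts)
print(np.round(np.abs(n), 3))  # normal of the z=0 plane -> [0. 0. 1.]
```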

  13. Linear planimetric feature domains modeling for multisensor fusion in remote sensing

    Science.gov (United States)

    Pigeon, Luc; Solaiman, Basel; Toutin, Thierry; Thomson, Keith P. B.

    2000-04-01

    The availability of multi-sensed data, especially in remote sensing, leads to new possibilities in the area of target recognition. In fact, the information contained in an individual sensor represents only one facet of the reality. The use of several sensors aims at covering different facets of real world objects. In this study, the targets to recognize are the planimetric features (i.e. roads, energy transmission lines, railroads and rivers). The sensors used are visible type satellite sensors (SPOT Panchromatic and Landsat TM) as well as radar satellites (Radarsat fine mode and ERS-1). Sensor resolutions range from 8 to 30 meters/pixel. In this study, the modeling is not limited, as it is generally the case, to the problem feature's reality, but to each sensor that will be used. Moreover, the decision space (here a 3D symbolic map) has to be modeled in the same way as the reality and sensors to lead to a coherent and uniform system. Each model is developed using an object- oriented approach. Each reality-object is defined through its radiometric, geometric and topologic feature. The sensor model objects are defined in accordance to image acquisition and definition, including the stereo image cases (for SPOT and Radarsat). Finally, the decision space objects define the resulting 3D symbolic map where, for instance, a pixel attributes contain classification information as well as position, accuracy, reality object's membership values, etc.

  14. Research on texture feature of RS image based on cloud model

    Science.gov (United States)

    Wang, Zuocheng; Xue, Lixia

    2008-10-01

    This paper presents a new method for texture feature representation in RS images based on the cloud model. Aiming at the fuzziness and randomness of RS images, we introduce cloud theory into RS image processing in a creative way. The digital characteristics of clouds integrate the fuzziness and randomness of linguistic terms in a unified way and map between quantitative and qualitative concepts. We adopt a texture multi-dimensional cloud to handle the vagueness and randomness of texture features in RS images. The method has two steps: 1) Correlation analysis of the texture statistical parameters of the Grey Level Co-occurrence Matrix (GLCM) and parameter fuzzification. The GLCM can represent many aspects of texture well. According to the expressive power of the texture statistical parameters, and by analyzing their correlation, we abstract the few parameters that best represent the texture feature. By the fuzzification algorithm, these parameters can be mapped to a fuzzy cloud space. 2) Construction of the texture multi-dimensional cloud model. Based on the abstracted texture statistical parameters and the fuzzy cloud space, a texture multi-dimensional cloud model can be constructed in micro-windows of the image. According to the membership of the texture statistical parameters, we obtain the cloud-drop samples. By the backward cloud generator, the digital characteristics of the texture multi-dimensional cloud model can be obtained and the Mathematical Expected Hyper Surface (MEHS) of the multi-dimensional cloud of micro-windows can be constructed. Finally, the weighted sum of the three digital characteristics of the micro-window cloud model is proposed and used for texture representation in RS images. The method is demonstrated by applying it to texture representation in many RS images; various performance studies testify that the method is both efficient and effective. It enriches the cloud
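
    To make step 1 concrete, here is a hand-rolled GLCM for a single pixel offset together with the contrast statistic, one of the texture parameters commonly drawn from the matrix. Libraries such as scikit-image provide the same computation; the toy image is an invented example.

```python
# Grey-level co-occurrence matrix for one offset, plus GLCM contrast.
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized co-occurrence matrix for pixel pairs at offset (dy, dx)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()   # sum_{i,j} (i-j)^2 P(i,j)
print(round(contrast, 3))             # 0.583
```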

  15. Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.

    Directory of Open Access Journals (Sweden)

    Kyriaki Sidiropoulou

    Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC), which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type-specific differences in persistent activity properties, by implementing a regular spiking (RS) and an intrinsic bursting (IB) model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP) latency and the first few inter-spike intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity, were distinct. Collectively, our results pinpoint specific features of the neuronal response to a given

  16. Laboratory infrastructure driven key performance indicator development using the smart grid architecture model

    DEFF Research Database (Denmark)

    Syed, Mazheruddin H.; Guillo-Sansano, Efren; Blair, Steven M.

    2017-01-01

    This study presents a methodology for collaboratively designing laboratory experiments and developing key performance indicators for the testing and validation of novel power system control architectures in multiple laboratory environments. The contribution makes use of the smart grid architecture...

  17. The bearing capacity experimental determination of the keyed joints models in the transport construction

    Directory of Open Access Journals (Sweden)

    Dovzhenko Oksana

    2017-01-01

    Full Text Available Joints ensure the joint performance of load-carrying structural systems and are among their most critical elements. Keyed joints are widely used in construction and are characterized by an increased resistance to shear; on these grounds, the structural concepts of keyed joints need further improvement. The article presents the results of tests on five series of experimental specimens in the form of single keys and one-keyed joints, carried out at Poltava National Technical Yuriy Kondratyuk University. The following strength factors were varied: the geometric parameters of the joints (depth, height, width and their ratios); the angle of the support surface (rectangular, trapezoidal and triangular keys); the level of compression; the reinforcement (the amount of reinforcement and the nature of its placement); and the joint width. The samples were made of heavy-weight concrete, expanded-clay concrete and fibre concrete. The experimental programme included studying the influence both of each of these factors alone and of their combinations. The deformations, the nature of failure and the ultimate load were studied. Structural parameters of keyed joints which ensure efficient behaviour have been established.

  18. AN AUTOMATIC FEATURE BASED MODEL FOR CELL SEGMENTATION FROM CONFOCAL MICROSCOPY VOLUMES

    OpenAIRE

    Delibaltov, Diana; Ghosh, Pratim; Veeman, Michael; Smith, William; Manjunath, B.S.

    2011-01-01

    We present a model for the automated segmentation of cells from confocal microscopy volumes of biological samples. The segmentation task for these images is exceptionally challenging due to weak boundaries and varying intensity during the imaging process. To tackle this, a two step pruning process based on the Fast Marching Method is first applied to obtain an over-segmented image. This is followed by a merging step based on an effective feature representation. The algorithm is applied on two...

  19. Elysium region, mars: Tests of lithospheric loading models for the formation of tectonic features

    International Nuclear Information System (INIS)

    Hall, J.L.; Solomon, S.C.; Head, J.W.

    1986-01-01

    The second largest volcanic province on Mars lies in the Elysium region. Like the larger Tharsis province, Elysium is marked by a topographic rise and a broad free air gravity anomaly and also exhibits a complex assortment of tectonic and volcanic features. We test the hypothesis that the tectonic features in the Elysium region are the product of stresses produced by loading of the Martian lithosphere. We consider loading at three different scales: local loading by individual volcanoes, regional loading of the lithosphere from above or below, and quasi-global loading by Tharsis. A comparison of flexural stresses with lithospheric strength and with the inferred maximum depth of faulting confirms that concentric graben around Elysium Mons can be explained as resulting from local flexure of an elastic lithosphere about 50 km thick in response to the volcano load. Volcanic loading on a regional scale, however, leads to predicted stresses inconsistent with all observed tectonic features, suggesting that loading by widespread emplacement of thick plains deposits was not an important factor in the tectonic evolution of the Elysium region. A number of linear extensional features oriented generally NW-SE may have been the result of flexural uplift of the lithosphere on the scale of the Elysium rise. The global stress field associated with the support of the Tharsis rise appears to have influenced the development of many of the tectonic features in the Elysium region, including Cerberus Rupes and the systems of ridges in eastern and western Elysium. The comparisons of stress models for Elysium with the preserved tectonic features support a succession of stress fields operating at different times in the region

  20. Generic feature of future crossing of phantom divide in viable f(R) gravity models

    International Nuclear Information System (INIS)

    Bamba, Kazuharu; Geng, Chao-Qiang; Lee, Chung-Chi

    2010-01-01

    We study the equation of state for dark energy and explicitly demonstrate that future crossings of the phantom divide line w_DE = -1 are a generic feature of the existing viable f(R) gravity models. We also explore the future evolution of the cosmological horizon entropy and illustrate that it oscillates with time due to the oscillatory behavior of the Hubble parameter. The important cosmological consequence is that in the future, the sign of the time derivative of the Hubble parameter changes from negative to positive in these viable f(R) gravity models

  1. Variable domain N-linked glycosylation and negative surface charge are key features of monoclonal ACPA: implications for B-cell selection.

    Science.gov (United States)

    Lloyd, Katy A; Steen, Johanna; Amara, Khaled; Titcombe, Philip J; Israelsson, Lena; Lundström, Susanna L; Zhou, Diana; Zubarev, Roman A; Reed, Evan; Piccoli, Luca; Gabay, Cem; Lanzavecchia, Antonio; Baeten, Dominique; Lundberg, Karin; Mueller, Daniel L; Klareskog, Lars; Malmström, Vivianne; Grönwall, Caroline

    2018-03-07

    Autoreactive B cells have a central role in the pathogenesis of rheumatoid arthritis (RA), and recent findings have proposed that anti-citrullinated protein autoantibodies (ACPA) may be directly pathogenic. Herein, we demonstrate the frequency of variable-region glycosylation in single-cell cloned mAbs. A total of 14 ACPA mAbs were evaluated for predicted N-linked glycosylation motifs in silico and compared to 452 highly mutated mAbs from RA patients and controls. Variable-region N-linked motifs (N-X-S/T) were strikingly prevalent within ACPA (100%) compared to somatically hypermutated (SHM) RA bone marrow plasma cells (21%) and synovial plasma cells from seropositive (39%) and seronegative RA (7%). When normalized for SHM, ACPA still had a significantly higher frequency of N-linked motifs compared to all studied mAbs, including highly mutated HIV broadly-neutralizing and malaria-associated mAbs. The Fab glycans of ACPA mAbs were highly sialylated and contributed to altered charge, but did not influence antigen binding. The analysis revealed evidence of unusual B-cell selection pressure and an SHM-mediated decrease in surface charge and isoelectric point in ACPA. It is still unknown how these distinct features of anti-citrulline immunity may have an impact on pathogenesis. However, it is evident that they offer selective advantages for ACPA+ B cells, possibly also through non-antigen-driven mechanisms. This article is protected by copyright. All rights reserved.
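
    The in-silico motif prediction described above amounts to scanning sequences for the N-X-S/T sequon with X ≠ P. A minimal overlapping-match scan; the example sequence is invented:

```python
# Find N-linked glycosylation sequons (N-X-S/T, X != P) in a protein
# sequence. The lookahead makes overlapping motifs visible.
import re

SEQON = re.compile(r"(?=(N[^P][ST]))")

def find_sequons(seq):
    """Return (position, motif) pairs with 1-based positions of each N."""
    return [(m.start() + 1, m.group(1)) for m in SEQON.finditer(seq)]

print(find_sequons("QVNLTGSNPSNCS"))  # [(3, 'NLT'), (11, 'NCS')]
```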

  2. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres—Focus on Feature Selection

    Science.gov (United States)

    Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205
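
    For reference, an NRMSE figure such as the one quoted above is typically computed as the root-mean-square error normalized by the observed range; this normalization choice and the toy dissolution profile below are assumptions, not the authors' data.

```python
# Normalized root-mean-square error for a predicted dissolution profile.
import numpy as np

def nrmse(observed, predicted):
    """RMSE divided by the observed range, expressed in percent."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / (observed.max() - observed.min())

obs = [0, 10, 30, 55, 80, 95]      # cumulative % drug released (invented)
pred = [2, 12, 27, 50, 84, 93]
print(round(nrmse(obs, pred), 2))  # 3.38
```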

  3. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    Directory of Open Access Journals (Sweden)

    Hossam M Zawbaa

    Full Text Available Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.

  4. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    Science.gov (United States)

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.

  5. Scale Effect Features During Simulation Tests of 3D Printer-Made Vane Pump Models

    Directory of Open Access Journals (Sweden)

    A. I. Petrov

    2015-01-01

    Full Text Available The article "Scale effect features during simulation tests of 3D printer-made vane pump models" discusses the influence of the scale effect on translating pump parameters from models made by 3D-prototyping methods to full-scale pumps. The now widespread 3D-printer production of pump model parts or entire layouts can be considered the main direction of vane pump modeling, owing to the widespread development of pumps in different CAD systems and the significant cost reduction of manufacturing such layouts compared to casting and other traditional methods. The phenomenon of the scale effect in vane hydraulic machines, i.e. the violation of similarity conditions when translating pump parameters from model to full scale, is studied in detail in similarity theory. However, as experience in the 3D-printer manufacturing of models and their testing accumulates, it becomes clear that accounting for the scale effect in such models differs in several respects from the conventional techniques. The reason is the particular micro- and macro-geometry of parts made on different kinds of 3D printers (extrusion, powder sintering, ultraviolet curing, etc.). The article considers the conversion of external and internal mechanical losses, leakages and hydraulic losses, as well as the specifics of balance tests for such models, and presents the basic conversion formulas describing the factors affecting the value of these losses. It shows photographs of the surfaces of model parts manufactured on a 3D printer and subjected to subsequent machining. The paper shows the results of translation from several pump models (layouts) to full-scale pumps using the techniques described, and shows that the error in translating efficiency does not exceed 1.15%. The conclusion emphasizes the importance of balance tests of models for accumulating statistical data on the scale effect for pump layouts made by different 3D

  6. An iterative genetic and dynamical modelling approach identifies novel features of the gene regulatory network underlying melanocyte development.

    Science.gov (United States)

    Greenhill, Emma R; Rocco, Andrea; Vibert, Laura; Nikaido, Masataka; Kelsh, Robert N

    2011-09-01

    The mechanisms generating stably differentiated cell-types from multipotent precursors are key to understanding normal development and have implications for treatment of cancer and the therapeutic use of stem cells. Pigment cells are a major derivative of neural crest stem cells and a key model cell-type for our understanding of the genetics of cell differentiation. Several factors driving melanocyte fate specification have been identified, including the transcription factor and master regulator of melanocyte development, Mitf, and Wnt signalling and the multipotency and fate specification factor, Sox10, which drive mitf expression. While these factors together drive multipotent neural crest cells to become specified melanoblasts, the mechanisms stabilising melanocyte differentiation remain unclear. Furthermore, there is controversy over whether Sox10 has an ongoing role in melanocyte differentiation. Here we use zebrafish to explore in vivo the gene regulatory network (GRN) underlying melanocyte specification and differentiation. We use an iterative process of mathematical modelling and experimental observation to explore methodically the core melanocyte GRN we have defined. We show that Sox10 is not required for ongoing differentiation and expression is downregulated in differentiating cells, in response to Mitfa and Hdac1. Unexpectedly, we find that Sox10 represses Mitf-dependent expression of melanocyte differentiation genes. Our systems biology approach allowed us to predict two novel features of the melanocyte GRN, which we then validate experimentally. Specifically, we show that maintenance of mitfa expression is Mitfa-dependent, and identify Sox9b as providing an Mitfa-independent input to melanocyte differentiation. Our data supports our previous suggestion that Sox10 only functions transiently in regulation of mitfa and cannot be responsible for long-term maintenance of mitfa expression; indeed, Sox10 is likely to slow melanocyte differentiation in the

  7. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, some problems remain to be solved. When the target object goes through long-term occlusion or scale variation, the correlation model used in existing CF-based algorithms inevitably learns non-target or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce a keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker achieves satisfactory performance in a wide range of challenging tracking scenarios.
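The adaptive update rule the abstract describes can be sketched in a few lines. The function names, threshold, and base learning rate below are illustrative assumptions, not values from the paper:

```python
def adaptive_learning_rate(match_score, base_rate=0.02, threshold=0.5):
    """Scale the CF model learning rate by the keypoint matching score; a low
    score suggests occlusion or drift, so the model update is suppressed."""
    if match_score < threshold:
        return 0.0                      # freeze the model under suspected occlusion
    return base_rate * match_score      # update faster when matching is reliable

def update_model(model, observation, rate):
    # linear-interpolation model update commonly used by CF trackers
    return [(1 - rate) * m + rate * o for m, o in zip(model, observation)]

rate = adaptive_learning_rate(0.8)      # reliable match -> small positive rate
model = update_model([1.0, 2.0], [3.0, 4.0], rate)
```

With a matching score below the threshold the model is left untouched, which is what prevents contamination during long-term occlusion.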

  8. Key characteristics of successful science learning: the promise of learning by modelling

    NARCIS (Netherlands)

    Mulder, Y.G.; Lazonder, Adrianus W.; de Jong, Anthonius J.M.

    2015-01-01

    The basic premise underlying this research is that scientific phenomena are best learned by creating an external representation that complies with the complex and dynamic nature of such phenomena. Effective representations are assumed to incorporate three key characteristics: they are graphical,

  9. Simulation on scattering features of biological tissue based on generated refractive-index model

    International Nuclear Information System (INIS)

    Wang Baoyong; Ding Zhihua

    2011-01-01

    Important information on the morphology of biological tissue can be deduced from elastic scattering spectra, and such analyses rely on a known refractive-index model of the tissue. In this paper, a new numerical refractive-index model is put forward, and its scattering properties are intensively studied. Spectral decomposition [1] is a widely used method for generating random media in geology, but it has not been used in biology. Biological tissue differs from geological media in the sense of random media: while the autocorrelation function describes almost all features of geological media, biological tissue is not as random; its structure is regular in the sense of fractal geometry [2], and a fractal dimension can be used to describe its regularity under randomness. Firstly, scattering theories for such fractal media are reviewed. Secondly, the detailed generation process of the refractive index is presented. Finally, the scattering features are simulated in FDTD (Finite Difference Time Domain) Solutions software. From the simulation results, we find that the autocorrelation length and fractal dimension control the scattering features of biological tissue.
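A minimal 1-D sketch of the spectral-decomposition idea: draw random phases, impose a power-law spectrum whose exponent encodes the fractal dimension, and inverse-transform to a real-valued refractive-index profile. All parameter values here (grid size, exponent, mean index 1.40, fluctuation amplitude) are illustrative assumptions, not the paper's:

```python
import cmath
import math
import random

def fractal_refractive_index(n=64, beta=1.8, mean=1.40, amp=0.02, seed=7):
    """Spectral-synthesis sketch of a 1-D refractive-index profile: random
    phases with a power-law spectrum |k|^(-beta/2); beta sets the fractal
    character of the generated medium."""
    rng = random.Random(seed)
    spec = [0j] * n
    for k in range(1, n // 2):
        mag = k ** (-beta / 2.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spec[k] = mag * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()   # Hermitian symmetry -> real field
    # naive inverse DFT (fine for a small demo grid)
    field = [sum(spec[k] * cmath.exp(2j * math.pi * k * x / n)
                 for k in range(n)).real / n for x in range(n)]
    lo, hi = min(field), max(field)
    # rescale fluctuations around the mean tissue refractive index
    return [mean + amp * (2.0 * (f - lo) / (hi - lo) - 1.0) for f in field]

profile = fractal_refractive_index()
```

A 2-D or 3-D version of such a profile is what would then be fed to an FDTD solver to simulate the scattering features.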

  10. Simulation on scattering features of biological tissue based on generated refractive-index model

    Energy Technology Data Exchange (ETDEWEB)

    Wang Baoyong; Ding Zhihua, E-mail: zh_ding@zju.edu.cn [State Key Lab of Modern Optical Instrumentation, Zhejiang University 38 Zheda Rd., Hangzhou 310027 (China)

    2011-01-01

    Important information on the morphology of biological tissue can be deduced from elastic scattering spectra, and such analyses rely on a known refractive-index model of the tissue. In this paper, a new numerical refractive-index model is put forward, and its scattering properties are intensively studied. Spectral decomposition [1] is a widely used method for generating random media in geology, but it has not been used in biology. Biological tissue differs from geological media in the sense of random media: while the autocorrelation function describes almost all features of geological media, biological tissue is not as random; its structure is regular in the sense of fractal geometry [2], and a fractal dimension can be used to describe its regularity under randomness. Firstly, scattering theories for such fractal media are reviewed. Secondly, the detailed generation process of the refractive index is presented. Finally, the scattering features are simulated in FDTD (Finite Difference Time Domain) Solutions software. From the simulation results, we find that the autocorrelation length and fractal dimension control the scattering features of biological tissue.

  11. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, the electricity price is a complex volatile signal with many spikes. Most electricity price forecasting techniques focus on normal price prediction, whereas price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed by the candidate inputs retained by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
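The relevancy/redundancy refinement step can be illustrated with a greedy mRMR-style selection. As a simplification, absolute Pearson correlation stands in for the paper's information-theoretic criteria, and the feature names and toy data are invented for the demo:

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(candidates, target, k):
    """Greedy relevancy-minus-redundancy selection: relevancy is
    |corr(feature, target)|, redundancy is the max |corr| with any
    already-selected feature."""
    selected = []
    while len(selected) < k:
        def score(name):
            rel = abs(pearson(candidates[name], target))
            red = max((abs(pearson(candidates[name], candidates[s]))
                       for s in selected), default=0.0)
            return rel - red
        best = max((n for n in candidates if n not in selected), key=score)
        selected.append(best)
    return selected

feats = {
    "price_lag1": [1, 2, 3, 4, 5, 6],
    "price_lag1_copy": [1, 2, 3, 4, 5, 6],   # redundant duplicate
    "noise": [3, 1, 4, 1, 5, 9],
}
target = [2, 4, 6, 8, 10, 13]
picked = select_features(feats, target, 2)
```

The duplicate lagged-price feature is skipped despite its high relevancy, because its redundancy with the first pick cancels it out.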

  12. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is both a big challenge and a great opportunity for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable resource. In order to bring it to the scientist's desktop, well-performing search engines are essential. Here, neither the response time nor the number of results is decisive; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observations of user behavior during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
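Structurally, the ranking step amounts to scoring each entry's feature vector with a learned function and sorting. In this sketch, three hypothetical features stand in for the paper's nine, and a hand-picked linear weighting stands in for the trained neural network:

```python
def relevance(entry, weights):
    # linear scoring stand-in for LAILAPS's trained relevance function
    return sum(entry[f] * w for f, w in weights.items())

# hypothetical feature names and weights, for illustration only
weights = {"term_frequency": 0.5, "term_position": 0.2, "field_weight": 0.3}
entries = {
    "P12345": {"term_frequency": 0.9, "term_position": 0.8, "field_weight": 0.7},
    "Q99999": {"term_frequency": 0.2, "term_position": 0.1, "field_weight": 0.3},
}
ranked = sorted(entries, key=lambda e: relevance(entries[e], weights), reverse=True)
```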

  13. LOCAL INDEPENDENCE FEATURE SCREENING FOR NONPARAMETRIC AND SEMIPARAMETRIC MODELS BY MARGINAL EMPIRICAL LIKELIHOOD

    Science.gov (United States)

    Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao

    2015-01-01

    We consider an independence feature screening technique for identifying explanatory variables that locally contribute to the response variable in high-dimensional regression analysis. Without requiring a specific parametric form of the underlying data model, our approach accommodates a wide spectrum of nonparametric and semiparametric model families. To detect the local contributions of explanatory variables, our approach constructs empirical likelihood locally in conjunction with marginal nonparametric regressions. Since our approach actually requires no estimation, it is advantageous in scenarios such as the single-index models where even specification and identification of a marginal model is an issue. By automatically incorporating the level of variation of the nonparametric regression and directly assessing the strength of data evidence supporting local contribution from each explanatory variable, our approach provides a unique perspective for solving feature screening problems. Theoretical analysis shows that our approach can handle data dimensionality growing exponentially with the sample size. With extensive theoretical illustrations and numerical examples, we show that the local independence screening approach performs promisingly. PMID:27242388

  14. Preliminary Review of Models, Assumptions, and Key Data used in Performance Assessments and Composite Analysis at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Arthur S. Rood; Swen O. Magnuson

    2009-07-01

    This document is in response to a request by Ming Zhu, DOE-EM to provide a preliminary review of existing models and data used in completed or soon to be completed Performance Assessments and Composite Analyses (PA/CA) documents, to identify codes, methodologies, main assumptions, and key data sets used.

  15. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks the best subset of features that describes the data (relevance) and achieves better performance (optimality). Wrapper approaches are feature selection methods that are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information-based criteria. Significant improvements in performance, computational time, and selection of minimally sized feature subsets are achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution with desirable characteristics.
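The essence of a wrapper is that the classifier's own accuracy scores each candidate subset. A minimal sketch, with exhaustive search over a tiny feature space standing in for the thesis's parallel genetic algorithm, and leave-one-out 1-NN standing in for its classifiers:

```python
import itertools

def knn_accuracy(X, y, subset):
    """Leave-one-out 1-NN accuracy using only the chosen feature subset."""
    correct = 0
    for i in range(len(X)):
        best_label, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][k] - X[j][k]) ** 2 for k in subset)
            if d < best_d:
                best_d, best_label = d, y[j]
        correct += best_label == y[i]
    return correct / len(X)

def wrapper_select(X, y, n_features):
    """Wrapper search: score every subset with the classifier itself,
    preferring smaller subsets on ties."""
    subsets = (s for r in range(1, n_features + 1)
               for s in itertools.combinations(range(n_features), r))
    return max(subsets, key=lambda s: (knn_accuracy(X, y, s), -len(s)))

# feature 0 separates the classes; feature 1 is noise
X = [[0.0, 5], [0.1, 1], [0.2, 4], [1.0, 5], [1.1, 1], [0.9, 3]]
y = [0, 0, 0, 1, 1, 1]
best = wrapper_select(X, y, 2)
```

The wrapper keeps only the informative feature, since adding the noisy one degrades the classifier's own accuracy.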

  16. Integrated Modeling & Development of Emission Scenarios for Methane and Key Indirect Greenhouse Gases

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Atul K.

    2005-09-30

    This report outlines the main accomplishments in the development of emission inventories and scenarios for Key Indirect Greenhouse Gases (CO, VOCs, NOx) and methane, supported by the Office of Science (BER), US Department of Energy. This research produced 3 journal articles, 1 book chapter, and 4 research articles/abstracts in conference proceedings. In addition, this grant supported two PhD students and one undergraduate student at UIUC.

  17. Latent Feature Models for Uncovering Human Mobility Patterns from Anonymized User Location Traces with Metadata

    KAUST Repository

    Alharbi, Basma Mohammed

    2017-04-10

    In the mobile era, data capturing individuals’ locations have become unprecedentedly available. Data from Location-Based Social Networks is one example of large-scale user-location data. Such data provide a valuable source for understanding patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of data (at the user level), the imbalance of data with power-law users and locations check-ins degree (at the global level), and more importantly the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as an input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users privacy, the proposed models are designed to learn from anonymized location data where only IDs - not geophysical positioning or category - of locations are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns, timestamps and social ties. Time adds context to the inferred patterns, while social ties amplifies incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users’ locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different times scales. The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume

  18. A framework for treating DSM-5 alternative model for personality disorder features.

    Science.gov (United States)

    Hopwood, Christopher J

    2018-04-15

    Despite its demonstrated empirical superiority over the DSM-5 Section 2 categorical model of personality disorders for organizing the features of personality pathology, limitations remain with regard to the translation of the DSM-5 Section 3 alternative model of personality disorders (AMPD) to clinical practice. The goal of this paper is to outline a general and preliminary framework for approaching treatment from the perspective of the AMPD. Specific techniques are discussed for the assessment and treatment of both Criterion A personality dysfunction and Criterion B maladaptive traits. A concise and step-by-step model is presented for clinical decision making with the AMPD, in the hopes of offering clinicians a framework for treating personality pathology and promoting further research on the clinical utility of the AMPD. Copyright © 2018 John Wiley & Sons, Ltd.

  19. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence utilizing a variety of possible methods and pathways. To dissect the factors that result in effective siRNA sequences, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results Eight overall feature mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide strand to passenger strand sequence thermodynamics. Finally, the factors that are least contributory but are still predictive of efficacy are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5' most base of the guide strand is the most informative. Conclusion The capacity of specific feature mapping methods and their ability to build predictive models of RNAi activity suggests a relative biological importance of these features. Some feature mapping methods are more informative in building predictive models, and overall t-test filtering provides a method to remove some noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased predictive accuracy between predicted and observed activities, both within datasets by cross validation and between independently collected RNAi activity datasets. Feature filtering to remove features should be approached carefully, in that it is possible to reduce feature set size without substantially reducing the predictive models, but the features retained in the candidate models become increasingly distinct. Software to perform feature prediction and SVM training and testing on nucleic acid

  20. Thermodynamic model of social influence on two-dimensional square lattice: Case for two features

    Science.gov (United States)

    Genzor, Jozef; Bužek, Vladimír; Gendiar, Andrej

    2015-02-01

    We propose a thermodynamic multi-state spin model in order to describe the equilibrium behavior of a society. Our model is inspired by the Axelrod model used in social network studies. In the framework of statistical mechanics, we analyze phase transitions of our model, in which the spin interaction J is interpreted as mutual communication among individuals forming a society. The thermal fluctuations introduce a noise T into the communication, which suppresses long-range correlations. Below a certain phase transition point Tt, large-scale clusters of individuals who share a specific dominant property are formed. The measure of the cluster sizes is an order parameter after spontaneous symmetry breaking. By means of the Corner transfer matrix renormalization group algorithm, we treat our model in the thermodynamic limit and classify the phase transitions with respect to inherent degrees of freedom. Each individual is chosen to possess two independent features f = 2 and each feature can assume one of q traits (e.g. interests). Hence, each individual is described by q^2 degrees of freedom. A single first-order phase transition is detected in our model if q > 2, whereas two distinct continuous phase transitions are found if q = 2 only. Evaluating the free energy, order parameters, specific heat, and the entanglement von Neumann entropy, we classify the phase transitions Tt(q) in detail. The permanent existence of the ordered phase (the large-scale cluster formation with a non-zero order parameter) is conjectured below a non-zero transition point Tt(q) ≈ 0.5 in the asymptotic regime q → ∞.
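The ordering transition can be illustrated with a toy Metropolis simulation of the f = 2, q-trait model on a small lattice. This is only a stand-in for the paper's CTMRG treatment: each site holds a pair of traits, a bond contributes -J per shared feature (J = 1), and lattice size, temperatures, and sweep counts are arbitrary demo choices:

```python
import math
import random

def shared(a, b):
    # number of features two individuals agree on (0, 1, or 2)
    return (a[0] == b[0]) + (a[1] == b[1])

def metropolis(L=4, q=3, T=0.05, sweeps=3000, seed=1):
    """Sample the two-feature social spin model and return the average
    fraction of fully-agreeing neighbor bonds (a crude order measure)."""
    rng = random.Random(seed)
    grid = [[(rng.randrange(q), rng.randrange(q)) for _ in range(L)]
            for _ in range(L)]
    def site_energy(i, j, s):
        return -sum(shared(s, grid[(i + di) % L][(j + dj) % L])
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    agree_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            new = (rng.randrange(q), rng.randrange(q))
            dE = site_energy(i, j, new) - site_energy(i, j, grid[i][j])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                grid[i][j] = new
        if sweep >= sweeps - 500:   # average the order measure over the tail
            bonds = full = 0
            for i in range(L):
                for j in range(L):
                    for di, dj in ((1, 0), (0, 1)):
                        bonds += 1
                        full += shared(grid[i][j],
                                       grid[(i + di) % L][(j + dj) % L]) == 2
            agree_sum += full / bonds
            samples += 1
    return agree_sum / samples

order_low = metropolis(T=0.05)   # below the transition: clusters form
order_high = metropolis(T=5.0)   # strong noise: correlations suppressed
```

At low noise T the lattice settles into large agreeing clusters; at high T the bond-agreement fraction stays near its random-state value.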

  1. ARX model-based damage sensitive features for structural damage localization using output-only measurements

    Science.gov (United States)

    Roy, Koushik; Bhattacharya, Bishakh; Ray-Chaudhuri, Samit

    2015-08-01

    The study proposes a set of four ARX model (autoregressive model with exogenous input) based damage sensitive features (DSFs) for structural damage detection and localization using the dynamic responses of structures, where information about the input excitation may not be available. In the proposed framework, one of the output responses of a multi-degree-of-freedom system is assumed to be the input and the rest are considered as outputs. The features are based on the ARX model coefficients, the Kolmogorov-Smirnov (KS) test statistical distance, and the model residual error. At first, a mathematical formulation is provided to establish the relation between the change in ARX model coefficients and the normalized stiffness of a structure. KS test parameters are then described to show the sensitivity of the statistical distance of the ARX model residual error to the damage location. The efficiency of the proposed set of DSFs is evaluated by conducting numerical studies involving a shear building and a steel moment-resisting frame. To simulate damage scenarios in these structures, stiffness degradation of different elements is considered. It is observed from this study that the proposed set of DSFs is a good indicator of damage location even in the presence of damping, multiple damages, noise, and parametric uncertainties. The performance of these DSFs is compared with a mode shape curvature-based approach for damage localization. An experimental study has also been conducted on a three-dimensional six-storey steel moment frame to understand the performance of these DSFs under real measurement conditions. It has been observed that the proposed set of DSFs can satisfactorily localize damage in the structure.
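The coefficient-based DSF can be sketched with a minimal noiseless ARX(1,1) example: fit y[t] = a·y[t-1] + b·u[t-1] by least squares on "healthy" and "damaged" responses, and take the coefficient change as the feature. The model order, coefficients, and excitation below are demo assumptions, far simpler than the paper's multi-DOF setting:

```python
import math

def fit_arx1(y, u):
    """Least-squares fit of y[t] = a*y[t-1] + b*u[t-1] via the 2x2
    normal equations; returns (a, b)."""
    s_yy = s_yu = s_uu = r_y = r_u = 0.0
    for t in range(1, len(y)):
        s_yy += y[t - 1] * y[t - 1]
        s_yu += y[t - 1] * u[t - 1]
        s_uu += u[t - 1] * u[t - 1]
        r_y += y[t] * y[t - 1]
        r_u += y[t] * u[t - 1]
    det = s_yy * s_uu - s_yu * s_yu
    return ((r_y * s_uu - r_u * s_yu) / det,
            (s_yy * r_u - s_yu * r_y) / det)

def simulate(a, b, n=200):
    # one measured response driving another, with a sinusoidal 'input' channel
    u = [math.sin(0.7 * t) for t in range(n)]
    y = [0.0]
    for t in range(1, n):
        y.append(a * y[t - 1] + b * u[t - 1])
    return y, u

y_h, u_h = simulate(0.60, 0.3)    # healthy structure
y_d, u_d = simulate(0.45, 0.3)    # stiffness loss shifts the AR coefficient
a_h, _ = fit_arx1(y_h, u_h)
a_d, _ = fit_arx1(y_d, u_d)
dsf = abs(a_h - a_d)              # coefficient-change damage sensitive feature
```

A nonzero coefficient change flags damage; in the paper, the spatial pattern of such changes (plus KS distances on residuals) localizes it.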

  2. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease.

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S; Kovács, Attila D; Meyerholz, David K; Trantzas, Constantin; Lambertz, Allyn M; Darbro, Benjamin W; Weber, Krystal L; White, Katherine A M; Rheeden, Richard V; Kruer, Michael C; Dacken, Brian A; Wang, Xiao-Jun; Davis, Bryan T; Rohret, Judy A; Struzynski, Jason T; Rohret, Frank A; Weimer, Jill M; Pearce, David A

    2015-11-15

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated, showing some of the clinical manifestations of the disease; however, they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions including loss of Purkinje cells (PCs) and altered cytoarchitecture, suggesting a developmental etiology for AT, and could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S.; Kovács, Attila D.; Meyerholz, David K.; Trantzas, Constantin; Lambertz, Allyn M.; Darbro, Benjamin W.; Weber, Krystal L.; White, Katherine A.M.; Rheeden, Richard V.; Kruer, Michael C.; Dacken, Brian A.; Wang, Xiao-Jun; Davis, Bryan T.; Rohret, Judy A.; Struzynski, Jason T.; Rohret, Frank A.; Weimer, Jill M.; Pearce, David A.

    2015-01-01

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated, showing some of the clinical manifestations of the disease; however, they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions including loss of Purkinje cells (PCs) and altered cytoarchitecture, suggesting a developmental etiology for AT, and could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions. PMID:26374845

  4. Quantum key management

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, Richard John; Thrasher, James Thomas; Nordholt, Jane Elizabeth

    2016-11-29

    Innovations for quantum key management harness quantum communications to form a cryptography system within a public key infrastructure framework. In example implementations, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a Merkle signature scheme (using Winternitz one-time digital signatures or other one-time digital signatures, and Merkle hash trees) to constitute a cryptography system. More generally, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a hash-based signature scheme. This provides a secure way to identify, authenticate, verify, and exchange secret cryptographic keys. Features of the quantum key management innovations further include secure enrollment of users with a registration authority, as well as credential checking and revocation with a certificate authority, where the registration authority and/or certificate authority can be part of the same system as a trusted authority for quantum key distribution.
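The Merkle hash tree component of such a scheme is easy to sketch: one-time key material forms the leaves, the root is the public commitment, and an authentication path of sibling hashes proves a leaf belongs to the tree. This is a generic Merkle-tree sketch, not the patent's specific construction; the leaf contents are placeholders:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    """Sibling hashes needed to authenticate leaf `idx` against the root."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[idx ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf, idx, path, root):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

keys = [b"otk-0", b"otk-1", b"otk-2", b"otk-3"]   # placeholder one-time keys
root = merkle_root(keys)
path = merkle_path(keys, 2)
```

Publishing only the root lets a verifier check any individual one-time key with a logarithmic-size proof, which is what makes hash-based signatures practical at scale.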

  5. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are providing large numbers of service instances which are a combination of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate research of systematic discovery and selection methods have exposed difficulties in discovering and choosing services for users. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on the description of the cloud infrastructure service, a systematic discovery and selection method for cloud infrastructure services are proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  6. FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection.

    Science.gov (United States)

    Noto, Keith; Brodley, Carla; Slonim, Donna

    2012-01-01

    Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called "normal" instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach.
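The core FRaC idea, scoring how much an instance disagrees with per-feature models trained on normal data, can be sketched in a deliberately simplified form. Here each "feature model" is just the feature's normal-data mean and spread, whereas the real FRaC learns an ensemble of cross-feature predictors; the data are invented:

```python
import statistics

def train_frac(normal_rows):
    """One simple model per feature: (mean, spread) on normal data."""
    models = []
    for j in range(len(normal_rows[0])):
        col = [row[j] for row in normal_rows]
        models.append((statistics.mean(col), statistics.pstdev(col) or 1e-9))
    return models

def anomaly_score(x, models):
    # surprisal-like score: total disagreement with the per-feature models
    return sum(abs(v - mu) / sd for v, (mu, sd) in zip(x, models))

normal = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, 2.0]]
models = train_frac(normal)
s_normal = anomaly_score([1.0, 2.0], models)
s_anomalous = anomaly_score([5.0, 2.0], models)
```

An instance that any feature model finds surprising accumulates a large score, whether or not anyone labeled anomalies in advance, which is why the approach extends to the unsupervised setting.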

  7. A food recognition system for diabetic patients based on an optimized bag-of-features model.

    Science.gov (United States)

    Anthimopoulos, Marios M; Gianola, Lauro; Scarnato, Luca; Diem, Peter; Mougiakakou, Stavroula G

    2014-07-01

    Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10000 visual words by using the hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset.
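The BoF pipeline (local descriptors, a k-means visual vocabulary, histogram encoding, then classification) can be sketched end-to-end on toy data. Scalar "descriptors" stand in for dense SIFT vectors, a tiny deterministic k-means for hierarchical k-means, and a nearest class-mean histogram rule for the paper's linear SVM:

```python
def kmeans_1d(points, k, iters=15):
    """Tiny 1-D k-means with deterministic spread initialization."""
    pts = sorted(points)
    centers = [pts[i * len(pts) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def bof_histogram(descriptors, vocab):
    # assign each descriptor to its nearest visual word, then normalize
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[min(range(len(vocab)), key=lambda c: abs(d - vocab[c]))] += 1
    return [v / len(descriptors) for v in hist]

# toy 1-D 'descriptors' per image; class A lives near 0 and 1, class B near 10
train = {"A": [[0.1, 0.2, 1.0, 1.1], [0.0, 0.15, 0.9, 1.2]],
         "B": [[10.0, 10.1, 9.9, 10.2], [10.05, 9.95, 10.1, 10.0]]}
vocab = kmeans_1d([d for imgs in train.values() for img in imgs for d in img], 3)

def classify(descriptors):
    # nearest class-mean histogram (the paper uses a linear SVM instead)
    q = bof_histogram(descriptors, vocab)
    def dist(label):
        hists = [bof_histogram(img, vocab) for img in train[label]]
        mean = [sum(h[i] for h in hists) / len(hists) for i in range(len(vocab))]
        return sum(abs(a - b) for a, b in zip(q, mean))
    return min(train, key=dist)

pred = classify([0.1, 1.0, 0.2, 1.1])
```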

  8. Investigation of the blockchain systems’ scalability features using the agent based modelling

    OpenAIRE

    Šulnius, Aleksas

    2017-01-01

    Investigation of the BlockChain Systems’ Scalability Features using the Agent Based Modelling. BlockChain currently is in the spotlight of all the FinTech industry. This technology is being called revolutionary, ground breaking, disruptive and even the WEB 3.0. On the other hand it is widely agreed that the BlockChain is in its early stages of development. In its current state BlockChain is in similar position that the Internet was in the early nineties. In order for this technology to gain m...

  9. Discriminative feature-rich models for syntax-based machine translation.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.

    2012-12-01

This report describes the campus executive LDRD "Discriminative Feature-Rich Models for Syntax-Based Machine Translation," which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  10. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

Full Text Available To assist physicians in quickly finding the required 3D model among a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.

  11. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation.

    Science.gov (United States)

    Jing, Zhang; Sheng, Kang Bao

    2015-01-01

To assist physicians in quickly finding the required 3D model among a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.

  12. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Science.gov (United States)

    Jing, Zhang; Sheng, Kang Bao

    2016-01-01

To assist physicians in quickly finding the required 3D model among a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods. PMID:27293478
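The DR step common to the three records above, retaining only the top M low-frequency Discrete Fourier Transform coefficients of a feature vector, can be sketched as follows; M and the sample vector are illustrative.

```python
import numpy as np

def reduce_dft(v, m):
    """The DR step described above: keep only the first m low-frequency
    DFT coefficients of a (real-valued) feature vector."""
    return np.fft.rfft(v)[:m]

n = 64
v = np.sin(2 * np.pi * np.arange(n) / n)  # smooth toy "feature vector"
coeffs = np.fft.rfft(v)
kept = reduce_dft(v, 4)

# for a smooth vector, nearly all spectral energy lies in the retained
# low-frequency coefficients, so little information is lost
energy_kept = np.sum(np.abs(kept) ** 2)
energy_total = np.sum(np.abs(coeffs) ** 2)
assert len(kept) == 4
assert energy_kept / energy_total > 0.99
```

This is why the truncation both reduces dimensionality and suppresses noise, which concentrates in the discarded high-frequency coefficients.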

  13. Calpain mediates pulmonary vascular remodeling in rodent models of pulmonary hypertension, and its inhibition attenuates pathologic features of disease

    Science.gov (United States)

    Ma, Wanli; Han, Weihong; Greer, Peter A.; Tuder, Rubin M.; Toque, Haroldo A.; Wang, Kevin K.W.; Caldwell, R. William; Su, Yunchao

    2011-01-01

    Pulmonary hypertension is a severe and progressive disease, a key feature of which is pulmonary vascular remodeling. Several growth factors, including EGF, PDGF, and TGF-β1, are involved in pulmonary vascular remodeling during pulmonary hypertension. However, increased knowledge of the downstream signaling cascades is needed if effective clinical interventions are to be developed. In this context, calpain provides an interesting candidate therapeutic target, since it is activated by EGF and PDGF and has been reported to activate TGF-β1. Thus, in this study, we examined the role of calpain in pulmonary vascular remodeling in two rodent models of pulmonary hypertension. These data showed that attenuated calpain activity in calpain-knockout mice or rats treated with a calpain inhibitor resulted in prevention of increased right ventricular systolic pressure, right ventricular hypertrophy, as well as collagen deposition and thickening of pulmonary arterioles in models of hypoxia- and monocrotaline-induced pulmonary hypertension. Additionally, inhibition of calpain in vitro blocked intracellular activation of TGF-β1, which led to attenuated Smad2/3 phosphorylation and collagen synthesis. Finally, smooth muscle cells of pulmonary arterioles from patients with pulmonary arterial hypertension showed higher levels of calpain activation and intracellular active TGF-β. Our data provide evidence that calpain mediates EGF- and PDGF-induced collagen synthesis and proliferation of pulmonary artery smooth muscle cells via an intracrine TGF-β1 pathway in pulmonary hypertension. PMID:22005303

  14. Application of a three-feature dispersed-barrier hardening model to neutron-irradiated Fe–Cr model alloys

    International Nuclear Information System (INIS)

    Bergner, F.; Pareige, C.; Hernández-Mayoral, M.; Malerba, L.; Heintze, C.

    2014-01-01

    An attempt is made to quantify the contributions of different types of defect-solute clusters to the total irradiation-induced yield stress increase in neutron-irradiated (300 °C, 0.6 dpa), industrial-purity Fe–Cr model alloys (target Cr contents of 2.5, 5, 9 and 12 at.% Cr). Former work based on the application of transmission electron microscopy, atom probe tomography, and small-angle neutron scattering revealed the formation of dislocation loops, NiSiPCr-enriched clusters and α′-phase particles, which act as obstacles to dislocation glide. The values of the dimensionless obstacle strength are estimated in the framework of a three-feature dispersed-barrier hardening model. Special attention is paid to the effect of measuring errors, experimental details and model details on the estimates. The three families of obstacles and the hardening model are well capable of reproducing the observed yield stress increase as a function of Cr content, suggesting that the nanostructural features identified experimentally are the main, if not the only, causes of irradiation hardening in these model alloys

  15. Application of a three-feature dispersed-barrier hardening model to neutron-irradiated Fe–Cr model alloys

    Energy Technology Data Exchange (ETDEWEB)

    Bergner, F., E-mail: f.bergner@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany); Pareige, C. [Groupe de Physique des Matériaux, Université et INSA de Rouen, UMR 6634 CNRS, Avenue de l’Université, BP 12, 76801 Saint Etienne du Rouvray (France); Hernández-Mayoral, M. [Division of Materials, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain); Malerba, L. [SCK-CEN, Nuclear Material Science Institute, Boeretang 200, B-2400 Mol (Belgium); Heintze, C. [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany)

    2014-05-01

    An attempt is made to quantify the contributions of different types of defect-solute clusters to the total irradiation-induced yield stress increase in neutron-irradiated (300 °C, 0.6 dpa), industrial-purity Fe–Cr model alloys (target Cr contents of 2.5, 5, 9 and 12 at.% Cr). Former work based on the application of transmission electron microscopy, atom probe tomography, and small-angle neutron scattering revealed the formation of dislocation loops, NiSiPCr-enriched clusters and α′-phase particles, which act as obstacles to dislocation glide. The values of the dimensionless obstacle strength are estimated in the framework of a three-feature dispersed-barrier hardening model. Special attention is paid to the effect of measuring errors, experimental details and model details on the estimates. The three families of obstacles and the hardening model are well capable of reproducing the observed yield stress increase as a function of Cr content, suggesting that the nanostructural features identified experimentally are the main, if not the only, causes of irradiation hardening in these model alloys.

  16. Application of a three-feature dispersed-barrier hardening model to neutron-irradiated Fe-Cr model alloys

    Science.gov (United States)

    Bergner, F.; Pareige, C.; Hernández-Mayoral, M.; Malerba, L.; Heintze, C.

    2014-05-01

    An attempt is made to quantify the contributions of different types of defect-solute clusters to the total irradiation-induced yield stress increase in neutron-irradiated (300 °C, 0.6 dpa), industrial-purity Fe-Cr model alloys (target Cr contents of 2.5, 5, 9 and 12 at.% Cr). Former work based on the application of transmission electron microscopy, atom probe tomography, and small-angle neutron scattering revealed the formation of dislocation loops, NiSiPCr-enriched clusters and α‧-phase particles, which act as obstacles to dislocation glide. The values of the dimensionless obstacle strength are estimated in the framework of a three-feature dispersed-barrier hardening model. Special attention is paid to the effect of measuring errors, experimental details and model details on the estimates. The three families of obstacles and the hardening model are well capable of reproducing the observed yield stress increase as a function of Cr content, suggesting that the nanostructural features identified experimentally are the main, if not the only, causes of irradiation hardening in these model alloys.

  17. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    Science.gov (United States)

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models have been built using multiple learning algorithms, including support vector machine and random forest. The models were built on public Ames mutagenicity data, and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretation revealed links that correspond closely with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.
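A rough illustration of fragment-level interpretation of a black-box fingerprint model (not the paper's algorithm): switch off each set bit of the query fingerprint and record which bits flip the prediction, approximating "causes of activation". The data, bit positions, and model choice here are all synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n, n_bits = 400, 12
X = rng.integers(0, 2, size=(n, n_bits))
y = (X[:, 2] | X[:, 7]).astype(int)  # bits 2 and 7 act as "toxicophores"
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def activating_bits(model, fp):
    """Return the prediction and the set bits whose removal flips it."""
    base = model.predict([fp])[0]
    causes = []
    for j in np.flatnonzero(fp):
        probe = fp.copy()
        probe[j] = 0  # switch this substructure bit off
        if model.predict([probe])[0] != base:
            causes.append(j)
    return base, causes

query = np.zeros(n_bits, dtype=int)
query[2] = 1  # one toxicophore bit present
query[5] = 1  # an inert bit
base, causes = activating_bits(model, query)
assert base == 1 and 2 in causes and 5 not in causes
```

Because the probing happens outside the model, the approach is agnostic to the learning algorithm, which mirrors the property the abstract emphasizes.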

  18. A Modeling methodology for NoSQL Key-Value databases

    Directory of Open Access Journals (Sweden)

    Gerardo ROSSEL

    2017-08-01

    Full Text Available In recent years, there has been an increasing interest in the field of non-relational databases. However, far too little attention has been paid to design methodology. Key-value data stores are an important component of a class of non-relational technologies that are grouped under the name of NoSQL databases. The aim of this paper is to propose a design methodology for this type of database that allows overcoming the limitations of the traditional techniques. The proposed methodology leads to a clean design that also allows for better data management and consistency.

  19. Features of genomic organization in a nucleotide-resolution molecular model of the Escherichia coli chromosome.

    Science.gov (United States)

    Hacker, William C; Li, Shuxiang; Elcock, Adrian H

    2017-07-27

    We describe structural models of the Escherichia coli chromosome in which the positions of all 4.6 million nucleotides of each DNA strand are resolved. Models consistent with two basic chromosomal orientations, differing in their positioning of the origin of replication, have been constructed. In both types of model, the chromosome is partitioned into plectoneme-abundant and plectoneme-free regions, with plectoneme lengths and branching patterns matching experimental distributions, and with spatial distributions of highly-transcribed chromosomal regions matching recent experimental measurements of the distribution of RNA polymerases. Physical analysis of the models indicates that the effective persistence length of the DNA and relative contributions of twist and writhe to the chromosome's negative supercoiling are in good correspondence with experimental estimates. The models exhibit characteristics similar to those of 'fractal globules,' and even the most genomically-distant parts of the chromosome can be physically connected, through paths combining linear diffusion and inter-segmental transfer, by an average of only ∼10 000 bp. Finally, macrodomain structures and the spatial distributions of co-expressed genes are analyzed: the latter are shown to depend strongly on the overall orientation of the chromosome. We anticipate that the models will prove useful in exploring other static and dynamic features of the bacterial chromosome. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Upscaling key ecosystem functions across the conterminous United States by a water‐centric ecosystem model

    Science.gov (United States)

Ge Sun; Peter Caldwell; Asko Noormets; Steven G. McNulty; Erika Cohen; et al.

    2011-01-01

    We developed a water‐centric monthly scale simulation model (WaSSI‐C) by integrating empirical water and carbon flux measurements from the FLUXNET network and an existing water supply and demand accounting model (WaSSI). The WaSSI‐C model was evaluated with basin‐scale evapotranspiration (ET), gross ecosystem productivity (GEP), and net ecosystem exchange (NEE)...

  1. An assessment of key model parametric uncertainties in projections of Greenland Ice Sheet behavior

    Directory of Open Access Journals (Sweden)

    P. J. Applegate

    2012-05-01

    Full Text Available Lack of knowledge about the values of ice sheet model input parameters introduces substantial uncertainty into projections of Greenland Ice Sheet contributions to future sea level rise. Computer models of ice sheet behavior provide one of several means of estimating future sea level rise due to mass loss from ice sheets. Such models have many input parameters whose values are not well known. Recent studies have investigated the effects of these parameters on model output, but the range of potential future sea level increases due to model parametric uncertainty has not been characterized. Here, we demonstrate that this range is large, using a 100-member perturbed-physics ensemble with the SICOPOLIS ice sheet model. Each model run is spun up over 125 000 yr using geological forcings and subsequently driven into the future using an asymptotically increasing air temperature anomaly curve. All modeled ice sheets lose mass after 2005 AD. Parameters controlling surface melt dominate the model response to temperature change. After culling the ensemble to include only members that give reasonable ice volumes in 2005 AD, the range of projected sea level rise values in 2100 AD is ~40 % or more of the median. Data on past ice sheet behavior can help reduce this uncertainty, but none of our ensemble members produces a reasonable ice volume change during the mid-Holocene, relative to the present. This problem suggests that the model's exponential relation between temperature and precipitation does not hold during the Holocene, or that the central-Greenland temperature forcing curve used to drive the model is not representative of conditions around the ice margin at this time (among other possibilities. Our simulations also lack certain observed physical processes that may tend to enhance the real ice sheet's response. Regardless, this work has implications for other studies that use ice sheet models to project or hindcast the behavior of the Greenland Ice

  2. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

Full Text Available To effectively extract the typical features of a bearing, a new method is proposed that combines local mean decomposition, Shannon entropy, and an improved kernel principal component analysis model. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions to obtain the original features. Because the extracted features still contain superfluous information, a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse them; the kernel principal component analysis is improved by a weight factor. The fused features were input to a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, by which the bearing running state was identified. Both test cases and actual cases were analyzed.
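The entropy-feature and kernel-PCA fusion steps can be sketched as follows; a simple stand-in decomposition replaces local mean decomposition, the weight-factor improvement is omitted, and all signals are synthetic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def shannon_entropy(signal, bins=16):
    """Shannon entropy of a signal's amplitude histogram (in bits)."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)

def entropy_features(signal):
    # stand-in for LMD: split the signal into crude "components"
    # (the original, its first difference, its detrended cumulative sum)
    comps = [signal,
             np.diff(signal, prepend=signal[0]),
             np.cumsum(signal - signal.mean())]
    return [shannon_entropy(c) for c in comps]

# four noisy vibration-like signals at two different frequencies
X = np.array([entropy_features(np.sin(2 * np.pi * f * t)
                               + 0.1 * rng.normal(size=t.size))
              for f in (5, 5, 50, 50)])

# fuse the entropy features nonlinearly with kernel PCA
fused = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)
assert fused.shape == (4, 2)
```

In the full method, the fused features would then feed the Morlet wavelet kernel support vector machine for running-state classification.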

  3. MODELKEY. Models for assessing and forecasting the impact of environmental key pollutants on freshwater and marine ecosystems and biodiversity.

    Science.gov (United States)

    Brack, Werner; Bakker, Joop; de Deckere, Eric; Deerenberg, Charlotte; van Gils, Jos; Hein, Michaela; Jurajda, Pavel; Kooijman, Bas; Lamoree, Marja; Lek, Sovan; López de Alda, Maria Jose; Marcomini, Antonio; Muñoz, Isabel; Rattei, Silke; Segner, Helmut; Thomas, Kevin; von der Ohe, Peter Carsten; Westrich, Bernhard; de Zwart, Dick; Schmitt-Jansen, Mechthild

    2005-09-01

Triggered by the Water Framework Directive's requirement of good ecological status for European river systems by 2015, and by persisting gaps in tools for identifying the causes of insufficient ecological status, MODELKEY (http://www.modelkey.org), an Integrated Project with 26 partners from 14 European countries, was started in 2005. MODELKEY is the acronym for 'Models for assessing and forecasting the impact of environmental key pollutants on freshwater and marine ecosystems and biodiversity'. The project is funded by the European Commission within the Sixth Framework Programme. MODELKEY comprises a multidisciplinary approach aimed at developing interlinked tools for an enhanced understanding of the cause-effect relationships between insufficient ecological status and environmental pollution as a causative factor, and for assessing and forecasting the risks of key pollutants to freshwater and marine ecosystems at the scale of a river basin and the adjacent marine environment. New modelling tools for risk assessment, including generic exposure assessment models, mechanistic models of toxic effects in simplified food chains, integrated diagnostic effect models based on community patterns, predictive component effect models applying artificial neural networks, and GIS-based analysis of integrated risk indexes, will be developed and linked to a user-friendly decision support system for the prioritisation of risks, contamination sources and contaminated sites. Modelling will be closely interlinked with extensive laboratory and field investigations. Early warning strategies on the basis of sub-lethal effects in vitro and in vivo are provided and combined with fractionation and analytical tools for effect-directed analysis of key toxicants. Integrated assessment of exposure and effects on biofilms, invertebrate and fish communities, linking chemical analysis in water, sediment and biota with in vitro, in vivo and community-level effect analysis, is designed to provide data

  4. Feature-opinion pair identification of product reviews in Chinese: a domain ontology modeling method

    Science.gov (United States)

    Yin, Pei; Wang, Hongwei; Guo, Kaiqiang

    2013-03-01

With the emergence of the new economy based on social media, a great amount of consumer feedback on particular products is conveyed through widely spreading online product reviews, making opinion mining a growing interest for both academia and industry. According to the characteristic modes of expression in Chinese, this research proposes an ontology-based linguistic model to identify the basic appraisal expression in Chinese product reviews: the "feature-opinion pair (FOP)." The product-oriented domain ontology is constructed automatically at first, then algorithms to identify FOPs are designed by mapping product features and opinions to the conceptual space of the domain ontology, and finally comparative experiments are conducted to evaluate the model. Experimental results indicate that the proposed approach obtains more accurate results than state-of-the-art algorithms. Furthermore, through identifying and analyzing FOPs, unstructured product reviews are converted into structured, machine-sensible expressions, which provide valuable information for business applications. This paper contributes to related research in opinion mining by developing a solid foundation for further sentiment analysis at a fine-grained level and proposing a general way for automatic ontology construction.

  5. Modeling the temporal dynamics of distinctive feature landmark detectors for speech recognition.

    Science.gov (United States)

    Jansen, Aren; Niyogi, Partha

    2008-09-01

    This paper elaborates on a computational model for speech recognition that is inspired by several interrelated strands of research in phonology, acoustic phonetics, speech perception, and neuroscience. The goals are twofold: (i) to explore frameworks for recognition that may provide a viable alternative to the current hidden Markov model (HMM) based speech recognition systems and (ii) to provide a computational platform that will facilitate engaging, quantifying, and testing various theories in the scientific traditions in phonetics, psychology, and neuroscience. This motivation leads to an approach that constructs a hierarchically structured point process representation based on distinctive feature landmark detectors and probabilistically integrates the firing patterns of these detectors to decode a phonological sequence. The accuracy of a broad class recognizer based on this framework is competitive with equivalent HMM-based systems. Various avenues for future development of the presented methodology are outlined.

  6. Estimating Fallout Building Attributes from Architectural Features and Global Earthquake Model (GEM) Building Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Staci R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-01

    A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building’s protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.

  7. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    Science.gov (United States)

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  8. models of hourly dry bulb temperature and relative humidity of key

    African Journals Online (AJOL)

    user

    use these models as inputs in computer programs for simulation of refrigerator, air conditioning systems and internal combustion engines operating anywhere in Nigeria. Keywords: Dry bulb temperature, Relative humidity, Air conditioning systems, Models, Fourier series. 1. INTRODUCTION. Nigeria is a tropical country in ...
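A truncated Fourier series model of hourly dry-bulb temperature, of the kind this abstract describes, can be fit by ordinary least squares; the synthetic data and the number of harmonics below are assumptions, not the paper's values.

```python
import numpy as np

def fourier_design(hours, n_harmonics, period=24.0):
    """Design matrix for a truncated Fourier series with a daily period."""
    cols = [np.ones_like(hours)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k * hours / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

hours = np.arange(0, 24 * 7, dtype=float)  # one week of hourly samples
# synthetic diurnal cycle: 25 degC mean, 6 degC swing, minimum before dawn
true_temp = 25 + 6 * np.sin(2 * np.pi * (hours - 9) / 24)
rng = np.random.default_rng(3)
observed = true_temp + 0.3 * rng.normal(size=hours.size)

A = fourier_design(hours, n_harmonics=2)
coef, *_ = np.linalg.lstsq(A, observed, rcond=None)
fitted = A @ coef
rmse = np.sqrt(np.mean((fitted - true_temp) ** 2))
assert rmse < 1.0  # two harmonics capture the diurnal cycle well
```

Fitted coefficients like these are what such a model would supply as inputs to the refrigeration and air-conditioning simulation programs the abstract mentions.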

  9. Adaptive Atmospheric Modeling Key Techniques in Grid Generation, Data Structures, and Numerical Operations with Applications

    CERN Document Server

    Behrens, Jörn

    2006-01-01

    Gives an overview and guidance in the development of adaptive techniques for atmospheric modeling. This book covers paradigms of adaptive techniques, such as error estimation and adaptation criteria. Considering applications, it demonstrates several techniques for discretizing relevant conservation laws from atmospheric modeling.

  10. Emporium Model: The Key to Content Retention in Secondary Math Courses

    Science.gov (United States)

    Wilder, Sandra; Berry, Lisa

    2016-01-01

    The math emporium model was first developed by Virginia Tech in 1999. In the emporium model students use computer-based learning resources, engage in active learning, and work toward mastery of concepts. This approach to teaching and learning mathematics was piloted in a rural STEM high school. The purpose of this experimental study was to compare…

  11. models of hourly dry bulb temperature and relative humidity of key ...

    African Journals Online (AJOL)

    user

    use these models as inputs in computer programs for simulation of refrigerator, air conditioning systems and internal combustion engines operating anywhere in Nigeria. Keywords: Dry bulb temperature, Relative humidity, Air conditioning systems, Models, Fourier series. 1. INTRODUCTION. Nigeria is a tropical country in ...

  12. Computational intelligence models to predict porosity of tablets using minimum features

    Directory of Open Access Journals (Sweden)

    Khalid MH

    2017-01-01

behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hints at the most important variables within this factor space. Keywords: computational intelligence, artificial neural network, symbolic regression, feature selection, die compaction, porosity

  13. Key features of INTOR nuclear systems

    International Nuclear Information System (INIS)

    Abdou, M.A.

    1981-05-01

    The conceptual design effort for INTOR was broadly defined in three areas: (1) Plasma Physics, (2) Engineering, and (3) Nuclear Systems. This paper is devoted to a summary of the Nuclear Systems effort. The emphasis is placed on the First Wall, Breeding Blanket, and Divertor

  14. Key features of Sizewell B type

    International Nuclear Information System (INIS)

    Tait, B.I.

    1987-01-01

The British Central Electricity Generating Board (CEGB) has decided to build the first British nuclear power plant to be equipped with a pressurized water reactor and has already placed a number of contracts for Sizewell B. The CEGB in this case acts as architect engineer to many important nuclear firms, including some non-British firms, as subcontractors. The licensee in Britain for the Westinghouse type of reactor underlying the plant design is the National Nuclear Corp. (NNC). Westinghouse and NNC have created a joint venture, PWR Power Projects, which will be the licensee of NNC and vendor of British PWR plants or primary parts thereof. This applies to various unit powers to be offered in the United Kingdom and abroad. NNC and Westinghouse will make their experience available to the British PWR company. (orig.) [de

  15. Data warehouse model for monitoring key performance indicators (KPIs) using goal oriented approach

    Science.gov (United States)

    Abdullah, Mohammed Thajeel; Ta'a, Azman; Bakar, Muhamad Shahbani Abu

    2016-08-01

The growth and development of universities, just as of other organizations, depends on their ability to strategically plan and implement development blueprints in line with their vision and mission statements. The actualization of these statements, which are often designed into goals and sub-goals and linked to their respective actors, is better measured by defining key performance indicators (KPIs) of the university. This paper proposes ReGADaK, an extension of the GRAnD approach, which highlights the facts, dimensions, attributes, measures and KPIs of the organization. The measures from the goal analysis of this unit serve as the basis for developing the related university's KPIs. The proposed data warehouse schema is evaluated through expert review, prototyping and usability evaluation. The findings from the evaluation processes suggest that the proposed data warehouse schema is suitable for monitoring the university's KPIs.

  16. Computational intelligence models to predict porosity of tablets using minimum features.

    Science.gov (United States)

    Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander

    2017-01-01

    The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hints at the most important variables within this factor space.

  17. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data and automatically calibrated by numerical simulation of network flow and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi-regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  18. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek; P. Rogers

    2004-10-27

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of biosphere features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for the license application (LA). A screening decision, either "Included" or "Excluded", is given for each FEP along with the corresponding technical basis for the excluded FEPs and descriptions of how the included FEPs were incorporated in the biosphere model. This information is required by the U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. The FEPs addressed in this report concern characteristics of the reference biosphere, the receptor, and the environmental transport and receptor exposure pathways for the groundwater and volcanic ash exposure scenarios considered in biosphere modeling. This revision provides a summary of the implementation of included FEPs in TSPA-LA (i.e., how each FEP is included); for excluded FEPs, it provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded). This report is one of the 10 documents constituting the biosphere model documentation suite. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling. The "Biosphere Model Report" describes in detail the biosphere conceptual model and mathematical model. The input parameter reports shown to the right of the "Biosphere Model Report" contain detailed descriptions of the model input parameters and their development. Outputs from these six reports are used in the "Nominal Performance Biosphere Dose Conversion Factor Analysis and Disruptive Event Biosphere Dose Conversion Factor Analysis

  19. MULTI-SOURCE HIERARCHICAL CONDITIONAL RANDOM FIELD MODEL FOR FEATURE FUSION OF REMOTE SENSING IMAGES AND LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2013-05-01

    Feature fusion of remote sensing images and LiDAR point cloud data, which are strongly complementary, can effectively exploit the advantages of multiple feature classes to provide more reliable information for remote sensing applications such as object classification and recognition. In this paper, we introduce a novel multi-source hierarchical conditional random field (MSHCRF) model to fuse features extracted from remote sensing images and LiDAR data for image classification. First, typical features are selected to obtain the regions of interest from the multi-source data; the MSHCRF model is then constructed, based on these regions, to exploit the features, the category compatibility of images, and the category consistency of the multi-source data, and the output of the model represents the optimal result of the image classification. Competitive results demonstrate the precision and robustness of the proposed method.

  20. Replication and extension of a hierarchical model of social anxiety and depression: fear of positive evaluation as a key unique factor in social anxiety.

    Science.gov (United States)

    Weeks, Justin W

    2015-01-01

    Wang, Hsu, Chiu, and Liang (2012, Journal of Anxiety Disorders, 26, 215-224) recently proposed a hierarchical model of social interaction anxiety and depression to account for both the commonalities and distinctions between these conditions. In the present paper, this model was extended to more broadly encompass the symptoms of social anxiety disorder, and replicated in a large unselected, undergraduate sample (n = 585). Structural equation modeling (SEM) and hierarchical regression analyses were employed. Negative affect and positive affect were conceptualized as general factors shared by social anxiety and depression; fear of negative evaluation (FNE) and disqualification of positive social outcomes were operationalized as specific factors, and fear of positive evaluation (FPE) was operationalized as a factor unique to social anxiety. This extended hierarchical model explicates structural relationships among these factors, in which the higher-level, general factors (i.e., high negative affect and low positive affect) represent vulnerability markers of both social anxiety and depression, and the lower-level factors (i.e., FNE, disqualification of positive social outcomes, and FPE) are the dimensions of specific cognitive features. Results from SEM and hierarchical regression analyses converged in support of the extended model. FPE is further supported as a key symptom that differentiates social anxiety from depression.

  1. Key West, Florida 1/3 Arc-second MHW Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  2. Key West, Florida 1/3 Arc-second NAVD 88 Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  3. Object class recognition based on compressive sensing with sparse features inspired by hierarchical model in visual cortex

    Science.gov (United States)

    Lu, Pei; Xu, Zhiyong; Yu, Huapeng; Chang, Yongxin; Fu, Chengyu; Shao, Jianxin

    2012-11-01

    According to models of object recognition in cortex, the brain uses a hierarchical approach in which simple, low-level features having high position and scale specificity are pooled and combined into more complex, higher-level features having greater location invariance. At higher levels, spatial structure becomes implicitly encoded into the features themselves, which may overlap, while explicit spatial information is coded more coarsely. In this paper, the importance of sparsity and localized patch features in a hierarchical model inspired by visual cortex is investigated. As in the model of Serre, Wolf, and Poggio, we first apply Gabor filters at all positions and scales; feature complexity and position/scale invariance are then built up by alternating template matching and max pooling operations. To improve generalization performance, sparsity is introduced and the data dimension is reduced by means of compressive sensing theory and a sparse representation algorithm. Similarly, within computational neuroscience, imposing sparsity on the number of feature inputs and on feature selection is critical for learning biologically plausible models from the statistics of natural images. A redundant dictionary of patch-based features that distinguish the object class from other categories is then designed, and object recognition is implemented through iterative optimization. The method is tested on the UIUC car database. The success of this approach suggests a proof of concept for object class recognition in visual cortex.
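
    The alternating template-matching and max-pooling operations described in this record can be sketched on a toy 1-D signal. This is a hedged illustration, not the authors' implementation: random templates stand in for Gabor filters, and simple top-k thresholding stands in for the sparse representation step.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.normal(size=64)
    templates = rng.normal(size=(8, 9))          # 8 patch templates, width 9

    # S-layer: template matching (correlation) at every position.
    windows = np.lib.stride_tricks.sliding_window_view(signal, 9)  # (56, 9)
    s_layer = windows @ templates.T              # (56, 8) responses

    # C-layer: max pooling over position -> position-invariant features.
    c_layer = s_layer.max(axis=0)                # (8,)

    # Sparsify: keep only the k largest responses, in the spirit of the
    # compressive-sensing / sparse-representation step.
    k = 3
    sparse = np.zeros_like(c_layer)
    idx = np.argsort(c_layer)[-k:]
    sparse[idx] = c_layer[idx]
    print(int(np.count_nonzero(sparse)))  # -> 3
    ```

    Stacking more such S/C pairs, as in the full hierarchical model, builds progressively more invariant features before the final classifier.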

  4. Modelling the exposure of wildlife to radiation: key findings and activities of IAEA working groups

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A. [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi; Vandenhove, Hildegarde [Belgian Nuclear Research Centre, Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Beaugelin-Seiller, Karine [Institut de Radioprotection et de Surete Nucleaire (IRSN), PRP-ENV, SERIS, LM2E, Cadarache (France); Johansen, Mathew P. [ANSTO Australian Nuclear Science and Technology Organisation, New Illawarra Rd, Menai, NSW (Australia); Goulet, Richard [Canadian Nuclear Safety Commission, Environmental Risk Assessment Division, 280 Slater, Ottawa, K1A0H3 (Canada); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Ruedig, Elizabeth [Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins (United States); Stark, Karolina; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Andersson, Pal [Swedish Radiation Safety Authority, SE-171 16, Stockholm (Sweden); Copplestone, David [Biological and Environmental Sciences, University of Stirling, Stirling, FK9 4LA (United Kingdom); Yankovich, Tamara L.; Fesenko, Sergey [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)

    2014-07-01

    In total, participants from 14 countries, representing 19 organisations, actively participated in the model application/inter-comparison activities of the IAEA's EMRAS II programme Biota Modelling Group. A range of models/approaches were used by participants (e.g. the ERICA Tool, RESRAD-BIOTA, the ICRP Framework). The agreed objectives of the group were: 'To improve Member States' capabilities for protection of the environment by comparing and validating models being used, or developed, for biota dose assessment (that may be used) as part of the regulatory process of licensing and compliance monitoring of authorised releases of radionuclides.' The activities of the group, the findings of which will be described, included: - An assessment of the predicted unweighted absorbed dose rates for 74 radionuclides estimated by 10 approaches for five of the ICRP's Reference Animal and Plant geometries assuming 1 Bq per unit organism or media. - Modelling the effect of heterogeneous distributions of radionuclides in sediment profiles on the estimated exposure of organisms. - Model prediction - field data comparisons for freshwater ecosystems in a uranium mining area and a number of wetland environments. - An evaluation of the application of available models to a scenario considering radioactive waste buried in shallow trenches. - Estimating the contribution of ²³⁵U to dose rates in freshwater environments. - Evaluation of the factors contributing to variation in modelling results. The work of the group continues within the framework of the IAEA's MODARIA programme, which was initiated in 2012. The work plan of the MODARIA working group has largely been defined by the findings of the previous EMRAS programme. On-going activities of the working group, which will be described, include the development of a database of dynamic parameters for wildlife dose assessment and exercises involving modelling the exposure of organisms in the marine coastal

  5. Climatic features of the Red Sea from a regional assimilative model

    KAUST Repository

    Viswanadhapalli, Yesubabu

    2016-08-16

    The Advanced Research version of Weather Research and Forecasting (WRF-ARW) model was used to generate a downscaled, 10-km resolution regional climate dataset over the Red Sea and adjacent region. The model simulations are performed based on two, two-way nested domains of 30- and 10-km resolutions assimilating all conventional observations using a cyclic three-dimensional variational approach over an initial 12-h period. The improved initial conditions are then used to generate regional climate products for the following 24 h. We combined the resulting daily 24-h datasets to construct a 15-year Red Sea atmospheric downscaled product from 2000 to 2014. This 15-year downscaled dataset is evaluated via comparisons with various in situ and gridded datasets. Our analysis indicates that the assimilated model successfully reproduced the spatial and temporal variability of temperature, wind, rainfall, relative humidity and sea level pressure over the Red Sea region. The model also efficiently simulated the seasonal and monthly variability of wind patterns, the Red Sea Convergence Zone and associated rainfall. Our results suggest that dynamical downscaling and assimilation of available observations improve the representation of regional atmospheric features over the Red Sea compared to global analysis data from the National Centers for Environmental Prediction. We use the dataset to describe the atmospheric climatic conditions over the Red Sea region. © 2016 Royal Meteorological Society.

  6. Feature extraction and classification in surface grading application using multivariate statistical projection models

    Science.gov (United States)

    Prats-Montalbán, José M.; López, Fernando; Valiente, José M.; Ferrer, Alberto

    2007-01-01

    In this paper we present an innovative way to simultaneously perform feature extraction and classification for the quality control issue of surface grading by applying two well known multivariate statistical projection tools (SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual appearance of surfaces (soft color texture descriptors) and to directly perform classification using statistics and predictions computed from the extracted projection models. Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This image database is comprised of 14 different models, 42 surface classes and 960 pieces. A factorial experimental design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors include tile model, color representation scheme (CIE Lab, CIE Luv and RGB) and compression/classification approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to compute accuracy estimates and study the factors effect. The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These results outperform those obtained in a previous work where the soft color texture descriptors in combination with the CIE Lab color space and the k-NN classifier achieved an accuracy of 97.36%.

  7. CHARACTERISTIC FEATURES OF MUELLER MATRIX PATTERNS FOR POLARIZATION SCATTERING MODEL OF BIOLOGICAL TISSUES

    Directory of Open Access Journals (Sweden)

    E DU

    2014-01-01

    We developed a model to describe polarized photon scattering in biological tissues. In this model, tissues are simplified to a mixture of scatterers and surrounding medium. There are two types of scatterers in the model: solid spheres and infinitely long solid cylinders. Variables related to the scatterers include: the densities and sizes of the spheres and cylinders, and the orientation and angular distribution of cylinders. Variables related to the surrounding medium include: the refractive index, absorption coefficient and birefringence. In this paper, we extend the model by introducing an optical activity effect. By comparing experiments and Monte Carlo simulations, we analyze the backscattering Mueller matrix patterns of several tissue-like media, and summarize the different effects coming from anisotropic scattering and optical properties. In addition, we propose a possible method to extract the optical activity values for tissues. Both the experimental and simulated results show that, by analyzing the Mueller matrix patterns, the microstructure and optical properties of the medium can be obtained. The characteristic features of Mueller matrix patterns are potentially powerful tools for studying the contrast mechanisms of polarization imaging for medical diagnosis.

  8. Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking

    Directory of Open Access Journals (Sweden)

    Katsuya Hashino

    2016-01-01

    Classical scale invariance (CSI) may be one of the solutions for the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman–Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction on the Higgs-photon-photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼+70%) from the SM prediction is universally predicted, which is independent of masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.
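
    The Coleman–Weinberg mechanism invoked in this record can be sketched in a generic textbook form (not the specific scalar potential of these models): with classical scale invariance the tree-level mass term is absent, and along a flat direction φ the one-loop effective potential takes the form

    ```latex
    V_{\mathrm{eff}}(\varphi) = A\,\varphi^{4} + B\,\varphi^{4}\,\ln\frac{\varphi^{2}}{Q^{2}},
    \qquad
    \left.\frac{\mathrm{d}V_{\mathrm{eff}}}{\mathrm{d}\varphi}\right|_{\varphi=v} = 0
    \;\Rightarrow\;
    \ln\frac{v^{2}}{Q^{2}} = -\frac{1}{2} - \frac{A}{B},
    \qquad
    m^{2} = \left.\frac{\mathrm{d}^{2}V_{\mathrm{eff}}}{\mathrm{d}\varphi^{2}}\right|_{\varphi=v} = 8\,B\,v^{2},
    ```

    so the symmetry-breaking scale v and the scalar mass are generated entirely by the loop coefficient B, which is why the extra scalars of such models feed directly into observables like the h(125) couplings.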

  9. Animal Models of Diabetic Macrovascular Complications: Key Players in the Development of New Therapeutic Approaches

    Directory of Open Access Journals (Sweden)

    Suvi E. Heinonen

    2015-01-01

    Diabetes mellitus is a lifelong, incapacitating metabolic disease associated with chronic macrovascular complications (coronary heart disease, stroke, and peripheral vascular disease) and microvascular disorders leading to damage of the kidneys (nephropathy) and eyes (retinopathy). Based on the current trends, the rising prevalence of diabetes worldwide will lead to increased cardiovascular morbidity and mortality. Therefore, novel means to prevent and treat these complications are needed. Under the auspices of the IMI (Innovative Medicines Initiative), the SUMMIT (SUrrogate markers for Micro- and Macrovascular hard end points for Innovative diabetes Tools) consortium is working on the development of novel animal models that better replicate vascular complications of diabetes and on the characterization of the available models. In the past years, with the high level of genomic information available and more advanced molecular tools, a very large number of models has been created. Selecting the right model for a specific study is not a trivial task and will have an impact on the study results and their interpretation. This review gathers information on the available experimental animal models of diabetic macrovascular complications and evaluates their pros and cons for research purposes as well as for drug development.

  10. Emporium Model: The Key to Content Retention in Secondary Math Courses

    Directory of Open Access Journals (Sweden)

    Sandra Wilder

    2016-07-01

    The math emporium model was first developed by Virginia Tech in 1999. In the emporium model students use computer-based learning resources, engage in active learning, and work toward mastery of concepts. This approach to teaching and learning mathematics was piloted in a rural STEM high school. The purpose of this experimental study was to compare the impact of the emporium model and the traditional approach to instruction on student achievement and retention of algebra. The results indicated that both approaches to instruction were equally effective in improving student mathematics knowledge. However, the findings revealed that the students in the emporium section had significantly higher retention of the content knowledge.

  11. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    Science.gov (United States)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of: characteristic grain size, input water and sediment discharges and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations create a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data is extracted from a DEM from 2015. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision. The model's framework is generic, but inclusion of individual

  12. Physiologically-based toxicokinetic models help identifying the key factors affecting contaminant uptake during flood events

    Energy Technology Data Exchange (ETDEWEB)

    Brinkmann, Markus; Eichbaum, Kathrin [Department of Ecosystem Analysis, Institute for Environmental Research,ABBt – Aachen Biology and Biotechnology, RWTH Aachen University, Worringerweg 1, 52074 Aachen (Germany); Kammann, Ulrike [Thünen-Institute of Fisheries Ecology, Palmaille 9, 22767 Hamburg (Germany); Hudjetz, Sebastian [Department of Ecosystem Analysis, Institute for Environmental Research,ABBt – Aachen Biology and Biotechnology, RWTH Aachen University, Worringerweg 1, 52074 Aachen (Germany); Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52056 Aachen (Germany); Cofalla, Catrina [Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52056 Aachen (Germany); Buchinger, Sebastian; Reifferscheid, Georg [Federal Institute of Hydrology (BFG), Department G3: Biochemistry, Ecotoxicology, Am Mainzer Tor 1, 56068 Koblenz (Germany); Schüttrumpf, Holger [Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52056 Aachen (Germany); Preuss, Thomas [Department of Environmental Biology and Chemodynamics, Institute for Environmental Research,ABBt- Aachen Biology and Biotechnology, RWTH Aachen University, Worringerweg 1, 52074 Aachen (Germany); and others

    2014-07-01

    Highlights: • A PBTK model for trout was coupled with a sediment equilibrium partitioning model. • The influence of physical exercise on pollutant uptake was studied using the model. • Physical exercise during flood events can increase the level of biliary metabolites. • Cardiac output and effective respiratory volume were identified as relevant factors. • These confounding factors also need to be considered for bioconcentration studies. - Abstract: As a consequence of global climate change, we will likely be facing an increasing frequency and intensity of flood events. Thus, the ecotoxicological relevance of sediment re-suspension is of growing concern. It is vital to understand contaminant uptake from suspended sediments and relate it to effects in aquatic biota. Here we report on a computational study that utilizes a physiologically based toxicokinetic model to predict uptake, metabolism and excretion of sediment-borne pyrene in rainbow trout (Oncorhynchus mykiss). To this end, data from two experimental studies were compared with the model predictions: (a) batch re-suspension experiments with constant concentration of suspended particulate matter at two different temperatures (12 and 24 °C), and (b) simulated flood events in an annular flume. The model predicted both the final concentrations and the kinetics of 1-hydroxypyrene secretion into the gall bladder of exposed rainbow trout well. We were able to show that exhaustive exercise during exposure in simulated flood events can lead to increased levels of biliary metabolites and identified cardiac output and effective respiratory volume as the two most important factors for contaminant uptake. The results of our study clearly demonstrate the relevance and the necessity to investigate uptake of contaminants from suspended sediments under realistic exposure scenarios.
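
    The qualitative mechanism in this record can be illustrated with a deliberately simplified one-compartment toxicokinetic model (not the published PBTK model; all rate constants below are hypothetical), in which uptake scales with the effective respiratory volume:

    ```python
    # Toy one-compartment toxicokinetic sketch: uptake from water scales
    # with the effective respiratory volume, elimination is first order.
    def body_burden(resp_volume, c_water=1.0, k_elim=0.05, t_end=48.0, dt=0.01):
        """Euler integration of dC/dt = resp_volume * c_water - k_elim * C."""
        c = 0.0
        for _ in range(int(t_end / dt)):
            c += dt * (resp_volume * c_water - k_elim * c)
        return c

    rest = body_burden(resp_volume=0.02)       # resting ventilation
    exercised = body_burden(resp_volume=0.06)  # exhaustive exercise: ~3x ventilation
    print(exercised > rest)  # -> True
    ```

    Because the uptake term is linear in the respiratory volume, tripling ventilation triples the internal concentration in this sketch, mirroring the paper's finding that effective respiratory volume is a dominant factor.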

  13. Physiologically-based toxicokinetic models help identifying the key factors affecting contaminant uptake during flood events

    International Nuclear Information System (INIS)

    Brinkmann, Markus; Eichbaum, Kathrin; Kammann, Ulrike; Hudjetz, Sebastian; Cofalla, Catrina; Buchinger, Sebastian; Reifferscheid, Georg; Schüttrumpf, Holger; Preuss, Thomas

    2014-01-01

    Highlights: • A PBTK model for trout was coupled with a sediment equilibrium partitioning model. • The influence of physical exercise on pollutant uptake was studied using the model. • Physical exercise during flood events can increase the level of biliary metabolites. • Cardiac output and effective respiratory volume were identified as relevant factors. • These confounding factors also need to be considered for bioconcentration studies. - Abstract: As a consequence of global climate change, we will likely be facing an increasing frequency and intensity of flood events. Thus, the ecotoxicological relevance of sediment re-suspension is of growing concern. It is vital to understand contaminant uptake from suspended sediments and relate it to effects in aquatic biota. Here we report on a computational study that utilizes a physiologically based toxicokinetic model to predict uptake, metabolism and excretion of sediment-borne pyrene in rainbow trout (Oncorhynchus mykiss). To this end, data from two experimental studies were compared with the model predictions: (a) batch re-suspension experiments with constant concentration of suspended particulate matter at two different temperatures (12 and 24 °C), and (b) simulated flood events in an annular flume. The model predicted both the final concentrations and the kinetics of 1-hydroxypyrene secretion into the gall bladder of exposed rainbow trout well. We were able to show that exhaustive exercise during exposure in simulated flood events can lead to increased levels of biliary metabolites and identified cardiac output and effective respiratory volume as the two most important factors for contaminant uptake. The results of our study clearly demonstrate the relevance and the necessity to investigate uptake of contaminants from suspended sediments under realistic exposure scenarios.

  14. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Muhammad Sohaib

    2017-12-01

    Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).
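
    The "hybrid feature pool" idea can be illustrated with a few classical time-domain vibration statistics. This is an assumed, minimal feature set for the sketch; the paper's actual pool is richer and feeds an SAE-based deep network rather than a direct comparison.

    ```python
    import numpy as np

    def feature_pool(x):
        """A few classical time-domain features of a vibration signal."""
        rms = np.sqrt(np.mean(x ** 2))
        peak = np.max(np.abs(x))
        kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
        crest = peak / rms
        return np.array([rms, peak, kurtosis, crest])

    rng = np.random.default_rng(42)
    healthy = rng.normal(size=4096)
    # A faulty bearing adds periodic impulses, which raises kurtosis.
    faulty = healthy.copy()
    faulty[::256] += 8.0
    f_h, f_f = feature_pool(healthy), feature_pool(faulty)
    print(f_f[2] > f_h[2])  # -> True: impulses increase kurtosis
    ```

    Features like these are exactly the kind of discriminating inputs a downstream classifier (here, the SAE-based DNN) can use to separate fault severities.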

  15. Combining sigma-lognormal modeling and classical features for analyzing graphomotor performances in kindergarten children.

    Science.gov (United States)

    Duval, Thérésa; Rémi, Céline; Plamondon, Réjean; Vaillant, Jean; O'Reilly, Christian

    2015-10-01

    This paper investigates the advantage of using the kinematic theory of rapid human movements as a complementary approach to those based on classical dynamical features to characterize and analyze kindergarten children's ability to engage in graphomotor activities as a preparation for handwriting learning. This study analyzes nine different movements taken from 48 children evenly distributed among three different school grades corresponding to pupils aged 3, 4, and 5 years. On the one hand, our results show that the ability to perform graphomotor activities depends on kindergarten grades. More importantly, this study shows which performance criteria, from sophisticated neuromotor modeling as well as more classical kinematic parameters, can differentiate children of different school grades. These criteria provide a valuable tool for studying children's graphomotor control learning strategies. On the other hand, from a practical point of view, it is observed that school grades do not clearly reflect pupils' graphomotor performances. This calls for a large-scale investigation, using a more efficient experimental design based on the various observations made throughout this study regarding the choice of the graphic shapes, the number of repetitions and the features to analyze. Copyright © 2015 Elsevier B.V. All rights reserved.
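
    The kinematic theory referenced in this record models the speed profile of a single rapid stroke as a lognormal impulse response (the sigma-lognormal model). A minimal sketch with illustrative parameters, not fitted to the children's data, might look like:

    ```python
    import numpy as np

    def lognormal_speed(t, D=1.0, t0=0.0, mu=-1.0, sigma=0.3):
        """|v(t)| of one stroke: a lognormal profile scaled by amplitude D."""
        dt = t - t0
        out = np.zeros_like(t)
        m = dt > 0
        out[m] = D / (sigma * np.sqrt(2 * np.pi) * dt[m]) * np.exp(
            -((np.log(dt[m]) - mu) ** 2) / (2 * sigma ** 2))
        return out

    t = np.linspace(0, 2, 2001)
    v = lognormal_speed(t)
    area = v.sum() * (t[1] - t[0])  # integrates to ~D, the stroke amplitude
    ```

    Fitting the parameters (D, t0, mu, sigma) to recorded pen-tip speed is what yields the neuromotor criteria the paper compares against classical kinematic features.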

  16. Green Infrastructure Design Based on Spatial Conservation Prioritization and Modeling of Biodiversity Features and Ecosystem Services.

    Science.gov (United States)

    Snäll, Tord; Lehtomäki, Joona; Arponen, Anni; Elith, Jane; Moilanen, Atte

    2016-02-01

    There is high-level political support for the use of green infrastructure (GI) across Europe, to maintain viable populations and to provide ecosystem services (ES). Even though GI is inherently a spatial concept, modern tools for spatial planning have gone unrecognized, for example in the recent European Environment Agency (EEA) report. We outline a toolbox of methods useful for GI design that explicitly accounts for biodiversity and ES. Data on species occurrence, habitats, and environmental variables are increasingly available via open-access internet platforms. Such data can be synthesized by statistical species distribution modeling, producing maps of biodiversity features. These, together with maps of ES, can form the basis for GI design. We argue that spatial conservation prioritization (SCP) methods are effective tools for GI design, as the overall SCP goal is cost-effective allocation of conservation efforts. Corridors are currently promoted by the EEA as the means for implementing GI design, but they typically target the needs of only a subset of the regional species pool. SCP methods would help to ensure that GI provides a balanced solution for the requirements of many biodiversity features (e.g., species, habitat types) and ES simultaneously in a cost-effective manner. Such tools are necessary to make GI into an operational concept for combating biodiversity loss and promoting ES.

  17. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis.

    Science.gov (United States)

    Sohaib, Muhammad; Kim, Cheol-Hong; Kim, Jong-Myon

    2017-12-11

    Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economic losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).
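    The "hybrid feature pool" idea can be illustrated with a few common time-domain statistics computed from a raw vibration signal. This is a minimal numpy sketch: the `feature_pool` function and its feature choices (RMS, kurtosis, crest factor, etc.) are typical in the bearing-diagnosis literature but are assumptions here, not the paper's exact feature set.

```python
import numpy as np

def feature_pool(signal):
    """A few common time-domain statistics for a raw vibration signal
    (illustrative; the paper's exact hybrid feature set is not listed here)."""
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    mean, std = np.mean(signal), np.std(signal)
    kurtosis = np.mean((signal - mean) ** 4) / std ** 4  # impulsiveness indicator
    skewness = np.mean((signal - mean) ** 3) / std ** 3
    crest = peak / rms                                   # peak-to-RMS ratio
    shape = rms / np.mean(np.abs(signal))
    return np.array([rms, peak, kurtosis, skewness, crest, shape])

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 4096)                 # broadband noise
faulty = healthy + 5.0 * (rng.random(4096) < 0.01)   # sparse defect impulses
# Kurtosis and crest factor rise sharply when impulses appear:
print(feature_pool(healthy)[[2, 4]], feature_pool(faulty)[[2, 4]])
```

    Feature vectors like these, rather than the raw waveform, would then be fed to the classifier stage.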

  18. Features and analyses of W7-X cryostat system FE model

    Energy Technology Data Exchange (ETDEWEB)

    Eeten, Paul van, E-mail: paul.van.eeten@ipp.mpg.de; Bräuer, Torsten; Bykov, Victor; Carls, Andre; Fellinger, Joris; Kallmeyer, J.P.

    2015-10-15

    The Wendelstein 7-X stellarator is presently under construction at the Max-Planck-Institute for Plasma Physics in Greifswald with the goal to verify that a stellarator magnetic confinement concept is a viable option for a fusion power plant. The main components of the W7-X cryostat system are the plasma vessel (PV), outer vessel (OV), ports, thermal insulation, vessel supports and the machine base (MB). The main task of the cryostat system is to provide an insulating vacuum for the cryogenic magnet system while allowing external access to the PV through ports for diagnostic, supply and heating systems. The cryostat is subjected to different types of loads during assembly, maintenance and operation. This ranges from basic weight loads from all installed components to mechanical, vacuum and thermal loads. To predict the behavior of the cryostat in terms of deformations, stresses and support load distribution a finite element (FE) global model has been created called the Global Model of the Cryostat System (GMCS). A complete refurbishment of the GMCS has been done in the last 2 years to prepare the model for future applications. This involved a complete mesh update of the model, an improvement of many model features, an update of the applied operational loads and boundary conditions as well as the creation of automatic post-processing procedures. Currently the GMCS is used to support several significant assembly and commissioning steps of W7-X that involve the cryostat system, e.g. the removal of temporary supports beneath the MB, transfer of the PV from temporary to the final supports and evacuation of the cryostat. In the upcoming months the model will be used for further support of the commissioning of W7-X, which includes the first evacuation of the PV.

  19. Vascular dynamics aid a coupled neurovascular network learn sparse independent features: A computational model

    Directory of Open Access Journals (Sweden)

    Ryan Thomas Philips

    2016-02-01

    Full Text Available Cerebral vascular dynamics are generally thought to be controlled by neural activity in a unidirectional fashion. However, both computational modeling and experimental evidence point to the feedback effects of vascular dynamics on neural activity. Vascular feedback in the form of glucose and oxygen controls neuronal ATP, either directly or via the agency of astrocytes, which in turn modulates neural firing. Recently, a detailed model of the neuron-astrocyte-vessel system has shown how vasomotion can modulate neural firing. Similarly, arguing from known cerebrovascular physiology, an approach known as the 'hemoneural hypothesis' postulates functional modulation of neural activity by vascular feedback. To instantiate this perspective, we present a computational model in which a network of 'vascular units' supplies energy to a neural network. The complex dynamics of the vascular network, modeled by a network of oscillators, turns neurons ON and OFF randomly. The informational consequence of such dynamics is explored in the context of an auto-encoder network. In the proposed model, each vascular unit supplies energy to a subset of hidden neurons of an autoencoder network, which constitutes its 'projective field'. Neurons that receive adequate energy in a given trial have a reduced threshold, and thus are prone to fire. Dynamics of the vascular network are governed by changes in the reconstruction error of the auto-encoder network, interpreted as the neuronal demand. Vascular feedback causes random inactivation of a subset of hidden neurons in every trial. We observe that, under conditions of desynchronized vascular dynamics, the output reconstruction error is low and the feature vectors learnt are sparse and independent. Our earlier modeling study highlighted the link between desynchronized vascular dynamics and efficient energy delivery in skeletal muscle. We now show that desynchronized vascular dynamics leads to efficient training in an auto-encoder neural network.

  20. Characterizing structural features of cuticle-degrading proteases from fungi by molecular modeling

    Directory of Open Access Journals (Sweden)

    Fu Yun-Xin

    2007-05-01

    Full Text Available Abstract Background Serine proteases secreted by nematode and insect pathogenic fungi are bio-control agents which have commercial potential for development into effective bio-pesticides. A thorough understanding of the structural and functional features of these proteases would significantly assist with targeting the design of efficient bio-control agents. Results Structural models of the serine proteases PR1 from an entomophagous fungus and Ver112 and VCP1 from nematophagous fungi have been built using homology modeling based on the crystal coordinates of proteinase K. In combination with multiple sequence alignment, these models suggest one similar calcium-binding site and two common disulfide bridges in the three cuticle-degrading enzymes. In addition, the predicted models of the three cuticle-degrading enzymes present an essentially identical backbone topology and similar geometric properties, with the exception of a limited number of sites exhibiting relatively large local conformational differences, found only in some surface loops and the N- and C-termini. However, they differ from each other in electrostatic surface potential, in the hydrophobicity and size of the S4 substrate-binding pocket, and in the number and distribution of hydrogen bonds and salt bridges within regions that are part of or in close proximity to the S2-loop. Conclusion These differences likely lead to variations in substrate specificity and catalytic efficiency among the three enzymes. Amino acid polymorphisms in cuticle-degrading enzymes are discussed with respect to functional effects and host preference. It is hoped that these structural models will provide a further basis for the exploitation of these serine proteases from pathogenic fungi as effective bio-control agents.

  1. Vascular Dynamics Aid a Coupled Neurovascular Network Learn Sparse Independent Features: A Computational Model.

    Science.gov (United States)

    Philips, Ryan T; Chhabria, Karishma; Chakravarthy, V Srinivasa

    2016-01-01

    Cerebral vascular dynamics are generally thought to be controlled by neural activity in a unidirectional fashion. However, both computational modeling and experimental evidence point to the feedback effects of vascular dynamics on neural activity. Vascular feedback in the form of glucose and oxygen controls neuronal ATP, either directly or via the agency of astrocytes, which in turn modulates neural firing. Recently, a detailed model of the neuron-astrocyte-vessel system has shown how vasomotion can modulate neural firing. Similarly, arguing from known cerebrovascular physiology, an approach known as "hemoneural hypothesis" postulates functional modulation of neural activity by vascular feedback. To instantiate this perspective, we present a computational model in which a network of "vascular units" supplies energy to a neural network. The complex dynamics of the vascular network, modeled by a network of oscillators, turns neurons ON and OFF randomly. The informational consequence of such dynamics is explored in the context of an auto-encoder network. In the proposed model, each vascular unit supplies energy to a subset of hidden neurons of an autoencoder network, which constitutes its "projective field." Neurons that receive adequate energy in a given trial have reduced threshold, and thus are prone to fire. Dynamics of the vascular network are governed by changes in the reconstruction error of the auto-encoder network, interpreted as the neuronal demand. Vascular feedback causes random inactivation of a subset of hidden neurons in every trial. We observe that, under conditions of desynchronized vascular dynamics, the output reconstruction error is low and the feature vectors learnt are sparse and independent. Our earlier modeling study highlighted the link between desynchronized vascular dynamics and efficient energy delivery in skeletal muscle. We now show that desynchronized vascular dynamics leads to efficient training in an auto-encoder neural network.
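    The trial-wise inactivation of a vascular unit's "projective field" resembles a structured form of dropout over groups of hidden neurons. The following numpy sketch is an assumption-laden simplification: a linear auto-encoder with hypothetical names (`We`, `Wd`, `fields`), gated per trial by one failing vascular unit, rather than the authors' oscillator-based vascular network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_vasc = 16, 12, 4
We = rng.normal(0, 0.1, (n_hid, n_in))   # encoder weights
Wd = rng.normal(0, 0.1, (n_in, n_hid))   # decoder weights
# Each hypothetical "vascular unit" energizes a fixed subset of hidden
# neurons -- its projective field; here, contiguous blocks of 3 units.
fields = np.arange(n_hid).reshape(n_vasc, -1)

def step(x, lr=0.02):
    """One trial: a randomly chosen vascular unit fails to deliver energy,
    silencing its projective field (a structured, dropout-like gating)."""
    global We, Wd
    gate = np.ones(n_hid)
    gate[fields[rng.integers(n_vasc)]] = 0.0
    h = gate * (We @ x)                         # gated hidden code
    e = Wd @ h - x                              # reconstruction error
    We -= lr * np.outer(gate * (Wd.T @ e), x)   # SGD on 0.5*||e||^2
    Wd -= lr * np.outer(e, h)
    return float(e @ e)

data = rng.normal(0, 1, (200, n_in))
errs = [np.mean([step(x) for x in data]) for _ in range(40)]
print(round(errs[0], 2), round(errs[-1], 2))    # error falls with training
```

    Even with random group inactivation every trial, reconstruction error drops, which is the qualitative behavior the abstract reports for desynchronized vascular dynamics.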

  2. Features and analyses of W7-X cryostat system FE model

    International Nuclear Information System (INIS)

    Eeten, Paul van; Bräuer, Torsten; Bykov, Victor; Carls, Andre; Fellinger, Joris; Kallmeyer, J.P.

    2015-01-01

    The Wendelstein 7-X stellarator is presently under construction at the Max-Planck-Institute for Plasma Physics in Greifswald with the goal to verify that a stellarator magnetic confinement concept is a viable option for a fusion power plant. The main components of the W7-X cryostat system are the plasma vessel (PV), outer vessel (OV), ports, thermal insulation, vessel supports and the machine base (MB). The main task of the cryostat system is to provide an insulating vacuum for the cryogenic magnet system while allowing external access to the PV through ports for diagnostic, supply and heating systems. The cryostat is subjected to different types of loads during assembly, maintenance and operation. This ranges from basic weight loads from all installed components to mechanical, vacuum and thermal loads. To predict the behavior of the cryostat in terms of deformations, stresses and support load distribution a finite element (FE) global model has been created called the Global Model of the Cryostat System (GMCS). A complete refurbishment of the GMCS has been done in the last 2 years to prepare the model for future applications. This involved a complete mesh update of the model, an improvement of many model features, an update of the applied operational loads and boundary conditions as well as the creation of automatic post-processing procedures. Currently the GMCS is used to support several significant assembly and commissioning steps of W7-X that involve the cryostat system, e.g. the removal of temporary supports beneath the MB, transfer of the PV from temporary to the final supports and evacuation of the cryostat. In the upcoming months the model will be used for further support of the commissioning of W7-X, which includes the first evacuation of the PV.

  3. Modeling, Simulation, and Analysis of a Decoy State Enabled Quantum Key Distribution System

    Science.gov (United States)

    2015-03-26

    Protecting Information, New York: Cambridge University Press, 2006. [6] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information... configurable to interfere with Bob's ability to detect a weak coherent pulse. DR D 5: The QKD model shall be accurate, flexible, usable, extensible

  4. Modeling information flows in clinical decision support: key insights for enhancing system effectiveness

    NARCIS (Netherlands)

    Medlock, Stephanie; Wyatt, Jeremy C.; Patel, Vimla L.; Shortliffe, Edward H.; Abu-Hanna, Ameen

    2016-01-01

    A fundamental challenge in the field of clinical decision support is to determine what characteristics of systems make them effective in supporting particular types of clinical decisions. However, we lack such a theory of decision support itself and a model to describe clinical decisions and the

  5. Evaluating predictive models for solar energy growth in the US states and identifying the key drivers

    Science.gov (United States)

    Chakraborty, Joheen; Banerji, Sugata

    2018-03-01

    Driven by a desire to control climate change and reduce dependence on fossil fuels, governments around the world are increasing the adoption of renewable energy sources. However, among the US states, we observe a wide disparity in renewable penetration. In this study, we have identified and cleaned over a dozen datasets representing solar energy penetration in each US state, and the potentially relevant socioeconomic and other factors that may be driving the growth in solar. We have applied a number of predictive modeling approaches - including machine learning and regression - to these datasets over a 17-year period and evaluated the relative performance of the models. Our goals were to: (1) identify the most important factors that are driving the growth in solar, (2) choose the most effective predictive modeling technique for solar growth, and (3) develop a model for predicting next year's solar growth using this year's data. We obtained very promising results with random forests (about 90% efficacy) and varying degrees of success with support vector machines and regression techniques (linear, polynomial, ridge). We also identified states with solar growth slower than expected, representing a potential for stronger growth in the future.
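    The lag-one setup, where this year's data predicts next year's growth, can be sketched with synthetic data. The study used random forests, SVMs, and several regression variants; the numpy-only least-squares stand-in below (with made-up driver names and coefficients) only illustrates how such a panel is aligned for training.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2000, 2017)              # a 17-year panel for one state
drivers = rng.random((len(years), 3))      # synthetic stand-ins for drivers
true_w = np.array([0.5, 0.3, -0.2])        # assumed ground-truth effects

# Next year's growth is generated from this year's drivers (plus noise):
growth = np.zeros(len(years))              # index 0 is a placeholder
growth[1:] = drivers[:-1] @ true_w + rng.normal(0.0, 0.01, len(years) - 1)

# Lag-one alignment: features at year t, target at year t + 1.
X, y = drivers[:-1], growth[1:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
pred_next = drivers[-1] @ w                # forecast for the year after 2016
print(w.round(2))
```

    A random forest or SVM would simply replace the `lstsq` call; the alignment of features and targets stays the same.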

  6. Integrating semantics and procedural generation: key enabling factors for declarative modeling of virtual worlds

    NARCIS (Netherlands)

    Bidarra, R.; Kraker, K.J. de; Smelik, R.M.; Tutenel, T.

    2010-01-01

    Manual content creation for virtual worlds can no longer satisfy the increasing demand arising from areas as entertainment and serious games, simulations, movies, etc. Furthermore, currently deployed modeling tools basically do not scale up: while they become more and more specialized and complex,

  7. Modeling succession of key resource-harvesting traits of mixotrophic plankton

    DEFF Research Database (Denmark)

    Berge, Terje; Chakraborty, Subhendu; Hansen, Per Juel

    2017-01-01

    building blocks for growth, the model reproduces the observed light-dependent ingestion rates and species-specific growth rates with and without prey from the laboratory. The combination of traits yielding the highest growth rate suggests high investments in photosynthesis, and inorganic nutrient uptake...

  8. Specific features of modelling rules of monetary policy on the basis of hybrid regression models with a neural component

    Directory of Open Access Journals (Sweden)

    Lukianenko Iryna H.

    2014-01-01

    Full Text Available The article considers the possibilities and specific features of modelling economic phenomena with the help of a category of models that unites elements of econometric regressions and artificial neural networks. This category of models contains auto-regression neural networks (AR-NN), smooth transition regressions (STR/STAR), multi-regime smooth transition regressions (MRSTR/MRSTAR) and smooth transition regressions with neural coefficients (NCSTR/NCSTAR). The neural network component allows models of this category to achieve high empirical fidelity, including the reproduction of complex non-linear interrelations, while the regression mechanism expands the possibilities for interpreting the obtained results. An example of a multi-regime monetary rule is used to show one case of the specification and interpretation of this kind of model. In particular, the article models and interprets principles of managing the UAH exchange rate that come into force when the economy passes from a relatively stable state into a crisis state.

  9. Key issues for the development and application of the species sensitivity distribution (SSD) model for ecological risk assessment

    DEFF Research Database (Denmark)

    Xu, Fu-Liu; Li, Yi-Long; Wang, Yin

    2015-01-01

    The species sensitivity distribution (SSD) model is one of the most commonly used methods for ecological risk assessment, based on the potentially affected fraction (PAF) of species and the combined PAF (msPAF) as quantitative indicators. There are usually four steps for the development of SSD models...... fractions (msPAFs) for the joint ecological risk assessment of multiple pollutants. Among the above-mentioned four steps, the first two are paramount. In the present study, the following six key issues are discussed: (1) how to select the appropriate species, (2) how to preprocess the toxicity data...... for invertebrates. Concentration addition or response addition were discussed for calculating msPAF according to the toxic mode of action (TMoA). The uncertainties of the SSD models for five heavy metals and for eight polycyclic aromatic hydrocarbons (PAHs) were analyzed. The comparison of the coefficients......
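    The PAF and msPAF calculations can be sketched for a log-normal SSD. The parameters below are hypothetical, and only response addition (for pollutants acting through independent TMoAs) is shown; concentration addition, used for a shared TMoA, would instead sum hazard units before evaluating the SSD.

```python
import math

def paf(conc, mu, sigma):
    """Potentially affected fraction at concentration `conc`, for a log-normal
    SSD with mean `mu` and standard deviation `sigma` of log10(toxicity)."""
    z = (math.log10(conc) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Hypothetical SSD parameters for two pollutants with independent TMoAs:
p1 = paf(10.0, mu=1.5, sigma=0.8)
p2 = paf(5.0, mu=1.2, sigma=0.6)
# Response addition: combined risk of independently acting substances.
mspaf_ra = 1.0 - (1.0 - p1) * (1.0 - p2)
print(round(p1, 3), round(p2, 3), round(mspaf_ra, 3))
```

    By construction, the response-addition msPAF always lies between the largest single-substance PAF and the sum of the individual PAFs.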

  10. Scenario-Led Habitat Modelling of Land Use Change Impacts on Key Species.

    Directory of Open Access Journals (Sweden)

    Matthew Geary

    Full Text Available Accurate predictions of the impacts of future land use change on species of conservation concern can help to inform policy-makers and improve conservation measures. If predictions are spatially explicit, predicted consequences of likely land use changes could be accessible to land managers at a scale relevant to their working landscape. We introduce a method, based on open source software, which integrates habitat suitability modelling with scenario-building, and illustrate its use by investigating the effects of alternative land use change scenarios on landscape suitability for black grouse Tetrao tetrix. Expert opinion was used to construct five near-future (twenty years) scenarios for the 800 km2 study site in upland Scotland. For each scenario, the cover of different land use types was altered by 5-30% from 20 random starting locations and changes in habitat suitability assessed by projecting a MaxEnt suitability model onto each simulated landscape. A scenario converting grazed land to moorland and open forestry was the most beneficial for black grouse, and 'increased grazing' (the opposite conversion) the most detrimental. Positioning of new landscape blocks was shown to be important in some situations. Increasing the area of open-canopy forestry caused a proportional decrease in suitability, but suitability gains for the 'reduced grazing' scenario were nonlinear. 'Scenario-led' landscape simulation models can be applied in assessments of the impacts of land use change both on individual species and also on diversity and community measures, or ecosystem services. A next step would be to include landscape configuration more explicitly in the simulation models, both to make them more realistic, and to examine the effects of habitat placement more thoroughly. In this example, the recommended policy would be incentives on grazing reduction to benefit black grouse.

  11. A lock-and-key model for protein–protein interactions

    OpenAIRE

    Morrison, Julie L.; Breitling, Rainer; Higham, Desmond J.; Gilbert, David R.

    2006-01-01

    Motivation: Protein–protein interaction networks are one of the major post-genomic data sources available to molecular biologists. They provide a comprehensive view of the global interaction structure of an organism’s proteome, as well as detailed information on specific interactions. Here we suggest a physical model of protein interactions that can be used to extract additional information at an intermediate level: It enables us to identify proteins which share biological interaction motifs,...

  12. Scenario-Led Habitat Modelling of Land Use Change Impacts on Key Species.

    Science.gov (United States)

    Geary, Matthew; Fielding, Alan H; McGowan, Philip J K; Marsden, Stuart J

    2015-01-01

    Accurate predictions of the impacts of future land use change on species of conservation concern can help to inform policy-makers and improve conservation measures. If predictions are spatially explicit, predicted consequences of likely land use changes could be accessible to land managers at a scale relevant to their working landscape. We introduce a method, based on open source software, which integrates habitat suitability modelling with scenario-building, and illustrate its use by investigating the effects of alternative land use change scenarios on landscape suitability for black grouse Tetrao tetrix. Expert opinion was used to construct five near-future (twenty years) scenarios for the 800 km2 study site in upland Scotland. For each scenario, the cover of different land use types was altered by 5-30% from 20 random starting locations and changes in habitat suitability assessed by projecting a MaxEnt suitability model onto each simulated landscape. A scenario converting grazed land to moorland and open forestry was the most beneficial for black grouse, and 'increased grazing' (the opposite conversion) the most detrimental. Positioning of new landscape blocks was shown to be important in some situations. Increasing the area of open-canopy forestry caused a proportional decrease in suitability, but suitability gains for the 'reduced grazing' scenario were nonlinear. 'Scenario-led' landscape simulation models can be applied in assessments of the impacts of land use change both on individual species and also on diversity and community measures, or ecosystem services. A next step would be to include landscape configuration more explicitly in the simulation models, both to make them more realistic, and to examine the effects of habitat placement more thoroughly. In this example, the recommended policy would be incentives on grazing reduction to benefit black grouse.
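    The scenario step, converting a fixed fraction of land cover outward from random starting locations and then re-scoring the landscape, can be sketched on a toy grid. The per-class suitability scores below stand in for a projected MaxEnt model and are entirely hypothetical, as are the grid size and patch-growth rule.

```python
import numpy as np

rng = np.random.default_rng(3)
GRAZED, MOOR = 0, 1
land = np.zeros((50, 50), dtype=int)       # toy landscape: all grazed
suit = {GRAZED: 0.2, MOOR: 0.7}            # hypothetical per-class suitability

def convert(grid, frac, n_seeds=20):
    """Convert `frac` of the landscape from grazed to moorland, growing
    patches outward from `n_seeds` random starting locations."""
    g = grid.copy()
    target = int(frac * g.size)
    frontier = list(zip(rng.integers(0, 50, n_seeds),
                        rng.integers(0, 50, n_seeds)))
    converted = 0
    while converted < target and frontier:
        nxt = []
        for r, c in frontier:
            if 0 <= r < 50 and 0 <= c < 50 and g[r, c] == GRAZED:
                g[r, c] = MOOR
                converted += 1
                nxt += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
            if converted >= target:
                break
        frontier = nxt
    return g

def mean_suitability(grid):
    # Stand-in for projecting a fitted suitability model onto the landscape.
    return float(np.mean([suit[int(v)] for v in grid.ravel()]))

before = mean_suitability(land)
after = mean_suitability(convert(land, 0.30))
print(before, after)   # suitability rises under moorland conversion
```

    Repeating this with different seed positions is what lets the study assess how the placement of new landscape blocks affects the outcome.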

  13. A parsimonious, integrative model of key psychological correlates of UK university students' alcohol consumption.

    Science.gov (United States)

    Atwell, Katie; Abraham, Charles; Duka, Theodora

    2011-01-01

    To examine the predictive utility of psychological correlates of alcohol consumption identified in previous (US-dominated) research for a UK student sample and construct an integrative model predictive of alcohol dependency in a sample of first-year undergraduate students. A self-report questionnaire completed by 230 students measured stable and modifiable correlates of alcohol dependence. Stable correlates included age when first regularly drinking (age of onset), personality traits and religiosity. Modifiable measures included drinking motives, self-efficacy, alcohol-related expectancies, prototype perceptions and normative beliefs. The final multivariate model highlighted the importance of age of onset, sensation-seeking and a series of social cognitive measures including: social drinking motives, confidence in the ability to drink within government guidelines (self-efficacy) and the perceived quantity and frequency of alcohol consumed by university friends. Beta-coefficients indicated that self-efficacy and social drinking motives were particularly important predictors. A significant interaction was observed between age of onset and self-efficacy. Earlier onset was associated with higher levels of alcohol dependence for low and moderate, but not high levels of self-efficacy. The model presented here could be used to identify students at risk of alcohol dependence and inform the design of campus-based interventions.

  14. Epidemiological Implications of Host Biodiversity and Vector Biology: Key Insights from Simple Models.

    Science.gov (United States)

    Dobson, Andrew D M; Auld, Stuart K J R

    2016-04-01

    Models used to investigate the relationship between biodiversity change and vector-borne disease risk often do not explicitly include the vector; they instead rely on a frequency-dependent transmission function to represent vector dynamics. However, differences between classes of vector (e.g., ticks and insects) can cause discrepancies in epidemiological responses to environmental change. Using a pair of disease models (mosquito- and tick-borne), we simulated substitutive and additive biodiversity change (where noncompetent hosts replaced or were added to competent hosts, respectively), while considering different relationships between vector and host densities. We found important differences between classes of vector, including an increased likelihood of amplified disease risk under additive biodiversity change in mosquito models, driven by higher vector biting rates. We also draw attention to more general phenomena, such as a negative relationship between initial infection prevalence in vectors and likelihood of dilution, and the potential for a rise in density of infected vectors to occur simultaneously with a decline in proportion of infected hosts. This has important implications; the density of infected vectors is the most valid metric for primarily zoonotic infections, while the proportion of infected hosts is more relevant for infections where humans are a primary host.
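    The metrics the authors contrast, the density of infected vectors versus the proportion of infected hosts, can both be read off even a minimal Ross-Macdonald-style mosquito model. The parameters below are illustrative, not taken from the paper's models.

```python
# Minimal Ross-Macdonald-style sketch of a mosquito-borne infection.
a, b, c = 0.3, 0.3, 0.3   # biting rate (/day), host and vector infection probs
gamma, mu = 0.05, 0.1     # host recovery rate, mosquito mortality rate
m = 5.0                   # vector-to-host ratio
R0 = m * a * a * b * c / (gamma * mu)

x, y = 0.01, 0.0          # infected fractions: hosts (x) and vectors (y)
dt = 0.1
for _ in range(int(400 / dt)):           # forward-Euler over 400 days
    dx = m * a * b * y * (1.0 - x) - gamma * x
    dy = a * c * x * (1.0 - y) - mu * y
    x += dt * dx
    y += dt * dy
# Proportion of infected hosts vs density of infected vectors per host:
print(R0, round(x, 3), round(m * y, 3))
```

    Note that changing m (e.g., under additive biodiversity change that boosts vector numbers) moves the density of infected vectors, m*y, and the host prevalence, x, in ways that need not agree, which is the paper's point about choosing the right risk metric.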

  15. A Hybrid Network Model to Extract Key Criteria and Its Application for Brand Equity Evaluation

    Directory of Open Access Journals (Sweden)

    Chin-Yi Chen

    2012-01-01

    Full Text Available Making a decision implies that there are alternative choices to be considered, and a major challenge of decision-making is to identify the adequate criteria for program planning or problem evaluation. The decision-makers' criteria consist of the characteristics or requirements each alternative must possess, and the alternatives are rated on how well they possess each criterion. We often use criteria developed and used by different researchers and institutions; these criteria have similar meanings and can be substituted for one another. Choosing from existing criteria offers a practical method for engineers hoping to derive a set of criteria for evaluating objects or programs. We have developed a hybrid model for extracting evaluation criteria which considers substitutions between the criteria. The model is based on Social Network Analysis and Maximum Mean De-Entropy algorithms. In this paper, the introduced methodology is also applied to analyze the criteria for assessing brand equity as an application example. The proposed model demonstrates that it is useful in planning feasibility criteria and has applications for other evaluation-planning purposes.

  16. Exploring Secondary Students' Epistemological Features Depending on the Evaluation Levels of the Group Model on Blood Circulation

    Science.gov (United States)

    Lee, Shinyoung; Kim, Heui-Baik

    2014-01-01

    The purpose of this study is to identify the epistemological features and model qualities depending on model evaluation levels and to explore the reasoning process behind high-level evaluation through small group interaction about blood circulation. Nine groups of three to four students in the eighth grade participated in the modeling practice.…

  17. Model features as the basis of individualized preparation of boxers at the principal (elite) level

    Directory of Open Access Journals (Sweden)

    O.J. Pavelec

    2013-10-01

    Full Text Available Purpose: to improve the system of training boxers of the higher categories (elite) by individualizing the training process using model characteristics of special physical preparedness. Materials: the study was conducted during 2000-2010 with 43 boxers of the national team of Ukraine, of whom 6 were honored masters of sport, 16 masters of sports of international class, and 21 masters of sports; the average age of the athletes was 23.5 years. Results: the features of a specially designed model of physical fitness for boxers are justified. It was established that boxers of the middle weight classes (64-75 kg) have an advantage over boxers of the light and heavy weight categories in the development of speed and strength endurance. The presented model characteristics can guide the special physical preparation of elite boxers. Conclusions: the structure of special physical training of boxers depends on many components, such as weight category, tactical role, skill level, and stage of preparation.

  18. [Attributes and features of a community health model from the perspective of practitioners].

    Science.gov (United States)

    Dois, Angelina; Bravo, Paulina; Soto, Gabriela

    2017-07-01

    The Family and Community Health Model is based on three essential principles: user-centered care, comprehensive care and continuity of care. To describe the attributes and characteristics of the guiding principles of the Family and Community Health Model (FHM) from the perspective of primary care experts, a qualitative study was conducted using an electronic Delphi with 29 national experts on primary care. The experts agree that user-centered care must be based on a psycho-social model integrating the multiple factors that influence health problems. It also must integrate patients' individual features, family and environmental issues. The proposed actions promote shared decision making. To promote integral care, anticipatory guidelines should be expanded and the health care of patients with chronic conditions should be improved. Continuity of care should be promoted by increasing the working hours of medical centers and easing access to integrated electronic medical records, thereby generating efficient links between the different care levels. The results of the study can guide the clinical and administrative management of health teams, allowing the strengthening of primary health care according to local realities.

  19. Perception testing: a key component in modeling and simulation at NVESD

    Science.gov (United States)

    Maurer, Tana; Nguyen, Oanh; Thomas, Jim; Boettcher, Evelyn

    2009-05-01

    The U.S. Army's Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division is responsible for developing and enhancing electro-optic/infrared sensor performance models that are used in wargames and for sensor trade studies. Predicting how well a sensor performs a military task depends on both the physics of the sensor and how well observers perform specific tasks while using that sensor. An example of such a task could be to search and detect targets of military interest. Another task could be to identify a target as a threat or non-threat. A typical sensor development program involves analyses and trade-offs among a number of variables such as field of view, resolution, range, compression techniques, etc. Observer performance results, obtained in the NVESD perception lab, provide essential information to bridge the gap between the physics of a system and the humans using that system. This information is then used to develop and validate models, to conduct design trade-off studies and to generate insights into the development of new systems for soldiers in surveillance, urban combat, and all types of military activities. Computer scientists and engineers in the perception lab design tests and process both real and simulated imagery in order to isolate the effect or design being studied. Then, in accordance with an approved protocol for human subjects research, experiments are administered to the desired number of observers. Results are tabulated and analyzed. The primary focus of this paper is to describe current capabilities of the NVESD perception lab regarding computer-based observer performance testing of sensor imagery, what types of experiments have been completed and plans for the future.

  20. A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infection

    Directory of Open Access Journals (Sweden)

    Simeone Marino

    2016-10-01

    Full Text Available Tuberculosis (TB) is a world-wide health problem with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb, the causative bacterium of TB). The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in lungs, called granulomas. Infection occurs after inhalation of bacteria into lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs) where they prime T cells to recognize Mtb. These T cells, circulating back through blood, migrate back to lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM), labeled GranSim, describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung draining lymph node (LN, site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function is captured by a spatio-temporal model (i.e., ABM), while LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, this current study incorporates the role of dendritic cells (DCs) in a computational fashion into GranSim. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection. Conclusion: The main conclusion of this study is

  1. A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infection.

    Science.gov (United States)

    Marino, Simeone; Kirschner, Denise E

    2016-01-01

    Tuberculosis (TB) is a world-wide health problem with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb, the causative bacterium of TB). The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in lungs, called granulomas. Infection occurs after inhalation of bacteria into lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs) where they prime T cells to recognize Mtb. These T cells, circulating back through blood, migrate back to lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM, labeled GranSim) describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung draining lymph node (LN, site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function is captured by a spatio-temporal model (i.e., ABM), while LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, this current study incorporates the role of dendritic cells (DCs) in a computational fashion into GranSim. The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection. 
The main conclusion of this study is that early events after Mtb

  2. Automatic feature selection for model-based reinforcement learning in factored MDPs

    NARCIS (Netherlands)

    Kroon, M.; Whiteson, S.; Wani, M.A.; Kantardzic, M.; Palade, V.; Kurgan, L.; Qi, A.

    2009-01-01

    Feature selection is an important challenge in machine learning. Unfortunately, most methods for automating feature selection are designed for supervised learning tasks and are thus either inapplicable or impractical for reinforcement learning. This paper presents a new approach to feature selection

  3. Modeling information flows in clinical decision support: key insights for enhancing system effectiveness.

    Science.gov (United States)

    Medlock, Stephanie; Wyatt, Jeremy C; Patel, Vimla L; Shortliffe, Edward H; Abu-Hanna, Ameen

    2016-09-01

    A fundamental challenge in the field of clinical decision support is to determine what characteristics of systems make them effective in supporting particular types of clinical decisions. However, we lack a theory of decision support itself and a model to describe clinical decisions and the systems to support them. This article outlines such a framework. We present a two-stream model of information flow within clinical decision-support systems (CDSSs): reasoning about the patient (the clinical stream), and reasoning about the user (the cognitive-behavioral stream). We propose that CDSS "effectiveness" be measured not only in terms of a system's impact on clinical care, but also in terms of how (and by whom) the system is used, its effect on work processes, and whether it facilitates appropriate decisions by clinicians and patients. Future research into which factors improve the effectiveness of decision support should not regard CDSSs as a single entity, but should instead differentiate systems based on their attributes, users, and the decision being supported. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. A mouse model of harlequin ichthyosis delineates a key role for Abca12 in lipid homeostasis.

    Directory of Open Access Journals (Sweden)

    Ian Smyth

    2008-09-01

    Full Text Available Harlequin Ichthyosis (HI) is a severe and often lethal hyperkeratotic skin disease caused by mutations in the ABCA12 transport protein. In keratinocytes, ABCA12 is thought to regulate the transfer of lipids into small intracellular trafficking vesicles known as lamellar bodies. However, the nature and scope of this regulation remains unclear. As part of an original recessive mouse ENU mutagenesis screen, we have identified and characterised an animal model of HI and showed that it displays many of the hallmarks of the disease including hyperkeratosis, loss of barrier function, and defects in lipid homeostasis. We have used this model to follow disease progression in utero and present evidence that loss of Abca12 function leads to premature differentiation of basal keratinocytes. A comprehensive analysis of lipid levels in mutant epidermis demonstrated profound defects in lipid homeostasis, illustrating for the first time the extent to which Abca12 plays a pivotal role in maintaining lipid balance in the skin. To further investigate the scope of Abca12's activity, we have utilised cells from the mutant mouse to ascribe direct transport functions to the protein and, in doing so, we demonstrate activities independent of its role in lamellar body function. These cells have severely impaired lipid efflux leading to intracellular accumulation of neutral lipids. Furthermore, we identify Abca12 as a mediator of Abca1-regulated cellular cholesterol efflux, a finding that may have significant implications for other diseases of lipid metabolism and homeostasis, including atherosclerosis.

  5. Theoretical Model of God: The Key to Correct Exploration of the Universe

    Science.gov (United States)

    Kalanov, Temur Z.

    2007-04-01

    The problem of the correct approach to exploration of the Universe cannot be solved if there is no solution of the problem of the existence of God (Creator, Ruler) in science. In this connection, a theoretical proof of the existence of God is proposed. The theoretical model of God -- as scientific proof of the existence of God -- is the consequence of a system of formulated axioms. The system of axioms contains, in particular, the following premises: (1) all objects formed (synthesized) by man are characterized by an essential property: namely, divisibility into aspects; (2) objects which can be mentally divided into aspects are objects formed (synthesized); (3) the system ``Universe'' is mentally divided into aspects. Consequently, the Universe represents a formed (synthesized) system; (4) the theorem of the existence of God (i.e. Absolute, Creator, Ruler) follows from the principle of logical completeness of the system of concepts: if the formed (synthesized) system ``Universe'' exists, then God exists as the Absolute, the Creator, the Ruler of essence (i.e. information) and phenomenon (i.e. material objects). Thus, the principle of the existence of God -- the content of the theoretical model of God -- must be a starting point and basis of correct gnosiology and science of the 21st century.

  6. Feature scale modeling for etching and deposition processes in semiconductor manufacturing

    International Nuclear Information System (INIS)

    Pyka, W.

    2000-04-01

    modeling of ballistic-transport-determined low-pressure processes, the equations for the calculation of local etching and deposition rates have been revised. New extensions, such as the full relation between angular and radial target emission characteristics and the particle distributions resulting at different positions on the wafer, have been added, and results from reactor-scale simulations have been linked to the feature-scale profile evolution. Moreover, a fitting model has been implemented which reduces the number of parameters for particle distributions, scattering mechanisms, and angular-dependent surface interactions. Concerning diffusion-determined high-pressure CVD processes, a continuum transport and reaction model has for the first time been implemented in three dimensions. It comprises a flexible interface for the formulation of the involved process chemistry and derives the local deposition rate from a finite element diffusion calculation carried out on the three-dimensional mesh of the gas domain above the feature. For each time step of the deposition simulation the mesh is automatically generated as the counterpart to the surface of the three-dimensional structure evolving with time. The CVD model has also been coupled with equipment simulations. (author)

  7. Assessing the performance of community-available global MHD models using key system parameters and empirical relationships

    Science.gov (United States)

    Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.

    2015-12-01

    Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions including extraordinary events, like geomagnetic storms. Systematic validation of GMHD models against observations still continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs to simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which the MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after the north-south interplanetary magnetic field turning. According to validation results, none of the models emerges as an absolute leader. The new approach suggested for the evaluation of the models' performance against reality may be used by model users while planning their investigations, as well as by model developers and those interested in quantitatively
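
    The benchmarking approach described above — checking model-predicted key parameters against observation-based empirical relationships — can be sketched as follows. The Shue et al. (1998) magnetopause standoff relation is used here only as an example of such an empirical relationship; the function names, the model outputs, and the scoring choices are hypothetical assumptions, not taken from the paper.

    ```python
    import math

    def empirical_standoff_re(bz_nt: float, dp_npa: float) -> float:
        """Observation-based magnetopause standoff distance (Earth radii),
        using the Shue et al. (1998) empirical relation as an example:
        r0 = (10.22 + 1.29*tanh(0.184*(Bz + 8.14))) * Dp^(-1/6.6)."""
        return (10.22 + 1.29 * math.tanh(0.184 * (bz_nt + 8.14))) * dp_npa ** (-1.0 / 6.6)

    def benchmark(model_runs):
        """Compare each model's predicted standoff against the empirical value
        over a set of solar wind states; return (mean bias, RMS error) per model."""
        scores = {}
        for name, runs in model_runs.items():
            errors = [r["standoff_re"] - empirical_standoff_re(r["bz_nt"], r["dp_npa"])
                      for r in runs]
            bias = sum(errors) / len(errors)
            rms = math.sqrt(sum(e * e for e in errors) / len(errors))
            scores[name] = (bias, rms)
        return scores

    # Hypothetical GMHD model output for two solar wind states (Bz in nT, Dp in nPa).
    runs = {"ModelA": [{"bz_nt": 0.0, "dp_npa": 2.0, "standoff_re": 10.6},
                       {"bz_nt": -5.0, "dp_npa": 4.0, "standoff_re": 8.9}]}
    print(benchmark(runs))
    ```

    The same scoring loop could be repeated for any other parameter (tail field, magnetospheric pressure) for which an empirical relation is available.
    
    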

  8. A computational analysis of the three isoforms of glutamate dehydrogenase reveals structural features of the isoform EC 1.4.1.4 supporting a key role in ammonium assimilation by plants

    Directory of Open Access Journals (Sweden)

    Jaspard Emmanuel

    2006-12-01

    Full Text Available Abstract Background There are three isoforms of glutamate dehydrogenase. The isoform EC 1.4.1.4 (GDH4) catalyses glutamate synthesis from 2-oxoglutarate and ammonium, using NAD(P)H. Ammonium assimilation is critical for plant growth. Although GDH4 from animals and prokaryotes are well characterized, there are few data concerning plant GDH4, even from species whose genomes are well annotated. Results A large set of sequences of the three GDH isoforms was built, resulting in 116 non-redundant full-length polypeptide sequences. A computational analysis was made to gain more information concerning the structure-function relationship of GDH4 from plants (Eukaryota, Viridiplantae). The tested plant GDH4 sequences were the only two known to date, those of Chlorella sorokiniana. This analysis revealed several structural features specific to plant GDH4: (i) the lack of a structure called "antenna"; (ii) the NAD(P)-binding motif GAGNVA; and (iii) a second putative coenzyme-binding motif GVLTGKG together with four residues involved in the binding of the reduced form of NADP. Conclusion A number of structural features specific to plant GDH4 have been found. The results reinforce the probable key role of GDH4 in ammonium assimilation by plants. Reviewers This article was reviewed by Tina Bakolitsa (nominated by Eugene Koonin), Martin Jambon (nominated by Laura Landweber), Sandor Pangor and Franck Eisenhaber.

  9. The Electric Vehicles Ecosystem Model: Construct, Analysis and Identification of Key Challenges

    Directory of Open Access Journals (Sweden)

    Zulkarnain

    2014-09-01

    Full Text Available This paper builds a conceptual model of the electric vehicle (EV) ecosystem and its value chain build-up. Based on the literature, the research distinguishes the most critical challenges standing in the way of the electrification of mobility systems. Consumers still have questions that call for answers before they are ready to adopt EVs. On the technical side, challenges arise from the vehicles themselves, the charging infrastructure, battery technology, and standardization. The use of batteries in EVs brings additional environmental challenges, stemming from the life cycle of used batteries, from manufacturing, and from some of the materials used and treated in the manufacturing process. The policy aspects mostly concern taxation strategies. For the most part, established market conditions are still lacking and there are a number of unresolved challenges on both the supply and demand sides of the EV market.

  10. Probing molecular mechanisms of the Hsp90 chaperone: biophysical modeling identifies key regulators of functional dynamics.

    Science.gov (United States)

    Dixit, Anshuman; Verkhivker, Gennady M

    2012-01-01

    Deciphering functional mechanisms of the Hsp90 chaperone machinery is an important objective in cancer biology aiming to facilitate discovery of targeted anti-cancer therapies. Despite significant advances in understanding structure and function of molecular chaperones, organizing molecular principles that control the relationship between conformational diversity and functional mechanisms of the Hsp90 activity lack a sufficient quantitative characterization. We combined molecular dynamics simulations, principal component analysis, the energy landscape model and structure-functional analysis of Hsp90 regulatory interactions to systematically investigate functional dynamics of the molecular chaperone. This approach has identified a network of conserved regions common to the Hsp90 chaperones that could play a universal role in coordinating functional dynamics, principal collective motions and allosteric signaling of Hsp90. We have found that these functional motifs may be utilized by the molecular chaperone machinery to act collectively as central regulators of Hsp90 dynamics and activity, including the inter-domain communications, control of ATP hydrolysis, and protein client binding. These findings have provided support to a long-standing assertion that allosteric regulation and catalysis may have emerged via common evolutionary routes. The interaction networks regulating functional motions of Hsp90 may be determined by the inherent structural architecture of the molecular chaperone. At the same time, the thermodynamics-based "conformational selection" of functional states is likely to be activated based on the nature of the binding partner. This mechanistic model of Hsp90 dynamics and function is consistent with the notion that allosteric networks orchestrating cooperative protein motions can be formed by evolutionary conserved and sparsely connected residue clusters. 
Hence, allosteric signaling through a small network of distantly connected residue clusters may be

  11. Probing molecular mechanisms of the Hsp90 chaperone: biophysical modeling identifies key regulators of functional dynamics.

    Directory of Open Access Journals (Sweden)

    Anshuman Dixit

    Full Text Available Deciphering functional mechanisms of the Hsp90 chaperone machinery is an important objective in cancer biology aiming to facilitate discovery of targeted anti-cancer therapies. Despite significant advances in understanding structure and function of molecular chaperones, organizing molecular principles that control the relationship between conformational diversity and functional mechanisms of the Hsp90 activity lack a sufficient quantitative characterization. We combined molecular dynamics simulations, principal component analysis, the energy landscape model and structure-functional analysis of Hsp90 regulatory interactions to systematically investigate functional dynamics of the molecular chaperone. This approach has identified a network of conserved regions common to the Hsp90 chaperones that could play a universal role in coordinating functional dynamics, principal collective motions and allosteric signaling of Hsp90. We have found that these functional motifs may be utilized by the molecular chaperone machinery to act collectively as central regulators of Hsp90 dynamics and activity, including the inter-domain communications, control of ATP hydrolysis, and protein client binding. These findings have provided support to a long-standing assertion that allosteric regulation and catalysis may have emerged via common evolutionary routes. The interaction networks regulating functional motions of Hsp90 may be determined by the inherent structural architecture of the molecular chaperone. At the same time, the thermodynamics-based "conformational selection" of functional states is likely to be activated based on the nature of the binding partner. This mechanistic model of Hsp90 dynamics and function is consistent with the notion that allosteric networks orchestrating cooperative protein motions can be formed by evolutionary conserved and sparsely connected residue clusters. Hence, allosteric signaling through a small network of distantly connected

  12. Human Skeleton Model Based Dynamic Features for Walking Speed Invariant Gait Recognition

    Directory of Open Access Journals (Sweden)

    Jure Kovač

    2014-01-01

    Full Text Available Humans are able to recognize a small number of people they know well by the way they walk. This ability is the basic motivation for using human gait as a means of biometric identification. Such biometrics can be captured in public places from a distance without the subject's collaboration, awareness, or even consent. Although current approaches give encouraging results, we are still far from effective use in real-life applications. In general, methods set various constraints to circumvent the influence of covariate factors such as changes of walking speed, view, clothing, footwear, and object carrying, which negatively impact recognition performance. In this paper we propose a skeleton-model-based gait recognition system that focuses on modelling gait dynamics and eliminating the influence of the subject's appearance on recognition. Furthermore, we tackle the problem of walking speed variation and propose a space transformation and feature fusion that mitigate its influence on recognition performance. With an evaluation on the OU-ISIR gait dataset, we demonstrate state-of-the-art performance of the proposed methods.
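
    One simple way to mitigate the walking-speed variation discussed above is to resample each gait-cycle time series to a fixed length before matching, so that cycle duration drops out of the comparison. This minimal sketch uses hypothetical joint-angle data and a plain nearest-neighbor matcher; it is an illustration of the general idea, not the authors' actual space transformation or feature-fusion method.

    ```python
    import numpy as np

    def resample(seq: np.ndarray, n: int = 100) -> np.ndarray:
        """Linearly resample a joint-angle time series to a fixed length,
        a simple way to factor out walking-speed (cycle-duration) differences."""
        old = np.linspace(0.0, 1.0, len(seq))
        new = np.linspace(0.0, 1.0, n)
        return np.interp(new, old, seq)

    def nearest_subject(probe, gallery):
        """Return the gallery label whose resampled signature is closest
        (Euclidean distance) to the probe's."""
        p = resample(np.asarray(probe))
        best, best_d = None, np.inf
        for label, seq in gallery.items():
            d = np.linalg.norm(p - resample(np.asarray(seq)))
            if d < best_d:
                best, best_d = label, d
        return best

    # Hypothetical knee-angle cycles recorded at different walking speeds:
    t_slow = np.sin(np.linspace(0, 2 * np.pi, 120))          # subject A, slow walk
    t_fast = np.sin(np.linspace(0, 2 * np.pi, 60))           # subject A, fast walk
    t_other = np.sin(np.linspace(0, 2 * np.pi, 80)) * 0.5    # subject B
    gallery = {"A": t_slow, "B": t_other}
    print(nearest_subject(t_fast, gallery))  # the fast walk still matches "A"
    ```

    In practice one would fuse several such normalized signals (multiple joint angles) and use a stronger classifier, but the resampling step already removes the gross effect of speed.
    
    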

  13. Validation of glaucoma-like features in the rat episcleral vein cauterization model.

    Science.gov (United States)

    Bai, Yujing; Zhu, Yingting; Chen, Qin; Xu, Jing; Sarunic, Marinko V; Saragovi, Uri H; Zhuo, Yehong

    2014-01-01

    Glaucoma, an irreversible optic nerve neuropathy, always results in blindness. This study aimed to evaluate glaucoma-like features in the rat episcleral vein cauterization (EVC) model through multiple lines of in vivo and in vitro evidence. Wistar rats were used in this study. Elevated intraocular pressure (IOP) was induced by cauterization of three episcleral veins. IOP was monitored with a Tono-Pen XL tonometer. Time-dependent changes to the neuronal retinal layers were quantified by Fourier-domain optical coherence tomography. Retinal function was evaluated by electroretinogram (ERG). Survival of retinal ganglion cells (RGCs) was quantified by retrograde labeling. Histology was performed on retinal sections stained with hematoxylin-eosin, glial fibrillary acidic protein, and neuronal nuclear antigen. Retina and aqueous humor proteins were extracted, and the cytotoxic proteins tumor necrosis factor alpha (TNF-α) and alpha-2 macroglobulin (α2m) were measured by Western blotting. EVC is a relatively facile intervention with low failure rates. The EVC method can also induce glial cell activation and alterations of inflammation-related proteins, such as TNF-α and α2m, and can establish a robust, reliable, economical and highly reproducible glaucomatous animal model.

  14. A Neonatal Mouse Model of Intermittent Hypoxia Associated with Features of Apnea in Premature Infants

    Science.gov (United States)

    CAI, JUN; TUONG, CHI MINH; GOZAL, DAVID

    2011-01-01

    A neonatal mouse model of intermittent hypoxia (IH) simulating the recurring hypoxia/reoxygenation episodes of apnea of prematurity (AOP) was developed. C57BL/6 P2 pups were culled for exposure to either intermittent hypoxia or intermittent air as control. The IH paradigms consisted of alternating cycles of 20.9% O2 and either 8.0% or 5.7% O2 every 120 or 140 seconds for 6 hours a day during daylight hours from day 2 to day 10 postnatally, i.e., roughly equivalent to human brain development in the perinatal period. IH exposures elicited modest to severe decreases in oxygen saturation along with bradycardia in neonatal mice, in a severity-dependent manner. Hypomyelination in both the central and peripheral nervous systems was observed despite the absence of visible growth retardation. The neonatal mouse model of IH in this study partially fulfills the current diagnostic criteria with features of AOP, and provides opportunities to reproduce in rodents some of the pathophysiological changes associated with this disorder, such as alterations in myelination. PMID:21699999

  15. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Science.gov (United States)

    Guerrier, C.; Holcman, D.

    2017-07-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets and characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
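
    The second reduced approach described above — coarse-graining the domain geometry into Poissonian rates via narrow-escape theory and then sampling arrival times with Gillespie's algorithm — can be illustrated with a minimal sketch. The formula tau ≈ V/(4Da) used here is the commonly cited leading-order narrow-escape estimate for a small absorbing disk; the numerical values and function names are illustrative assumptions, not taken from the paper.

    ```python
    import random

    def narrow_escape_rate(D: float, a: float, V: float) -> float:
        """Poissonian target-finding rate from the leading-order narrow-escape
        estimate: the mean time for one Brownian particle (diffusivity D) to hit
        a small absorbing disk of radius a on the boundary of a domain of volume
        V is tau ~ V / (4*D*a), so the rate is 1/tau = 4*D*a/V."""
        return 4.0 * D * a / V

    def gillespie_arrivals(n_ions: int, rate: float, seed: int = 1):
        """Gillespie sampling of the successive arrival times of n independent
        ions, each finding the target at the same Poissonian rate."""
        rng = random.Random(seed)
        t, times, remaining = 0.0, [], n_ions
        while remaining:
            propensity = remaining * rate      # rate that 'some ion arrives next'
            t += rng.expovariate(propensity)   # exponential waiting time
            times.append(t)
            remaining -= 1
        return times

    # Hypothetical numbers: D in um^2/s, target radius a in um, volume V in um^3.
    rate = narrow_escape_rate(D=200.0, a=0.01, V=1.0)
    arrivals = gillespie_arrivals(n_ions=5, rate=rate)
    print(arrivals)  # five increasing arrival times
    ```

    The point of the coarse-graining is visible in the code: no Brownian trajectories are simulated at all, only exponential waiting times drawn at the precomputed rate, which is what makes the rare-event computation cheap.
    
    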

  16. HIGHLY QUALIFIED WORKING FORCE – KEY ELEMENT OF INNOVATIVE DEVELOPMENT MODEL

    Directory of Open Access Journals (Sweden)

    M. Avksientiev

    2014-12-01

    Full Text Available A highly qualified workforce is the central element of an intensive development model in modern society. The article surveys the experience of countries that have managed to transform their economies into innovative ones. The Ukrainian economy cannot stand apart from the processes that dominate world economic trends, so this experience must be used to succeed in the future. Today every government in the world faces challenges arising from the transformation of the economy into an informational one. This type of economy forces a shift from extensive to intensive development; the main reasons behind it are the limits on natural resources and the material factors of production, so this approach depends heavily on the quality of the workforce. Unfortunately, in Ukraine there is an imbalance in specialist preparation, which puts additional pressure on the educational sphere. To relieve this pressure, reforms in the education sphere are needed. Nowadays, views and concepts of the governmental role in social development are changing worldwide. This is why, even in times of economic recession, educational costs are not reduced under the new economic doctrine in the EU. Highly qualified specialists, creating new products and services, play the role of the engineers of the 21st century and are expected to lead their industries to world-leading positions. From an economic point of view, highly qualified specialists benefit society with higher incomes and tax revenues, thus raising living standards in society. Accordingly, the majority of modern scholars confirm the importance of a highly trained workforce for more effective economic development.

  17. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Energy Technology Data Exchange (ETDEWEB)

    Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)

    2017-07-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets and characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.

  18. Integrated hydrologic modeling as a key for sustainable urban water resources planning.

    Science.gov (United States)

    Eshtawi, Tamer; Evers, Mariele; Tischbein, Bernhard; Diekkrüger, Bernd

    2016-09-15

    In this study, a coupling of surface water (SWAT), groundwater (MODFLOW) and solute transport (MT3DMS) models was performed to quantify surface-groundwater and quantity-quality interactions under urban area expansion. The responses of groundwater level, nitrate concentrations (related to human activities) and chloride concentrations (related to seawater intrusion) to urban area expansion and corresponding changes in the urban water budget were examined on a macro-scale level. The potential of non-conventional water resource scenarios, namely desalination, stormwater harvesting and treated wastewater (TWW) reuse, was investigated. In a novel analysis, groundwater improvement and deterioration under each scenario were defined in a spatio-temporal approach. The quality deterioration cycle index was estimated as the ratio between the amounts of low and high quality recharge components within the Gaza Strip boundary predicted for year 2030. The improvement index for groundwater level (IIL) and the improvement index for groundwater quality (IIQ) were developed for the scenarios as measures of effectiveness toward sustainable groundwater planning. Even though the desalination and TWW reuse scenarios both reflect a noticeable improvement in the groundwater level, the desalination scenario shows a stronger tendency toward sustainable groundwater quality. The stormwater harvesting scenario shows a slight improvement in both groundwater quality and quantity. This study provides a 'corridor of options', which could facilitate future studies focusing on developing a micro-level assessment of the above scenarios. Copyright © 2016 Elsevier Ltd. All rights reserved.
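
The indices described above can be written down in a minimal form. The abstract does not give the exact formulas, so the ratio and relative-change definitions below are hypothetical stand-ins for the study's quality deterioration cycle index and improvement indices (IIL, IIQ).

```python
def deterioration_cycle_index(low_quality_recharge, high_quality_recharge):
    """Ratio of low- to high-quality recharge components within the
    model boundary (hypothetical form of the study's index)."""
    return low_quality_recharge / high_quality_recharge

def improvement_index(baseline, scenario):
    """Relative improvement of a scenario over the baseline value of a
    groundwater indicator, e.g. level (IIL) or quality (IIQ);
    hypothetical form."""
    return (scenario - baseline) / abs(baseline)

# e.g. 30 Mm3/yr of low-quality recharge against 60 Mm3/yr of high-quality
ratio = deterioration_cycle_index(30.0, 60.0)
```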

  19. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    International Nuclear Information System (INIS)

    Guerrier, C.; Holcman, D.

    2017-01-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they explore a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales hinges on rare events, in which a Brownian particle finds a small target, characterized by long arrival-time distributions. These rare events are the bottleneck of numerical simulations: a naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult, due to time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application is diffusion in cell biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.

  20. Green sturgeon distribution in the Pacific Ocean estimated from modeled oceanographic features and migration behavior.

    Directory of Open Access Journals (Sweden)

    David D Huff

    Full Text Available The green sturgeon (Acipenser medirostris), which is found in the eastern Pacific Ocean from Baja California to the Bering Sea, tends to be highly migratory, moving long distances among estuaries, spawning rivers, and distant coastal regions. Factors that determine the oceanic distribution of green sturgeon are unclear, but broad-scale physical conditions interacting with migration behavior may play an important role. We estimated the distribution of green sturgeon by modeling species-environment relationships using oceanographic and migration behavior covariates with maximum entropy modeling (MaxEnt) of species geographic distributions. The primary concentration of green sturgeon was estimated from approximately 41-51.5° N latitude in the coastal waters of Washington, Oregon, and Vancouver Island and in the vicinity of San Francisco and Monterey Bays from 36-37° N latitude. Unsuitably cold water temperatures in the far north and energetic efficiencies associated with prevailing water currents may provide the best explanation for the range-wide marine distribution of green sturgeon. Independent trawl records, fisheries observer records, and tagging studies corroborated our findings. However, our model also delineated patchily distributed habitat south of Monterey Bay, though there are few records of green sturgeon from this region. Green sturgeon are likely influenced by countervailing pressures governing their dispersal. They are behaviorally directed to revisit natal freshwater spawning rivers and persistent overwintering grounds in coastal marine habitats, yet they are likely physiologically bounded by abiotic and biotic environmental features. Impacts of human activities on green sturgeon or their habitat in coastal waters, such as bottom-disturbing trawl fisheries, may be minimized through marine spatial planning that makes use of high-quality species distribution information.

  1. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    Science.gov (United States)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

    A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve simulation of the MJO in climate models, we propose that this delay of circulation in response to convection needs to be corrected in the cumulus parameterization scheme.
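
The ZC diagnostic above can be sketched numerically. This is a minimal, assumption-laden version (single latitude band, trapezoidal mass-weighted integration over pressure) of vertically integrated zonal wind convergence; the grid layout is ours, not the paper's.

```python
import numpy as np

def integrated_zonal_convergence(u, dx, plev, g=9.81):
    """Vertically (mass-weighted) integrated zonal wind convergence,
    -integral of (du/dx) dp/g over the column.
    u    : (nlev, nlon) zonal wind [m/s]
    dx   : zonal grid spacing [m]
    plev : (nlev,) pressure levels [Pa], ordered top to bottom."""
    conv = -np.gradient(u, dx, axis=1)            # convergence at each level
    dp = np.diff(plev)
    mid = 0.5 * (conv[1:, :] + conv[:-1, :])      # trapezoidal rule in pressure
    return (mid * dp[:, None]).sum(axis=0) / g    # (nlon,)
```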

  2. WaterWorld, a spatial hydrological model applied at scales from local to global: key challenges to local application

    Science.gov (United States)

    Burke, Sophia; Mulligan, Mark

    2017-04-01

    WaterWorld is a widely used spatial hydrological policy support system. The last user census indicates regular use by 1029 institutions across 141 countries. A key feature of WaterWorld since 2001 is that it comes pre-loaded with all of the required data for simulation anywhere in the world at a 1km or 1 ha resolution. This means that it can be easily used, without specialist technical ability, to examine baseline hydrology and the impacts of scenarios for change or management interventions to support policy formulation, hence its labelling as a policy support system. WaterWorld is parameterised by an extensive global gridded database of more than 600 variables, the so-called simTerra database, developed from many sources since 1998. All of these data are available globally at 1km resolution and some variables (terrain, land cover, urban areas, water bodies) are available globally at 1ha resolution. If users have access to better data than is pre-loaded, they can upload their own data. WaterWorld is generally applied at the national or basin scale at 1km resolution, or locally (for smaller areas) at 1ha resolution, and requires a range of maps to run, including monthly climate data, land cover and use, terrain, population, water bodies and more. Whilst publicly available terrain and land cover data are now well developed for local scale application, climate and land use data remain a challenge, with most global products being available at 1km or 10km resolution or worse, which is rather coarse for local application. As part of the EartH2Observe project we have used WFDEI (WATCH Forcing Data methodology applied to ERA-Interim data) at 1km resolution to provide an alternative input to WaterWorld's preloaded climate data. Here we examine the impacts of that on key hydrological outputs (water balance and water quality) and outline the remaining challenges of using datasets like these for local scale application.

  3. Features of development and analysis of the simulation model of a multiprocessor computer system

    Directory of Open Access Journals (Sweden)

    O. M. Brekhov

    2017-01-01

    Full Text Available Over the past decade, multiprocessor systems have become pervasive in computer technology. At present, multi-core processors are found not only in supercomputers but also in the vast majority of mobile devices, which creates the need for students to learn the basic principles of their construction and functioning. One possible method for analyzing the operation of multiprocessor systems is simulation modeling; its use contributes to a better understanding of the effect of workload and structure parameters on performance. The article considers the features of developing a simulation model for estimating the time characteristics of a multiprocessor computer system, as well as the use of the regenerative method of model analysis. The workload is the software implementation of the inverse kinematics solution for a robot: the task is to determine the manipulator's joint rotations from the known angular and linear position of its gripper. An analytical algorithm, the method of simple kinematic relations, was chosen to solve the problem. The program is characterized by parallel calculations, during which resource conflicts arise between the processor cores involved in simultaneous access to memory via a common bus. Because of the high information connectivity between parallel running programs, all processing cores are assumed to use shared memory. The simulation model takes into account probabilistic memory accesses and tracks the emerging queues to shared resources. The collected statistics reveal the productive and overhead time costs of the program's execution on each processor core involved. The simulation results show uneven kernel utilization, downtime in queues to shared resources, and time lost while waiting for other cores due to information dependencies. The results of the simulation are estimated by the
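
The bus-contention mechanism described — cores queuing for shared memory over a common bus — can be illustrated with a small discrete-event sketch. The exponential compute bursts, fixed bus service time and FIFO discipline below are illustrative assumptions, not the article's model.

```python
import heapq
import random

def simulate_bus_contention(n_cores=4, compute_mean=1.0, bus_time=0.2,
                            n_accesses=2000, seed=0):
    """Minimal discrete-event sketch: cores alternate exponential compute
    bursts with fixed-length accesses to a shared memory bus served FIFO.
    Returns the mean queueing wait per access (parameters are
    illustrative, not taken from the article)."""
    rng = random.Random(seed)
    bus_free = 0.0
    waits = []
    # (time of next bus request, core id), ordered by time
    heap = [(rng.expovariate(1.0 / compute_mean), c) for c in range(n_cores)]
    heapq.heapify(heap)
    for _ in range(n_accesses):
        t, core = heapq.heappop(heap)
        start = max(t, bus_free)          # queue if the bus is busy
        waits.append(start - t)
        bus_free = start + bus_time       # bus occupied for the access
        # core computes again, then issues its next request
        heapq.heappush(heap, (bus_free + rng.expovariate(1.0 / compute_mean), core))
    return sum(waits) / len(waits)
```

With a single core the wait is always zero; adding cores makes queueing delays appear, which is exactly the overhead the article's statistics expose.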

  4. Modelling efforts needed to advance herpes simplex virus (HSV) vaccine development: Key findings from the World Health Organization Consultation on HSV Vaccine Impact Modelling.

    Science.gov (United States)

    Gottlieb, Sami L; Giersing, Birgitte; Boily, Marie-Claude; Chesson, Harrell; Looker, Katharine J; Schiffer, Joshua; Spicknall, Ian; Hutubessy, Raymond; Broutet, Nathalie

    2017-06-21

    Development of a vaccine against herpes simplex virus (HSV) is an important goal for global sexual and reproductive health. In order to more precisely define the health and economic burden of HSV infection and the theoretical impact and cost-effectiveness of an HSV vaccine, in 2015 the World Health Organization convened an expert consultation meeting on HSV vaccine impact modelling. The experts reviewed existing model-based estimates and dynamic models of HSV infection to outline critical future modelling needs to inform development of a comprehensive business case and preferred product characteristics for an HSV vaccine. This article summarizes key findings and discussions from the meeting on modelling needs related to HSV burden, costs, and vaccine impact, essential data needs to carry out those models, and important model components and parameters. Copyright © 2017. Published by Elsevier Ltd.

  5. Technology as system innovation: a key informant interview study of the application of the diffusion of innovation model to telecare.

    Science.gov (United States)

    Sugarhood, Paul; Wherton, Joseph; Procter, Rob; Hinder, Sue; Greenhalgh, Trisha

    2014-01-01

    To identify and explore factors that influence adoption, implementation and continued use of telecare technologies. As part of the Assistive Technologies for Healthy Living in Elders: Needs Assessment by Ethnography (ATHENE) project, 16 semi-structured interviews were conducted with key participants from organisations involved in developing and providing telecare technologies and services. Data were analysed thematically, using a conceptual model of diffusion of innovations. Participants identified numerous interacting factors that facilitated or hindered adoption and use. As predicted by the model, these related variously to the technology, individual adopters, the process of social influence, the innovativeness and readiness of organisations, implementation and routinisation processes following initial adoption, and the nature and strength of linkages between these elements. Key issues included (i) the complexity and uniqueness of the "user system", (ii) the ongoing work needed to support telecare use beyond initial adoption, and (iii) the relatively weak links that typically exist between users of telecare technologies and the organisations who design and distribute them. Telecare is not merely a technology but a complex innovation requiring input from, and coordination between, people and organisations. To promote adoption and use, these contextual factors must be specified, understood and addressed.

  6. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods.

    Science.gov (United States)

    Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming

    2014-12-01

    Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including genetic algorithm (GA), simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized for selecting an important subset of features from a total of 16 clinical features. These feature selection methods were combined with support vector machines (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation achieved average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can identify patients at high risk of recurrence, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
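
The pipeline above — select features, then train and evaluate a classifier with 5-fold cross-validation — can be sketched as follows. To keep the sketch dependency-free, a simple correlation ranking stands in for the GA/SA/RF selectors and a nearest-centroid rule stands in for the SVM; both substitutions are ours, not the paper's.

```python
import numpy as np

def five_fold_cv_accuracy(X, y, k_features=4, n_folds=5, seed=0):
    """Illustrative stand-in for the paper's pipeline: rank features by
    |correlation with the label| on each training split, keep the top k,
    and evaluate a nearest-centroid classifier with k-fold CV."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        Xtr, ytr = X[train], y[train]
        # feature selection on the training split only (no leakage)
        corr = np.abs(np.array([np.corrcoef(Xtr[:, f], ytr)[0, 1]
                                for f in range(X.shape[1])]))
        keep = np.argsort(corr)[::-1][:k_features]
        c0 = Xtr[ytr == 0][:, keep].mean(axis=0)    # class centroids
        c1 = Xtr[ytr == 1][:, keep].mean(axis=0)
        Xte = X[test][:, keep]
        pred = (np.linalg.norm(Xte - c1, axis=1)
                < np.linalg.norm(Xte - c0, axis=1)).astype(int)
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs))
```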

  7. Photovoltaic-thermal (PV/T) solar collectors: Features and performance modelling

    International Nuclear Information System (INIS)

    Atienza-Márquez, Antonio; Bruno, Joan Carles; Coronas, Alberto; Korolija, Ivan; Greenough, Richard; Wright, Andy

    2017-01-01

    Currently, the electrical efficiency of photovoltaic (PV) solar cells ranges between 5% and 25%. One of the most important parameters affecting the electrical efficiency of a PV collector is the temperature of its cells: the higher the temperature, the lower the efficiency. Photovoltaic/thermal (PV/T) technology is a potential solution to ensure acceptable solar energy conversion. PV/T technology produces both electrical and thermal energy simultaneously. It is suitable for low temperature applications (25-40 °C) and its overall efficiency is higher than that of individual collectors. This paper describes an installation in a single-family house where PV/T collectors are coupled with a ground heat exchanger and a heat pump for domestic hot water and space heating purposes. The aim of this work is twofold. First, the features of the PV/T technology are analyzed. Second, a model of a flat-plate PV/T water collector was developed in TRNSYS in order to analyze collector performance. (author)
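
The cell-temperature dependence described above is commonly written as a linear efficiency correlation. A minimal sketch with typical crystalline-silicon parameter values (assumptions of this example, not the paper's figures):

```python
def pv_electrical_efficiency(t_cell, eta_ref=0.15, beta=0.0045, t_ref=25.0):
    """Linear temperature-coefficient model for PV electrical efficiency:
    eta = eta_ref * (1 - beta * (T_cell - T_ref)).
    eta_ref (efficiency at T_ref) and beta (per-degree loss coefficient)
    are typical crystalline-silicon values, used here for illustration."""
    return eta_ref * (1.0 - beta * (t_cell - t_ref))
```

Cooling the cells with the collector's water loop (the point of PV/T) keeps t_cell low and hence the electrical efficiency close to eta_ref.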

  8. A Modified Feature Selection and Artificial Neural Network-Based Day-Ahead Load Forecasting Model for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmad

    2015-12-01

    Full Text Available In the operation of a smart grid (SG), day-ahead load forecasting (DLF) is an important task. The SG can enhance the management of its conventional and renewable resources with a more accurate DLF model. However, DLF model development is highly challenging due to the non-linear characteristics of load time series in SGs. DLF models do exist in the literature; however, these models trade off between execution time and forecast accuracy. The newly-proposed DLF model is able to accurately predict the load of the next day within a reasonable execution time. Our proposed model consists of three modules: the data preparation module, the feature selection module and the forecast module. The first module makes the historical load curve compatible with the feature selection module. The second module removes redundant and irrelevant features from the input data. The third module, which consists of an artificial neural network (ANN), predicts future load on the basis of the selected features. Moreover, the forecast module uses a sigmoid function for activation and a multi-variate auto-regressive model for weight updating during the training process. Simulations are conducted in MATLAB to validate the performance of our newly-proposed DLF model in terms of accuracy and execution time. Results show that our proposed modified feature selection and modified ANN, m(FS + ANN)-based model for SGs is able to capture the non-linearities in the historical load curve with 97.11% accuracy. Moreover, this accuracy is achieved at the cost of a reasonable execution time, i.e., we have decreased the average execution time of the existing FS + ANN-based model by 38.50%.
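
The shape of the forecast module — a sigmoid-activated network over selected features — can be sketched with a tiny network. Plain gradient descent on squared error stands in here for the paper's multi-variate auto-regressive weight update, and the data are synthetic; this is a sketch of the idea, not the published model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_forecaster(X, y, epochs=2000, lr=0.5, seed=0):
    """Minimal one-layer sigmoid forecaster. Plain gradient descent on
    squared error stands in for the paper's multi-variate
    auto-regressive weight update (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = sigmoid(X @ w + b)
        grad = (pred - y) * pred * (1.0 - pred)   # d(loss)/d(pre-activation)
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b
```

In the real pipeline, X would hold the features surviving the selection module and y the normalized next-day load.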

  9. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    Full Text Available This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge, because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features that were analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models was yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the accuracy of the model; this problem was also tackled in the present work. Our results indicate that we are capable of recognizing a human activity with an accuracy of 85% using the proposed HAR model. This means that minimal computational costs are needed, thus allowing portable devices to identify human activities using audio as an information source.
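
A flavor of the statistical audio features involved can be given in a few lines. The set below (RMS energy, zero-crossing rate, spectral centroid) is an illustrative choice of common audio statistics; MFCC extraction itself needs a DSP library and is not reproduced here.

```python
import numpy as np

def audio_feature_vector(signal, sr):
    """Per-clip statistical audio features of the kind HAR models draw
    on (illustrative set, not the paper's exact feature list):
    RMS energy, zero-crossing rate, and spectral centroid [Hz]."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)   # sign-change rate
    spec = np.abs(np.fft.rfft(signal))                    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float((freqs * spec).sum() / spec.sum())
    return np.array([rms, zcr, centroid])
```

Collapsing a clip into a short vector like this is what keeps the model cheap enough for portable devices.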

  10. Modeling error in assessment of mammographic image features for improved computer-aided mammography training: initial experience

    Science.gov (United States)

    Mazurowski, Maciej A.; Tourassi, Georgia D.

    2011-03-01

    In this study we investigate the hypothesis that there exist patterns in the erroneous assessment of BI-RADS image features among radiology trainees performing diagnostic interpretation of mammograms. We also investigate whether these error-making patterns can be captured by individual user models. To test our hypothesis we propose a user modeling algorithm that uses the previous readings of a trainee to identify whether certain BI-RADS feature values (e.g. the "spiculated" value for the "margin" feature) are associated with a higher than usual likelihood that the feature will be assessed incorrectly. In our experiments we used readings of 3 radiology residents and 7 breast imaging experts for 33 breast masses for the following BI-RADS features: parenchyma density, mass margin, mass shape and mass density. The expert readings were considered the gold standard. Rule-based individual user models were developed and tested using a leave-one-out cross-validation scheme. Our experimental evaluation showed that the individual user models are accurate in identifying cases for which errors are more likely to be made. The user models captured regularities in error making for all 3 residents. This finding supports our hypothesis about the existence of individual error-making patterns in the assessment of mammographic image features using the BI-RADS lexicon. Explicit user models identifying the weaknesses of each resident could be of great use when developing and adapting a personalized training plan to meet the resident's individual needs. Such an approach fits well with the framework of adaptive computer-aided educational systems in mammography we have proposed before.
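
The kind of rule such a user model encodes — "this trainee tends to mis-assess this feature value" — can be sketched directly. The threshold rule below is a guess at the general form of the published rule-based models, not a reproduction of them.

```python
from collections import defaultdict

def error_prone_values(readings, threshold=0.5):
    """Rule-based user-model sketch: from one trainee's past readings,
    each a (feature, value, correct) triple, flag the feature values
    whose historical error rate exceeds a threshold (the form of the
    published rules is an assumption here)."""
    stats = defaultdict(lambda: [0, 0])          # (feature, value) -> [errors, total]
    for feature, value, correct in readings:
        stats[(feature, value)][0] += 0 if correct else 1
        stats[(feature, value)][1] += 1
    return {fv for fv, (err, tot) in stats.items() if err / tot > threshold}
```

In a leave-one-out evaluation like the paper's, the flagged set would be built from all readings except the held-out case.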

  11. Anatomical features for an adequate choice of experimental animal model in biomedicine: II. Small laboratory rodents, rabbit, and pig.

    Science.gov (United States)

    Lossi, Laura; D'Angelo, Livia; De Girolamo, Paolo; Merighi, Adalberto

    2016-03-01

    The anatomical features distinctive to each of the very large array of species used in today's biomedical research must be borne in mind when considering the correct choice of animal model(s), particularly when translational research is concerned. In this paper we consider and discuss the most important anatomical and histological features of the commonest species of laboratory rodents (rat, mouse, guinea pig, hamster, and gerbil), rabbit, and pig in relation to their importance for applied research. Copyright © 2015 Elsevier GmbH. All rights reserved.

  12. Mathematical modeling of the circadian rhythm of key neuroendocrine-immune system players in rheumatoid arthritis: a systems biology approach.

    Science.gov (United States)

    Meyer-Hermann, Michael; Figge, Marc Thilo; Straub, Rainer H

    2009-09-01

    Healthy subjects and patients with rheumatoid arthritis (RA) exhibit circadian rhythms of the neuroendocrine-immune system. Understanding circadian dynamics is complex due to the nonlinear behavior of the neuroendocrine-immune network. This study was undertaken to seek and test a mathematical model for studying this network. We established a quantitative computational model to simulate nonlinear interactions between key factors in the neuroendocrine-immune system, such as plasma tumor necrosis factor (TNF), plasma cortisol (and adrenal cholesterol store), and plasma noradrenaline (NA) (and presynaptic NA store). The model was nicely fitted with measured reference data on healthy subjects and RA patients. Although the individual circadian pacemakers of cortisol, NA, and TNF were installed without a phase shift, the relative phase shift between these factors evolved as a consequence of the modeled network interactions. Combined long-term and short-term TNF increase (the "RA model") increased cortisol plasma levels for only a few days, and cholesterol stores started to become markedly depleted. This nicely demonstrated the phenomenon of inadequate cortisol secretion relative to plasma TNF levels, as a consequence of adrenal deficiency. Using the RA model, treatment with glucocorticoids between midnight and 2:00 AM was found to have the strongest inhibitory effect on TNF secretion, which supports recent studies on RA therapy. Long-term reduction of TNF levels by simulation of anti-TNF therapy normalized cholesterol stores under "RA" conditions. These first in silico studies of the neuroendocrine-immune system in rheumatology demonstrate that computational biology in medicine, making use of large collections of experimental data, supports understanding of the pathophysiology of complex nonlinear systems.
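
The style of model described — nonlinear ODEs with circadian pacemakers and hormone-cytokine feedback — can be illustrated with a toy two-variable system. The rate constants, the 24 h drive and the coupling signs below are illustrative only; they are not the published equations.

```python
import math

def simulate_tnf_cortisol(days=5, dt=0.01):
    """Toy circadian network (hypothetical parameters, not the published
    model): cortisol is driven by a 24 h pacemaker and stimulated by TNF,
    while cortisol in turn suppresses TNF production. Forward-Euler
    integration; returns (t [h], TNF, cortisol) samples."""
    tnf, cort = 1.0, 1.0
    series = []
    for i in range(int(days * 24 / dt)):
        t = i * dt                                    # hours
        pace = 1.0 + 0.5 * math.sin(2 * math.pi * t / 24.0)
        d_tnf = 1.0 / (1.0 + cort) - 0.5 * tnf        # cortisol inhibits TNF
        d_cort = pace * (1.0 + 0.3 * tnf) - cort      # pacemaker + TNF drive
        tnf += dt * d_tnf
        cort += dt * d_cort
        series.append((t, tnf, cort))
    return series
```

Even this caricature reproduces the qualitative point of such models: the phase relationship between the cytokine and the hormone emerges from the network coupling rather than being imposed.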

  13. Characterization of Behavioral, Neuropathological, Brain Metabolic and Key Molecular Changes in zQ175 Knock-In Mouse Model of Huntington's Disease.

    Directory of Open Access Journals (Sweden)

    Qi Peng

    Full Text Available Huntington's disease (HD) is caused by an expansion of the trinucleotide poly(CAG) tract located in exon 1 of the huntingtin (Htt) gene, leading to progressive neurodegeneration in selected brain regions and associated functional impairments in motor, cognitive, and psychiatric domains. Since the discovery of the gene mutation that causes the disease, mouse models have been developed by different strategies. Recently, a new model, the zQ175 knock-in (KI) line, was developed in an attempt to have the Htt gene in a context, and causing a phenotype, that more closely mimics HD in humans. The behavioral phenotype has been characterized across independent laboratories, and important features reminiscent of human HD are observed in zQ175 mice. In the current study, we characterized the zQ175 model housed in an academic laboratory under a reversed dark-light cycle, including motor function, in vivo longitudinal structural MRI imaging for brain volume, MRS for striatal metabolites, neuropathology, as well as a panel of key disease marker proteins in the striatum at different ages. Our results suggest that homozygous zQ175 mice exhibit significant brain atrophy before the motor deficits and brain metabolite changes. Altered striatal medium spiny neuronal markers, postsynaptic marker protein and complement component C1qC also characterized zQ175 mice. Our results confirm that the zQ175 KI model is valuable for understanding HD-like pathophysiology and evaluating potential therapeutics. Our data also provide suggestions for selecting appropriate outcome measures in preclinical studies using zQ175 mice.

  14. Effect of pioglitazone on metabolic features in endotoxemia model in obese diabetic db/db mice.

    Science.gov (United States)

    Sharma, Manoranjan; Mohapatra, Jogeswar; Malik, Umar; Nagar, Jignesh; Chatterjee, Abhijit; Ramachandran, Balaraman; Jain, Mukul R

    2017-06-01

    Infectious diseases are more frequent in diabetic patients, leading to increased morbidity and mortality. Endotoxemia affects glucose metabolism and lipolytic capacity. The aims of the present study were to determine whether endotoxemia exacerbates metabolic features (adipose inflammation, adipogenesis, and insulin resistance [IR]) in an animal model of diabetes (i.e. db/db mice) after acute infection and the effects of pioglitazone. Female db/db mice treated with pioglitazone (3 and 30 mg/kg, p.o.) for 14 days were challenged with lipopolysaccharide (LPS; 200 μg/kg), followed by an oral glucose tolerance test (OGTT). Quantitative real-time polymerase chain reaction (PCR) was used to evaluate the expression of genes in white adipose tissue (WAT) involved in: (i) adipogenesis (lipoprotein lipase [Lpl], fatty acid binding protein-4 [Ap2] and adiponectin [Adipoq]); (ii) insulin signaling (peroxisome proliferator-activated receptor gamma [Pparg], suppressor of cytokine signaling 3 [Socs3], solute carrier family 2 [facilitated glucose transporter], member 4 [Slc2a4]); and (iii) inflammation (tumor necrosis factor [Tnf], interleukin-6 [Il6], monocyte chemoattractant protein-1 [Ccl2], cyclo-oxygenase-2 [prostaglandin-endoperoxide synthase 2; Ptgs2]). Experimental endotoxemia downregulated mRNA expression of Pparg, Slc2a4, Adipoq, Lpl, and Ap2, which coincided with upregulation of Il6, Tnf, Ccl2, Ptgs2, and Socs3 expression. Pioglitazone dose-dependently decreased Tnf, Il6, Ccl2, Ptgs2, and Socs3 expression in WAT, in association with upregulation of Lpl, Ap2, Slc2a4, and Adipoq expression, indicating improvement in endotoxin-induced IR. The findings suggest that LPS challenge exacerbates IR in db/db mice by altering the expression of genes in WAT involved in adipogenesis and inflammation, which is effectively controlled by pioglitazone treatment. © 2016 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and John Wiley & Sons Australia, Ltd.

  15. Robust Feature Extraction from ECG Signals Based on Nonlinear Dynamical Modeling

    National Research Council Canada - National Science Library

    Owis, Mohamed

    2001-01-01

    ...) signals to detect the presence of arrhythmia. Conventional methods of arrhythmia detection rely on observing morphological features of the signal in the time domain or after applying a certain transformation. Even though these techniques...

  16. Using Range-Wide Abundance Modeling to Identify Key Conservation Areas for the Micro-Endemic Bolson Tortoise (Gopherus flavomarginatus).

    Directory of Open Access Journals (Sweden)

    Cinthya A Ureña-Aranda

    Full Text Available A widespread biogeographic pattern in nature is that population abundance is not uniform across the geographic range of a species: most occurrence sites have relatively low numbers, whereas a few places contain orders of magnitude more individuals. The Bolson tortoise Gopherus flavomarginatus is endemic to a small region of the Chihuahuan Desert in Mexico, where habitat deterioration threatens this species with extinction. In this study we combined field burrow counts with an approach for modeling species abundance based on the distance to the niche centroid to obtain range-wide abundance estimates. For the Bolson tortoise, we found a robust, negative relationship between observed burrow abundance and distance to the niche centroid, with a predictive capacity of 71%. Based on these results we identified four priority areas for the conservation of this microendemic and threatened tortoise. We conclude that this approach may be a useful approximation for identifying key areas for sampling and conservation efforts for elusive and rare species.
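
The distance-to-niche-centroid predictor can be sketched in a few lines. Standardized Euclidean distance is used here for simplicity; the study's exact recipe (e.g. a Mahalanobis distance over its particular environmental variables) may differ.

```python
import numpy as np

def distance_to_niche_centroid(env_at_sites, env_at_presences):
    """Distance-to-centroid abundance proxy: standardize each
    environmental variable by the presence data, take the centroid of
    presence conditions, and return each site's Euclidean distance to it.
    Sites close to the centroid are predicted to hold higher abundance."""
    mu = env_at_presences.mean(axis=0)
    sd = env_at_presences.std(axis=0)
    z_sites = (env_at_sites - mu) / sd
    return np.linalg.norm(z_sites, axis=1)
```

Regressing observed burrow counts against these distances, as the study does, then turns the distance surface into a range-wide abundance map.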

  17. Using Range-Wide Abundance Modeling to Identify Key Conservation Areas for the Micro-Endemic Bolson Tortoise (Gopherus flavomarginatus).

    Science.gov (United States)

    Ureña-Aranda, Cinthya A; Rojas-Soto, Octavio; Martínez-Meyer, Enrique; Yáñez-Arenas, Carlos; Landgrave Ramírez, Rosario; Espinosa de los Monteros, Alejandro

    2015-01-01

    A widespread biogeographic pattern in nature is that population abundance is not uniform across the geographic range of a species: most occurrence sites have relatively low numbers, whereas a few places contain orders of magnitude more individuals. The Bolson tortoise Gopherus flavomarginatus is endemic to a small region of the Chihuahuan Desert in Mexico, where habitat deterioration threatens this species with extinction. In this study we combined field burrow counts with an approach for modeling species abundance based on the distance to the niche centroid to obtain range-wide abundance estimates. For the Bolson tortoise, we found a robust, negative relationship between observed burrow abundance and distance to the niche centroid, with a predictive capacity of 71%. Based on these results we identified four priority areas for the conservation of this microendemic and threatened tortoise. We conclude that this approach may be a useful approximation for identifying key areas for sampling and conservation efforts for elusive and rare species.

  18. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Directory of Open Access Journals (Sweden)

    Kamila M. Jozwik

    2017-10-01

    Full Text Available Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from
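    The representational similarity analysis (RSA) used to evaluate the models can be sketched in a few lines. The toy data and the choice of plain Pearson correlation below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of every pair of images."""
    z = features - features.mean(1, keepdims=True)
    z /= z.std(1, keepdims=True)
    return 1.0 - (z @ z.T) / features.shape[1]

def rsa_score(model_feats, judged_dissim):
    """Correlate the upper triangles of the model RDM and the human
    dissimilarity matrix (Pearson here, for simplicity)."""
    iu = np.triu_indices(judged_dissim.shape[0], k=1)
    a, b = rdm(model_feats)[iu], judged_dissim[iu]
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# toy data: 6 "images", 10-dimensional layer activations, and human
# judgments generated from those activations plus a little noise
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 10))
human = rdm(feats) + rng.normal(scale=0.05, size=(6, 6))
human = (human + human.T) / 2
print(round(rsa_score(feats, human), 2))
```

    A real analysis would compare such scores across DNN layers and conceptual models, typically with rank correlations and noise-ceiling estimates.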

  19. Improving ART programme retention and viral suppression are key to maximising impact of treatment as prevention - a modelling study.

    Science.gov (United States)

    McCreesh, Nicky; Andrianakis, Ioannis; Nsubuga, Rebecca N; Strong, Mark; Vernon, Ian; McKinley, Trevelyan J; Oakley, Jeremy E; Goldstein, Michael; Hayes, Richard; White, Richard G

    2017-08-09

    UNAIDS calls for fewer than 500,000 new HIV infections/year by 2020, with treatment-as-prevention being a key part of their strategy for achieving the target. A better understanding of the contribution to transmission of people at different stages of the care pathway can help focus intervention services at populations where they may have the greatest effect. We investigate this using Uganda as a case study. An individual-based HIV/ART model was fitted using history matching. 100 model fits were generated to account for uncertainties in sexual behaviour, HIV epidemiology, and ART coverage up to 2015 in Uganda. A number of different ART scale-up intervention scenarios were simulated between 2016 and 2030. The incidence and proportion of transmission over time from people with primary infection, post-primary ART-naïve infection, and people currently or previously on ART was calculated. In all scenarios, the proportion of transmission by ART-naïve people decreases, from 70% (61%-79%) in 2015 to between 23% (15%-40%) and 47% (35%-61%) in 2030. The proportion of transmission by people on ART increases from 7.8% (3.5%-13%) to between 14% (7.0%-24%) and 38% (21%-55%). The proportion of transmission by ART dropouts increases from 22% (15%-33%) to between 31% (23%-43%) and 56% (43%-70%). People who are currently or previously on ART are likely to play an increasingly large role in transmission as ART coverage increases in Uganda. Improving retention on ART, and ensuring that people on ART remain virally suppressed, will be key in reducing HIV incidence in Uganda.

  20. Deep feature extraction and combination for remote sensing image classification based on pre-trained CNN models

    Science.gov (United States)

    Chaib, Souleyman; Yao, Hongxun; Gu, Yanfeng; Amrani, Moussa

    2017-07-01

    Understanding a scene provided by Very High Resolution (VHR) satellite imagery has become an increasingly challenging problem. In this paper, we propose a new method for scene classification based on different pre-trained Deep Features Learning Models (DFLMs). DFLMs are applied simultaneously to extract deep features from the VHR image scene, and different basic operators are then applied to combine the features extracted with the different pre-trained Convolutional Neural Network (CNN) models. We conduct experiments on the public UC Merced benchmark dataset, which contains 21 aerial scene categories with sub-meter resolution. Experimental results demonstrate the effectiveness of the proposed method compared to several state-of-the-art methods.
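    The feature-combination idea can be illustrated with a small sketch. The feature dimensions, model names, and the three operators shown are assumptions for illustration, since the abstract does not specify which basic operators were used:

```python
import numpy as np

# Hypothetical deep features for one image scene, as if taken from the
# penultimate layers of two different pre-trained CNNs (the names and
# sizes are illustrative stand-ins, not the paper's actual models).
f_model_a = np.random.default_rng(1).normal(size=4096)
f_model_b = np.random.default_rng(2).normal(size=4096)

# Basic combination operators over the per-model feature vectors:
concat = np.concatenate([f_model_a, f_model_b])   # stacking
added  = f_model_a + f_model_b                    # element-wise sum
maxed  = np.maximum(f_model_a, f_model_b)         # element-wise max

# The combined vector would then feed a standard classifier (e.g. an SVM).
print(concat.shape, added.shape, maxed.shape)
```

    Concatenation preserves all per-model information at the cost of dimensionality; the element-wise operators keep the dimension fixed but fuse the representations.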

  1. A prototype framework for models of socio-hydrology: identification of key feedback loops and parameterisation approach

    Science.gov (United States)

    Elshafei, Y.; Sivapalan, M.; Tonts, M.; Hipsey, M. R.

    2014-06-01

    It is increasingly acknowledged that, in order to sustainably manage global freshwater resources, it is critical that we better understand the nature of human-hydrology interactions at the broader catchment system scale. Yet to date, a generic conceptual framework for building models of catchment systems that include adequate representation of socioeconomic systems - and the dynamic feedbacks between human and natural systems - has remained elusive. In an attempt to work towards such a model, this paper outlines a generic framework for models of socio-hydrology applicable to agricultural catchments, made up of six key components that combine to form the coupled system dynamics: namely, catchment hydrology, population, economics, environment, socioeconomic sensitivity and collective response. The conceptual framework posits two novel constructs: (i) a composite socioeconomic driving variable, termed the Community Sensitivity state variable, which seeks to capture the perceived level of threat to a community's quality of life, and acts as a key link tying together one of the fundamental feedback loops of the coupled system, and (ii) a Behavioural Response variable as the observable feedback mechanism, which reflects land and water management decisions relevant to the hydrological context. The framework makes a further contribution through the introduction of three macro-scale parameters that enable it to normalise for differences in climate, socioeconomic and political gradients across study sites. In this way, the framework provides for both macro-scale contextual parameters, which allow for comparative studies to be undertaken, and catchment-specific conditions, by way of tailored "closure relationships", in order to ensure that site-specific and application-specific contexts of socio-hydrologic problems can be accommodated. To demonstrate how such a framework would be applied, two socio-hydrological case studies, taken from the Australian experience, are presented.

  2. MOLECULAR MODELLING OF HUMAN ALDEHYDE OXIDASE AND IDENTIFICATION OF THE KEY INTERACTIONS IN THE ENZYME-SUBSTRATE COMPLEX

    Directory of Open Access Journals (Sweden)

    Siavoush Dastmalchi

    2005-05-01

    Full Text Available Aldehyde oxidase (EC 1.2.3.1), a cytosolic enzyme containing FAD, molybdenum and an iron-sulphur cluster, is a member of the non-cytochrome P-450 enzymes called molybdenum hydroxylases, which is involved in the metabolism of a wide range of endogenous compounds and many drug substances. Drug metabolism is one of the important characteristics that influence many aspects of a therapeutic agent, such as routes of administration, drug interaction and toxicity, and therefore characterisation of the key interactions between enzymes and substrates is very important from a drug development point of view. The aim of this study was to generate a three-dimensional model of human aldehyde oxidase (AO) in order to identify the mode of interaction between the enzyme and a set of phthalazine/quinazoline derivatives. Both sequence-based (BLAST) and inverse protein fold recognition (THREADER) methods identified the crystal structure of bovine xanthine dehydrogenase (PDB code 1FO4) as a suitable template for comparative modelling of human AO. The model structure was generated by aligning and then threading the sequence of human AO onto the template structure, incorporating the associated cofactors, and performing molecular dynamics simulations and energy minimization with the GROMACS program. Several criteria measured by PROCHECK, QPACK and VERIFY-3D indicated a proper fold for the predicted structural model of human AO. For example, 97.9% of phi and psi angles were in the favoured and most favoured regions of the Ramachandran plot, and all residues in the model were assigned positive environmental compatibility scores. Further evaluation of the model quality was performed by investigating AO-mediated oxidation of a set of phthalazine/quinazoline derivatives in order to develop a QSAR model capable of describing the extent of the oxidation. Substrates were aligned by docking onto the active site of the enzyme using GOLD technology and then

  3. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Lars J., E-mail: Lars.grimm@duke.edu; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie [Department of Radiology, Duke University Medical Center, Box 3808, Durham, North Carolina 27710 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina School of Medicine, 2006 Old Clinic, CB No. 7510, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Duke University Medical Center, Box 2731 Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprised of bilateral medial lateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.
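    The AUC used to assess the per-trainee models can be computed directly from predicted error likelihoods via the rank-sum (Mann-Whitney) formulation; the labels and scores below are hypothetical:

```python
import numpy as np

def auc(labels, scores):
    """AUC as the probability that a randomly chosen error case (label 1)
    receives a higher score than a randomly chosen non-error case,
    with ties counted as 1/2."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical per-case error probabilities from one trainee's model
y = [1, 0, 1, 0, 0, 1, 0, 0]                      # 1 = trainee missed the mass
p = [0.9, 0.2, 0.35, 0.4, 0.1, 0.7, 0.5, 0.3]     # model's predicted error risk
print(round(auc(y, p), 3))
```

    An AUC of 0.5 would mean the model is no better than chance at ranking error-prone cases, which is why the study's mean of 0.611 is reported against that baseline.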

  4. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features.

    Science.gov (United States)

    Grimm, Lars J; Ghate, Sujata V; Yoon, Sora C; Kuzmiak, Cherie M; Kim, Connie; Mazurowski, Maciej A

    2014-03-01

    The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprised of bilateral medial lateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502-0.739, 95% Confidence Interval: 0.543-0.680, p < 0.002). Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  5. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    International Nuclear Information System (INIS)

    Grimm, Lars J.; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprised of bilateral medial lateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  6. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    Science.gov (United States)

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  7. Evaluating score- and feature-based likelihood ratio models for multivariate continuous data: applied to forensic MDMA comparison

    NARCIS (Netherlands)

    Bolck, A.; Ni, H.; Lopatka, M.

    2015-01-01

    Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are

  8. Evaluation of the UNC toluene-SOA mechanism with respect to other chamber studies and key model parameters

    Science.gov (United States)

    Hu, Di; Kamens, Richard M.

    In a companion paper by Hu et al. [2007. A kinetic mechanism for predicting secondary organic aerosol formation from toluene oxidation in the presence of NOx and natural sunlight. Atmospheric Environment, doi:10.1016/j.atmosenv.2007.04.025], a kinetic mechanism was developed from data generated in the University of North Carolina's (UNC) 270 m³ dual outdoor aerosol smog chamber, to predict secondary organic aerosol (SOA) formation from toluene oxidation in the atmosphere. In this paper, experimental data sets from the European Photoreactor (EUPHORE), smog chambers at the California Institute of Technology (Caltech), and the UNC 300 m³ dual-outdoor gas phase chamber were used to evaluate the toluene mechanism. The model simulates SOA formation for the 'low-NOx' and 'mid-NOx' experiments from EUPHORE chambers reasonably well, but over-predicts SOA mass concentrations for the 'high-NOx' run. The model well simulates the SOA mass concentrations observed from the Caltech chambers. Experiments with the three key toluene products, 1,4-butenedial, 4-oxo-2-pentenal and o-cresol, in the presence of oxides of nitrogen (NOx) are also simulated by the developed mechanism. The model well predicts the NOx time-concentration profiles and the decay of these two carbonyls, but underestimates ozone (O3) formation for 4-oxo-2-pentenal. It well simulates SOA formation from 1,4-butenedial but overestimates (possibly due to experimental problems) the measured aerosol mass concentrations from 4-oxo-2-pentenal. The model underestimates SOA production from o-cresol, mostly due to its under-prediction of o-cresol decay. The effects of varying temperature, relative humidity, glyoxal uptake, organic nitrate yields, and background seed aerosol concentrations were also investigated.

  9. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    Science.gov (United States)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. The learned features also reveal the visual properties associated with cancerous and healthy tissues, and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, obtaining on average 92% in terms of F-score and 93% balanced accuracy.
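    The bag-of-features representation itself is straightforward to sketch: each patch descriptor is assigned to its nearest codebook entry, and the image becomes a histogram over those assignments. The random "codebook" below is a stand-in for a dictionary learned by T-RICA or k-means, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(50, 64))     # 50 visual words, 64-d descriptors
patches  = rng.normal(size=(200, 64))    # 200 patch descriptors from one image

# assign each patch to its nearest codeword (squared Euclidean distance)
d2 = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
words = d2.argmin(axis=1)

# the BOF image representation: normalised histogram of word assignments
hist = np.bincount(words, minlength=len(codebook)).astype(float)
hist /= hist.sum()
print(hist.shape, round(hist.sum(), 6))
```

    The paper's contribution sits one step earlier: replacing hand-crafted descriptors (SIFT, DCT) in the `patches` array with representations learned by UFL.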

  10. Exploring Secondary Students' Epistemological Features Depending on the Evaluation Levels of the Group Model on Blood Circulation

    Science.gov (United States)

    Lee, Shinyoung; Kim, Heui-Baik

    2014-05-01

    The purpose of this study is to identify the epistemological features and model qualities associated with different levels of model evaluation, and to explore the reasoning process behind high-level evaluation, through small-group interaction about blood circulation. Nine groups of three to four eighth-grade students participated in the modeling practice. Their group models, which were represented in discourse and in blood-circulation diagrams, were analyzed to develop a framework describing the model evaluation levels and epistemological features. The model evaluation levels were categorized into Levels one to four based on the following evaluation criteria: no evaluation, authoritative sources, superficial criteria, and more comprehensive criteria. The quality of the group models varied with the criteria used for model evaluation. While students who used authoritative sources to evaluate the group model appeared to hold an absolutist epistemology, students who evaluated according to superficial or more comprehensive criteria appeared to hold an evaluative epistemology. Furthermore, groups at Level four showed a chain of cognitive reasoning during the modeling practice, reflecting a practical epistemology. The findings have implications for science teachers and education researchers who want to understand the context for developing students' practical epistemologies.

  11. Key factors in paediatric organ and tissue donation: an overview of literature in a chronological working model.

    Science.gov (United States)

    Siebelink, Marion J; Albers, Marcel J I J; Roodbol, Petrie F; van de Wiel, Harry B M

    2012-03-01

    There is a growing shortage of size-matched organs and tissues for children. Although examples of substandard care are reported in the literature, there is no overview of the paediatric donation process. The aim of the study is to gain insight into the chain of events, practices and procedures in paediatric donation. Methods: a survey of the 1990-2010 literature on paediatric organ and tissue donation and categorization into a coherent chronological working model of key events and procedures. Studies on paediatric donation are rare. Twelve empirical studies were found, without any level I or level II-1 evidence. Seventy-five per cent of the studies describe the situation in the United States. The literature suggests that the identification of potential donors and the way in which parental consent is requested may be substandard. We found no literature discussing best practices. Notwithstanding the importance of looking at donation care as an integrated process, most studies discuss only a few isolated topics or sub-processes. To improve paediatric donation, more research is required on substandard factors and their interactions. A chronological working model, as presented here, starting with the identification of potential donors and ending with aftercare, could serve as a practical tool to optimize paediatric donation. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.

  12. Modelling on c-Si/a-Si:H wire solar cells: some key parameters to optimize the photovoltaic performance

    Directory of Open Access Journals (Sweden)

    Alvarez J.

    2012-07-01

    Full Text Available Solar cells based on silicon nano- or micro-wires have attracted much attention as a promising path for low-cost photovoltaic technology. The key point of this structure is the decoupling of light absorption from carrier collection. In order to predict and optimize the performance potential of p- (or n-) doped c-Si / n- (or p-) doped a-Si:H nanowire-based solar cells, we have used the Silvaco-Atlas software to model a single-wire device. In particular, we have noticed a drastic decrease of the open-circuit voltage (Voc) when increasing the doping density of the silicon core beyond an optimum value. We present here a detailed study of the parameters that can alter the Voc of c-Si(p)/a-Si:H(n) wires according to the doping density in c-Si. A comparison with simulation results obtained on planar c-Si/a-Si:H heterojunctions shows that the drop in Voc, linked to an increase of the dark current in both structures, is more pronounced for radial junctions due to geometric criteria. These numerical modelling results have led to a better understanding of transport phenomena within the wire.
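    The reported link between a rising dark current and a falling Voc follows already from the ideal diode equation, Voc = (nkT/q)·ln(Jph/J0 + 1). The sketch below uses assumed current densities and is far simpler than the paper's Silvaco-Atlas device simulations:

```python
import math

def v_oc(j_ph, j_0, n=1.0, t=300.0):
    """Open-circuit voltage from the ideal diode equation.
    j_ph: photocurrent density, j_0: dark saturation current density
    (same units), n: ideality factor, t: temperature in K."""
    k_over_q = 8.617333262e-5            # Boltzmann constant / charge, V/K
    return n * k_over_q * t * math.log(j_ph / j_0 + 1.0)

j_ph = 30e-3                              # assumed photocurrent, A/cm^2
for j_0 in (1e-12, 1e-10, 1e-8):          # rising dark saturation current
    print(j_0, round(v_oc(j_ph, j_0), 3))
```

    Each hundredfold increase in the dark saturation current costs roughly 120 mV of Voc at room temperature, which is the logarithmic sensitivity behind the trend the authors describe.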

  13. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data-reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
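    Of the three data-reduction approaches, PCA is easy to sketch from first principles. The synthetic data below merely illustrate projecting many noisy input variables onto the leading components before regression; it is not the paper's dataset or settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))            # 120 sites, 10 candidate variables
X[:, 0] *= 5.0                            # give two variables most of the
X[:, 1] *= 3.0                            # variance, which PCA will find

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (len(X) - 1)
eigval, eigvec = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = eigval.argsort()[::-1]
components = eigvec[:, order[:2]]         # keep the top 2 components

Z = Xc @ components                       # reduced design matrix for regression
explained = eigval[order[:2]].sum() / eigval.sum()
print(Z.shape, round(explained, 2))
```

    The reduced matrix `Z` would then replace the full variable set as input to MLP, SMO, or GPR, trading a little explained variance for a much simpler model.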

  14. What is eHealth (6)? Development of a Conceptual Model for eHealth: Qualitative Study with Key Informants.

    Science.gov (United States)

    Shaw, Tim; McGregor, Deborah; Brunner, Melissa; Keep, Melanie; Janssen, Anna; Barnet, Stewart

    2017-10-24

    Despite rapid growth in eHealth research, there remains a lack of consistency in defining and using terms related to eHealth. More widely cited definitions provide broad understanding of eHealth but lack sufficient conceptual clarity to operationalize eHealth and enable its implementation in health care practice, research, education, and policy. Definitions that are more detailed are often context or discipline specific, limiting ease of translation of these definitions across the breadth of eHealth perspectives and situations. A conceptual model of eHealth that adequately captures its complexity and potential overlaps is required. This model must also be sufficiently detailed to enable eHealth operationalization and hypothesis testing. This study aimed to develop a conceptual practice-based model of eHealth to support health professionals in applying eHealth to their particular professional or discipline contexts. We conducted semistructured interviews with key informants (N=25) from organizations involved in health care delivery, research, education, practice, governance, and policy to explore their perspectives on and experiences with eHealth. We used purposeful sampling for maximum diversity. Interviews were coded and thematically analyzed for emergent domains. Thematic analyses revealed 3 prominent but overlapping domains of eHealth: (1) health in our hands (using eHealth technologies to monitor, track, and inform health), (2) interacting for health (using digital technologies to enable health communication among practitioners and between health professionals and clients or patients), and (3) data enabling health (collecting, managing, and using health data). These domains formed a model of eHealth that addresses the need for clear definitions and a taxonomy of eHealth while acknowledging the fluidity of this area and the strengths of initiatives that span multiple eHealth domains. This model extends current understanding of eHealth by providing clearly

  15. Using active contour models for feature extraction in camera-based seam tracking of arc welding

    DEFF Research Database (Denmark)

    Liu, Jinchao; Fan, Zhun; Olsen, Søren

    2009-01-01

    It is highly desirable to extract groove features closer to the arc and thus facilitate a nearly-closed-loop control situation. On the other hand, for performing seam tracking and nearly-closed-loop control it is not necessary to obtain very detailed information about the molten pool area as long as some

  16. Modeling vehicle emissions in different types of Chinese cities: Importance of vehicle fleet and local features

    International Nuclear Information System (INIS)

    Huo Hong; Zhang Qiang; He Kebin; Yao Zhiliang; Wang Xintong; Zheng Bo; Streets, David G.; Wang Qidong; Ding Yan

    2011-01-01

    We propose a method to simulate vehicle emissions in Chinese cities of different sizes and development stages. Twenty-two cities are examined in this study. The target year is 2007. Among the cities, the vehicle emission factors were remarkably different (the highest is 50-90% higher than the lowest) owing to their distinct local features and vehicle technology levels, and the major contributors to total vehicle emissions were also different. A substantial increase in vehicle emissions is foreseeable unless stronger measures are implemented, because the benefit of current policies can be quickly offset by vehicle growth. Major efforts should be focused on all cities, especially developing cities where the requirements are lenient. This work aims at a better understanding of vehicle emissions in all types of Chinese cities. The proposed method could benefit national emission-inventory studies by improving accuracy and could help in designing national and local policies for vehicle emission control. - Highlights: → We examine vehicle emissions in 22 Chinese cities of different types and locations. → Vehicle emission factors of the cities differ by 50-90% due to distinct local features. → Each vehicle type contributes differently to total emissions among the cities. → A substantial increase in vehicle emissions in most Chinese cities is foreseeable. → City-specific fleet and local features are important in research and policy making. - Vehicle emission characteristics of Chinese cities are remarkably different, and local features need to be taken into account in vehicle emission studies and control strategies.

  17. Unifying model of shoot gravitropism reveals proprioception as a central feature of posture control in plants

    DEFF Research Database (Denmark)

    Bastien, Renaud; Bohr, Tomas; Moulia, Bruno

    2012-01-01

    Gravitropism, the slow reorientation of plant growth in response to gravity, is a key determinant of the form and posture of land plants. Shoot gravitropism is triggered when statocysts sense the local angle of the growing organ relative to the gravitational field. Lateral transport of the hormone...

  18. Research and Application of Hybrid Forecasting Model Based on an Optimal Feature Selection System—A Case Study on Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yunxuan Dong

    2017-04-01

    Full Text Available The modernization of the smart grid markedly increases the complexity and uncertainty of power system scheduling and operation; to develop a more reliable, flexible, efficient and resilient grid, electrical load forecasting is therefore essential, yet it remains a difficult and challenging task. In this paper, a short-term electrical load forecasting model has been developed that combines a feature-learning unit named the Pyramid System with recurrent neural networks, and it can effectively promote the stability and security of the power grid. Nine types of feature-learning methods are compared in this work to select the best one for the learning target, and two criteria have been employed to evaluate the accuracy of the prediction intervals. Furthermore, an electrical load forecasting method based on recurrent neural networks has been formulated to capture the relational structure of the historical data; specifically, the proposed techniques are applied to electrical load forecasting using data collected from New South Wales, Australia. The simulation results show that the proposed hybrid models not only approximate the actual values satisfactorily but also serve as effective tools in the planning of smart grids.
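    A recurrent network consumes the historical load series step by step while carrying a hidden state, which is how it captures dependencies across the history. A minimal Elman-style forward pass in NumPy (the weights are random and untrained, so the forecast value itself is meaningless; this sketch only shows the data flow and is not the paper's Pyramid System):

```python
import numpy as np

# Minimal Elman-style recurrent cell for one-step-ahead load forecasting.
# Weights are random and untrained -- this only illustrates the data flow.
rng = np.random.default_rng(0)
hidden = 8
Wx = rng.normal(scale=0.1, size=(hidden, 1))       # input -> hidden
Wh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden (recurrence)
Wy = rng.normal(scale=0.1, size=(1, hidden))       # hidden -> output

def forecast(history):
    """Feed a 1-D load series through the cell; return a one-step-ahead value."""
    h = np.zeros((hidden, 1))
    for x in history:
        h = np.tanh(Wx * x + Wh @ h)  # Wx * x broadcasts since each input is scalar
    return (Wy @ h).item()

loads = [0.62, 0.60, 0.58, 0.61, 0.67, 0.74]  # normalized half-hourly loads (made up)
print(forecast(loads))
```

    In a trained model the three weight matrices would be fitted to the historical data, and the feature-learning unit would supply richer inputs than the raw scalar series used here.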

  19. High-order feature-based mixture models of classification learning predict individual learning curves and enable personalized teaching.

    Science.gov (United States)

    Cohen, Yarden; Schneidman, Elad

    2013-01-08

    Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects' priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning.
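    One way to make the "mixture of features" idea concrete is a bank of single-cue experts whose weights are reinforced multiplicatively after each trial's feedback. A toy sketch (the feature set and update rule here are illustrative assumptions, not the authors' fitted model):

```python
import random

# Toy mixture-of-features learner for binary-sequence classification.
# Each "expert" votes for one label based on one cue (a bit taking a value);
# expert weights are reinforced multiplicatively after feedback, RL-style.
# Illustrative only -- not the authors' fitted model.

def learn(rule, n_bits=4, trials=400, lr=0.3, seed=1):
    rng = random.Random(seed)
    # One expert per (bit index, bit value, predicted label).
    weights = {(i, v, lab): 1.0
               for i in range(n_bits) for v in (0, 1) for lab in (0, 1)}
    correct = 0
    for _ in range(trials):
        pattern = [rng.randint(0, 1) for _ in range(n_bits)]
        votes = {0: 0.0, 1: 0.0}
        for (i, v, lab), w in weights.items():
            if pattern[i] == v:
                votes[lab] += w
        guess = 1 if votes[1] > votes[0] else 0
        truth = rule(pattern)
        correct += guess == truth
        for (i, v, lab) in weights:
            if pattern[i] == v:
                # Reinforce experts that voted for the true label, punish the rest.
                weights[(i, v, lab)] *= (1 + lr) if lab == truth else (1 - lr)
    return correct / trials

# Low-order rule: the label is simply the first bit.
print(f"accuracy on 1-cue rule: {learn(lambda p: p[0]):.2f}")
```

    Experts aligned with the true rule are reinforced on every active trial while uninformative experts decay, so the mixture converges on the relevant cue; fitting which features a subject weighs, as the paper does, is what yields the individual learning curves.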

  20. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  1. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    International Nuclear Information System (INIS)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-01-01

    Highlights: • An ensemble model is developed to produce both deterministic and probabilistic wind forecasts. • A deep feature selection framework is developed to optimally determine the inputs to the forecasting methodology. • The developed ensemble methodology has improved the forecasting accuracy by up to 30%. - Abstract: With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated to provide 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with the deep feature selection procedure has improved the forecasting accuracy by up to 30%.
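    The two-layer idea can be sketched with toy base learners: the first layer produces individual forecasts, and the second layer learns blending weights over them. Here persistence and a least-squares AR(2) fit stand in for the paper's machine-learning models, and the synthetic diurnal wind series is an assumption of this sketch:

```python
import numpy as np

# Two-layer ensemble sketch: layer 1 makes individual forecasts,
# layer 2 blends them with least-squares weights. The base learners
# (persistence, AR(2)) are illustrative stand-ins, not the paper's models.
rng = np.random.default_rng(42)
t = np.arange(300)
wind = 8 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.4, t.size)  # m/s, synthetic

# Layer 1, model A: persistence (forecast = last observed value).
persist = wind[1:-1]
# Layer 1, model B: AR(2) fitted by least squares on the training half.
X = np.column_stack([wind[1:-1], wind[:-2]])
y = wind[2:]
half = y.size // 2
coef, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
ar2 = X @ coef

# Layer 2: blend the two base forecasts with least-squares weights.
B = np.column_stack([persist, ar2])
w, *_ = np.linalg.lstsq(B[:half], y[:half], rcond=None)
blend = B @ w

def rmse(pred):  # evaluate on the held-out half
    return float(np.sqrt(np.mean((pred[half:] - y[half:]) ** 2)))

print(f"persistence {rmse(persist):.2f}, AR(2) {rmse(ar2):.2f}, blend {rmse(blend):.2f}")
```

    Because the blending layer can always recover any single base model (weights of 1 and 0), its training error is never worse than the best base learner's, which is the motivation for stacking statistically different algorithms.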

  2. Novel Feature Modelling the Prediction and Detection of sEMG Muscle Fatigue towards an Automated Wearable System

    Directory of Open Access Journals (Sweden)

    Mohamed R. Al-Mulla

    2010-05-01

    Full Text Available Surface Electromyography (sEMG) activity of the biceps muscle was recorded from ten subjects performing isometric contractions until fatigue. A novel feature (1D spectro_std) was used to model three classes of fatigue, enabling the prediction and detection of fatigue. Initial class-separation results were encouraging in discriminating between the three classes of fatigue: longitudinal classification of Non-Fatigue versus Transition-to-Fatigue achieved 81.58% correct classification with a prediction accuracy of 0.74, while longitudinal classification of Transition-to-Fatigue versus Fatigue showed a lower average of 66.51% correct classification with a prediction accuracy of 0.73. Comparison of 1D spectro_std with other sEMG fatigue features on the same dataset shows a significant improvement in classification: a significant 20.58% (p < 0.01) improvement when using 1D spectro_std to classify Non-Fatigue and Transition-to-Fatigue, and, in classifying Transition-to-Fatigue and Fatigue, a significant improvement of 8.14% (p < 0.05) on average over all compared features.
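    A spectral-spread feature of this kind can be computed per analysis window from the sEMG power spectrum. A minimal stand-in that takes the standard deviation of the windowed FFT magnitude spectrum (the window length, sampling rate, and exact definition are assumptions here, not the authors' published formulation of 1D spectro_std):

```python
import numpy as np

# Stand-in for a spectral-spread feature on windowed sEMG:
# the standard deviation of the magnitude spectrum per window.
# Window length and the exact definition are assumed, not the paper's.
def spectro_std(signal, win=256):
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        mag = np.abs(np.fft.rfft(seg * np.hanning(win)))
        feats.append(mag.std())
    return np.array(feats)

rng = np.random.default_rng(3)
fs = 1024  # Hz, an assumed sEMG sampling rate
t = np.arange(4 * fs) / fs
# Fresh muscle: broadband activity; "fatigued": energy shifted to lower
# frequencies, mimicked here with a crude moving-average low-pass.
fresh = rng.normal(0, 1, t.size)
fatigued = np.convolve(fresh, np.ones(8) / 8, mode="same")
print(spectro_std(fresh).mean(), spectro_std(fatigued).mean())
```

    The feature sequence over successive windows is what a classifier would consume to separate Non-Fatigue, Transition-to-Fatigue, and Fatigue segments.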

  3. Identification of key amino acid residues in the hTGR5-nomilin interaction and construction of its binding model.

    Science.gov (United States)

    Sasaki, Takashi; Mita, Moeko; Ikari, Naho; Kuboyama, Ayane; Hashimoto, Shuzo; Kaneko, Tatsuya; Ishiguro, Masaji; Shimizu, Makoto; Inoue, Jun; Sato, Ryuichiro

    2017-01-01

    TGR5, a member of the G protein-coupled receptor (GPCR) family, is activated by bile acids. Because TGR5 promotes energy expenditure and improves glucose homeostasis, it is recognized as a key target in treating metabolic diseases. We previously showed that nomilin, a citrus limonoid, activates TGR5 and confers anti-obesity and anti-hyperglycemic effects in mice. Information on the TGR5-nomilin interaction regarding molecular structure, however, has not been reported. In the present study, we found that human TGR5 (hTGR5) shows higher nomilin responsiveness than does mouse TGR5 (mTGR5). Using mouse-human chimeric TGR5, we also found that three amino acid residues (Q77(ECL1), R80(ECL1), and Y89(3.29)) are important in the hTGR5-nomilin interaction. Based on these results, an hTGR5-nomilin binding model was constructed using in silico docking simulation, demonstrating that four hydrophilic hydrogen-bonding interactions occur between nomilin and hTGR5. The binding mode of hTGR5-nomilin is vastly different from those of other TGR5 agonists previously reported, suggesting that TGR5 forms various binding patterns depending on the type of agonist. Our study promotes a better understanding of the structure of TGR5, and it may be useful in developing and screening new TGR5 agonists.

  4. A Real-Time Recording Model of Key Indicators for Energy Consumption and Carbon Emissions of Sustainable Buildings

    Directory of Open Access Journals (Sweden)

    Weiwei Wu

    2014-05-01

    Full Text Available Buildings’ sustainability is crucial for achieving urban sustainability. Applied to buildings, life-cycle assessment encompasses the analysis and assessment of the environmental effects of building materials, components and assemblies throughout the entire life of the building: construction, use and demolition. Estimation of carbon emissions is essential for an accurate and reasonable life-cycle assessment. Addressing the need for more research into real-time, automatic recording of key indicators for more accurate calculation and comparison, this paper aims to design a real-time recording model of the crucial indicators for calculating and estimating the energy use and carbon emissions of buildings, based on a Radio Frequency Identification (RFID) system. The architecture of the RFID-based carbon emission recording/tracking system, which contains four functional layers, including a data record layer, a data collection/update layer, a data aggregation layer and a data sharing/backup layer, is presented. Each of these layers is formed by RFID or network devices and sub-systems that operate at a specific lev