WorldWideScience

Sample records for model key features

  1. The Progressive BSSG Rat Model of Parkinson's: Recapitulating Multiple Key Features of the Human Disease.

    Directory of Open Access Journals (Sweden)

    Jackalina M Van Kampen

    The development of effective neuroprotective therapies for Parkinson's disease (PD) has been severely hindered by the notable lack of an appropriate animal model for preclinical screening. Indeed, most models currently available are either acute in nature or fail to recapitulate all characteristic features of the disease. Here, we present a novel progressive model of PD, with behavioural and cellular features that closely approximate those observed in patients. Chronic exposure to dietary phytosterol glucosides has been found to be neurotoxic. When fed to rats, β-sitosterol β-d-glucoside (BSSG) triggers the progressive development of parkinsonism, with clinical signs and histopathology beginning to appear following cessation of exposure to the neurotoxic insult and continuing to develop over several months. Here, we characterize the progressive nature of this model, its non-motor features, the anatomical spread of synucleinopathy, and the response to levodopa administration. In Sprague Dawley rats, chronic BSSG feeding for 4 months triggered the progressive development of a parkinsonian phenotype and pathological events that evolved slowly over time, with neuronal loss beginning only after toxin exposure was terminated. At approximately 3 months following initiation of BSSG exposure, animals displayed the early emergence of an olfactory deficit, in the absence of significant dopaminergic nigral cell loss or locomotor deficits. Locomotor deficits developed gradually over time, initially appearing as locomotor asymmetry and developing into akinesia/bradykinesia, which was reversed by levodopa treatment. Late-stage cognitive impairment was observed in the form of spatial working memory deficits, as assessed by the radial arm maze. In addition to the progressive loss of TH+ cells in the substantia nigra, the appearance of proteinase K-resistant intracellular α-synuclein aggregates was also observed to develop progressively, appearing first in the

  2. A study of key features of the RAE atmospheric turbulence model

    Science.gov (United States)

    Jewell, W. F.; Heffley, R. K.

    1978-01-01

    A complex atmospheric turbulence model for use in aircraft simulation is analyzed in terms of its temporal, spectral, and statistical characteristics. First, a direct comparison was made between cases of the RAE model and the more conventional Dryden turbulence model. Next the control parameters of the RAE model were systematically varied and the effects noted. The RAE model was found to possess a high degree of flexibility in its characteristics, but the individual control parameters are cross-coupled in terms of their effect on various measures of intensity, bandwidth, and probability distribution.
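
    For reference, the conventional Dryden model mentioned above as the comparison baseline can be realized as white noise shaped by simple forming filters. The following minimal sketch is not taken from the report, and the airspeed, scale length and intensity values are illustrative assumptions; it generates a longitudinal gust time series with the first-order Dryden u-filter, whose bandwidth is set by V/L_u and whose output variance is sigma_u squared. Varying L_u and sigma_u independently changes bandwidth and intensity, which is the kind of flexibility-versus-coupling trade-off the comparison with the RAE model probes.

      import numpy as np

      def dryden_longitudinal_gust(duration_s=60.0, dt=0.01, V=50.0, L_u=200.0,
                                   sigma_u=1.5, seed=0):
          """Longitudinal gust velocity u_g(t) [m/s] from the first-order Dryden filter."""
          rng = np.random.default_rng(seed)
          n = int(duration_s / dt)
          u = np.zeros(n)
          a = V / L_u  # filter break frequency [rad/s]
          for k in range(n - 1):
              # Ornstein-Uhlenbeck discretization of the Dryden u-component filter
              u[k + 1] = u[k] - a * u[k] * dt + sigma_u * np.sqrt(2.0 * a * dt) * rng.standard_normal()
          return u

      gust = dryden_longitudinal_gust()
      print(gust.std())  # approaches sigma_u for long records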

  3. Key features of the IPSL ocean atmosphere model and its sensitivity to atmospheric resolution

    Energy Technology Data Exchange (ETDEWEB)

    Marti, Olivier; Braconnot, P.; Bellier, J.; Brockmann, P.; Caubel, A.; Noblet, N. de; Friedlingstein, P.; Idelkadi, A.; Kageyama, M. [Unite Mixte CEA-CNRS-UVSQ, IPSL/LSCE, Gif-sur-Yvette Cedex (France); Dufresne, J.L.; Bony, S.; Codron, F.; Fairhead, L.; Grandpeix, J.Y.; Hourdin, F.; Musat, I. [Unite Mixte CNRS-Ecole Polytechnique-ENS-UPCM, IPSL/LMD, Paris Cedex 05 (France); Benshila, R.; Guilyardi, E.; Levy, C.; Madec, G.; Mignot, J.; Talandier, C. [unite mixte CNRS-IRD-UPMC, IPLS/LOCEAN, Paris Cedex 05 (France); Cadule, P.; Denvil, S.; Foujols, M.A. [Institut Pierre Simon Laplace des Sciences de l' Environnement (IPSL), Paris Cedex 05 (France); Fichefet, T.; Goosse, H. [Universite Catholique de Louvain, Institut d' Astronomie et de Geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Krinner, G. [Unite mixte CNRS-UJF Grenoble, LGGE, BP96, Saint-Martin-d' Heres (France); Swingedouw, D. [CNRS/CERFACS, Toulouse (France)

    2010-01-15

    This paper presents the major characteristics of the Institut Pierre Simon Laplace (IPSL) coupled ocean-atmosphere general circulation model. The model components and the coupling methodology are described, as well as the main characteristics of the climatology and interannual variability. The model results of the standard version used for IPCC climate projections, and for intercomparison projects like the Paleoclimate Modeling Intercomparison Project (PMIP 2) are compared to those with a higher resolution in the atmosphere. A focus on the North Atlantic and on the tropics is used to address the impact of the atmosphere resolution on processes and feedbacks. In the North Atlantic, the resolution change leads to an improved representation of the storm-tracks and the North Atlantic oscillation. The better representation of the wind structure increases the northward salt transports, the deep-water formation and the Atlantic meridional overturning circulation. In the tropics, the ocean-atmosphere dynamical coupling, or Bjerknes feedback, improves with the resolution. The amplitude of ENSO (El Nino-Southern oscillation) consequently increases, as the damping processes are left unchanged. (orig.)

  4. Iris recognition based on key image feature extraction.

    Science.gov (United States)

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.
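
    One plausible, hypothetical reading of "key feature extraction from multiple images" is to keep only those local features whose descriptors recur stably across several enrolment images of the same iris; the sketch below illustrates that idea with nearest-neighbour matching and a support count. It is not the authors' algorithm, and the descriptor format and threshold are assumptions.

      import numpy as np

      def stable_key_features(feature_sets, match_thresh=0.15, min_images=3):
          """feature_sets: list of (n_i, d) descriptor arrays, one per enrolment image.
          Returns descriptors of the first image that re-occur in at least min_images images."""
          reference = feature_sets[0]
          support = np.ones(len(reference), dtype=int)  # each reference feature supports itself
          for other in feature_sets[1:]:
              # distance from every reference descriptor to its nearest neighbour in `other`
              d = np.linalg.norm(reference[:, None, :] - other[None, :, :], axis=2).min(axis=1)
              support += (d < match_thresh).astype(int)
          return reference[support >= min_images]

      rng = np.random.default_rng(1)
      base = rng.random((40, 16))                                   # stand-in iris descriptors
      sets = [base + 0.01 * rng.standard_normal(base.shape) for _ in range(4)]
      template = stable_key_features(sets)                          # enrolment template
      print(template.shape)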

  5. Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas

    2015-01-01

    This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.

  6. Key Features of the Manufacturing Vision Development Process

    DEFF Research Database (Denmark)

    Dukovska-Popovska, Iskra; Riis, Jens Ove; Boer, Harry

    2005-01-01

    This paper discusses the key features of the process of Manufacturing Vision Development, a process that enables companies to develop their future manufacturing concept. The basis for the process is a generic five-phase methodology (Riis and Johansen 2003) developed as a result of ten years of action research. The methodology recommends wide participation of people from different hierarchical and functional positions, who engage in a relatively short, playful and creative process and come up with a vision (concept) for the future manufacturing system in the company. Based on three case studies of companies going through the initial phases of the methodology, this research identified the key features of the Manufacturing Vision Development process. The paper elaborates the key features by defining them, discussing how and when they can appear, and how they influence the process.

  7. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  8. A preclinical orthotopic model for glioblastoma recapitulates key features of human tumors and demonstrates sensitivity to a combination of MEK and PI3K pathway inhibitors.

    Science.gov (United States)

    El Meskini, Rajaa; Iacovelli, Anthony J; Kulaga, Alan; Gumprecht, Michelle; Martin, Philip L; Baran, Maureen; Householder, Deborah B; Van Dyke, Terry; Weaver Ohler, Zoë

    2015-01-01

    Current therapies for glioblastoma multiforme (GBM), the highest grade malignant brain tumor, are mostly ineffective, and better preclinical model systems are needed to increase the successful translation of drug discovery efforts into the clinic. Previous work describes a genetically engineered mouse (GEM) model that contains perturbations in the most frequently dysregulated networks in GBM (driven by RB, KRAS and/or PI3K signaling and PTEN) that induce development of Grade IV astrocytoma with properties of the human disease. Here, we developed and characterized an orthotopic mouse model derived from the GEM that retains the features of the GEM model in an immunocompetent background; however, this model is also tractable and efficient for preclinical evaluation of candidate therapeutic regimens. Orthotopic brain tumors are highly proliferative, invasive and vascular, and express histology markers characteristic of human GBM. Primary tumor cells were examined for sensitivity to chemotherapeutics and targeted drugs. PI3K and MAPK pathway inhibitors, when used as single agents, inhibited cell proliferation but did not result in significant apoptosis. However, in combination, these inhibitors resulted in a substantial increase in cell death. Moreover, these findings translated into the in vivo orthotopic model: PI3K or MAPK inhibitor treatment regimens resulted in incomplete pathway suppression and feedback loops, whereas dual treatment delayed tumor growth through increased apoptosis and decreased tumor cell proliferation. Analysis of downstream pathway components revealed a cooperative effect on target downregulation. These concordant results, together with the morphologic similarities to the human GBM disease characteristics of the model, validate it as a new platform for the evaluation of GBM treatment. © 2015. Published by The Company of Biologists Ltd.

  9. A preclinical orthotopic model for glioblastoma recapitulates key features of human tumors and demonstrates sensitivity to a combination of MEK and PI3K pathway inhibitors

    Directory of Open Access Journals (Sweden)

    Rajaa El Meskini

    2015-01-01

    Current therapies for glioblastoma multiforme (GBM), the highest grade malignant brain tumor, are mostly ineffective, and better preclinical model systems are needed to increase the successful translation of drug discovery efforts into the clinic. Previous work describes a genetically engineered mouse (GEM) model that contains perturbations in the most frequently dysregulated networks in GBM (driven by RB, KRAS and/or PI3K signaling and PTEN) that induce development of Grade IV astrocytoma with properties of the human disease. Here, we developed and characterized an orthotopic mouse model derived from the GEM that retains the features of the GEM model in an immunocompetent background; however, this model is also tractable and efficient for preclinical evaluation of candidate therapeutic regimens. Orthotopic brain tumors are highly proliferative, invasive and vascular, and express histology markers characteristic of human GBM. Primary tumor cells were examined for sensitivity to chemotherapeutics and targeted drugs. PI3K and MAPK pathway inhibitors, when used as single agents, inhibited cell proliferation but did not result in significant apoptosis. However, in combination, these inhibitors resulted in a substantial increase in cell death. Moreover, these findings translated into the in vivo orthotopic model: PI3K or MAPK inhibitor treatment regimens resulted in incomplete pathway suppression and feedback loops, whereas dual treatment delayed tumor growth through increased apoptosis and decreased tumor cell proliferation. Analysis of downstream pathway components revealed a cooperative effect on target downregulation. These concordant results, together with the morphologic similarities to the human GBM disease characteristics of the model, validate it as a new platform for the evaluation of GBM treatment.

  10. Slim Battery Modelling Features

    Science.gov (United States)

    Borthomieu, Y.; Prevot, D.

    2011-10-01

    Saft has developed a life prediction model for VES and MPS cells and batteries. The Saft Li-ion Model (SLIM) is a macroscopic electrochemical model based on energy (global at cell level). Its main purpose is to predict battery performance over life for GEO, MEO and LEO missions. The model is based on electrochemical characteristics such as energy, capacity, EMF, internal resistance and end-of-charge voltage. It applies fading and calendar laws to energy and internal impedance as functions of time, temperature and end-of-charge voltage. Based on the mission profile and the satellite power system characteristics, the model proposes various battery configurations. For each configuration, the model gives the battery performance using mission figures and profiles: power, duration, DOD, end-of-charge voltages, temperatures during eclipses and solstices, thermal dissipation and cell failures. For GEO/MEO missions, eclipse and solstice periods can include specific profiles such as plasma propulsion firings and specific balancing operations. For LEO missions, the model is able to simulate the high power peaks associated with radar pulses. Saft's main customers have been using the SLIM model in house for two years; the purpose is to enable satellite builders' power engineers to perform their own battery simulations during battery pre-dimensioning activities. These simulations can be shared with Saft engineers to refine the power system designs. The model has been correlated with existing life and calendar tests performed on all the VES and MPS cells. In comparison with life tests lasting more than 10 years, the accuracy of the model in terms of voltage is better than 10 mV at end of life. In addition, a comparison with in-orbit data has also been made. This paper will present the main features of the SLIM software and a comparison of its outputs with real-life tests.
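
    The abstract does not disclose SLIM's actual fading and calendar laws, so the sketch below is only a generic illustration of the kind of bookkeeping such a macroscopic model performs: available energy degraded by a calendar term accelerated by temperature and end-of-charge voltage, plus a cycling term driven by depth of discharge. Every coefficient is a made-up placeholder.

      import math

      def remaining_energy_fraction(years, cycles, dod=0.25, temp_c=20.0, eocv=4.0):
          """Fraction of beginning-of-life energy still available (illustrative only)."""
          # calendar fade: Arrhenius-like temperature acceleration, amplified by EOC voltage
          k_cal = 0.01 * math.exp(0.05 * (temp_c - 20.0)) * (1.0 + 0.5 * (eocv - 3.9))
          calendar_loss = k_cal * math.sqrt(years)
          # cycling fade: proportional to throughput, steeper at high depth of discharge
          cycle_loss = 2.0e-5 * cycles * (dod / 0.25) ** 1.5
          return max(0.0, 1.0 - calendar_loss - cycle_loss)

      # e.g. a 15-year GEO-type mission with roughly 90 eclipse cycles per year
      print(remaining_energy_fraction(years=15, cycles=15 * 90))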

  11. Key clinical features to identify girls with CDKL5 mutations.

    Science.gov (United States)

    Bahi-Buisson, Nadia; Nectoux, Juliette; Rosas-Vargas, Haydeé; Milh, Mathieu; Boddaert, Nathalie; Girard, Benoit; Cances, Claude; Ville, Dorothée; Afenjar, Alexandra; Rio, Marlène; Héron, Delphine; N'guyen Morel, Marie Ange; Arzimanoglou, Alexis; Philippe, Christophe; Jonveaux, Philippe; Chelly, Jamel; Bienvenu, Thierry

    2008-10-01

    Mutations in the human X-linked cyclin-dependent kinase-like 5 (CDKL5) gene have been shown to cause infantile spasms as well as a Rett syndrome (RTT)-like phenotype. To date, fewer than 25 different mutations have been reported. So far, there are still few data on the key clinical diagnostic criteria and on the natural history of CDKL5-associated encephalopathy. We screened the entire coding region of CDKL5 for mutations in 183 females with encephalopathy with early seizures by denaturing high performance liquid chromatography and direct sequencing, and we identified, in 20 unrelated girls, 18 different mutations, including 7 novel mutations. These mutations were identified in eight patients with encephalopathy with RTT-like features, five with infantile spasms and seven with encephalopathy with refractory epilepsy. Early epilepsy with normal interictal EEG and severe hypotonia are the key clinical features identifying patients likely to have CDKL5 mutations. Our study also indicates that these patients clearly exhibit some RTT features, such as deceleration of head growth, stereotypies and hand apraxia, and that these RTT features become more evident in older and ambulatory patients. However, some RTT signs are clearly absent, such as the so-called RTT disease profile (a period of nearly normal development followed by regression with loss of acquired fine finger skill in early childhood and characteristic intensive eye communication) and the characteristic evolution of the RTT electroencephalogram. Interestingly, in addition to the overall stereotypical symptomatology (age of onset and evolution of the disease) resulting from CDKL5 mutations, atypical forms of CDKL5-related conditions have also been observed. Our data suggest that phenotypic heterogeneity does not correlate with the nature or the position of the mutations or with the pattern of X-chromosome inactivation, but most probably with the functional transcriptional and/or translational consequences of CDKL5

  12. Overall Design Features and Key Technology Development for KJRR

    Energy Technology Data Exchange (ETDEWEB)

    Park, C.; Lee, B. C.; Ryu, J. S.; Kim, Y. K. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The KJRR (Ki-Jang Research Reactor) project was launched in April 2012: 1) to build up advanced technology related to research reactors (RRs), 2) to provide self-sufficiency in the supply of medical and industrial radioisotopes (RIs), and 3) to enlarge NTD silicon doping services for the growing power device industry. The major facilities to be built through the KJRR project are: a 15 MW research reactor and reactor building; a radioisotope production facility (RIPF) and related R and D facility; a fission Mo production facility (FMPF) with LEU targets; a radio-waste treatment facility (RTF); and neutron irradiation facilities such as PTS and HTS. This paper describes the overall design features of the KJRR and the key technology development for RRs during the project. The overall design features of the KJRR and the RR technology under development have been overviewed. The design of the KJRR will comply with the Korean Nuclear Law, regulatory requirements and guidelines, as well as international standards and guidelines. The KJRR is expected to be put into operation in the middle of 2019.

  13. Genomic Feature Models

    DEFF Research Database (Denmark)

    Sørensen, Peter; Edwards, Stefan McKinnon; Rohde, Palle Duun

    Whole-genome sequences and multiple trait phenotypes from large numbers of individuals will soon be available in many populations. Well established statistical modeling approaches enable the genetic analyses of complex trait phenotypes while accounting for a variety of additive and non-additive genetic mechanisms. These modeling approaches have proven to be highly useful to determine population genetic parameters as well as prediction of genetic risk or value. We present a series of statistical modelling approaches that use prior biological information for evaluating the collective action ... regions and gene ontologies) that provide better model fit and increase predictive ability of the statistical model for this trait.
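
    A minimal sketch of the idea, assuming nothing about the authors' software: markers belonging to a prior-defined genomic feature (for example a gene-ontology term) receive their own variance component, approximated here by giving them their own ridge penalty, so the model can reflect whether the feature set explains a disproportionate share of the genetic variance.

      import numpy as np

      def feature_partitioned_ridge(X, y, feature_idx, lam_feature=1.0, lam_rest=10.0):
          """Ridge regression with separate shrinkage for feature vs. remaining markers."""
          p = X.shape[1]
          penalties = np.full(p, lam_rest)
          penalties[feature_idx] = lam_feature      # weaker shrinkage inside the feature set
          return np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y)

      rng = np.random.default_rng(0)
      X = rng.choice([0.0, 1.0, 2.0], size=(200, 100))                     # toy genotype matrix
      beta_true = np.zeros(100); beta_true[:20] = rng.normal(0, 0.5, 20)   # signal in the feature set
      y = X @ beta_true + rng.normal(0, 1.0, 200)
      beta_hat = feature_partitioned_ridge(X, y, feature_idx=np.arange(20))
      print(np.corrcoef(X @ beta_hat, y)[0, 1])                            # in-sample fit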

  14. Towards mastering CRISPR-induced gene knock-in in plants: Survey of key features and focus on the model Physcomitrella patens.

    Science.gov (United States)

    Collonnier, Cécile; Guyon-Debast, Anouchka; Maclot, François; Mara, Kostlend; Charlot, Florence; Nogué, Fabien

    2017-05-15

    Beyond its predominant role in human and animal therapy, the CRISPR-Cas9 system has also become an essential tool for plant research and plant breeding. Agronomic applications rely on the mastery of gene inactivation and gene modification. However, while the knock-out of genes by non-homologous end-joining (NHEJ)-mediated repair of the targeted double-strand breaks (DSBs) induced by the CRISPR-Cas9 system is rather well mastered, the knock-in of genes by homology-driven repair or end-joining remains difficult to perform efficiently in higher plants. In this review, we describe the different approaches that can be tested to improve the efficiency of CRISPR-induced gene modification in plants, which include the use of optimal transformation and regeneration protocols, the design of appropriate guide RNAs and donor templates, and the choice of nucleases and means of delivery. We also present what can be done to orient DNA repair pathways in the target cells, and we show how the moss Physcomitrella patens can be used as a model plant to better understand which DNA repair mechanisms are involved, and how this knowledge could eventually be used to define more efficient strategies for CRISPR-induced gene knock-in. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Cycling hypoxia: A key feature of the tumor microenvironment.

    Science.gov (United States)

    Michiels, Carine; Tellier, Céline; Feron, Olivier

    2016-08-01

    A compelling body of evidence indicates that most human solid tumors contain hypoxic areas. Hypoxia is the consequence not only of the chaotic proliferation of cancer cells, which places them at a distance from the nearest capillary, but also of the abnormal structure of the new vasculature network, resulting in transient blood flow. Hence, two types of hypoxia are observed in tumors: chronic and cycling (intermittent) hypoxia. Most of the current work aims at understanding the role of chronic hypoxia in tumor growth, response to treatment and metastasis. Only recently has cycling hypoxia, with spatial and temporal fluctuations in oxygen levels, emerged as another key feature of the tumor environment that triggers different responses in comparison to chronic hypoxia. Either type of hypoxia is associated with distinct effects not only in cancer cells but also in stromal cells. In particular, cycling hypoxia has been demonstrated to favor, to a greater extent than chronic hypoxia, angiogenesis, resistance to anti-cancer treatments, intratumoral inflammation and tumor metastasis. This review details these effects as well as the signaling pathways cycling hypoxia triggers to switch on specific transcriptomic programs. Understanding the signaling pathways through which cycling hypoxia induces these processes that support the development of an aggressive cancer could lead to the emergence of promising new cancer treatments. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Key enabling design features of the ITER HNB Duct Liner

    Energy Technology Data Exchange (ETDEWEB)

    Chuilon, Ben, E-mail: ben.chuilon@ccfe.ac.uk; Mistry, Sanjay; Andrews, Rodney; Verhoeven, Roel; Xue, Yongkuan

    2015-10-15

    Highlights: • Key engineering design details of the ITER HNB Duct Liner are presented. • A standardised CuCrZr water-cooled panel that can be remotely handled is detailed. • Bolts are protected from beam power by means of a tungsten cap to radiate heat away. • Water connections placed coaxially are protected from beam power by a tungsten ring. • Explosion-bonded CuCrZr-316L panels result in a tenfold disruption torque reduction. - Abstract: The Duct Liner (DL) for the ITER Heating Neutral Beam (HNB) is a key component in the beam transport system. Duct Liners installed into equatorial ports 4 and 5 of the Vacuum Vessel (VV) will protect the port extension from power deposition due to re-ionisation and direct interception of the HNB. Furthermore, the DL contributes towards the shielding of the VV and superconducting coils from plasma photons and neutrons. The DL incorporates a 316L(N)-IG, deep-drilled and water-cooled Neutron Shield (NS) whose internal walls are lined with actively cooled CuCrZr Duct Liner Modules (DLMs). These Remote Handling Class 2 and 3 panels provide protection from neutral beam power. This paper provides an overview of the preliminary design for the ITER HNB DL and focusses on critical features that ensure compatibility with: high heat flux requirements, remote maintenance procedures, and transient magnetic fields arising from major plasma disruptions. The power deposited on a single DLM can reach 300 kW with a peak power density of 2.4 MW/m². Feeding coolant to the DLMs is accomplished via welded connections to the internal coolant network of the NS. These are placed coaxially to allow for thermal expansion of the DLMs without the use of deformable connections. Critically, the remote maintenance of individual DLMs necessitates access to water connections and bolts from the beam-facing surface, thus subjecting them to high heat flux loads. This design challenge will become more prevalent as fusion devices become more powerful

  17. Key Clinical Features to Identify Girls with "CDKL5" Mutations

    Science.gov (United States)

    Bahi-Buisson, Nadia; Nectoux, Juliette; Rosas-Vargas, Haydee; Milh, Mathieu; Boddaert, Nathalie; Girard, Benoit; Cances, Claude; Ville, Dorothee; Afenjar, Alexandra; Rio, Marlene; Heron, Delphine; Morel, Marie Ange N'Guyen; Arzimanoglou, Alexis; Philippe, Christophe; Jonveaux, Philippe; Chelly, Jamel; Bienvenu, Thierry

    2008-01-01

    Mutations in the human X-linked cyclin-dependent kinase-like 5 ("CDKL5") gene have been shown to cause infantile spasms as well as Rett syndrome (RTT)-like phenotype. To date, less than 25 different mutations have been reported. So far, there are still little data on the key clinical diagnosis criteria and on the natural history of…

  18. Fabled IBM Tank nears launch without key features

    CERN Multimedia

    2003-01-01

    "IBM is preparing to roll out the TotalStorage SAN File System, the ballyhooed, renamed, much delayed Storage Tank the company's been working on for ages, although it now appears some of its key capabilities won't appear until next year in a later version" (1 page).

  19. The idiopathic interstitial pneumonias: understanding key radiological features

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, S. [Department of Radiology, Churchill Hospital, Old Road, Oxford OX3 7LJ (United Kingdom); Benamore, R., E-mail: Rachel.Benamore@orh.nhs.u [Department of Radiology, Churchill Hospital, Old Road, Oxford OX3 7LJ (United Kingdom)

    2010-10-15

    Many radiologists find it challenging to distinguish between the different idiopathic interstitial pneumonias (IIPs). The British Thoracic Society guidelines on interstitial lung disease (2008) recommend the formation of multidisciplinary meetings, with diagnoses made by combining radiological, pathological, and clinical findings. This review focuses on understanding typical and atypical radiological features of the different IIPs on high-resolution computed tomography, to help the radiologist determine when a confident diagnosis can be made and how to deal with uncertainty.

  20. The idiopathic interstitial pneumonias: understanding key radiological features

    International Nuclear Information System (INIS)

    Dixon, S.; Benamore, R.

    2010-01-01

    Many radiologists find it challenging to distinguish between the different idiopathic interstitial pneumonias (IIPs). The British Thoracic Society guidelines on interstitial lung disease (2008) recommend the formation of multidisciplinary meetings, with diagnoses made by combining radiological, pathological, and clinical findings. This review focuses on understanding typical and atypical radiological features of the different IIPs on high-resolution computed tomography, to help the radiologist determine when a confident diagnosis can be made and how to deal with uncertainty.

  1. Key features and progress of the KSTAR tokamak engineering

    International Nuclear Information System (INIS)

    Bak, J.S.; Choi, C.H.; Oh, Y.K.

    2003-01-01

    Substantial progress in KSTAR tokamak engineering has been made on the major tokamak structures, superconducting magnets, in-vessel components, diagnostic system, heating system, and power supplies. The engineering design has been elaborated to the extent necessary to allow a realistic assessment of its feasibility, performance, and cost. Prototype fabrication has been carried out to establish reliable fabrication technologies and to confirm the validity of the analyses employed for the KSTAR design. The experimental building was completed, with beneficial occupancy for machine assembly, in September 2002. The construction of special utilities such as the cryo-plant, de-ionized water-cooling system, and main power station will begin upon completion of the building construction. Construction, fabrication, assembly, and commissioning of the whole facility will continue through the end of 2005. This paper describes the main design features and engineering progress of the KSTAR tokamak, and elaborates on the work currently underway. (author)

  2. Key features of intertidal food webs that support migratory shorebirds.

    Directory of Open Access Journals (Sweden)

    Blanche Saint-Béat

    The migratory shorebirds of the East Atlantic flyway land in huge numbers on the French Atlantic coast during migratory stopover or wintering. The Brouage bare mudflat (Marennes-Oléron Bay, NE Atlantic) is one of the major stopover sites in France. The particular structure and function of a food web affects the efficiency of carbon transfer. The structure and functioning of the Brouage food web is crucial for the conservation of species landing within this area because it provides sufficient food, which allows shorebirds to reach the north of Europe, where they nest. The aim of this study was to describe and understand which food web characteristics support the nutritional needs of birds. Two food-web models were constructed, based on in situ measurements made in February 2008 (presence of birds) and July 2008 (absence of birds). To complete the models, allometric relationships and additional data from the literature were used. The missing flow values of the food web models were estimated by Monte Carlo Markov Chain - Linear Inverse Modelling. The flow solutions obtained were used to calculate the ecological network analysis indices, which estimate the emergent properties of the functioning of a food web. The total activities of the Brouage ecosystem in February and July are significantly different. The specialisation of the trophic links within the ecosystem does not appear to differ between the two models. In spite of a large export of carbon from the primary producers and detritus in winter, the higher recycling leads to a similar retention of carbon for the two seasons. It can be concluded that in February, the higher activity of the ecosystem, coupled with higher cycling and a mean internal organization, ensures sufficient feeding of the migratory shorebirds.

  3. Preliminary safety analysis for key design features of KALIMER

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, D. H.; Kwon, Y. M.; Chang, W. P.; Suk, S. D.; Lee, S. O.; Lee, Y. B.; Jeong, K. S

    2000-07-01

    KAERI is currently developing the conceptual design of a liquid metal reactor, KALIMER (Korea Advanced Liquid Metal Reactor), under the long-term nuclear R and D program. In this report, descriptions of the KALIMER safety design features and safety analysis results for selected ATWS accidents are presented. First, the basic approach to achieving the safety goal is introduced in chapter 1, and the safety evaluation procedure for the KALIMER design is described in chapter 2. It includes event selection, event categorization, description of design basis events, and beyond design basis events. In chapter 3, results of inherent safety evaluations for the KALIMER conceptual design are presented. The KALIMER core and plant system are designed to assure design performance during a selected set of events without either reactor control or protection system intervention. Safety analyses for the postulated anticipated transient without scram (ATWS) have been performed to investigate the KALIMER system response to the events. They are categorized as bounding events (BEs) because of their low probability of occurrence. In chapter 4, the design of the KALIMER containment dome and the results of its performance analysis are presented. The designs of the existing LMR containment and the KALIMER containment dome are compared in this chapter. The procedure of the containment performance analysis and the analysis results are described along with the accident scenario and source terms. Finally, a simple methodology is introduced to investigate the core kinetics and hydraulic behavior during an HCDA in chapter 5. Mathematical formulations have been developed in the framework of the modified Bethe-Tait method, and scoping analyses have been performed for the KALIMER core behavior during super-prompt critical excursions.

  4. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    … interface description languages. If this variability is relevant when selecting a matching component, then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our approach, feature models are one part of service specifications. This enables one to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach.
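
    A sketch under stated assumptions (this is not the Columbus implementation): the feature model is reduced to a propositional constraint over feature names, the provided and required service variants are partial assignments, and binding is allowed if some full configuration satisfies the constraint together with both partial assignments.

      from itertools import product

      FEATURES = ["persistence", "encryption", "compression"]

      def valid(cfg):
          # example feature-model constraint: encryption requires persistence
          return (not cfg["encryption"]) or cfg["persistence"]

      def can_bind(provided, required):
          """provided / required: dicts fixing a subset of features to True or False."""
          for values in product([False, True], repeat=len(FEATURES)):
              cfg = dict(zip(FEATURES, values))
              if not valid(cfg):
                  continue
              if all(cfg[f] == v for f, v in provided.items()) and \
                 all(cfg[f] == v for f, v in required.items()):
                  return True
          return False

      print(can_bind(provided={"encryption": True}, required={"persistence": True}))   # True
      print(can_bind(provided={"persistence": False}, required={"encryption": True}))  # False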

  5. Soil fauna: key to new carbon models

    OpenAIRE

    Filser, Juliane; Faber, Jack H.; Tiunov, Alexei V.; Brussaard, Lijbert; Frouz, Jan; Deyn, Gerlinde; Uvarov, Alexei V.; Berg, Matty P.; Lavelle, Patrick; Loreau, Michel; Wall, Diana H.; Querner, Pascal; Eijsackers, Herman; Jiménez, Juan José

    2016-01-01

    Soil organic matter (SOM) is key to maintaining soil fertility, mitigating climate change, combatting land degradation, and conserving above- and below-ground biodiversity and associated soil processes and ecosystem services. In order to derive management options for maintaining these essential services provided by soils, policy makers depend on robust, predictive models identifying key drivers of SOM dynamics. Existing SOM models and suggested guidelines for future SOM modelling are defined ...

  6. Research on Digital Product Modeling Key Technologies of Digital Manufacturing

    Institute of Scientific and Technical Information of China (English)

    DING Guoping; ZHOU Zude; HU Yefa; ZHAO Liang

    2006-01-01

    With the globalization and diversification of the market and the rapid development of Information Technology (IT) and Artificial Intelligence (AI), the digital revolution of manufacturing is coming. One of the key technologies in digital manufacturing is product digital modeling. This paper first analyzes the information and features of the product digital model at each stage of the product's whole lifecycle, and then examines three critical technologies of digital modeling in digital manufacturing: product modeling, the standard for the exchange of product model data (STEP), and digital product data management. The significance of the product digital model in digital manufacturing is then summarized: the product digital model integrates the primary features of each stage of the product's whole lifecycle based on graphic features, applies STEP as the data exchange mechanism, and establishes a PDM system to manage the large amount of complicated and dynamic product data, so as to implement product digital model data exchange, sharing and integration.

  7. Key-Feature-Probleme zum Prüfen von prozeduralem Wissen: Ein Praxisleitfaden [Key Feature Problems for the assessment of procedural knowledge: a practical guide]

    Directory of Open Access Journals (Sweden)

    Kopp, Veronika

    2006-08-01

    [english] After assigning the different examination formats to the various levels of Miller's pyramid of knowledge, this paper provides a short presentation of the key feature approach, giving a definition and an example for clarification. Afterwards, a practical guide to writing key feature problems is given, consisting of the following steps: define the domain, choose a clinical situation, define the key features, develop a test case scenario, write questions, select a preferred response format, define the scoring key, and validate. Finally, we present the evaluation results of this practical guide. In sum, the participants were very pleased with it. The differences between the estimations of their knowledge before and after the workshop concerning key features were significant. The key feature approach is an innovative tool for assessing clinical decision-making skills, also for electronic examinations. Replacing the write-in format with the long-menu format allows automatic data analysis. [german, translated] In this paper, after assigning the different examination formats to the different types of knowledge in Miller's pyramid, the key feature (KF) approach is presented. After a definition and an example have clarified what a KF is, a guide to constructing a KF problem is given. It consists of the following steps: defining the context, choosing the clinical situation, identifying the KFs of the clinical problem, writing the clinical scenario (case vignette), writing the individual KF questions, selecting the response format, scoring, and content validation. Finally, the results of an evaluation of this guide, obtained during a KF workshop, are presented. The participants were very satisfied with this workshop unit and reported having learned a great deal. Their subjective assessment of their knowledge before and after

  8. Time to refine key climate policy models

    Science.gov (United States)

    Barron, Alexander R.

    2018-05-01

    Ambition regarding climate change at the national level is critical but is often calibrated with the projected costs — as estimated by a small suite of energy-economic models. Weaknesses in several key areas in these models will continue to distort policy design unless collectively addressed by a diversity of researchers.

  9. Soil fauna: key to new carbon models

    Science.gov (United States)

    Filser, Juliane; Faber, Jack H.; Tiunov, Alexei V.; Brussaard, Lijbert; Frouz, Jan; De Deyn, Gerlinde; Uvarov, Alexei V.; Berg, Matty P.; Lavelle, Patrick; Loreau, Michel; Wall, Diana H.; Querner, Pascal; Eijsackers, Herman; José Jiménez, Juan

    2016-11-01

    Soil organic matter (SOM) is key to maintaining soil fertility, mitigating climate change, combatting land degradation, and conserving above- and below-ground biodiversity and associated soil processes and ecosystem services. In order to derive management options for maintaining these essential services provided by soils, policy makers depend on robust, predictive models identifying key drivers of SOM dynamics. Existing SOM models and suggested guidelines for future SOM modelling are defined mostly in terms of plant residue quality and input and microbial decomposition, overlooking the significant regulation provided by soil fauna. The fauna controls almost any aspect of organic matter turnover, foremost by regulating the activity and functional composition of soil microorganisms and their physical-chemical connectivity with soil organic matter. We demonstrate a very strong impact of soil animals on carbon turnover, increasing or decreasing it by several dozen percent, sometimes even turning C sinks into C sources or vice versa. This is demonstrated not only for earthworms and other larger invertebrates but also for smaller fauna such as Collembola. We suggest that inclusion of soil animal activities (plant residue consumption and bioturbation altering the formation, depth, hydraulic properties and physical heterogeneity of soils) can fundamentally affect the predictive outcome of SOM models. Understanding direct and indirect impacts of soil fauna on nutrient availability, carbon sequestration, greenhouse gas emissions and plant growth is key to the understanding of SOM dynamics in the context of global carbon cycling models. We argue that explicit consideration of soil fauna is essential to make realistic modelling predictions on SOM dynamics and to detect expected non-linear responses of SOM dynamics to global change. We present a decision framework, to be further developed through the activities of KEYSOM, a European COST Action, for when mechanistic SOM models

  10. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, which include characteristics of spectrum, image and motion. Feature extraction is also achieved. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature vs. object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is also explained
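
    The paper's feature database is not specified in detail here, so the following is only an illustrative sketch of the architecture described: measurements are reduced to a compact feature vector (spectrum, image and motion descriptors), enrolled in a feature database keyed to object classes, and new observations are recognized by nearest match.

      import numpy as np

      class FeatureDatabase:
          """Toy feature-to-object map with nearest-neighbour recognition."""
          def __init__(self):
              self.features, self.labels = [], []

          def enroll(self, feature_vector, label):
              self.features.append(np.asarray(feature_vector, dtype=float))
              self.labels.append(label)

          def recognize(self, feature_vector):
              q = np.asarray(feature_vector, dtype=float)
              dists = [np.linalg.norm(q - f) for f in self.features]
              return self.labels[int(np.argmin(dists))]

      db = FeatureDatabase()
      db.enroll([0.9, 0.1, 0.3], "aircraft")   # toy spectrum / image / motion summary
      db.enroll([0.2, 0.8, 0.7], "bird")
      print(db.recognize([0.85, 0.15, 0.35]))  # -> "aircraft"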

  11. Identifying Key Features of Student Performance in Educational Video Games and Simulations through Cluster Analysis

    Science.gov (United States)

    Kerr, Deirdre; Chung, Gregory K. W. K.

    2012-01-01

    The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…

  12. Salient Key Features of Actual English Instructional Practices in Saudi Arabia

    Science.gov (United States)

    Al-Seghayer, Khalid

    2015-01-01

    This is a comprehensive review of the salient key features of the actual English instructional practices in Saudi Arabia. The goal of this work is to gain insights into the practices and pedagogic approaches to English as a foreign language (EFL) teaching currently employed in this country. In particular, we identify the following central features…

  13. Some key features in the evolution of self psychology and psychoanalysis.

    Science.gov (United States)

    Fosshage, James L

    2009-04-01

    Psychoanalysis, as every science and its application, has continued to evolve over the past century, especially accelerating over the last 30 years. Self psychology has played a constitutive role in that evolution and has continued to change itself. These movements have been supported and augmented by a wide range of emergent research and theory, especially that of cognitive psychology, infant and attachment research, rapid eye movement and dream research, psychotherapy research, and neuroscience. I present schematically some of what I consider to be the key features of the evolution of self psychology and their interconnection with that of psychoanalysis at large, including the revolutionary paradigm changes, the new epistemology, listening/experiencing perspectives, from narcissism to the development of the self, the new organization model of transference, the new organization model of dreams, and the implicit and explicit dimensions of analytic work. I conclude with a focus on the radical ongoing extension of the analyst's participation in the analytic relationship, using, as an example, the co-creation of analytic love, and providing several brief clinical illustrations. The leading edge question guiding my discussion is "How does analytic change occur?"

  14. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    Science.gov (United States)

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep
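
    The forward computation described above can be sketched compactly; the shapes and the Gaussian form of the pooling field below are simplifying assumptions, not a transcription of the paper's code. Each voxel has shared "where" parameters (x0, y0, sigma) and one "what" weight per feature map, and the predicted response is the weighted sum of spatially pooled feature-map values.

      import numpy as np

      def fwrf_predict(feature_maps, x0, y0, sigma, weights):
          """feature_maps: (K, H, W) stack of feature maps on a common spatial grid.
          Returns the predicted response of one voxel to one stimulus."""
          K, H, W = feature_maps.shape
          ys, xs = np.mgrid[0:H, 0:W]
          pool = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
          pool /= pool.sum()                                  # normalized pooling field ("where")
          pooled = (feature_maps * pool).sum(axis=(1, 2))     # one pooled value per feature map
          return float(weights @ pooled)                      # feature tuning ("what")

      rng = np.random.default_rng(0)
      maps = rng.random((8, 32, 32))                          # e.g. 8 Gabor feature maps
      print(fwrf_predict(maps, x0=16, y0=10, sigma=4.0, weights=rng.standard_normal(8)))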

  15. Analysing Feature Model Changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  16. Identifying Key Features of Effective Active Learning: The Effects of Writing and Peer Discussion

    Science.gov (United States)

    Pangle, Wiline M.; Wyatt, Kevin H.; Powell, Karli N.; Sherwood, Rachel E.

    2014-01-01

    We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning. PMID:25185230

  17. Key Features of Political Advertising as an Independent Type of Advertising Communication

    Directory of Open Access Journals (Sweden)

    Svetlana Anatolyevna Chubay

    2015-09-01

    To obtain the most complete understanding of the features of political advertising, the author characterizes the specific features identified by modern researchers. The problem of defining the notion of political advertising is studied in detail. The analysis of definitions available in the professional literature has allowed the author to identify a number of key features that characterize political advertising as an independent type of promotional activity. These features include belonging to the forms of mass communication, implemented through different communication channels; the presence of characteristics typical of any advertising as a form of mass communication (strategies and concepts promoting programs and ideas); an integrated approach to the selection of communication channels, means and methods of informing the addressees, with a focus on the audience; the formation of a psychological attitude towards voting; an image-based nature; and manipulative potential. It is shown that influence is the primary function of political advertising; it determines the key characteristics common to this type of advertising. Political advertising, reflecting the essence of the political platform of certain political forces, setting up voters for their support, and forming and introducing into the mass consciousness a definite idea of the character of these political forces, creates the desired psychological attitude towards voting. The analysis of definitions available in the professional literature has allowed the author to formulate an operational definition of political advertising, which includes the features that distinguish political advertising from other forms of political communication, such as political PR, with which political advertising is traditionally conflated.

  18. Key Features of Electric Vehicle Diffusion and Its Impact on the Korean Power Market

    Directory of Open Access Journals (Sweden)

    Dongnyok Shim

    2018-06-01

    The market share of electric vehicles is growing and interest in these vehicles is rapidly increasing in industrialized countries. In light of these circumstances, this study provides an integrated policy-making package, which covers the key features of electric vehicle diffusion and its impact on the Korean power market. The research is based on a quantitative analysis with the following steps: (1) it analyzes drivers' preferences for electric or traditional internal combustion engine (ICE) vehicles with respect to key automobile attributes, and these key attributes indicate what policy makers should focus on; (2) it forecasts the achievable level of market share of electric vehicles in relation to improvements in their key attributes; and (3) it evaluates the impact of electric vehicle diffusion on the Korean power market based on the achievable level of market share with different charging demand profiles. Our results reveal that the market share of electric vehicles can increase to around 40% of the total market if the key features of electric vehicles reach a level similar to those of traditional vehicles. In this estimation, the increase in the power market's system generation costs will reach around 10% of the cost in the baseline scenario, which differs slightly depending on charging demand profiles.
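
    A hedged sketch of the kind of calculation behind step (2); the paper's actual choice model and coefficients are not reproduced here, and all attribute values and weights below are invented. A simple logit share model makes the point that moving EV attributes toward ICE levels raises the predicted EV share.

      import math

      def logit_shares(utilities):
          expu = {k: math.exp(v) for k, v in utilities.items()}
          total = sum(expu.values())
          return {k: v / total for k, v in expu.items()}

      def utility(price_k, range_km, fuel_cost_per_km, charge_time_min, beta):
          return (beta["price"] * price_k + beta["range"] * range_km
                  + beta["fuel"] * fuel_cost_per_km + beta["charge"] * charge_time_min)

      beta = {"price": -0.05, "range": 0.004, "fuel": -15.0, "charge": -0.01}  # assumed weights
      ice = utility(30, 700, 0.10, 5, beta)
      ev_now = utility(45, 350, 0.03, 40, beta)
      ev_improved = utility(35, 550, 0.03, 15, beta)
      print(logit_shares({"ICE": ice, "EV": ev_now}))        # modest EV share
      print(logit_shares({"ICE": ice, "EV": ev_improved}))   # EV share rises sharply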

  19. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
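
    A minimal sketch of genetic-search feature selection in the spirit described above (the authors' exact operators and fitness function are not reproduced): chromosomes are feature bitmasks, fitness scores a candidate subset, and truncation selection, one-point crossover and bit-flip mutation evolve the population.

      import numpy as np

      def genetic_feature_selection(fitness, n_features, pop_size=20, generations=30,
                                    p_mut=0.05, seed=0):
          rng = np.random.default_rng(seed)
          pop = rng.integers(0, 2, size=(pop_size, n_features))
          for _ in range(generations):
              scores = np.array([fitness(mask) for mask in pop])
              parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, n_features)                      # one-point crossover
                  child = np.concatenate([a[:cut], b[cut:]])
                  flip = rng.random(n_features) < p_mut                  # bit-flip mutation
                  children.append(np.where(flip, 1 - child, child))
              pop = np.vstack([parents] + children)
          scores = np.array([fitness(mask) for mask in pop])
          return pop[int(np.argmax(scores))]

      # toy fitness: reward subsets that keep the first 5 (informative) features only
      toy_fitness = lambda mask: mask[:5].sum() - 0.2 * mask[5:].sum()
      print(genetic_feature_selection(toy_fitness, n_features=20))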

  20. Stego Keys Performance on Feature Based Coding Method in Text Domain

    Directory of Open Access Journals (Sweden)

    Din Roshidi

    2017-01-01

    A critical factor in the embedding process of any text steganography method is the key used, known as the stego key. This factor influences the success of the embedding process in hiding a message from a third party or any adversary. One of the important aspects of the embedding process in a text steganography method is the fitness performance of the stego key. Three parameters of the fitness performance of the stego key have been identified: capacity ratio, embedded fitness ratio and saving space ratio. The better the capacity ratio, embedded fitness ratio and saving space ratio a stego key offers, the more message content can be hidden. Therefore, the main objective of this paper is to analyze the stego keys of three feature-based coding methods, namely CALP, VERT and QUAD, in text steganography with respect to their capacity ratio, embedded fitness ratio and saving space ratio. It is found that the CALP method gives good performance compared to the VERT and QUAD methods.
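
    The paper's exact formulas for the three ratios are not given in the abstract, so the definitions below are explicitly assumed stand-ins used only to show how such a comparison could be computed: capacity ratio as hidden bits per cover character, embedded fitness ratio as the fraction of cover positions the stego key can exploit, and saving space ratio as the relative size overhead avoided.

      def stego_key_metrics(cover_len, usable_positions, hidden_bits, stego_len):
          capacity_ratio = hidden_bits / cover_len                         # assumed definition
          embedded_fitness_ratio = usable_positions / cover_len            # assumed definition
          saving_space_ratio = 1.0 - (stego_len - cover_len) / cover_len   # assumed definition
          return capacity_ratio, embedded_fitness_ratio, saving_space_ratio

      # toy comparison of two hypothetical stego keys over the same 1000-character cover
      print(stego_key_metrics(cover_len=1000, usable_positions=240, hidden_bits=180, stego_len=1005))
      print(stego_key_metrics(cover_len=1000, usable_positions=150, hidden_bits=110, stego_len=1020))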

  1. Cemento-osseous dysplasia of the jaw bones: key radiographic features.

    Science.gov (United States)

    Alsufyani, N A; Lam, E W N

    2011-03-01

    The purpose of this study is to assess possible diagnostic differences between general dentists (GPs) and oral and maxillofacial radiologists (RGs) in the identification of pathognomonic radiographic features of cemento-osseous dysplasia (COD) and its interpretation. Using a systematic objective survey instrument, 3 RGs and 3 GPs reviewed 50 image sets of COD and similarly appearing entities (dense bone island, cementoblastoma, cemento-ossifying fibroma, fibrous dysplasia, complex odontoma and sclerosing osteitis). Participants were asked to identify the presence or absence of radiographic features and then to make an interpretation of the images. RGs identified a well-defined border (odds ratio (OR) 6.67, P < 0.05); radiolucent periphery (OR 8.28, P < 0.005); bilateral occurrence (OR 10.23, P < 0.01); mixed radiolucent/radiopaque internal structure (OR 10.53, P < 0.01); the absence of non-concentric bony expansion (OR 7.63, P < 0.05); and the association with anterior and posterior teeth (OR 4.43, P < 0.05) as key features of COD. Consequently, RGs were able to correctly interpret 79.3% of COD cases. In contrast, GPs identified the absence of root resorption (OR 4.52, P < 0.05) and the association with anterior and posterior teeth (OR 3.22, P = 0.005) as the only key features of COD and were able to correctly interpret 38.7% of COD cases. There are statistically significant differences between RGs and GPs in the identification and interpretation of the radiographic features associated with COD (P < 0.001). We conclude that COD is radiographically discernable from other similarly appearing entities only if the characteristic radiographic features are correctly identified and then correctly interpreted.

  2. Identifying key features of effective active learning: the effects of writing and peer discussion.

    Science.gov (United States)

    Linton, Debra L; Pangle, Wiline M; Wyatt, Kevin H; Powell, Karli N; Sherwood, Rachel E

    2014-01-01

    We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning. © 2014 D. L. Linton et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology.

  3. Preliminary safety analysis for key design features of KALIMER with breakeven core

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Do Hee; Kwon, Y. M.; Chang, W. P.; Suk, S. D.; Lee, Y. B.; Jeong, K. S

    2001-06-01

KAERI is currently developing the conceptual design of a Liquid Metal Reactor, KALIMER (Korea Advanced Liquid MEtal Reactor), under the Long-term Nuclear R and D Program. KALIMER addresses key issues regarding future nuclear power plants such as plant safety, economics, proliferation, and waste. In this report, descriptions of safety design features and safety analysis results for selected ATWS accidents for the breakeven core KALIMER are presented. First, the basic approach to achieve the safety goal is introduced in Chapter 1, and the safety evaluation procedure for the KALIMER design is described in Chapter 2. It includes event selection, event categorization, description of design basis events, and beyond design basis events. In Chapter 3, results of inherent safety evaluations for the KALIMER conceptual design are presented. The KALIMER core and plant system are designed to assure benign performance during a selected set of events without either reactor control or protection system intervention. Safety analyses for the postulated anticipated transient without scram (ATWS) have been performed to investigate the KALIMER system response to the events. In Chapter 4, the design of the KALIMER containment dome and the results of its performance analyses are presented. The design of the existing containment and the KALIMER containment dome are compared in this chapter. The procedure of the containment performance analysis and the analysis results are described along with the accident scenario and source terms. Finally, a simple methodology is introduced to investigate the core energetics behavior during HCDA in Chapter 5. Sensitivity analyses have been performed for the KALIMER core behavior during super-prompt critical excursions, using mathematical formulations developed in the framework of the Modified Bethe-Tait method. The work energy potential was then calculated based on the isentropic fuel expansion model.

  4. The key design features of the Indian advanced heavy water reactor

    International Nuclear Information System (INIS)

    Sinha, R.K.; Kakodkar, A.; Anand, A.K.; Venkat Raj, V.; Balakrishnan, K.

    1999-01-01

The 235 MWe Indian Advanced Heavy Water Reactor (AHWR) is a vertical, pressure tube type, boiling light water cooled reactor. The three key specific features of the AHWR design, having a large impact on its viability, safety and economics, relate to its reactor physics, coolant channel, and passive safety features. The reactor physics design is tuned for maximising use of thorium based fuel and achieving a slightly negative void coefficient of reactivity. The fulfilment of these requirements has been possible through the use of PuO2-ThO2 MOX and ThO2-233UO2 MOX in different pins of the same fuel cluster, and the use of a heterogeneous moderator consisting of pyrolytic carbon and heavy water in an 80%-20% volume ratio. The coolant channels of the AHWR are designed for easy replaceability of pressure tubes during normal maintenance shutdowns. The removal of a pressure tube along with its bottom end-fitting, using rolled joint detachment technology, can be done in AHWR coolant channels without disturbing the top end-fitting, tail pipe and feeder connections, and all other appendages of the coolant channel. The AHWR incorporates several passive safety features. These include core heat removal through natural circulation, direct injection of Emergency Core Coolant System (ECCS) water in the fuel, passive systems for containment cooling and isolation, and the availability of a large inventory of borated water in an overhead Gravity Driven Water Pool (GDWP) to facilitate sustenance of core decay heat removal, ECCS injection, and containment cooling for three days without invoking any active systems or operator action. Incorporation of these features has been done together with considerable design simplifications and the elimination of several items of reactor grade equipment. A rigorous evaluation of the feasibility of the AHWR design concept has been completed. The economy enhancing aspects of its key design features are expected to compensate for the relative complexity of the thorium fuel cycle activities

  5. Multiple Information Fusion Face Recognition Using Key Feature Points

    Directory of Open Access Journals (Sweden)

    LIN Kezheng

    2017-06-01

Full Text Available After years of face recognition research, the effects of illumination, noise and other conditions still keep recognition rates relatively low, and 2D face recognition technology has not kept pace with current demands, while 3D face recognition technology, though developing step by step, has a higher complexity. In order to solve this problem, an improved 3D face key feature point localization algorithm is proposed, based on the traditional depth-information positioning method and local feature analysis (LFA). On the basis of the training samples obtained by complete clustering, a weighted-fusion global and local feature extraction algorithm is further proposed. Comparison and analysis of experimental data on the FRGC and BU-3DFE face databases show that the method achieves higher robustness in 3D face recognition.

  6. Discrete Feature Model (DFM) User Documentation

    International Nuclear Information System (INIS)

    Geier, Joel

    2008-06-01

This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this software, the
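To make the discrete-feature idea concrete, the toy sketch below (not part of the DFM package; geometry and transmissivities are hypothetical) treats feature intersections as graph nodes and piecewise-planar conductors as edges, then solves the resulting linear system for hydraulic heads so that flow is confined to the feature network:

```python
import numpy as np

# Toy discrete-feature network: nodes are feature intersections,
# edges are water-conducting features with a transmissivity T (illustrative values).
edges = [(0, 1, 1.0e-6), (1, 2, 5.0e-7), (1, 3, 2.0e-6), (2, 4, 1.0e-6), (3, 4, 8.0e-7)]
n_nodes = 5
fixed_head = {0: 10.0, 4: 0.0}   # Dirichlet boundary heads [m]

# Assemble the network conductance (Laplacian-like) matrix.
A = np.zeros((n_nodes, n_nodes))
for i, j, t in edges:
    A[i, i] += t; A[j, j] += t
    A[i, j] -= t; A[j, i] -= t

# Impose fixed heads by replacing the corresponding rows.
b = np.zeros(n_nodes)
for node, h in fixed_head.items():
    A[node, :] = 0.0
    A[node, node] = 1.0
    b[node] = h

heads = np.linalg.solve(A, b)
print("heads at nodes:", np.round(heads, 3))

# Flow through each discrete feature (positive from node i to node j).
for i, j, t in edges:
    print(f"feature {i}-{j}: Q = {t * (heads[i] - heads[j]):.3e}")
```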

  7. Discrete Feature Model (DFM) User Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2008-06-15

This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this

  8. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

Farrar, Charles [Los Alamos National Laboratory]; Nishio, Mayuko [Yokohama University]; Hemez, Francois [Los Alamos National Laboratory]; Stull, Chris [Los Alamos National Laboratory]; Park, Gyuhae [Chonnam University]; Cornwell, Phil [Rose-Hulman Institute of Technology]; Figueiredo, Eloi [Universidade Lusófona]; Luscher, D. J. [Los Alamos National Laboratory]; Worden, Keith [University of Sheffield]

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  9. Soil fauna: key to new carbon models

    NARCIS (Netherlands)

    Filser, Juliane; Faber, J.H.; Tiunov, Alexei V.; Brussaard, L.; Frouz, J.; Deyn, de G.B.; Uvarov, Alexei V.; Berg, Matty P.; Lavelle, Patrick; Loreau, M.; Wall, D.H.; Querner, Pascal; Eijsackers, Herman; Jimenez, Juan Jose

    2016-01-01

    Soil organic matter (SOM) is key to maintaining soil fertility, mitigating climate change, combatting land degradation, and conserving above- and below-ground biodiversity and associated soil processes and ecosystem services. In order to derive management options for maintaining these essential

  10. Key-feature questions for assessment of clinical reasoning: a literature review.

    Science.gov (United States)

    Hrynchak, Patricia; Takahashi, Susan Glover; Nayer, Marla

    2014-09-01

    Key-feature questions (KFQs) have been developed to assess clinical reasoning skills. The purpose of this paper is to review the published evidence on the reliability and validity of KFQs to assess clinical reasoning. A literature review was conducted by searching MEDLINE (1946-2012) and EMBASE (1980-2012) via OVID and ERIC. The following search terms were used: key feature; question or test or tests or testing or tested or exam; assess or evaluation, and case-based or case-specific. Articles not in English were eliminated. The literature search resulted in 560 articles. Duplicates were eliminated, as were articles that were not relevant; nine articles that contained reliability or validity data remained. A review of the references and of citations of these articles resulted in an additional 12 articles to give a total of 21 for this review. Format, language and scoring of KFQ examinations have been studied and modified to maximise reliability. Internal consistency reliability has been reported as being between 0.49 and 0.95. Face and content validity have been shown to be moderate to high. Construct validity has been shown to be good using vector thinking processes and novice versus expert paradigms, and to discriminate between teaching methods. The very modest correlations between KFQ examinations and more general knowledge-based examinations point to differing roles for each. Importantly, the results of KFQ examinations have been shown to successfully predict future physician performance, including patient outcomes. Although it is inaccurate to conclude that any testing format is universally reliable or valid, published research supports the use of examinations using KFQs to assess clinical reasoning. The review identifies areas of further study, including all categories of evidence. Investigation into how examinations using KFQs integrate with other methods in a system of assessment is needed. © 2014 John Wiley & Sons Ltd.

  11. Safety analysis for key design features of KALIMER-600 design concept

    International Nuclear Information System (INIS)

Lee, Yong-Bum; Kwon, Y. M.; Kim, E. K.; Suk, S. D.; Chang, W. P.; Jeong, H. Y.; Ha, K. S.; Heo, S.

    2005-03-01

KAERI is developing the conceptual design of a Liquid Metal Reactor, KALIMER-600 (Korea Advanced LIquid MEtal Reactor), under the Long-term Nuclear R and D Program. KALIMER-600 addresses key issues regarding future nuclear power plants such as plant safety, economics, proliferation, and waste. In this report, key safety design features are described and safety analysis results for typical ATWS accidents, containment design basis accidents, and flow blockages in the KALIMER design are presented. First, the basic approach to achieve the safety goal and the main design features of KALIMER-600 are introduced in Chapter 1, and the event categorization and acceptance criteria for the KALIMER-600 safety analysis are described in Chapter 2. In Chapter 3, results of inherent safety evaluations for the KALIMER-600 conceptual design are presented. The KALIMER-600 core and plant system are designed to assure benign performance during a selected set of events without either reactor control or protection system intervention. Safety analyses for the postulated anticipated transient without scram (ATWS) have been performed using the SSC-K code to investigate the KALIMER-600 system response to the events. The objectives of Chapter 4 are to assess the response of the KALIMER-600 containment to the design basis accidents and to evaluate whether the consequences are acceptable in terms of structural integrity and exposure dose rate. In Chapter 5, the analysis of flow blockage for KALIMER-600 with the MATRA-LMR-FB code, which has been developed for internal flow blockage in an LMR subassembly, is described. The cases with a blockage of 6-subchannel, 24-subchannel, and 54-subchannel are analyzed

  12. Qualitative research methods: key features and insights gained from use in infection prevention research.

    Science.gov (United States)

    Forman, Jane; Creswell, John W; Damschroder, Laura; Kowalski, Christine P; Krein, Sarah L

    2008-12-01

    Infection control professionals and hospital epidemiologists are accustomed to using quantitative research. Although quantitative studies are extremely important in the field of infection control and prevention, often they cannot help us explain why certain factors affect the use of infection control practices and identify the underlying mechanisms through which they do so. Qualitative research methods, which use open-ended techniques, such as interviews, to collect data and nonstatistical techniques to analyze it, provide detailed, diverse insights of individuals, useful quotes that bring a realism to applied research, and information about how different health care settings operate. Qualitative research can illuminate the processes underlying statistical correlations, inform the development of interventions, and show how interventions work to produce observed outcomes. This article describes the key features of qualitative research and the advantages that such features add to existing quantitative research approaches in the study of infection control. We address the goal of qualitative research, the nature of the research process, sampling, data collection and analysis, validity, generalizability of findings, and presentation of findings. Health services researchers are increasingly using qualitative methods to address practical problems by uncovering interacting influences in complex health care environments. Qualitative research methods, applied with expertise and rigor, can contribute important insights to infection prevention efforts.

  13. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  14. Key West, Florida Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Key West, Florida Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST...

  15. Feature and Meta-Models in Clafer: Mixed, Specialized, and Coupled

    DEFF Research Database (Denmark)

    Bąk, Kacper; Czarnecki, Krzysztof; Wasowski, Andrzej

    2011-01-01

    constraints (such as mapping feature configurations to component configurations or model templates). Clafer also allows arranging models into multiple specialization and extension layers via constraints and inheritance. We identify four key mechanisms allowing a meta-modeling language to express feature...

  16. Transverse beam splitting made operational: Key features of the multiturn extraction at the CERN Proton Synchrotron

    Directory of Open Access Journals (Sweden)

    A. Huschauer

    2017-06-01

Full Text Available Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.

  17. Bile Routing Modification Reproduces Key Features of Gastric Bypass in Rat.

    Science.gov (United States)

    Goncalves, Daisy; Barataud, Aude; De Vadder, Filipe; Vinera, Jennifer; Zitoun, Carine; Duchampt, Adeline; Mithieux, Gilles

    2015-12-01

    To evaluate the role of bile routing modification on the beneficial effects of gastric bypass surgery on glucose and energy metabolism. Gastric bypass surgery (GBP) promotes early improvements in glucose and energy homeostasis in obese diabetic patients. A suggested mechanism associates a decrease in hepatic glucose production to an enhanced intestinal gluconeogenesis. Moreover, plasma bile acids are elevated after GBP and bile acids are inhibitors of gluconeogenesis. In male Sprague-Dawley rats, we performed bile diversions from the bile duct to the midjejunum or the mid-ileum to match the modified bile delivery in the gut occurring in GBP. Body weight, food intake, glucose tolerance, insulin sensitivity, and food preference were analyzed. The expression of gluconeogenesis genes was evaluated in both the liver and the intestine. Bile diversions mimicking GBP promote an increase in plasma bile acids and a marked improvement in glucose control. Bile bioavailability modification is causal because a bile acid sequestrant suppresses the beneficial effects of bile diversions on glucose control. In agreement with the inhibitory role of bile acids on gluconeogenesis, bile diversions promote a blunting in hepatic glucose production, whereas intestinal gluconeogenesis is increased in the gut segments devoid of bile. In rats fed a high-fat-high-sucrose diet, bile diversions improve glucose control and dramatically decrease food intake because of an acquired disinterest in fatty food. This study shows that bile routing modification is a key mechanistic feature in the beneficial outcomes of GBP.

  18. Safety Analysis for Key Design Features of KALIMER-600 Design Concept

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Kwon, Y. M.; Kim, E. K.; Suk, S. D.; Chang, W. P.; Jeong, H. Y.; Ha, K. S

    2007-02-15

This report contains the safety analyses of the KALIMER-600 conceptual design which KAERI has been developing under the Long-term Nuclear R and D Program. The analyses have been performed reflecting the design developments during the second year of the 4th design phase in the program. The specific presentations are the key design features with the safety principles for achieving the safety objectives, the event categorization and safety criteria, and results of the safety analyses for the DBAs and ATWS events, the containment performance, and the channel blockages. The safety analyses for both the DBAs and ATWS events have been performed using SSC-K version 1.3, and the results have shown the fulfillment of the safety criteria for DBAs with conservative assumptions. The safety margins as well as the inherent safety also have been confirmed for the ATWS events. For the containment performance analysis, ORIGEN-2.1 and CONTAIN-LMR have been used. As a result, the structural integrity has been acceptable and the evaluated exposure dose rate has complied with 10 CFR 100 and PAG limits. The analysis results for flow blockages of 6-subchannels, 24-subchannels, and 54-subchannels with the MATRA-LMR-FB code have assured the integrity of the subassemblies.

  19. Key Features of Academic Detailing: Development of an Expert Consensus Using the Delphi Method.

    Science.gov (United States)

    Yeh, James S; Van Hoof, Thomas J; Fischer, Michael A

    2016-02-01

    Academic detailing is an outreach education technique that combines the direct social marketing traditionally used by pharmaceutical representatives with unbiased content summarizing the best evidence for a given clinical issue. Academic detailing is conducted with clinicians to encourage evidence-based practice in order to improve the quality of care and patient outcomes. The adoption of academic detailing has increased substantially since the original studies in the 1980s. However, the lack of standard agreement on its implementation makes the evaluation of academic detailing outcomes challenging. To identify consensus on the key elements of academic detailing among a group of experts with varying experiences in academic detailing. This study is based on an online survey of 20 experts with experience in academic detailing. We used the Delphi process, an iterative and systematic method of developing consensus within a group. We conducted 3 rounds of online surveys, which addressed 72 individual items derived from a previous literature review of 5 features of academic detailing, including (1) content, (2) communication process, (3) clinicians targeted, (4) change agents delivering intervention, and (5) context for intervention. Nonrespondents were removed from later rounds of the surveys. For most questions, a 4-point ordinal scale was used for responses. We defined consensus agreement as 70% of respondents for a single rating category or 80% for dichotomized ratings. The overall survey response rate was 95% (54 of 57 surveys) and nearly 92% consensus agreement on the survey items (66 of 72 items) by the end of the Delphi exercise. The experts' responses suggested that (1) focused clinician education offering support for clinical decision-making is a key component of academic detailing, (2) detailing messages need to be tailored and provide feasible strategies and solutions to challenging cases, and (3) academic detailers need to develop specific skill sets

  20. Orthognathic model surgery with LEGO key-spacer.

    Science.gov (United States)

    Tsang, Alfred Chee-Ching; Lee, Alfred Siu Hong; Li, Wai Keung

    2013-12-01

    A new technique of model surgery using LEGO plates as key-spacers is described. This technique requires less time to set up compared with the conventional plaster model method. It also retains the preoperative setup with the same set of models. Movement of the segments can be measured and examined in detail with LEGO key-spacers. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  2. E-referral Solutions: Successful Experiences, Key Features and Challenges- a Systematic Review.

    Science.gov (United States)

    Naseriasl, Mansour; Adham, Davoud; Janati, Ali

    2015-06-01

Around the world, health systems constantly face increasing pressures which arise from many factors, such as an ageing population and patients' and providers' demands for equipment and services. In order to respond to these challenges and reduce health systems' transactional costs, referral solutions are considered a key factor. This study was carried out to identify referral solutions that have had success. Relevant studies were identified using the keywords referral, consultation, referral system, referral model, referral project, electronic referral, electronic booking, health system, healthcare, health service and medical care. These searches were conducted using the PubMed, ProQuest, Google Scholar, Scopus, Emerald, Web of Knowledge, Springer, ScienceDirect, Mosby's Index, SID, Medlib and Iran Doc databases. 4306 initial articles were obtained and refined step by step. Finally, 27 articles met the inclusion criteria. We identified seventeen e-referral systems developed in the UK, Norway, Finland, the Netherlands, Denmark, Scotland, New Zealand, Canada, Australia, and the U.S. The implemented solutions had varying degrees of success, such as improved access to specialist care, reduced wait times, timeliness and quality of referral communication, accurate health information transfer and integration of health centers and services. Each referral solution has both positive and changeable aspects that should be addressed according to sociotechnical conditions. These solutions are mainly formed in a small and localized manner.

  3. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationship together with the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during the query process. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of temporal queries on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  4. Practical Implementation of Various Public Key Infrastructure Models

    Directory of Open Access Journals (Sweden)

    Dmitriy Anatolievich Melnikov

    2016-03-01

Full Text Available The paper proposes a short comparative analysis of contemporary models of public key infrastructure (PKI) and the issues of their real-world implementation. The Russian model of PKI is presented. Differences between the North American and West European models of PKI and the Russian model of PKI are described. The problems of creation and the main directions of further development and improvement of the Russian PKI, and its integration into the global trust environment, are defined.

  5. Improving Latino Children's Early Language and Literacy Development: Key Features of Early Childhood Education within Family Literacy Programmes

    Science.gov (United States)

    Jung, Youngok; Zuniga, Stephen; Howes, Carollee; Jeon, Hyun-Joo; Parrish, Deborah; Quick, Heather; Manship, Karen; Hauser, Alison

    2016-01-01

    Noting the lack of research on how early childhood education (ECE) programmes within family literacy programmes influence Latino children's early language and literacy development, this study examined key features of ECE programmes, specifically teacher-child interactions and child engagement in language and literacy activities and how these…

  6. Model of key success factors for Business Intelligence implementation

    Directory of Open Access Journals (Sweden)

    Peter Mesaros

    2016-07-01

Full Text Available New progressive technologies have recorded growth in every area. Information and communication technologies facilitate the exchange of information and the management of everyday activities in enterprises. Specific modules (such as Business Intelligence) facilitate decision-making. Several studies have demonstrated the positive impact of Business Intelligence on decision-making. The first step is to put such a system in place in the enterprise. The implementation process is influenced by many factors. This article discusses the issue of key success factors affecting the successful implementation of Business Intelligence. The article describes the key success factors for the successful implementation and use of Business Intelligence based on multiple studies. The main objective of this study is to verify the effects and dependence of selected factors and to propose a model of key success factors for the successful implementation of Business Intelligence. The key success factors and the proposed model are studied in Slovak enterprises.

  7. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
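As a small illustration of the spatial interpolation step described above, the sketch below grids scattered elevation picks of a geological interface using inverse distance weighting; the interpolation method and the sample points are assumptions for illustration only, since the record does not prescribe a particular interpolator.

```python
import numpy as np

# Hypothetical scattered picks (x, y, z) of a geological interface, e.g. a horizon top.
pts = np.array([[0.0, 0.0, 12.0], [10.0, 0.0, 15.0], [0.0, 10.0, 9.0],
                [10.0, 10.0, 14.0], [5.0, 5.0, 11.0]])

def idw(px, py, points, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of z at (px, py)."""
    d = np.hypot(points[:, 0] - px, points[:, 1] - py)
    if d.min() < eps:                    # query point coincides with a sample
        return float(points[d.argmin(), 2])
    w = 1.0 / d**power
    return float(np.sum(w * points[:, 2]) / np.sum(w))

# Planar mesh (regular grid) onto which the interface surface is interpolated.
xs = np.linspace(0, 10, 6)
ys = np.linspace(0, 10, 6)
surface = np.array([[idw(x, y, pts) for x in xs] for y in ys])
print(np.round(surface, 2))
```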

  8. Spatial age-length key modelling using continuation ratio logits

    DEFF Research Database (Denmark)

    Berg, Casper W.; Kristensen, Kasper

    2012-01-01

The so-called age-length key (ALK) is then used to obtain the age distribution. Regional differences in ALKs are not uncommon, but stratification is often problematic due to a small number of samples. Here, we combine generalized additive modelling with continuation ratio logits to model the probability of age
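A continuation-ratio formulation models, for each age class a, the conditional probability P(age = a | age ≥ a, length) with a separate logit; multiplying the conditional probabilities along the chain yields the unconditional ALK entries. The sketch below shows that mechanic on simulated data; the real method additionally uses generalized additive terms and spatial covariates, which are omitted here, and the data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated fish: length (cm) and age (years); longer fish tend to be older.
rng = np.random.default_rng(1)
length = rng.uniform(10, 60, 2000)
age = np.clip((length / 15 + rng.normal(0, 0.8, 2000)).astype(int), 1, 4)

ages = [1, 2, 3]            # continuation-ratio logits for all but the oldest class
models = {}
for a in ages:
    mask = age >= a         # condition on "age at least a"
    y = (age[mask] == a).astype(int)
    models[a] = LogisticRegression().fit(length[mask].reshape(-1, 1), y)

def alk_row(l):
    """Unconditional age probabilities for a fish of length l (one ALK row)."""
    probs, survive = {}, 1.0
    for a in ages:
        q = models[a].predict_proba([[l]])[0, 1]   # P(age=a | age>=a, length)
        probs[a] = survive * q
        survive *= 1.0 - q
    probs[4] = survive                              # remaining mass to oldest class
    return probs

print({a: round(p, 3) for a, p in alk_row(35.0).items()})
```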

  9. Enhancing Critical Infrastructure and Key Resources (CIKR) Level-0 Physical Process Security Using Field Device Distinct Native Attribute Features

    Energy Technology Data Exchange (ETDEWEB)

Lopez, Juan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Liefer, Nathan C. [Wright-Patterson AFB, Dayton, OH (United States)]; Busho, Colin R. [Wright-Patterson AFB, Dayton, OH (United States)]; Temple, Michael A. [Wright-Patterson AFB, Dayton, OH (United States)]

    2017-12-04

Here, the need for improved Critical Infrastructure and Key Resource (CIKR) security is unquestioned and there has been minimal emphasis on Level-0 (PHY Process) improvements. Wired Signal Distinct Native Attribute (WS-DNA) Fingerprinting is investigated here as a non-intrusive PHY-based security augmentation to support an envisioned layered security strategy. Results are based on experimental response collections from Highway Addressable Remote Transducer (HART) Differential Pressure Transmitter (DPT) devices from three manufacturers (Yokogawa, Honeywell, Endress+Hauer) installed in an automated process control system. Device discrimination is assessed using Time Domain (TD) and Slope-Based FSK (SB-FSK) fingerprints input to Multiple Discriminant Analysis, Maximum Likelihood (MDA/ML) and Random Forest (RndF) classifiers. For 12 different classes (two devices per manufacturer at two distinct set points), both classifiers performed reliably and achieved an arbitrary performance benchmark of average cross-class percent correct of %C > 90%. The least challenging cross-manufacturer results included near-perfect %C ≈ 100%, while the more challenging like-model (serial number) discrimination results included 90% < %C < 100%, with TD Fingerprinting marginally outperforming SB-FSK Fingerprinting; SB-FSK benefits from having less stringent response alignment and registration requirements. The RndF classifier was most beneficial and enabled reliable selection of dimensionally reduced fingerprint subsets that minimize data storage and computational requirements. The RndF selected feature sets contained 15% of the full-dimensional feature sets and only suffered a worst case %CΔ = 3% to 4% performance degradation.
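The record's dual use of a Random Forest as both classifier and feature-subset selector can be sketched generically as below. The fingerprint data, dimensions, and the 15% retention figure are illustrative assumptions only, not the actual WS-DNA features or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in "fingerprint" features for 12 device classes (hypothetical data).
rng = np.random.default_rng(7)
n_per_class, n_features, n_classes = 40, 200, 12
X = rng.normal(size=(n_per_class * n_classes, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)
X += 0.75 * y[:, None] * rng.normal(size=(1, n_features))  # inject class structure

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("full-dimensional %C:", cross_val_score(rf, X, y, cv=5).mean())

# Dimensionality reduction: keep the top 15% of features by importance, then re-evaluate.
rf.fit(X, y)
k = int(0.15 * n_features)
top = np.argsort(rf.feature_importances_)[::-1][:k]
print("reduced-dimensional %C:", cross_val_score(rf, X[:, top], y, cv=5).mean())
```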

  10. Hypertension Is a Key Feature of the Metabolic Syndrome in Subjects Aging with HIV

    DEFF Research Database (Denmark)

    Martin-Iguacel, Raquel; Negredo, Eugènia; Peck, Robert

    2016-01-01

    to predispose to these metabolic complications and to the excess risk of CVD observed in the HIV population. The metabolic syndrome (MS) represents a clustering of RF for CVD that includes abdominal obesity, hypertension, dyslipidemia and insulin resistance. Hypertension is a prevalent feature of the MS in HIV...

  11. DISTANCE AS KEY FACTOR IN MODELLING STUDENTS’ RECRUITMENT BY UNIVERSITIES

    Directory of Open Access Journals (Sweden)

    SIMONA MĂLĂESCU

    2015-10-01

Full Text Available Distance as Key Factor in Modelling Students' Recruitment by Universities. In a previous paper analysing the challenge of keeping up with current methodologies in the analysis and modelling of students' recruitment by universities, in the case of some ECE countries which still do not register or develop the key data needed to take advantage of the state of the art in the domain, we promised to approach the distance factor in a future work due to the extent of the topic. This paper fulfils that promise, bringing a review of the literature dealing especially with modelling the geographical area from which a university recruits its students; combining distance with the proximate key factors previously reviewed completes the meta-analysis of the existing literature that we started a year ago. Beyond the theoretical benefit, from a practical perspective the meta-analysis aimed at synthesizing elements of good practice that can be applied to the local university system.

  12. Numerical rigid plastic modelling of shear capacity of keyed joints

    DEFF Research Database (Denmark)

    Herfelt, Morten Andersen; Poulsen, Peter Noe; Hoang, Linh Cao

    2015-01-01

Keyed shear joints are currently designed using simple and conservative design formulas, yet these formulas do not take the local mechanisms in the concrete core of the joint into account. To investigate this phenomenon, a rigid, perfectly plastic finite element model of keyed joints is used. The model is formulated for second-order conic optimisation as a lower bound problem, which yields a statically admissible stress field that satisfies the yield condition in every point. The dual solution to the problem can be interpreted as the collapse mode and will be used to analyse the properties
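A lower-bound formulation of the kind described above can be posed as a small second-order cone program: maximise the load multiplier subject to linear equilibrium and conic yield constraints. The sketch below uses cvxpy with placeholder matrices and a Mohr-Coulomb-like cone; it illustrates only the structure of the optimisation problem, not the actual keyed-joint discretisation from the record.

```python
import numpy as np
import cvxpy as cp

# Placeholder discretisation: 2 "elements", each with one normal and two shear
# stress components, and 4 equilibrium equations (all values illustrative).
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 6))        # equilibrium matrix (placeholder)
f = rng.normal(size=4)             # reference load vector (placeholder)
cohesion, friction = 1.0, 0.75
crush_limit = 2.0                  # cap on compressive normal stress

sigma = cp.Variable(6)             # statically admissible stress field
lam = cp.Variable()                # load multiplier (capacity measure)

constraints = [B @ sigma == lam * f]
for e in range(2):                 # conic yield condition per element
    normal = sigma[3 * e]
    shear = sigma[3 * e + 1: 3 * e + 3]
    constraints += [cp.norm(shear) <= cohesion - friction * normal,
                    normal >= -crush_limit]

problem = cp.Problem(cp.Maximize(lam), constraints)
problem.solve()
print("lower-bound load multiplier:", lam.value)
```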

  13. Key features of MIR.1200 (AES-2006) design and current stage of Leningrad NPP-2 construction

    International Nuclear Information System (INIS)

    Ivkov, Igor

    2010-01-01

MIR.1200/AES-2006 is an abbreviated name of the evolving NPP design developed on the basis of the VVER-1000 Russian design with gross operation life of 480 reactor-years. This design is being implemented in four Units of Leningrad NPP-2 (LNPP-2). The AES-91/99 was used as reference during development of the AES-2006 design for LNPP-2; this design was implemented in two Units of Tianwan NPP (China). The main technical features of the MIR.1200/AES-2006 design include a double containment, four trains of active safety systems (4x100%, 4x50%), and special engineering measures for BDBA management (core catcher, H2 PARs, PHRS) based mainly on passive principles. The containment is described in detail, the main features in comparison with the reference NPP are outlined, the design layout principles are highlighted, the safety system structure and parameters are described. Attention is paid to the BDBA management system, hydrogen removal system, core catcher, and PHRS-SG and C-PHRS. (P.A.)

  14. Key structural features of nonsteroidal ligands for binding and activation of the androgen receptor.

    Science.gov (United States)

    Yin, Donghua; He, Yali; Perera, Minoli A; Hong, Seoung Soo; Marhefka, Craig; Stourman, Nina; Kirkovsky, Leonid; Miller, Duane D; Dalton, James T

    2003-01-01

    The purposes of the present studies were to examine the androgen receptor (AR) binding ability and in vitro functional activity of multiple series of nonsteroidal compounds derived from known antiandrogen pharmacophores and to investigate the structure-activity relationships (SARs) of these nonsteroidal compounds. The AR binding properties of sixty-five nonsteroidal compounds were assessed by a radioligand competitive binding assay with the use of cytosolic AR prepared from rat prostates. The AR agonist and antagonist activities of high-affinity ligands were determined by the ability of the ligand to regulate AR-mediated transcriptional activation in cultured CV-1 cells, using a cotransfection assay. Nonsteroidal compounds with diverse structural features demonstrated a wide range of binding affinity for the AR. Ten compounds, mainly from the bicalutamide-related series, showed a binding affinity superior to the structural pharmacophore from which they were derived. Several SARs regarding nonsteroidal AR binding were revealed from the binding data, including stereoisomeric conformation, steric effect, and electronic effect. The functional activity of high-affinity ligands ranged from antagonist to full agonist for the AR. Several structural features were found to be determinative of agonist and antagonist activities. The nonsteroidal AR agonists identified from the present studies provided a pool of candidates for further development of selective androgen receptor modulators (SARMs) for androgen therapy. Also, these studies uncovered or confirmed numerous important SARs governing AR binding and functional properties by nonsteroidal molecules, which would be valuable in the future structural optimization of SARMs.

  15. Neuroticism in Young Women with Fibromyalgia Links to Key Clinical Features

    Directory of Open Access Journals (Sweden)

    Katrina Malin

    2012-01-01

Full Text Available Objective. We examined personality traits in young women with FM, in order to seek associations with key psychological processes and clinical symptoms. Methods. Twenty-seven women with FM and 29 age-matched female healthy controls (HC) completed a series of questionnaires examining FM symptoms, personality and psychological variables. Results. Significant differences between FM and HC were found for the characteristic FM symptoms (sleep, pain, fatigue, and confusion) as well as for the psychological variables of depression, anxiety, and stress (P<0.001). Neuroticism was the only subscale of the Big Five Inventory that showed a significant difference between the FM group and the HC group (P<0.05). Within the FM group, there was a significant association between the level of neuroticism and each of pain, sleep, fatigue, confusion, depression, anxiety, and stress (P<0.05–0.01). The association between the level of neuroticism and the level of stress was the strongest of all variables tested (P<0.001). Conclusion. The personality trait of neuroticism significantly associates with the key FM characteristics of pain, sleep, fatigue and confusion as well as the common co-morbidities of depression, anxiety and stress. Personality appears to be an important modulator of FM clinical symptoms.

  16. Key processes and input parameters for environmental tritium models

    International Nuclear Information System (INIS)

    Bunnenberg, C.; Taschner, M.; Ogram, G.L.

    1994-01-01

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs

  17. Key processes and input parameters for environmental tritium models

    Energy Technology Data Exchange (ETDEWEB)

Bunnenberg, C; Taschner, M [Niedersaechsisches Inst. fuer Radiooekologie, Hannover (Germany)]; Ogram, G L [Ontario Hydro, Toronto, ON (Canada)]

    1994-12-31

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs.

  18. Key design features of a new smokefree law to help achieve the Smokefree Aotearoa.

    Science.gov (United States)

    Delany, Louise; Thomson, George; Wilson, Nick; Edwards, Richard

    2016-08-05

    To design new tobacco control legislation to achieve the New Zealand Government's 2025 smokefree goal. An original analysis of the legislative options for New Zealand tobacco control. 'Business as usual' is most unlikely to achieve smoking prevalence that is less than 5% by 2025. Key components of a new Act would ideally include plans and targets with teeth, a focus on the industry, a focus on the product, reduction of supply, and a whole-of-society approach to promote consistency in policy implementation through: i) a public duty on government agencies to act consistently with smokefree law; ii) a general duty on those associated with the tobacco/nicotine industry in relation to tobacco control objectives; and iii) a principle requiring international treaties to be interpreted consistently with tobacco control objectives. Strategies such as those identified in this Viewpoint should be explored further as part of urgently needed planning to achieve the New Zealand Government's goal for Smokefree Aotearoa by 2025.

  19. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

By building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  20. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one person from another using only subsets of features will both decrease the computational cost and increase the generalization capacity of the face recognition algorithm. Moreover, identifying which are the features that better discriminate between persons will also provide a deeper understanding of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature
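The elastic net's role as a sparse feature selector can be sketched as follows. The sketch assumes a generic classification setup with more features than samples and uses scikit-learn's elastic-net-penalised logistic regression as a stand-in for (not a reproduction of) the regression formulation used in the record; the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical face-data stand-in: more features than samples, few informative ones.
X, y = make_classification(n_samples=80, n_features=500, n_informative=20,
                           n_classes=2, random_state=0)

# Elastic net = mixed L1/L2 penalty; the L1 part drives many coefficients to zero,
# so the retained (nonzero) coefficients act as the selected feature subset.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000)
clf.fit(X, y)

selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)
print(f"{selected.size} of {X.shape[1]} features retained")
```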

  1. Charge Segregation and Low Hydrophobicity Are Key Features of Ribosomal Proteins from Different Organisms*

    Science.gov (United States)

    Fedyukina, Daria V.; Jennaro, Theodore S.; Cavagnero, Silvia

    2014-01-01

    Ribosomes are large and highly charged macromolecular complexes consisting of RNA and proteins. Here, we address the electrostatic and nonpolar properties of ribosomal proteins that are important for ribosome assembly and interaction with other cellular components and may influence protein folding on the ribosome. We examined 50 S ribosomal subunits from 10 species and found a clear distinction between the net charge of ribosomal proteins from halophilic and non-halophilic organisms. We found that ∼67% ribosomal proteins from halophiles are negatively charged, whereas only up to ∼15% of ribosomal proteins from non-halophiles share this property. Conversely, hydrophobicity tends to be lower for ribosomal proteins from halophiles than for the corresponding proteins from non-halophiles. Importantly, the surface electrostatic potential of ribosomal proteins from all organisms, especially halophiles, has distinct positive and negative regions across all the examined species. Positively and negatively charged residues of ribosomal proteins tend to be clustered in buried and solvent-exposed regions, respectively. Hence, the majority of ribosomal proteins is characterized by a significant degree of intramolecular charge segregation, regardless of the organism of origin. This key property enables the ribosome to accommodate proteins within its complex scaffold regardless of their overall net charge. PMID:24398678
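The two sequence properties discussed in the record, net charge and hydrophobicity, can be estimated directly from a protein sequence. The sketch below uses a simple residue-count net charge (Lys/Arg positive, Asp/Glu negative) and the Kyte-Doolittle GRAVY score; the example sequence is hypothetical, not one of the ribosomal proteins analysed in the study.

```python
# Kyte-Doolittle hydropathy values per residue.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def net_charge(seq):
    """Crude net charge at neutral pH: (Lys + Arg) - (Asp + Glu)."""
    return sum(seq.count(r) for r in "KR") - sum(seq.count(r) for r in "DE")

def gravy(seq):
    """Grand average of hydropathy (mean Kyte-Doolittle value)."""
    return sum(KD[r] for r in seq) / len(seq)

seq = "MKRLSKDEEAKKVRAAGGKIDDLE"   # hypothetical ribosomal-protein-like fragment
print("net charge:", net_charge(seq))
print("GRAVY:", round(gravy(seq), 3))
```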

  2. KEY FEATURES OF THE INTRAGRAFT MICROENVIRONMENT THAT DETERMINE LONG-TERM SURVIVAL FOLLOWING TRANSPLANTATION

    Directory of Open Access Journals (Sweden)

    Sarah eBruneau

    2012-04-01

    Full Text Available In this review, we discuss how changes in the intragraft microenvironment serve to promote or sustain the development of chronic allograft rejection. We propose two key elements within the microenvironment that contribute to the rejection process. The first is endothelial cell proliferation and angiogenesis that serve to create abnormal microvascular blood flow patterns as well as local tissue hypoxia, and precedes endothelial-to-mesenchymal transition (EndMT. The second is the overexpression of local cytokines and growth factors that serve to sustain inflammation and, in turn, function to promote a leukocyte-induced angiogenesis reaction. Central to both events is overexpression of vascular endothelial growth factor (VEGF, which is both pro-inflammatory and pro-angiogenic, and thus drives progression of the chronic rejection microenvironment. In our discussion, we focus on how inflammation results in angiogenesis and how leukocyte-induced angiogenesis is pathological. We also discuss how VEGF is a master control factor that fosters the development of the chronic rejection microenvironment. Overall, this review provides insight into the intragraft microenvironment as an important paradigm for future direction in the field.

  3. Key Features of the Intragraft Microenvironment that Determine Long-Term Survival Following Transplantation

    Science.gov (United States)

    Bruneau, Sarah; Woda, Craig Bryan; Daly, Kevin Patrick; Boneschansker, Leonard; Jain, Namrata Gargee; Kochupurakkal, Nora; Contreras, Alan Gabriel; Seto, Tatsuichiro; Briscoe, David Michael

    2012-01-01

    In this review, we discuss how changes in the intragraft microenvironment serve to promote or sustain the development of chronic allograft rejection. We propose two key elements within the microenvironment that contribute to the rejection process. The first is endothelial cell proliferation and angiogenesis that serve to create abnormal microvascular blood flow patterns as well as local tissue hypoxia, and precedes endothelial-to-mesenchymal transition. The second is the overexpression of local cytokines and growth factors that serve to sustain inflammation and, in turn, function to promote a leukocyte-induced angiogenesis reaction. Central to both events is overexpression of vascular endothelial growth factor (VEGF), which is both pro-inflammatory and pro-angiogenic, and thus drives progression of the chronic rejection microenvironment. In our discussion, we focus on how inflammation results in angiogenesis and how leukocyte-induced angiogenesis is pathological. We also discuss how VEGF is a master control factor that fosters the development of the chronic rejection microenvironment. Overall, this review provides insight into the intragraft microenvironment as an important paradigm for future direction in the field. PMID:22566935

  4. Predicting establishment of non-native fishes in Greece: identifying key features

    Directory of Open Access Journals (Sweden)

    Christos Gkenas

    2015-11-01

Full Text Available Non-native fishes are known to cause economic damage to human society and are considered a major driver of biodiversity loss in freshwater ecosystems. The growing concern about these impacts has led to investigation of the biological traits that facilitate the establishment of non-native fish. However, an inappropriate choice of statistical model can lead researchers to ambiguous conclusions. Here, we present a comprehensive comparison of traditional and alternative statistical methods for predicting fish invasions, using logistic regression, classification trees, multiple correspondence analysis and random forest analysis to determine characteristics of successful and failed non-native fishes in the Hellenic Peninsula with respect to establishment. We defined fifteen categorical predictor variables with biological relevance and measures of human interest. Our study showed that accuracy differed according to the model and the number of factors considered. Among all the models tested, random forest and logistic regression performed best, although all approaches predicted non-native fish establishment with moderate to excellent results. Detailed evaluation among the models corresponded with differences in variable importance, with three biological variables (parental care, distance from nearest native source and maximum size) and two variables of human interest (prior invasion success and propagule pressure) being important in predicting establishment. The statistical methods presented have a high predictive power and can be used as a risk assessment tool to prevent future freshwater fish invasions in this region with an imperiled fish fauna.
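A comparison of the two best-performing methods named above can be reproduced in outline as below; the predictor names, their levels, and the data are hypothetical placeholders for the fifteen categorical variables actually used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical categorical predictors for non-native fish species.
rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "parental_care": rng.choice(["none", "guarder", "bearer"], n),
    "distance_to_source": rng.choice(["near", "intermediate", "far"], n),
    "max_size": rng.choice(["small", "medium", "large"], n),
    "prior_invasion_success": rng.choice(["yes", "no"], n),
    "propagule_pressure": rng.choice(["low", "high"], n),
})
established = (rng.random(n) < 0.4).astype(int)   # simulated establishment outcome

encode = ColumnTransformer([("onehot", OneHotEncoder(handle_unknown="ignore"),
                             list(df.columns))])
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=300))]:
    pipe = make_pipeline(encode, model)
    auc = cross_val_score(pipe, df, established, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```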

  5. A positive deviance approach to understanding key features to improving diabetes care in the medical home

    NARCIS (Netherlands)

    Gabbay, R.A.; Friedberg, M.W.; Miller-Day, M.; Cronholm, P.F.; Adelman, A.; Schneider, E.C.

    2013-01-01

    PURPOSE The medical home has gained national attention as a model to reorganize primary care to improve health outcomes. Pennsylvania has undertaken one of the largest state-based, multipayer medical home pilot projects. We used a positive deviance approach to identify and compare factors driving

  6. On the Use of Memory Models in Audio Features

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2011-01-01

Audio feature estimation is potentially improved by including higher-level models. One such model is the Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual spectral flux...

  7. Origin and spread of photosynthesis based upon conserved sequence features in key bacteriochlorophyll biosynthesis proteins.

    Science.gov (United States)

    Gupta, Radhey S

    2012-11-01

    The origin of photosynthesis and how this capability has spread to other bacterial phyla remain important unresolved questions. I describe here a number of conserved signature indels (CSIs) in key proteins involved in bacteriochlorophyll (Bchl) biosynthesis that provide important insights in these regards. The proteins BchL and BchX, which are essential for Bchl biosynthesis, are derived by gene duplication in a common ancestor of all phototrophs. More ancient gene duplication gave rise to the BchX-BchL proteins and the NifH protein of the nitrogenase complex. The sequence alignment of NifH-BchX-BchL proteins contain two CSIs that are uniquely shared by all NifH and BchX homologs, but not by any BchL homologs. These CSIs and phylogenetic analysis of NifH-BchX-BchL protein sequences strongly suggest that the BchX homologs are ancestral to BchL and that the Bchl-based anoxygenic photosynthesis originated prior to the chlorophyll (Chl)-based photosynthesis in cyanobacteria. Another CSI in the BchX-BchL sequence alignment that is uniquely shared by all BchX homologs and the BchL sequences from Heliobacteriaceae, but absent in all other BchL homologs, suggests that the BchL homologs from Heliobacteriaceae are primitive in comparison to all other photosynthetic lineages. Several other identified CSIs in the BchN homologs are commonly shared by all proteobacterial homologs and a clade consisting of the marine unicellular Cyanobacteria (Clade C). These CSIs in conjunction with the results of phylogenetic analyses and pair-wise sequence similarity on the BchL, BchN, and BchB proteins, where the homologs from Clade C Cyanobacteria and Proteobacteria exhibited close relationship, provide strong evidence that these two groups have incurred lateral gene transfers. Additionally, phylogenetic analyses and several CSIs in the BchL-N-B proteins that are uniquely shared by all Chlorobi and Chloroflexi homologs provide evidence that the genes for these proteins have also been

  8. Key Questions in Building Defect Prediction Models in Practice

    Science.gov (United States)

    Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas

    The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in the context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.

  9. Inverse Bayesian inference as a key of consciousness featuring a macroscopic quantum logical structure.

    Science.gov (United States)

    Gunji, Yukio-Pegio; Shinohara, Shuji; Haruna, Taichi; Basios, Vasileios

    2017-02-01

    To overcome the dualism between mind and matter and to implement consciousness in science, a physical entity has to be embedded with a measurement process. Although quantum mechanics has been regarded as a candidate for implementing consciousness, nature at its macroscopic level is inconsistent with quantum mechanics. We propose a measurement-oriented inference system comprising Bayesian and inverse Bayesian inferences. While Bayesian inference contracts probability space, the newly defined inverse one relaxes the space. These two inferences allow an agent to make a decision corresponding to an immediate change in their environment. They generate a particular pattern of joint probability for data and hypotheses, comprising multiple diagonal and noisy matrices. This is expressed as a nondistributive orthomodular lattice equivalent to quantum logic. We also show that an orthomodular lattice can reveal information generated by inverse syllogism as well as the solutions to the frame and symbol-grounding problems. Our model is the first to connect macroscopic cognitive processes with the mathematical structure of quantum mechanics with no additional assumptions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
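
    The contraction/relaxation idea can be illustrated with a toy numerical sketch. The code below is not the authors' formalism; the "inverse" step is approximated here, under our own assumption, as mixing the posterior back toward a uniform distribution.

```python
# Toy sketch of Bayesian contraction followed by a relaxation step.
# This is an illustration only, not the paper's definition of inverse Bayesian inference.
import numpy as np

# P(d | h): rows = hypotheses h0, h1; columns = data values d0, d1
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])
p_h = np.array([0.5, 0.5])                    # prior over hypotheses

def bayes_update(p_h, d):
    """Bayesian inference: contracts the distribution over hypotheses."""
    post = p_h * likelihood[:, d]
    return post / post.sum()

def inverse_relax(p_h, rate=0.5):
    """Stand-in for inverse Bayesian inference: relaxes the distribution
    by mixing it back toward uniform (an assumption of this sketch)."""
    uniform = np.ones_like(p_h) / len(p_h)
    return (1 - rate) * p_h + rate * uniform

for d in [0, 0, 1]:                           # a short stream of observations
    p_h = bayes_update(p_h, d)                # contraction
    p_h = inverse_relax(p_h)                  # relaxation
    print(np.round(p_h, 3))
```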

  10. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  11. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  12. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks to build deep networks. Sparse feature learning models are popular models that can learn useful representations. But most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, reconstruction error and the sparsity of hidden units, simultaneously to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.
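
    A minimal sketch of the two competing objectives, assuming a one-layer autoencoder with tied weights; the multiobjective evolutionary search over parameters described in the abstract is omitted, and the data are random placeholders.

```python
# Minimal sketch of the two objectives traded off by the multiobjective model:
# reconstruction error and sparsity of the hidden activations, evaluated for one
# candidate set of autoencoder weights. The search itself is omitted.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 20))                      # toy data: 100 samples, 20 features
W = rng.normal(scale=0.1, size=(20, 10))       # encoder weights (one candidate solution)
b, c = np.zeros(10), np.zeros(20)              # encoder / decoder biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = sigmoid(X @ W + b)                         # hidden representation
X_hat = sigmoid(H @ W.T + c)                   # tied-weight reconstruction

reconstruction_error = np.mean((X - X_hat) ** 2)
sparsity = np.mean(np.abs(H))                  # smaller = sparser hidden units

print(reconstruction_error, sparsity)          # the two objectives to compromise between
```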

  13. Feature Analysis for Modeling Game Content Quality

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    ’ preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified...

  14. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow the identification of additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable to determine characteristic features for arbitrary sets of models: The selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map on concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.
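
    One simple annotation-counting strategy along these lines can be sketched as follows; this is an illustration under our own assumptions, not a reproduction of the paper's four methods, and the model identifiers and ontology terms are placeholders.

```python
# Hedged sketch of one annotation-counting strategy: characterize a set of SBML
# models by the ontology terms that occur in many models of the set.
# Model identifiers and terms below are hypothetical placeholders.
from collections import Counter

model_annotations = {
    "BIOMD0000000001": {"GO:0006096", "CHEBI:17234", "SBO:0000176"},
    "BIOMD0000000002": {"GO:0006096", "CHEBI:15378", "SBO:0000176"},
    "BIOMD0000000003": {"GO:0006096", "CHEBI:17234", "SBO:0000013"},
}

term_counts = Counter(t for terms in model_annotations.values() for t in terms)
n_models = len(model_annotations)

# keep terms shared by at least half of the models as characteristic features
characteristic = [t for t, n in term_counts.items() if n / n_models >= 0.5]
print(characteristic)
```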

  15. Key features of human episodic recollection in the cross-episode retrieval of rat hippocampus representations of space.

    Directory of Open Access Journals (Sweden)

    Eduard Kelemen

    2013-07-01

    Full Text Available Neurophysiological studies focus on memory retrieval as a reproduction of what was experienced and have established that neural discharge is replayed to express memory. However, cognitive psychology has established that recollection is not a verbatim replay of stored information. Recollection is constructive, the product of memory retrieval cues, the information stored in memory, and the subject's state of mind. We discovered key features of constructive recollection embedded in the rat CA1 ensemble discharge during an active avoidance task. Rats learned two task variants, one with the arena stable, the other with it rotating; each variant defined a distinct behavioral episode. During the rotating episode, the ensemble discharge of CA1 principal neurons was dynamically organized to concurrently represent space in two distinct codes. The code for spatial reference frame switched rapidly between representing the rat's current location in either the stationary spatial frame of the room or the rotating frame of the arena. The code for task variant switched less frequently between a representation of the current rotating episode and the stable episode from the rat's past. The characteristics and interplay of these two hippocampal codes revealed three key properties of constructive recollection. (1) Although the ensemble representations of the stable and rotating episodes were distinct, ensemble discharge during rotation occasionally resembled the stable condition, demonstrating cross-episode retrieval of the representation of the remote, stable episode. (2) This cross-episode retrieval at the level of the code for task variant was more likely when the rotating arena was about to match its orientation in the stable episode. (3) The likelihood of cross-episode retrieval was influenced by preretrieval information that was signaled at the level of the code for spatial reference frame. Thus key features of episodic recollection manifest in rat hippocampal

  16. Music Genre Classification using the multivariate AR feature integration model

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Meng, Anders

    2005-01-01

    informative decisions about musical genre. For the MIREX music genre contest several authors derive long time features based either on statistical moments and/or temporal structure in the short time features. In our contribution we model a segment (1.2 s) of short time features (texture) using a multivariate...... autoregressive model. Other authors have applied simpler statistical models such as the mean-variance model, which also has been included in several of this years MIREX submissions, see e.g. Tzanetakis (2005); Burred (2005); Bergstra et al. (2005); Lidy and Rauber (2005)....
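
    The multivariate autoregressive feature-integration idea can be sketched as an ordinary least-squares VAR fit over a segment of short-time feature vectors, with the stacked coefficients used as the segment-level feature; the data and dimensions below are placeholders, not the MIREX features.

```python
# Sketch of multivariate AR (VAR) feature integration: fit a VAR(p) model to a
# segment of short-time feature vectors by least squares and use the stacked
# coefficients as the segment-level feature. Data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
segment = rng.random((30, 6))   # 30 frames of 6-dimensional short-time features
p = 3                           # AR order

# build lagged design matrix: predict frame t from frames t-1 .. t-p
Y = segment[p:]
X = np.hstack([segment[p - k:-k] for k in range(1, p + 1)])
X = np.hstack([np.ones((X.shape[0], 1)), X])          # intercept

coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)         # (1 + p*6) x 6 coefficient matrix
ar_feature = coeffs.ravel()                            # segment-level feature vector
print(ar_feature.shape)
```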

  17. Doubly sparse factor models for unifying feature transformation and feature selection

    International Nuclear Information System (INIS)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko

    2010-01-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  18. Doubly sparse factor models for unifying feature transformation and feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.j [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)

    2010-06-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  19. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established under cloud generators. With the forward cloud generator, facial expression images can be re-generated as many times as we like to visually represent the three extracted features, and each feature plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.

  20. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  1. Physical model for the 2175 A interstellar extinction feature

    International Nuclear Information System (INIS)

    Hecht, J.H.

    1986-01-01

    Recent IUE observations have shown that the 2175 A interstellar extinction feature is constant in wavelength but varies in width. A model has been constructed to explain these results. It is proposed that the 2175 A feature will only be seen when there is extinction due to carbon grains which have lost their hydrogen. In particular, the feature is caused by a separate population of small (less than 50 A radius), hydrogen-free carbon grains. The variations in width would be due to differences in either their temperature, size distribution, or impurity content. All other carbon grains retain hydrogen, which causes the feature to be suppressed. If this model is correct, then it implies that the grains responsible for the unidentified IR emission features would not generally cause the 2175 A feature. 53 references

  2. Feature inference with uncertain categorization: Re-assessing Anderson's rational model.

    Science.gov (United States)

    Konovalova, Elizaveta; Le Mens, Gaël

    2017-09-18

    A key function of categories is to help predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
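
    The key quantity in Anderson's model can be illustrated with a short worked example: the predicted probability of an unobserved feature marginalizes over candidate categories rather than conditioning only on the most likely one. The numbers are hypothetical.

```python
# Worked sketch of the prediction rule at stake: Anderson's model sums over candidate
# categories, while the competing single-category rule conditions only on the most
# likely category. Probabilities are hypothetical.
p_category_given_object = {"A": 0.6, "B": 0.4}      # uncertain categorization
p_feature_given_category = {"A": 0.9, "B": 0.2}     # P(feature | category)

# Anderson's rule: marginalize over categories
p_feature = sum(p_category_given_object[k] * p_feature_given_category[k]
                for k in p_category_given_object)

# single-category rule: condition only on the most likely category
best = max(p_category_given_object, key=p_category_given_object.get)
p_feature_single = p_feature_given_category[best]

print(p_feature, p_feature_single)   # 0.62 vs 0.9
```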

  3. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  4. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
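
    A toy brute-force sketch of the subset-selection idea follows; it is not the authors' tool or cost function, but it shows the trade-off between minimizing the bias of the subset mean and retaining ensemble spread.

```python
# Toy brute-force sketch (not the authors' tool or exact cost function): among all
# subsets of fixed size, pick the one whose mean is closest to the observations
# while retaining ensemble spread. Fields and observations are random placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_models, n_grid = 10, 50
ensemble = rng.normal(0.0, 1.0, size=(n_models, n_grid))   # toy model fields
obs = rng.normal(0.0, 0.3, size=n_grid)                    # toy observations
subset_size = 4

def cost(members):
    sub = ensemble[list(members)]
    bias = np.mean((sub.mean(axis=0) - obs) ** 2)           # mean performance term
    spread_penalty = max(0.0, 1.0 - sub.std(axis=0).mean()) # penalize collapsed spread
    return bias + spread_penalty

best = min(itertools.combinations(range(n_models), subset_size), key=cost)
print("selected members:", best)
```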

  5. A keyword spotting model using perceptually significant energy features

    Science.gov (United States)

    Umakanthan, Padmalochini

    The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance and human-computer interfacing. The general procedure of a keyword spotting system involves feature generation and matching. In this work, a new set of features based on the psycho-acoustic masking nature of human speech is proposed. After developing these features, a time-aligned pattern matching process was implemented to locate the words in a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done using widely acclaimed Cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.

  6. Discrete-Feature Model Implementation of SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Geier, Joel

    2010-03-01

    A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as

  7. Discrete-Feature Model Implementation of SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2010-03-15

    A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as

  8. Key performance indicators in hospital based on balanced scorecard model

    Directory of Open Access Journals (Sweden)

    Hamed Rahimi

    2017-01-01

    Full Text Available Introduction: Performance measurement is receiving increasing attention all over the world. Nowadays, in many organizations, irrespective of their type or size, performance evaluation is the main concern and a key issue for top administrators. The purpose of this study is to organize suitable key performance indicators (KPIs) for hospitals' performance evaluation based on the balanced scorecard (BSC). Method: This is a mixed method study. In order to identify the hospital performance indicators (HPIs), first the related literature was reviewed and then an experts' panel and the Delphi method were used. In this study, two rounds were needed for the desired level of consensus. The experts rated the importance of the indicators on a five-point Likert scale. In the consensus calculation, the consensus percentage was calculated by classifying the values 1-3 as not important (0) and 4-5 as important (1). The simple additive weighting technique was used to rank the indicators and select the hospital KPIs. The data were analyzed with Excel 2010 software. Results: About 218 indicators were obtained from a review of the selected literature. Through the internal expert panel, 77 indicators were selected. Finally, 22 were selected as hospital KPIs. Ten indicators were selected in the internal process perspective and 5, 4, and 3 indicators in the finance, learning and growth, and customer perspectives, respectively. Conclusion: This model can be a useful tool for evaluating and comparing the performance of hospitals. However, the model is flexible and can be adjusted according to differences in the target hospitals. This study can be beneficial for hospital administrators, and it can help them to change their perspective on performance evaluation.
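
    The simple additive weighting step mentioned in the Method section can be sketched as follows; the indicators, criteria, ratings and weights are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of simple additive weighting (SAW) for ranking candidate indicators:
# normalize expert ratings per criterion, weight, and sum. All values are hypothetical.
import numpy as np

indicators = ["bed occupancy rate", "average length of stay",
              "patient satisfaction", "staff training hours"]
# rows: indicators, columns: criteria (e.g. importance, measurability, relevance)
ratings = np.array([[4.6, 4.1, 4.4],
                    [4.2, 4.5, 4.0],
                    [4.8, 3.6, 4.7],
                    [3.9, 4.3, 3.8]])
weights = np.array([0.5, 0.2, 0.3])          # criterion weights, sum to 1

normalized = ratings / ratings.max(axis=0)   # benefit criteria: divide by column max
scores = normalized @ weights

for name, s in sorted(zip(indicators, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```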

  9. Modeling key processes causing climate change and variability

    Energy Technology Data Exchange (ETDEWEB)

    Henriksson, S.

    2013-09-01

    Greenhouse gas warming, internal climate variability and aerosol climate effects are studied, and the importance of understanding these key processes and being able to separate their influence on the climate is discussed. The aerosol-climate model ECHAM5-HAM and the COSMOS millennium model, consisting of atmospheric, ocean, carbon cycle and land-use models, are applied and the results compared to measurements. Topics in focus are climate sensitivity, quasiperiodic variability with a period of 50-80 years and variability at other timescales, climate effects due to aerosols over India and climate effects of northern hemisphere mid- and high-latitude volcanic eruptions. The main findings of this work are (1) pointing out the remaining challenges in reducing climate sensitivity uncertainty from observational evidence, (2) estimates for the amplitude of a 50-80 year quasiperiodic oscillation in global mean temperature ranging from 0.03 K to 0.17 K and for its phase progression as well as the synchronising effect of external forcing, (3) identifying a power law shape S(f) ∝ f^(-α) for the spectrum of global mean temperature with α ≈ 0.8 between multidecadal and El Nino timescales, with a smaller exponent in modelled climate without external forcing, (4) separating aerosol properties and climate effects in India by season and location, (5) the more efficient dispersion of secondary sulfate aerosols than primary carbonaceous aerosols in the simulations, (6) an increase in monsoon rainfall in northern India due to aerosol light absorption and a probably larger decrease due to aerosol dimming effects and (7) an estimate of mean maximum cooling of 0.19 K due to larger northern hemisphere mid- and high-latitude volcanic eruptions. The results could be applied or useful in better isolating the human-caused climate change signal, in studying the processes further and in more detail, in decadal climate prediction, in model evaluation and in emission policy

  10. Modeling crash injury severity by road feature to improve safety.

    Science.gov (United States)

    Penmetsa, Praveena; Pulugurtha, Srinivas S

    2018-01-02

    The objective of this research is 2-fold: to (a) model and identify critical road features (or locations) based on crash injury severity and compare it with crash frequency and (b) model and identify drivers who are more likely to contribute to crashes by road feature. Crash data from 2011 to 2013 were obtained from the Highway Safety Information System (HSIS) for the state of North Carolina. Twenty-three different road features were considered, analyzed, and compared with each other as well as no road feature. A multinomial logit (MNL) model was developed and odds ratios were estimated to investigate the effect of road features on crash injury severity. Among the many road features, underpass, end or beginning of a divided highway, and on-ramp terminal on crossroad are the top 3 critical road features. Intersection crashes are frequent but are not highly likely to result in severe injuries compared to critical road features. Roundabouts are least likely to result in both severe and moderate injuries. Female drivers are more likely to be involved in crashes at intersections (4-way and T) compared to male drivers. Adult drivers are more likely to be involved in crashes at underpasses. Older drivers are 1.6 times more likely to be involved in a crash at the end or beginning of a divided highway. The findings from this research help to identify critical road features that need to be given priority. As an example, additional advanced warning signs and providing enlarged or highly retroreflective signs that grab the attention of older drivers may help in making locations such as end or beginning of a divided highway much safer. Educating drivers about the necessary skill sets required at critical road features in addition to engineering solutions may further help them adopt safe driving behaviors on the road.
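
    A sketch of how such a multinomial logit model with odds ratios could be fitted is given below; the column names and file are hypothetical placeholders rather than the HSIS schema, and statsmodels' MNLogit is used as one possible implementation.

```python
# Sketch of a multinomial logit (MNL) model of injury severity on road-feature and
# driver dummies, with exponentiated coefficients reported as odds ratios relative
# to the base severity category. Data columns are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("crashes.csv")                            # hypothetical extract
X = pd.get_dummies(df[["road_feature", "driver_age_group", "driver_sex"]],
                   drop_first=True).astype(float)
X = sm.add_constant(X)
y = df["injury_severity"].astype("category")               # e.g. none / moderate / severe

model = sm.MNLogit(y.cat.codes, X).fit(disp=False)
odds_ratios = np.exp(model.params)                         # relative to the base category
print(odds_ratios.round(2))                                # >1: raises the odds of that outcome
```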

  11. Key aspects of stratospheric tracer modeling using assimilated winds

    Directory of Open Access Journals (Sweden)

    B. Bregman

    2006-01-01

    Full Text Available This study describes key aspects of global chemistry-transport models and their impact on stratospheric tracer transport. We concentrate on global models that use assimilated winds from numerical weather predictions, but the results also apply to tracer transport in general circulation models. We examined grid resolution, numerical diffusion, air parcel dispersion, the wind or mass flux update frequency, and time interpolation. The evaluation is performed with assimilated meteorology from the "operational analyses or operational data" (OD) from the European Centre for Medium-Range Weather Forecasts (ECMWF). We also show the effect of the mass flux update frequency using the ECMWF 40-year re-analyses (ERA40). We applied the three-dimensional chemistry-transport Tracer Model version 5 (TM5) and a trajectory model and performed several diagnoses focusing on different transport regimes. Covering different time and spatial scales, we examined (1) polar vortex dynamics during the Arctic winter, (2) the large-scale stratospheric meridional circulation, and (3) air parcel dispersion in the tropical lower stratosphere. Tracer distributions inside the Arctic polar vortex show considerably worse agreement with observations when the model grid resolution in the polar region is reduced to avoid numerical instability. The results are sensitive to the diffusivity of the advection. Nevertheless, the use of a computationally cheaper but diffusive advection scheme is feasible for tracer transport when the horizontal grid resolution is equal to or smaller than 1 degree. The use of time-interpolated winds improves the tracer distributions, particularly in the middle and upper stratosphere. Considerable improvement is found both in the large-scale tracer distribution and in the polar regions when the update frequency of the assimilated winds is increased from 6 to 3 h. It considerably reduces the vertical dispersion of air parcels in the tropical lower stratosphere. Strong

  12. Towards the maturity model for feature oriented domain analysis

    Directory of Open Access Journals (Sweden)

    Muhammad Javed

    2014-09-01

    Full Text Available Assessing the quality of a model has always been a challenge for researchers in academia and industry. The quality of a feature model is a prime factor because it is used in the development of products. A degraded feature model leads to the development of low-quality products. Few efforts have been made to improve the quality of feature models. This paper presents our ongoing work, i.e. the development of a FODA (Feature Oriented Domain Analysis) maturity model which will help to evaluate the quality of a given feature model. In this paper, we provide the quality levels along with their descriptions. The proposed model consists of four levels, from level 0 to level 3. The design of each level is based on the severity of errors, with severity decreasing from level 0 to level 3. We elaborate each level with the help of examples. All examples are borrowed from material published by the Software Product Lines (SPL) research community for the application of our framework.

  13. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    Directory of Open Access Journals (Sweden)

    Jae-Han Park

    2012-06-01

    Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.

  14. Spatial uncertainty model for visual features using a Kinect™ sensor.

    Science.gov (United States)

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
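
    The propagation relationship described in these two records can be sketched as first-order covariance propagation through the back-projection from disparity-image coordinates (u, v, d) to Cartesian coordinates, Sigma_xyz ≈ J Sigma_uvd J^T. The camera parameters and covariances below are hypothetical, not calibrated Kinect values.

```python
# Sketch of first-order uncertainty propagation from disparity-image space (u, v, d)
# to Cartesian space (X, Y, Z) via the Jacobian of the back-projection.
# Camera parameters and measurement covariances are hypothetical placeholders.
import numpy as np

fx = fy = 580.0            # focal lengths [px]
cx, cy = 320.0, 240.0      # principal point [px]
b = 0.075                  # baseline [m]

def backproject(u, v, d):
    Z = fx * b / d
    return np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z])

def jacobian(u, v, d):
    Z = fx * b / d
    dZ_dd = -fx * b / d**2
    return np.array([
        [Z / fx, 0.0,    (u - cx) * dZ_dd / fx],
        [0.0,    Z / fy, (v - cy) * dZ_dd / fy],
        [0.0,    0.0,    dZ_dd],
    ])

sigma_uvd = np.diag([0.5**2, 0.5**2, 0.3**2])   # covariance of (u, v, d) measurements
u, v, d = 400.0, 260.0, 40.0
J = jacobian(u, v, d)
sigma_xyz = J @ sigma_uvd @ J.T                 # spatial covariance of the 3D feature
print(backproject(u, v, d), np.sqrt(np.diag(sigma_xyz)))
```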

  15. Hole Feature on Conical Face Recognition for Turning Part Model

    Science.gov (United States)

    Zubair, A. F.; Abu Mansor, M. S.

    2018-03-01

    Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For CNC turning parts, conical faces of the part model inevitably need to be recognised besides cylindrical and planar faces. As the sines and cosines of the cone radius structure differ between models, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via the .SAT file. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.

  16. Key engineering features of the ITER-FEAT magnet system and implications for the R and D programme

    International Nuclear Information System (INIS)

    Huguet, M.

    2001-01-01

    The magnet design of the new ITER-FEAT machine comprises 18 Toroidal Field (TF) coils, a Central Solenoid (CS), 6 Poloidal Field (PF) coils and Correction Coils (CCs). A key driver of this new design is the requirement to generate and control plasmas with a relatively high elongation (k95 = 1.7) and a relatively high triangularity (δ95 = 0.35). This has led to a design where the CS is vertically segmented and self-standing and the TF coils are wedged along their inboard legs. Another important design driver is to achieve a high operational reliability of the magnets, and this has resulted in several unconventional designs, and in particular, the use of conductors supported in radial plates for the winding pack of the TF coils. A key mechanical issue is the cyclic loading of the TF coil cases due to the out-of-plane loads which result from the interaction of the TF coil current and the poloidal field. These loads are resisted by a combination of shear keys and 'pre-compression' rings able to provide a centripetal preload at assembly. The fatigue life of the CS conductor jacket is another issue as it determines the CS performance in terms of the flux generation. Two jacket materials and designs are under study. Since 1993, the ITER magnet R and D programme has been focussed on the manufacture and testing of a CS and a TF model coil. During its testing, the CS model coil has successfully achieved all its performance targets in DC and AC operations. The manufacture of the TF model coil is complete. The manufacture of segments of the full scale TF coil case is another important and successful part of this programme and is near completion. New R and D effort is now being initiated to cover specific aspects of the ITER-FEAT design. (author)

  17. A Key Factor of the DCF Model Coherency

    Directory of Open Access Journals (Sweden)

    Piotr Adamczyk

    2017-04-01

    Full Text Available Aim/purpose - The aim of this paper is to provide economically justified evidence that the business value calculated by income valuation methods is the same, regardless of the type of cash flow used in the valuation algorithm. Design/methodology/approach - The evidence was arrived at using free cash flow to equity (FCFE), debt (FCFD) and firm (FCFF). The article draws attention to the FCFF method's particular popularity in income valuation, based on analysts' practice. It shows an overview of various approaches to determine the capital structure in the formula for WACC, both in practice and theory. Finally, it examines an empirical example with the authors' own derivations and postulates. Findings - The conclusion drawn from the conducted analysis is that the key to the reconciliation process, and thus DCF model coherency, is to apply the appropriate method of capital structure estimation during the calculation of the weighted average cost of capital (WACC). This capital structure will henceforth be referred to as 'income weights'. Research implications/limitations - It should be noted that the obtained compliance of valuation results does not imply that the income valuation becomes an objective way of determining business value. It still remains subjective. Originality/value/contribution - According to the presented approach, the DCF model's subjectivism is limited to the forecasts. The rest is the algorithm which, based on the principles of mathematics, should be used in the same way in every situation.
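
    The circularity behind the 'income weights' argument can be illustrated with a toy constant-debt perpetuity: the WACC depends on value-based weights, which depend on the value itself, and iterating to the fixed point makes the FCFF and FCFE valuations agree. All numbers are hypothetical.

```python
# Toy perpetuity sketch (hypothetical numbers) of the circularity discussed above:
# WACC depends on value-based capital-structure weights, which depend on the value.
# Iterating to the fixed point reconciles the FCFF-based and FCFE-based valuations.
FCFF, D = 100.0, 400.0                # annual free cash flow to the firm; market value of debt
kE, kD, tax = 0.12, 0.06, 0.20        # cost of equity, cost of debt, tax rate

FCFE = FCFF - kD * D * (1 - tax)      # constant-debt perpetuity, no net borrowing
E_from_fcfe = FCFE / kE               # equity value directly from FCFE

V = 1000.0                            # initial guess of firm value
for _ in range(100):
    wE, wD = (V - D) / V, D / V       # "income weights" implied by the current value
    wacc = wE * kE + wD * kD * (1 - tax)
    V = FCFF / wacc                   # updated firm value from FCFF

print(round(V - D, 2), round(E_from_fcfe, 2))   # both converge to the same equity value
```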

  18. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability. This paper gives an overview of a typical aero-engine control system and the development process of key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for diversity of component sub-models for these devices. As an example, a stepper motor controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has a high accuracy and the absolute superiority over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  19. BioModels: Content, Features, Functionality, and Use

    Science.gov (United States)

    Juty, N; Ali, R; Glont, M; Keating, S; Rodriguez, N; Swat, MJ; Wimalaratne, SM; Hermjakob, H; Le Novère, N; Laibe, C; Chelliah, V

    2015-01-01

    BioModels is a reference repository hosting mathematical models that describe the dynamic interactions of biological components at various scales. The resource provides access to over 1,200 models described in literature and over 140,000 models automatically generated from pathway resources. Most model components are cross-linked with external resources to facilitate interoperability. A large proportion of models are manually curated to ensure reproducibility of simulation results. This tutorial presents BioModels' content, features, functionality, and usage. PMID:26225232

  20. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patch and key points with SIFT descriptor. To improve the bag-of-features method, we first model the assignment of local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experiments' results show that our proposed method exceeds the performances of traditional solutions and works well for the bag-of-features based medical image retrieval tasks.

  1. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patch and key points with SIFT descriptor. To improve the bag-of-features method, we first model the assignment of local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experiments' results show that our proposed method exceeds the performances of traditional solutions and works well for the bag-of-features based medical image retrieval tasks.
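
    The reconstruction-weight idea can be sketched with a closed-form constrained least-squares solution over the k nearest visual words (in the style of locality-constrained coding); the paper itself poses the weights as a QP problem, so this is only an approximation, and the data are random placeholders.

```python
# Hedged sketch of the reconstruction-weight idea: approximate a local descriptor by
# an affine combination of its nearest visual words and use the weights as soft
# assignments. Uses the closed-form constrained least-squares solution, whereas the
# paper formulates a QP. Vocabulary and descriptor are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
vocabulary = rng.random((200, 128))         # visual words (e.g. from k-means on SIFT)
descriptor = rng.random(128)                # one local SIFT descriptor
k, lam = 5, 1e-4

# k nearest visual words
dists = np.linalg.norm(vocabulary - descriptor, axis=1)
nn = np.argsort(dists)[:k]

# solve min ||x - sum_i w_i b_i||^2  s.t.  sum(w) = 1  over the k neighbours
B = vocabulary[nn] - descriptor             # shift neighbours to the descriptor
C = B @ B.T + lam * np.eye(k)               # regularized local covariance
w = np.linalg.solve(C, np.ones(k))
w /= w.sum()                                # enforce the affine constraint

assignment = np.zeros(len(vocabulary))
assignment[nn] = w                          # contribution of each visual word
print(assignment[nn])
```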

  2. Confirming the key role of Ar+ ion bombardment in the growth feature of nanostructured carbon materials by PECVD.

    Science.gov (United States)

    Liu, Yulin; Lin, Jinghuang; Jia, Henan; Chen, Shulin; Qi, Junlei; Qu, Chaoqun; Cao, Jian; Feng, Jicai; Fei, Weidong

    2017-11-24

    In order to confirm the key role of Ar+ ion bombardment in the growth feature of nanostructured carbon materials (NCMs), here we report a novel strategy to create different Ar+ ion states in situ in plasma enhanced chemical vapor deposition (PECVD) by separating catalyst film from the substrate. Different bombardment environments on either side of the catalyst film were created simultaneously to achieve multi-layered structural NCMs. Results showed that Ar+ ion bombardment is crucial and complex for the growth of NCMs. Firstly, Ar+ ion bombardment has both positive and negative effects on carbon nanotubes (CNTs). On one hand, Ar+ ions can break up the graphic structure of CNTs and suppress thin CNT nucleation and growth. On the other hand, Ar+ ion bombardment can remove redundant carbon layers on the surface of large catalyst particles which is essential for thick CNTs. As a result, the diameter of the CNTs depends on the Ar+ ion state. As for vertically oriented few-layer graphene (VFG), Ar+ ions are essential and can even convert the CNTs into VFG. Therefore, by combining with the catalyst separation method, specific or multi-layered structural NCMs can be obtained by PECVD only by changing the intensity of Ar+ ion bombardment, and these special NCMs are promising in many fields.

  3. Confirming the key role of Ar+ ion bombardment in the growth feature of nanostructured carbon materials by PECVD

    Science.gov (United States)

    Liu, Yulin; Lin, Jinghuang; Jia, Henan; Chen, Shulin; Qi, Junlei; Qu, Chaoqun; Cao, Jian; Feng, Jicai; Fei, Weidong

    2017-11-01

    In order to confirm the key role of Ar+ ion bombardment in the growth feature of nanostructured carbon materials (NCMs), here we report a novel strategy to create different Ar+ ion states in situ in plasma enhanced chemical vapor deposition (PECVD) by separating catalyst film from the substrate. Different bombardment environments on either side of the catalyst film were created simultaneously to achieve multi-layered structural NCMs. Results showed that Ar+ ion bombardment is crucial and complex for the growth of NCMs. Firstly, Ar+ ion bombardment has both positive and negative effects on carbon nanotubes (CNTs). On one hand, Ar+ ions can break up the graphic structure of CNTs and suppress thin CNT nucleation and growth. On the other hand, Ar+ ion bombardment can remove redundant carbon layers on the surface of large catalyst particles which is essential for thick CNTs. As a result, the diameter of the CNTs depends on the Ar+ ion state. As for vertically oriented few-layer graphene (VFG), Ar+ ions are essential and can even convert the CNTs into VFG. Therefore, by combining with the catalyst separation method, specific or multi-layered structural NCMs can be obtained by PECVD only by changing the intensity of Ar+ ion bombardment, and these special NCMs are promising in many fields.

  4. A Feature Fusion Based Forecasting Model for Financial Time Series

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in the area of prediction than the other two similar models. PMID:24971455
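
    The shape of the ICA + CCA + SVM pipeline can be sketched as below; the dimensions and data are random placeholders, and the split, kernel and component counts are illustrative choices rather than the paper's settings.

```python
# Sketch of the pipeline shape described above: ICA extracts components from
# technical indicators, CCA fuses them with price-history features, and a support
# vector machine predicts the next close. Data and dimensions are placeholders.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
technical = rng.random((n, 39))              # 39 technical variables per day
price_hist = rng.random((n, 10))             # features from historical closing prices
next_close = rng.random(n)                   # target: next day's closing price

ica_feats = FastICA(n_components=10, random_state=0).fit_transform(technical)

cca = CCA(n_components=5)
u, v = cca.fit_transform(ica_feats, price_hist)   # fused, maximally correlated features
X = np.hstack([u, v])

model = SVR(kernel="rbf", C=10.0).fit(X[:-100], next_close[:-100])
print(model.score(X[-100:], next_close[-100:]))   # held-out R^2 (toy data)
```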

  5. Features of Functioning the Integrated Building Thermal Model

    Directory of Open Access Journals (Sweden)

    Morozov Maxim N.

    2017-01-01

    Full Text Available A model of the building heating system is designed, consisting of the energy source, a distributed automatic control system, the elements of the individual heating unit, and the heating system itself. The Simulink application of the Matlab mathematical package is selected as the platform for the model. The specialized Simscape libraries, together with the wide range of Matlab mathematical tools, make it possible to apply the “acausal” modeling concept. Implementing a “physical” representation of the object model improved the accuracy of the models. The principle of operation and the functional features of the thermal model are described. Investigations of the building cooling dynamics were carried out.

  6. The effective field theory of inflation models with sharp features

    International Nuclear Information System (INIS)

    Bartolo, Nicola; Cannone, Dario; Matarrese, Sabino

    2013-01-01

    We describe models of single-field inflation with small and sharp step features in the potential (and sound speed) of the inflaton field, in the context of the Effective Field Theory of Inflation. This approach allows us to study the effects of features in the power-spectrum and in the bispectrum of curvature perturbations, from a model-independent point of view, by parametrizing the features directly with modified ''slow-roll'' parameters. We can obtain a self-consistent power-spectrum, together with enhanced non-Gaussianity, which grows with a quantity β that parametrizes the sharpness of the step. With this treatment it is straightforward to generalize and include features in other coefficients of the effective action of the inflaton field fluctuations. Our conclusion in this case is that, excluding extrinsic curvature terms, the only interesting effects at the level of the bispectrum could arise from features in the first slow-roll parameter ε or in the speed of sound c s . Finally, we derive an upper bound on the parameter β from the consistency of the perturbative expansion of the action for inflaton perturbations. This constraint can be used for an estimation of the signal-to-noise ratio, to show that the observable which is most sensitive to features is the power-spectrum. This conclusion would change if we consider the contemporary presence of a feature and a speed of sound c s < 1, as, in such a case, contributions from an oscillating folded configuration can potentially make the bispectrum the leading observable for feature models

  7. Enhanced HMAX model with feedforward feature learning for multiclass categorization

    Directory of Open Access Journals (Sweden)

    Yinlin eLi

    2015-10-01

    Full Text Available In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models are proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of V1 to posterior inferotemporal (PIT) layer of the primate visual cortex, which could generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we still mimic the first 100-150 milliseconds of visual cognition to enhance the HMAX model, which mainly focuses on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering and short-term memory to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position are encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and could also achieve better accuracy than other unsupervised feature learning methods in multiclass categorization task.

  8. Enhanced HMAX model with feedforward feature learning for multiclass categorization.

    Science.gov (United States)

    Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu

    2015-01-01

    In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models are proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of V1 to posterior inferotemporal (PIT) layer of the primate visual cortex, which could generate a series of position- and scale- invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we still mimic the first 100-150 ms of visual cognition to enhance the HMAX model, which mainly focuses on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering and short-term memory to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position are encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and could also achieve better accuracy than other unsupervised feature learning methods in multiclass categorization task.
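
    As a rough illustration of the pipeline described in the two records above, the Python sketch below strings together an unsupervised patch-clustering stage (standing in for the "long-term memory" clusters) and a softmax readout. The patch sizes, the use of scikit-learn's KMeans and LogisticRegression, and all function names are illustrative assumptions rather than the authors' implementation, and the saliency-weighted sampling of the original model is omitted.

```python
# Minimal sketch of an HMAX-like pipeline: unsupervised clustering of image
# patches ("long-term memory"), max-pooled encoding, and a softmax readout.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def extract_patches(images, patch_size=8, n_per_image=20, seed=0):
    """Sample random square patches from grayscale images (2-D arrays)."""
    rng = np.random.default_rng(seed)
    patches = []
    for img in images:
        h, w = img.shape
        for _ in range(n_per_image):
            y = rng.integers(0, h - patch_size)
            x = rng.integers(0, w - patch_size)
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    return np.array(patches)

def encode(img, centers, patch_size=8, stride=4):
    """Max-pool the similarity of every patch to each cluster center."""
    h, w = img.shape
    best = np.full(len(centers), -np.inf)
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            p = img[y:y + patch_size, x:x + patch_size].ravel()
            best = np.maximum(best, -np.linalg.norm(centers - p, axis=1))
    return best

def train(images, labels, n_clusters=50):
    centers = KMeans(n_clusters=n_clusters, n_init=10,
                     random_state=0).fit(extract_patches(images)).cluster_centers_
    X = np.array([encode(img, centers) for img in images])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)  # softmax readout
    return centers, clf
```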

  9. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    Science.gov (United States)

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it
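
    A minimal sketch of the GP regression with per-descriptor relevance described above can be written with scikit-learn, using an anisotropic RBF kernel whose fitted length scales play the role of ARD relevances (a short length scale flags an influential descriptor). The descriptor list echoes the abstract (log P, melting point, hydrogen-bond donors), but the data below are synthetic placeholders rather than the skin-permeability dataset.

```python
# GP regression with one length scale per descriptor (ARD-style relevance).
# Synthetic data; descriptor order: [log P, MW, melting point, H-bond donors].
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.3 * X[:, 3] + 0.1 * rng.normal(size=60)

kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Smaller fitted length scales indicate descriptors with a stronger influence
# on the predicted permeability coefficient.
print(gpr.kernel_.k1.length_scale)
```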

  10. Features of CRISPR-Cas Regulation Key to Highly Efficient and Temporally-Specific crRNA Production

    Directory of Open Access Journals (Sweden)

    Andjela Rodic

    2017-11-01

    Full Text Available Bacterial immune systems, such as CRISPR-Cas or restriction-modification (R-M) systems, affect bacterial pathogenicity and antibiotic resistance by modulating horizontal gene flow. A model system for CRISPR-Cas regulation, the Type I-E system from Escherichia coli, is silent under standard laboratory conditions and experimentally observing the dynamics of CRISPR-Cas activation is challenging. Two characteristic features of CRISPR-Cas regulation in E. coli are cooperative transcription repression of cas gene and CRISPR array promoters, and fast non-specific degradation of full length CRISPR transcripts (pre-crRNA). In this work, we use computational modeling to understand how these features affect the system expression dynamics. Signaling which leads to CRISPR-Cas activation is currently unknown, so to bypass this step, we here propose a conceptual setup for cas expression activation, where cas genes are put under transcription control typical for a restriction-modification (R-M) system and then introduced into a cell. Known transcription regulation of an R-M system is used as a proxy for currently unknown CRISPR-Cas transcription control, as both systems are characterized by high cooperativity, which is likely related to similar dynamical constraints of their function. We find that the two characteristic CRISPR-Cas control features are responsible for its temporally-specific dynamical response, so that the system makes a steep (switch-like) transition from OFF to ON state with a time-delay controlled by pre-crRNA degradation rate. We furthermore find that cooperative transcription regulation qualitatively leads to a cross-over to a regime where, at higher pre-crRNA processing rates, crRNA generation approaches the limit of an infinitely abrupt system induction. We propose that these dynamical properties are associated with rapid expression of CRISPR-Cas components and efficient protection of bacterial cells against foreign DNA. In terms of synthetic
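
    The switch-like, delayed crRNA response described above can be caricatured with a toy two-variable ODE: Hill-type (cooperative) repression of pre-crRNA transcription that is relieved as the repressor decays after induction, fast non-specific pre-crRNA degradation, and slower processing into mature crRNA. The equations and parameter values below are illustrative assumptions, not the published model.

```python
# Toy ODE caricature of CRISPR-Cas induction: cooperative derepression of
# pre-crRNA transcription, fast non-specific degradation, slow processing.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k_tx=10.0, K=1.0, n=4, fold=100.0, k_deg=5.0, k_proc=0.5):
    pre, mature = y
    repressor = max(0.0, 1.0 - 0.1 * t)                  # decays after induction
    tx = k_tx * K**n / (K**n + fold * repressor**n)      # cooperative repression
    d_pre = tx - (k_deg + k_proc) * pre                  # fast non-specific decay
    d_mature = k_proc * pre - 0.1 * mature               # processing into crRNA
    return [d_pre, d_mature]

sol = solve_ivp(model, (0.0, 30.0), [0.0, 0.0], dense_output=True)
for t in (0, 5, 10, 20, 30):
    pre, mature = sol.sol(t)
    print(f"t={t:>2}  pre-crRNA={pre:5.2f}  mature crRNA={mature:6.2f}")
```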

  11. Some Key Features and Possible Origin of the Metamorphic Rock-Hosted Gold Mineralization in Buru Island, Indonesia

    Directory of Open Access Journals (Sweden)

    Arifudin Idrus

    2014-07-01

    Full Text Available DOI: 10.17014/ijog.v1i1.172. This paper discusses characteristics of some key features of the primary Buru gold deposit as a tool for a better understanding of the deposit genesis. Currently, about 105,000 artisanal and small-scale gold miners (ASGM) are operating in two main localities, i.e. Gogorea and Gunung Botak, by digging pits/shafts following the gold-bearing quartz vein orientation. The gold extraction uses mercury (amalgamation) and cyanide processing. The field study identifies two types/generations of quartz veins, namely (1) early quartz veins, which are segmented, sigmoidal, discontinuous, and parallel to the foliation of the host rock; these veins lack sulfides, are weakly mineralized, crystalline, relatively clear, and maybe poor in gold; and (2) quartz veins occurring within a ‘mineralized zone’ of about 100 m in width and ~1,000 m in length. The gold mineralization is strongly overprinted by an argillic alteration zone. The mineralization-alteration zone is probably parallel to the mica schist foliation and strongly controlled by N-S or NE-SW-trending structures. The gold-bearing quartz veins are characterized by banded texture, particularly colloform following host rock foliation and sulphide banding, brecciated texture, and rare bladed-like texture. The alteration types consist of propylitic (chlorite, calcite, sericite), argillic, and carbonation represented by graphite banding and carbon flakes. The ore mineralization is characterized by pyrite, native gold, pyrrhotite, and arsenopyrite. Cinnabar, stibnite, chalcopyrite, galena, and sphalerite are rare or maybe absent. In general, sulphide minerals are rare (<3%). Fifteen rock samples were collected in the Wamsaid area for geochemical assaying for Au, Ag, As, Sb, Hg, Cu, Pb, and Zn. Eleven of fifteen samples yielded more than 1.00 g/t Au, in which six of them are in excess of 3.00 g/t Au. It can be noted that all high-grade samples are originally limonitic or contain limonitic materials, that suggest

  12. Toward Designing a Quantum Key Distribution Network Simulation Model

    OpenAIRE

    Miralem Mehic; Peppino Fazio; Miroslav Voznak; Erik Chromy

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such simulation environment, we analyzed several ...

  13. A mouse model of alcoholic liver fibrosis-associated acute kidney injury identifies key molecular pathways

    International Nuclear Information System (INIS)

    Furuya, Shinji; Chappell, Grace A.; Iwata, Yasuhiro; Uehara, Takeki; Kato, Yuki; Kono, Hiroshi; Bataller, Ramon; Rusyn, Ivan

    2016-01-01

    Clinical data strongly indicate that acute kidney injury (AKI) is a critical complication in alcoholic hepatitis, an acute-on-chronic form of liver failure in patients with advanced alcoholic fibrosis. Development of targeted therapies for AKI in this setting is hampered by the lack of an animal model. To enable research into molecular drivers and novel therapies for fibrosis- and alcohol-associated AKI, we aimed to combine carbon tetrachloride (CCl4)-induced fibrosis with chronic intra-gastric alcohol feeding. Male C57BL/6J mice were administered a low dose of CCl4 (0.2 ml/kg 2 × week/6 weeks) followed by alcohol intragastrically (up to 25 g/kg/day for 3 weeks) and with continued CCl4. We observed that combined treatment with CCl4 and alcohol resulted in severe liver injury, more pronounced than using each treatment alone. Importantly, severe kidney injury was evident only in the combined treatment group. This mouse model reproduced distinct pathological features consistent with AKI in human alcoholic hepatitis. Transcriptomic analysis of kidneys revealed profound effects in the combined treatment group, with enrichment for damage-associated pathways, such as apoptosis, inflammation, immune-response and hypoxia. Interestingly, Havcr1 and Lcn2, biomarkers of AKI, were markedly up-regulated. Overall, this study established a novel mouse model of fibrosis- and alcohol-associated AKI and identified key mechanistic pathways. - Highlights: • Acute kidney injury (AKI) is a critical complication in alcoholic hepatitis • We developed a novel mouse model of fibrosis- and alcohol-associated AKI • This model reproduces key molecular and pathological features of human AKI • This animal model can help identify new targeted therapies for alcoholic hepatitis

  14. A mouse model of alcoholic liver fibrosis-associated acute kidney injury identifies key molecular pathways

    Energy Technology Data Exchange (ETDEWEB)

    Furuya, Shinji; Chappell, Grace A.; Iwata, Yasuhiro [Department of Veterinary Integrative Biosciences, Texas A&M University, College Station, TX (United States); Uehara, Takeki; Kato, Yuki [Laboratory of Veterinary Pathology, Osaka Prefecture University, Osaka (Japan); Kono, Hiroshi [First Department of Surgery, University of Yamanashi, Yamanashi (Japan); Bataller, Ramon [Division of Gastroenterology & Hepatology, Department of Medicine, University of North Carolina, Chapel Hill, NC (United States); Rusyn, Ivan, E-mail: irusyn@tamu.edu [Department of Veterinary Integrative Biosciences, Texas A&M University, College Station, TX (United States)

    2016-11-01

    Clinical data strongly indicate that acute kidney injury (AKI) is a critical complication in alcoholic hepatitis, an acute-on-chronic form of liver failure in patients with advanced alcoholic fibrosis. Development of targeted therapies for AKI in this setting is hampered by the lack of an animal model. To enable research into molecular drivers and novel therapies for fibrosis- and alcohol-associated AKI, we aimed to combine carbon tetrachloride (CCl4)-induced fibrosis with chronic intra-gastric alcohol feeding. Male C57BL/6J mice were administered a low dose of CCl4 (0.2 ml/kg 2 × week/6 weeks) followed by alcohol intragastrically (up to 25 g/kg/day for 3 weeks) and with continued CCl4. We observed that combined treatment with CCl4 and alcohol resulted in severe liver injury, more pronounced than using each treatment alone. Importantly, severe kidney injury was evident only in the combined treatment group. This mouse model reproduced distinct pathological features consistent with AKI in human alcoholic hepatitis. Transcriptomic analysis of kidneys revealed profound effects in the combined treatment group, with enrichment for damage-associated pathways, such as apoptosis, inflammation, immune-response and hypoxia. Interestingly, Havcr1 and Lcn2, biomarkers of AKI, were markedly up-regulated. Overall, this study established a novel mouse model of fibrosis- and alcohol-associated AKI and identified key mechanistic pathways. - Highlights: • Acute kidney injury (AKI) is a critical complication in alcoholic hepatitis • We developed a novel mouse model of fibrosis- and alcohol-associated AKI • This model reproduces key molecular and pathological features of human AKI • This animal model can help identify new targeted therapies for alcoholic hepatitis.

  15. Advanced social features in a recommendation system for process modeling

    NARCIS (Netherlands)

    Koschmider, A.; Song, M.S.; Reijers, H.A.; Abramowicz, W.

    2009-01-01

    Social software is known to stimulate the exchange and sharing of information among peers. This paper describes how an existing system that supports process builders in completing a business process can be enhanced with various social features. In that way, it is easier for process modelers to become

  16. Key Issues for Seamless Integrated Chemistry–Meteorology Modeling

    Science.gov (United States)

    Online coupled meteorology–atmospheric chemistry models have greatly evolved in recent years. Although mainly developed by the air quality modeling community, these integrated models are also of interest for numerical weather prediction and climate modeling, as they can con...

  17. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such simulation environment, we analyzed several routing protocols in terms of the number of sent routing packets, goodput and Packet Delivery Ratio of data traffic flow using NS-3 simulator.

  18. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    Directory of Open Access Journals (Sweden)

    Christoph Hartmann

    2015-12-01

    Full Text Available Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural
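
    A minimal SORN-style sketch, for orientation only: binary threshold units with STDP on the excitatory recurrent weights, synaptic normalization, and intrinsic (homeostatic) plasticity that nudges each unit toward a target rate. Network sizes, learning rates, and thresholds below are assumed values, not those of the published model, and the sensory input stream is omitted.

```python
# Minimal SORN-style network: binary units, STDP, synaptic normalization and
# intrinsic plasticity. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 200, 40
eta_stdp, eta_ip, target_rate = 0.004, 0.01, 0.1

W_EE = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.05)  # sparse E->E
W_EI = 0.1 * rng.random((N_E, N_I))                              # I->E
W_IE = 0.1 * rng.random((N_I, N_E))                              # E->I
T_E = rng.random(N_E)                                            # thresholds
x = (rng.random(N_E) < target_rate).astype(float)                # E activity
y = np.zeros(N_I)                                                # I activity

for step in range(1000):
    x_new = ((W_EE @ x - W_EI @ y - T_E) > 0).astype(float)
    y = ((W_IE @ x_new - 0.5) > 0).astype(float)
    # STDP: potentiate pre-before-post pairs, depress post-before-pre pairs
    W_EE += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W_EE = np.clip(W_EE, 0.0, None)
    W_EE /= W_EE.sum(axis=1, keepdims=True) + 1e-12   # synaptic normalization
    T_E += eta_ip * (x_new - target_rate)             # intrinsic plasticity
    x = x_new

print("mean excitatory firing rate:", x.mean())
```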

  19. Gravity Model for Topological Features on a Cylindrical Manifold

    Directory of Open Access Journals (Sweden)

    Bayak I.

    2008-04-01

    Full Text Available A model aimed at understanding quantum gravity in terms of Birkhoff's approach is discussed. The geometry of this model is constructed by using a winding map of Minkowski space into an R3 × S1 cylinder. The basic field of this model is a field of unit vectors defined through the velocity field of a flow wrapping the cylinder. The degeneration of some parts of the flow into circles (topological features) results in inhomogeneities and gives rise to a scalar field, analogous to the gravitational field. The geometry and dynamics of this field are briefly discussed. We treat the intersections between the topological features and the observer’s 3-space as matter particles and argue that these entities are likely to possess some quantum properties.

  20. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    In this article, we present a method and an associated toolchain for the formal verification of the new Danish railway interlocking systems that are compatible with the European Train Control System (ETCS) Level 2. We have made a generic and reconfigurable model of the system behaviour and generic...... safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. To verify the safety of an interlocking system, first a domain-specific description of interlocking configuration data is constructed and validated. Then the generic model and safety...

  1. The changing model of big pharma: impact of key trends.

    Science.gov (United States)

    Gautam, Ajay; Pan, Xiaogang

    2016-03-01

    Recent years have seen exciting breakthroughs in biomedical sciences that are producing truly novel therapeutics for unmet patient needs. However, the pharmaceutical industry is also facing significant barriers in the form of pricing and reimbursement, continued patent expirations and challenging market dynamics. In this article, we have analyzed data from the 1995-2015 period, on key aspects such as revenue distribution, research units, portfolio mix and emerging markets to identify four key trends that help to understand the change in strategic focus, realignment of R&D footprint, the shift from primary care toward specialty drugs and biologics and the growth of emerging markets as major revenue drivers for big pharma. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. A System-Level Throughput Model for Quantum Key Distribution

    Science.gov (United States)

    2015-09-17

    Only fragmentary text is indexed for this record: snippets noting that asymmetric encryption rests on problems such as computing discrete logarithms in a finite field [35] and that the RSA algorithm is arguably the most popular asymmetric scheme; reference-list excerpts (IEEE Transactions on Information Theory, vol. 22, no. 6, pp. 644-654, 1976; [36] G. Singh and S. Supriya, 'A Study of Encryption Algorithms (RSA, DES, 3DES and AES) for Information...'); and abbreviation-list entries: QKD = Quantum Key Distribution, OTP = One-Time Pad, DES = Data Encryption Standard, 3DES.

  3. The Main Shear Zone in Sør Rondane: A key feature for reconstructing the geodynamic evolution of East Antarctica

    Science.gov (United States)

    Ruppel, Antonia; Läufer, Andreas; Lisker, Frank; Jacobs, Joachim; Elburg, Marlina; Damaske, Detlef; Lucka, Nicole

    2013-04-01

    Structural investigations were carried out along the Main Shear Zone (MSZ) of western Sør Rondane (22°-25°E, 71.5°-72.5°S) to gain new information about the position of the East-/West-Gondwana suture and the ancient plate tectonic configuration during Gondwana amalgamation. The WSW-ENE striking MSZ divides south-western Sør Rondane in a northern amphibolite-facies terrane and a southern tonalite-trondhjemite-granodiorite (TTG) terrane. The structure can be traced over a distance of ca. 100 km and reaches several hundred meters in width. It is characterized by a right-lateral sense of movement and marked by a transpressional and also transtensional regime. Ductilely deformed granitoids (ca. 560 Ma: SHRIMP U-Pb of zircon) and ductile - brittle structures, which evolved in a transitional ductile to brittle regime in an undeformed syenite (ca. 499-459 Ma, Ar-Ar mica), provide a late Proterozoic/ early Paleozoic time limit for the activity of the shear zone (Shiraishi et al., 2008; Shiraishi et al., 1997). Documentation of ductile and brittle deformation allows reconstructing up to eight deformation stages. Cross-cutting relationships of structural features mapped in the field complemented by published kinematic data reveal the following relative age succession: [i] Dn+1 - formation of the main foliation during peak metamorphism, [ii] Dn+2 - isoclinal, intrafolial folding of the main foliation, mostly foliation-parallel mylonitic shear zones (1-2 meter thick), [iii] Dn+3 - formation of tight to closed folds, [iv] Dn+4 - formation of relatively upright, large-scale open folds, [v] Dn+5 - granitoid intrusion (e.g. Vengen granite), [vi] Dn+6 - dextral shearing between amphibolite and TTG terranes, formation of the MSZ, [vii] Dn+7 - intrusion of late- to post-tectonic granitoids, first stage of brittle deformation (late shearing along MSZ), intrusion of post-kinematic mafic dykes, [viii] Dn+8 - second stage of brittle deformation including formation of conjugate fault

  4. Key Challenges and Potential Urban Modelling Opportunities in ...

    African Journals Online (AJOL)

    Chris Wray

    There is a risk within .... Giere (2004) models are generally considered as simple representations of reality ..... morphology, connectivity, bid rent and virtual model room – were developed to ... term integrated planning of education and health.

  5. Evidence on Features of a DSGE Business Cycle Model from Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2012-01-01

    The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, a structural break of unknown

  6. Electronic assessment of clinical reasoning in clerkships: A mixed-methods comparison of long-menu key-feature problems with context-rich single best answer questions

    NARCIS (Netherlands)

    Huwendiek, S.; Reichert, F.; Duncker, C.; Leng, B.A. De; Vleuten, C.P.M. van der; Muijtjens, A.M.; Bosse, H.M.; Haag, M.; Hoffmann, G.F.; Tonshoff, B.; Dolmans, D.

    2017-01-01

    BACKGROUND: It remains unclear which item format would best suit the assessment of clinical reasoning: context-rich single best answer questions (crSBAs) or key-feature problems (KFPs). This study compared KFPs and crSBAs with respect to students' acceptance, their educational impact, and

  7. Exploring key factors in online shopping with a hybrid model.

    Science.gov (United States)

    Chen, Hsiao-Ming; Wu, Chia-Huei; Tsai, Sang-Bing; Yu, Jian; Wang, Jiangtao; Zheng, Yuxiang

    2016-01-01

    Nowadays, the web increasingly influences retail sales. An in-depth analysis of consumer decision-making in the context of e-business has become an important issue for internet vendors. However, the factors affecting e-business are complicated and intertwined. To stimulate online sales, understanding the key influential factors and the causal relationships among them is important. To gain more insight into this issue, this paper introduces a hybrid method, called the DANP method, which combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) with the analytic network process to find the factors that most strongly drive online business. In the DEMATEL approach, the causal graph showed that the "online service" dimension has the highest degree of direct impact on other dimensions; thus, internet vendors are advised to make strong efforts on service quality throughout the online shopping process. In addition, the study adopted DANP to measure the importance of key factors, among which "transaction security" proves to be the most important criterion. Hence, transaction security should be treated with top priority to boost online business. With the DANP approach, the comprehensive information can be visualized so that decision makers can focus on the root causes and develop effective actions.
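
    The DEMATEL step referred to above reduces to a short matrix computation: normalize the expert-rated direct-influence matrix, form the total-relation matrix T = D(I - D)^-1, and read prominence (r + c) and relation (r - c) off its row and column sums. The 4×4 example matrix below and two of the four dimension labels are illustrative assumptions.

```python
# DEMATEL sketch: direct-influence matrix -> total-relation matrix -> the
# prominence/relation indices used to draw the causal diagram.
import numpy as np

labels = ["online service", "transaction security", "web design", "logistics"]
A = np.array([[0, 3, 2, 2],        # expert-rated direct influence (0-4 scale)
              [2, 0, 1, 1],
              [1, 2, 0, 1],
              [1, 1, 1, 0]], dtype=float)

D = A / max(A.sum(axis=0).max(), A.sum(axis=1).max())   # normalize
T = D @ np.linalg.inv(np.eye(len(A)) - D)                # T = D (I - D)^-1

r, c = T.sum(axis=1), T.sum(axis=0)
for name, prom, rel in zip(labels, r + c, r - c):
    role = "cause" if rel > 0 else "effect"
    print(f"{name:>20s}  prominence={prom:5.2f}  relation={rel:+5.2f}  ({role})")
```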

  8. Key Elements of the Tutorial Support Management Model

    Science.gov (United States)

    Lynch, Grace; Paasuke, Philip

    2011-01-01

    In response to an exponential growth in enrolments the "Tutorial Support Management" (TSM) model has been adopted by Open Universities Australia (OUA) after a two-year project on the provision of online tutor support in first year, online undergraduate units. The essential focus of the TSM model was the development of a systemic approach…

  9. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Science.gov (United States)

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
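
    The cross-validated feature-to-function modelling described above can be sketched as follows with scikit-learn. The antibody feature matrix and the functional readout here are synthetic placeholders rather than RV144 data, and the pairing of a lasso with a random forest is only one plausible choice of linear and nonlinear learners.

```python
# Cross-validated prediction of a functional readout (e.g. a phagocytosis
# score) from an antibody feature matrix. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))               # IgG subclass x antigen features
y = X[:, 0] - 0.5 * X[:, 3] + 0.2 * rng.normal(size=120)   # functional outcome

for name, model in [("lasso", LassoCV(cv=5)),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>13s}: mean cross-validated R^2 = {r2:.2f}")
```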

  10. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  11. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to result in a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
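
    A compact sketch of the two-stage idea: fit one Gaussian mixture per genre for each feature stream, sum the per-stream log-likelihoods at the second stage, and pick the genre with the highest fused score. The genre and stream names and the synthetic feature vectors below are placeholders, and the HMM branch of the original scheme is omitted.

```python
# Two-stage genre classification: per-genre GMMs for each feature stream,
# fused at the second stage by summing log-likelihoods. Synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
genres, streams = ["rock", "jazz", "classical"], ["timbre", "rhythm"]

# Assumed training data: {stream: {genre: (n_frames, n_dims) array}}
train = {s: {g: rng.normal(loc=i, size=(200, 8)) for i, g in enumerate(genres)}
         for s in streams}
models = {s: {g: GaussianMixture(n_components=4, random_state=0).fit(train[s][g])
              for g in genres} for s in streams}

def classify(song):
    """song: {stream: (n_frames, n_dims) array of features for one track}."""
    fused = {g: sum(models[s][g].score_samples(song[s]).sum() for s in streams)
             for g in genres}
    return max(fused, key=fused.get)

test_song = {s: rng.normal(loc=1.0, size=(150, 8)) for s in streams}
print(classify(test_song))   # features centred like the jazz training data
```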

  12. Topical video object discovery from key frames by modeling word co-occurrence prior.

    Science.gov (United States)

    Zhao, Gangqiang; Yuan, Junsong; Hua, Gang; Yang, Jiong

    2015-12-01

    A topical video object refers to an object that is frequently highlighted in a video. It could be, e.g., the product logo and the leading actor/actress in a TV commercial. We propose a topic model that incorporates a word co-occurrence prior for efficient discovery of topical video objects from a set of key frames. Previous work using topic models, such as latent Dirichlet allocation (LDA), for video object discovery often takes a bag-of-visual-words representation, which ignores important co-occurrence information among the local features. We show that such data driven co-occurrence information from bottom-up can conveniently be incorporated in LDA with a Gaussian Markov prior, which combines top-down probabilistic topic modeling with bottom-up priors in a unified model. Our experiments on challenging videos demonstrate that the proposed approach can discover different types of topical objects despite variations in scale, view-point, color and lighting changes, or even partial occlusions. The efficacy of the co-occurrence prior is clearly demonstrated when compared with topic models without such priors.

  13. EMF 7 model comparisons: key relationships and parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hickman, B.G.

    1983-12-01

    A simplified textbook model of aggregate demand and supply interprets the similarities and differences in the price and income responses of the various EMF 7 models to oil and policy shocks. The simplified model is a marriage of Hicks' classic IS-LM formulation of the Keynesian theory of effective demand with a rudimentary model of aggregate supply, combining a structural Phillips curve for wage determination and a markup theory of price determination. The reduced-form income equation from the fix-price IS-LM model is used to define an aggregate demand (AD) locus in P-Y space, showing alternative pairs of the implicit GNP deflator and real GNP which would simultaneously satisfy the saving-investment identity and the condition for money market equilibrium. An aggregate supply (AS) schedule is derived by a similar reduction of relations between output and labor demand, unemployment and wage inflation, and the wage-price-productivity nexus governing markup pricing. Given a particular econometric model it is possible to derive IS and LM curves algebraically. The resulting locuses would show alternative combinations of interest rate and real income which equilibrate real income identity on the IS side and the demand and supply of money on the LM side. By further substitution the reduced form fix-price income relation could be obtained for direct quantification of the AD locus. The AS schedule is obtainable by algebraic reduction of the structural supply side equations.
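
    One conventional way of writing the fix-price IS-LM reduction and the resulting aggregate-demand locus described above is sketched below; the functional forms and sign conventions are standard textbook assumptions rather than quotations from the EMF 7 study.

```latex
% Fix-price IS-LM reduction to an aggregate-demand locus in (Y, P) space.
\begin{aligned}
\text{IS:}\quad & Y = C(Y - T) + I(r) + G \\
\text{LM:}\quad & \frac{M}{P} = L(Y, r), \qquad L_Y > 0,\; L_r < 0 \\
\text{AD:}\quad & Y = Y^{AD}\!\Bigl(\frac{M}{P},\, G,\, T\Bigr),
                  \qquad \frac{\partial Y^{AD}}{\partial (M/P)} > 0
\end{aligned}
```

    A higher price level lowers real balances M/P and moves the economy down the AD locus, while the AS schedule couples P and Y through the structural Phillips curve and the markup pricing rule, which is the closure summarized in the abstract.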

  14. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of a poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, taking away irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to the permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is an ML algorithm that performs FS as part of its
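
    Two of the three techniques named above have close analogues in scikit-learn, which is enough for a small illustrative comparison: mutual information as a filter score in the spirit of Information Gain, and Random-Forest importances as the embedded ranking (CFS has no stock scikit-learn implementation and is omitted here). The feature names and the presence/absence data below are synthetic placeholders.

```python
# Filter-style scores (mutual information) versus embedded Random-Forest
# importances for a synthetic permafrost presence/absence target.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
names = ["altitude", "aspect", "slope", "curvature", "solar_radiation", "ndvi"]
X = rng.normal(size=(500, len(names)))
y = (X[:, 0] + 0.5 * X[:, 4] + 0.3 * rng.normal(size=500) > 0).astype(int)

mi = mutual_info_classif(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

for name, m, imp in sorted(zip(names, mi, rf.feature_importances_),
                           key=lambda row: -row[2]):
    print(f"{name:>15s}  mutual info={m:.3f}  RF importance={imp:.3f}")
```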

  15. COMPREHENSIVE CHECK MEASUREMENT OF KEY PARAMETERS ON MODEL BELT CONVEYOR

    Directory of Open Access Journals (Sweden)

    Vlastimil MONI

    2013-07-01

    Full Text Available Complex measurements of characteristic parameters realised on a long distance model belt conveyor are described. The main objective was to complete and combine the regular measurements of electric power on drives of belt conveyors operated in Czech opencast mines with measurements of other physical quantities and to gain by this way an image of their mutual relations and relations of quantities derived from them. The paper includes a short description and results of the measurements on an experimental model conveyor with a closed material transport way.

  16. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool provides basic information for global warming research and for the sustainable use of land resources. Digital terrain attributes are often used...... was selected, a total of 2,514,820 data mining models were constructed from 71 different grid sizes (from 12 m to 2,304 m) and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...

  17. Accessing key steps of human tumor progression in vivo by using an avian embryo model

    Science.gov (United States)

    Hagedorn, Martin; Javerzat, Sophie; Gilges, Delphine; Meyre, Aurélie; de Lafarge, Benjamin; Eichmann, Anne; Bikfalvi, Andreas

    2005-02-01

    Experimental in vivo tumor models are essential for comprehending the dynamic process of human cancer progression, identifying therapeutic targets, and evaluating antitumor drugs. However, current rodent models are limited by high costs, long experimental duration, variability, restricted accessibility to the tumor, and major ethical concerns. To avoid these shortcomings, we investigated whether tumor growth on the chick chorio-allantoic membrane after human glioblastoma cell grafting would replicate characteristics of the human disease. Avascular tumors consistently formed within 2 days, then progressed through vascular endothelial growth factor receptor 2-dependent angiogenesis, associated with hemorrhage, necrosis, and peritumoral edema. Blocking of vascular endothelial growth factor receptor 2 and platelet-derived growth factor receptor signaling pathways by using small-molecule receptor tyrosine kinase inhibitors abrogated tumor development. Gene regulation during the angiogenic switch was analyzed by oligonucleotide microarrays. Defined sample selection for gene profiling permitted identification of regulated genes whose functions are associated mainly with tumor vascularization and growth. Furthermore, expression of known tumor progression genes identified in the screen (IL-6 and cysteine-rich angiogenic inducer 61) as well as potential regulators (lumican and F-box-only 6) follow similar patterns in patient glioma. The model reliably simulates key features of human glioma growth in a few days and thus could considerably increase the speed and efficacy of research on human tumor progression and preclinical drug screening. angiogenesis | animal model alternatives | glioblastoma

  18. Phase information of time-frequency transforms as a key feature for classification of atrial fibrillation episodes

    International Nuclear Information System (INIS)

    Ortigosa, Nuria; Fernández, Carmen; Galbis, Antonio; Cano, Óscar

    2015-01-01

    Patients suffering from atrial fibrillation can be classified into different subtypes, according to the temporal pattern of the arrhythmia and its recurrence. Nowadays, clinicians cannot differentiate a priori between the different subtypes, and patient classification is done afterwards, when its clinical course is available. In this paper we present a comparison of classification performances when differentiating paroxysmal and persistent atrial fibrillation episodes by means of support vector machines. We analyze short surface electrocardiogram recordings by extracting modulus and phase features from several time-frequency transforms: short-time Fourier transform, Wigner–Ville, Choi–Williams, Stockwell transform, and general Fourier-family transform. Overall, accuracy higher than 81% is obtained when classifying phase information features of real test ECGs from a heterogeneous cohort of patients (in terms of progression of the arrhythmia and antiarrhythmic treatment) recorded in a tertiary center. Therefore, phase features can facilitate the clinicians’ choice of the most appropriate treatment for each patient by means of a non-invasive technique (the surface ECG). (paper)
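
    A minimal sketch of the modulus/phase feature idea: compute the short-time Fourier transform of each ECG segment, summarize magnitude and phase per frequency bin, and feed the result to a support vector machine under cross-validation. The signals, labels, and sampling rate below are synthetic assumptions rather than patient recordings, and only the STFT (not the other transforms listed) is illustrated.

```python
# STFT modulus/phase summary features feeding an SVM classifier.
# Synthetic stand-ins for paroxysmal (0) vs persistent (1) AF segments.
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                    # assumed sampling rate (Hz)

def features(x):
    _, _, Z = stft(x, fs=fs, nperseg=128)
    mag, phase = np.abs(Z), np.angle(Z)
    return np.hstack([mag.mean(axis=1), mag.std(axis=1),
                      phase.mean(axis=1), phase.std(axis=1)])

signals = [np.sin(2 * np.pi * (5 + label) * np.arange(4 * fs) / fs)
           + 0.3 * rng.normal(size=4 * fs)
           for label in (0, 1) for _ in range(30)]
labels = np.array([0] * 30 + [1] * 30)

X = np.array([features(s) for s in signals])
print("cross-validated accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```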

  19. An Appraisal Model Based on a Synthetic Feature Selection Approach for Students’ Academic Achievement

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2017-11-01

    Full Text Available Obtaining necessary information (and even extracting hidden messages) from existing big data, and then transforming them into knowledge, is an important skill. Data mining technology has received increased attention in various fields in recent years because it can be used to find historical patterns and employ machine learning to aid in decision-making. When we find unexpected rules or patterns from the data, they are likely to be of high value. This paper proposes a synthetic feature selection approach (SFSA), which is combined with a support vector machine (SVM) to extract patterns and find the key features that influence students’ academic achievement. For verifying the proposed model, two databases, namely, “Student Profile” and “Tutorship Record”, were collected from an elementary school in Taiwan, and were concatenated into an integrated dataset based on students’ names as a research dataset. The results indicate the following: (1) the accuracy of the proposed feature selection approach is better than that of the Minimum-Redundancy-Maximum-Relevance (mRMR) approach; (2) the proposed model is better than the listing methods when the six least influential features have been deleted; and (3) the proposed model can enhance the accuracy and facilitate the interpretation of the pattern from a hybrid-type dataset of students’ academic achievement.

  20. Modeling HAZ hardness and weld features with BPN technology

    International Nuclear Information System (INIS)

    Morinishi, S.; Bibby, M.J.; Chan, B.

    2000-01-01

    A BPN (back propagation network) system for predicting HAZ (heat-affected zone) hardnesses and GMAW (gas metal arc) weld features (size and shape) is described in this presentation. Among other things, issues of network structure, training and testing data selection, software efficiency and user interface are discussed. The system is evaluated by comparing network output with experimentally measured test data in the first instance, and with regression methods available for this purpose, thereafter. The potential of the web for exchanging weld process data and for accessing models generated with this system is addressed. In this regard the software has been made available on the Cambridge University 'steel' and 'neural' websites. In addition Java coded software has recently been generated to provide web flexibility and accessibility. Over and above this, the possibility of offering an on-line 'server' training service, arranged to capture user data (user identification, measured welding parameters and features) and trained models for the use of the entire welding community is described. While the possibility of such an exchange is attractive, there are several difficulties in designing such a system. Server software design, computing resources, data base and communications considerations are some of the issues that must be addressed with regard to a server centered training and database system before it becomes reality. (author)

  1. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    strategies (error functions and training algorithms) for artificial neural networks are examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen’s method. Results reveal the best training strategies for neural networks and suggest their superiority...... difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training...... over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...

  2. Modelling energy demand of developing countries: Are the specific features adequately captured?

    International Nuclear Information System (INIS)

    Bhattacharyya, Subhes C.; Timilsina, Govinda R.

    2010-01-01

    This paper critically reviews existing energy demand forecasting methodologies highlighting the methodological diversities and developments over the past four decades in order to investigate whether the existing energy demand models are appropriate for capturing the specific features of developing countries. The study finds that two types of approaches, econometric and end-use accounting, are commonly used in the existing energy demand models. Although energy demand models have greatly evolved since the early seventies, key issues such as the poor-rich and urban-rural divides, traditional energy resources and differentiation between commercial and non-commercial energy commodities are often poorly reflected in these models. While the end-use energy accounting models with detailed sectoral representations produce more realistic projections as compared to the econometric models, they still suffer from huge data deficiencies especially in developing countries. Development and maintenance of more detailed energy databases, further development of models to better reflect developing country context and institutionalizing the modelling capacity in developing countries are the key requirements for energy demand modelling to deliver richer and more reliable input to policy formulation in developing countries.

  3. Modelling energy demand of developing countries: Are the specific features adequately captured?

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, Subhes C. [CEPMLP, University of Dundee, Dundee DD1 4HN (United Kingdom); Timilsina, Govinda R. [Development Research Group, The World Bank, Washington DC (United States)

    2010-04-15

    This paper critically reviews existing energy demand forecasting methodologies highlighting the methodological diversities and developments over the past four decades in order to investigate whether the existing energy demand models are appropriate for capturing the specific features of developing countries. The study finds that two types of approaches, econometric and end-use accounting, are commonly used in the existing energy demand models. Although energy demand models have greatly evolved since the early seventies, key issues such as the poor-rich and urban-rural divides, traditional energy resources and differentiation between commercial and non-commercial energy commodities are often poorly reflected in these models. While the end-use energy accounting models with detailed sectoral representations produce more realistic projections as compared to the econometric models, they still suffer from huge data deficiencies especially in developing countries. Development and maintenance of more detailed energy databases, further development of models to better reflect developing country context and institutionalizing the modelling capacity in developing countries are the key requirements for energy demand modelling to deliver richer and more reliable input to policy formulation in developing countries. (author)

  4. Thermal enhancement cartridge heater modified tritium hydride bed development, Part 2 - Experimental validation of key conceptual design features

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, K.J.; Morgan, G.A. [Savannah River Laboratory, Aiken, SC (United States)

    2015-03-15

    The Thermal Enhancement Cartridge Heater Modified (TECH Mod) tritium hydride bed is an interim replacement for the first generation (Gen1) process hydride beds currently in service in the Savannah River Site (SRS) Tritium Facilities. Three new features are implemented in the TECH Mod hydride bed prototype: internal electric cartridge heaters, porous divider plates, and copper foam discs. These modifications will enhance bed performance and reduce costs by improving bed activation and installation processes, in-bed accountability measurements, end-of-life bed removal, and He-3 recovery. A full-scale hydride bed test station was constructed at the Savannah River National Laboratory (SRNL) in order to evaluate the performance of the prototype TECH Mod hydride bed. Controlled hydrogen (H2) absorption/desorption experiments were conducted to validate that the conceptual design changes have no adverse effects on the gas transfer kinetics or H2 storage/release properties compared to those of the Gen1 bed. Inert gas expansions before, during, and after H2 flow tests were used to monitor changes in gas transfer rates with repeated hydriding/de-hydriding of the hydride material. The gas flow rates significantly decreased after initial hydriding of the material; however, minimal changes were observed after repeated cycling. The data presented herein confirm that the TECH Mod hydride bed would be a suitable replacement for the Gen1 bed with the added enhancements expected from the advanced design features. (authors)

  5. Defining key features of the broad autism phenotype: a comparison across parents of multiple- and single-incidence autism families.

    Science.gov (United States)

    Losh, Molly; Childress, Debra; Lam, Kristen; Piven, Joseph

    2008-06-05

    This study examined the frequency of personality, language, and social-behavioral characteristics believed to comprise the broad autism phenotype (BAP), across families differing in genetic liability to autism. We hypothesized that within this unique sample comprised of multiple-incidence autism families (MIAF), single-incidence autism families (SIAF), and control Down syndrome families (DWNS), a graded expression would be observed for the principal characteristics conferring genetic susceptibility to autism, in which such features would express most profoundly among parents from MIAFs, less strongly among SIAFs, and least of all among comparison parents from DWNS families, who should display population base rates. Analyses detected linear expression of traits in line with hypotheses, and further suggested differential intrafamilial expression across family types. In the vast majority of MIAFs both parents displayed BAP characteristics, whereas within SIAFs, it was equally likely that one, both, or neither parent show BAP features. The significance of these findings is discussed in relation to etiologic mechanisms in autism and relevance to molecular genetic studies. (c) 2007 Wiley-Liss, Inc.

  6. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    Full Text Available The data presented here is related to the article titled “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017) [1]. This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian writers (in this study, English writers were Australians who had learnt to write in New South Wales, NSW) and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31). The construction characteristics coded were only extracted from seven characters, which were: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded format of the writer, spatial and construction characteristics is made available in this Data in Brief in order to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, and whether the two nationalities can be differentiated. Furthermore, to employ mathematical techniques that are capable of characterising the extracted features from each participant.

  7. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    Bi-partite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization...... by the notion of community structure such that the edge density within groups is higher than between groups. Our model further assumes that entities can have different propensities of generating links in one of the modes. The proposed framework is contrasted on both synthetic and real bi-partite networks...... feature representations in bipartite networks provides a new framework for accounting for structure in bi-partite networks using binary latent feature representations providing interpretable representations that well characterize structure as quantified by link prediction....

  8. Key Elements of the User-Friendly, GFDL SKYHI General Circulation Model

    Directory of Open Access Journals (Sweden)

    Richard S. Hemler

    2000-01-01

    Full Text Available Over the past seven years, the portability of the GFDL SKYHI general circulation model has greatly increased. Modifications to the source code have allowed SKYHI to be run on the GFDL Cray Research PVP machines, the TMC CM-5 machine at Los Alamos National Laboratory, and more recently on the GFDL 40-processor Cray Research T3E system. At the same time, changes have been made to the model to make it more usable and flexible. Because of the reduction of the human resources available to manage and analyze scientific experiments, it is no longer acceptable to consider only the optimization of computer resources when producing a research code; one must also consider the availability and cost of the people necessary to maintain, modify and use the model as an investigative tool, and include these factors in defining the form of the model code. The new SKYHI model attempts to strike a balance between the optimization of the use of machine resources (CPU time, memory, disc) and the optimal use of human resources (ability to understand code, ability to modify code, ability to perturb code to do experiments, ability to run code on different platforms). Two of the key features that make the new SKYHI code more usable and flexible are the archiving package and the user variable block. The archiving package is used to manage the writing of all archive files, which contain data for later analysis. The model-supplied user variable block allows the easy inclusion of any new variables needed for particular experiments.

  9. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed feature appears on small scale and feature with more global shape information shows up on large scale. We demonstrate this multi-scale description of features accords with human perception and can be further used for several applications as feature classification and viewpoint selection. Experiments exhibit that our method as a multi-scale analysis tool is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  10. Functional validation of candidate genes detected by genomic feature models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Østergaard, Solveig; Kristensen, Torsten Nygaard

    2018-01-01

    to investigate locomotor activity, and applied genomic feature prediction models to identify gene ontology (GO) cate- gories predictive of this phenotype. Next, we applied the covariance association test to partition the genomic variance of the predictive GO terms to the genes within these terms. We...... then functionally assessed whether the identified candidate genes affected locomotor activity by reducing gene expression using RNA interference. In five of the seven candidate genes tested, reduced gene expression altered the phenotype. The ranking of genes within the predictive GO term was highly correlated......Understanding the genetic underpinnings of complex traits requires knowledge of the genetic variants that contribute to phenotypic variability. Reliable statistical approaches are needed to obtain such knowledge. In genome-wide association studies, variants are tested for association with trait...

  11. Predicting Spatial Distribution of Key Honeybee Pests in Kenya Using Remotely Sensed and Bioclimatic Variables: Key Honeybee Pests Distribution Models

    Directory of Open Access Journals (Sweden)

    David M. Makori

    2017-02-01

    Full Text Available Beekeeping is indispensable to global food production. It is an alternate income source, especially in rural underdeveloped African settlements, and an important forest conservation incentive. However, dwindling honeybee colonies around the world are attributed to pests and diseases whose spatial distribution and influences are not well established. In this study, we used remotely sensed data to improve the reliability of pest ecological niche (EN) models to attain reliable pest distribution maps. Occurrence data on four pests (Aethina tumida, Galleria mellonella, Oplostomus haroldi and Varroa destructor) were collected from apiaries within four main agro-ecological regions responsible for over 80% of Kenya’s beekeeping. Africlim bioclimatic and derived normalized difference vegetation index (NDVI) variables were used to model their ecological niches using Maximum Entropy (MaxEnt). Combined precipitation variables had a high positive logit influence on the performance of all remotely sensed and biotic models. Remotely sensed vegetation variables had a substantial effect on the model, contributing up to 40.8% for G. mellonella, and regions with high rainfall seasonality were predicted to be high-risk areas. Projections (to 2055) indicated that, with the current climate change trend, these regions will experience increased honeybee pest risk. We conclude that honeybee pests can be modelled using bioclimatic data and remotely sensed variables in MaxEnt. Although the bioclimatic data were most relevant in all model results, incorporating vegetation seasonality variables to improve mapping of the ‘actual’ habitat of key honeybee pests and to identify risk and containment zones needs further investigation.
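
    The study itself used the MaxEnt software; as a hedged stand-in for that workflow, the sketch below fits a presence/background classifier on synthetic bioclimatic and NDVI-like predictors to produce a relative suitability index. All variable names and data are assumptions for illustration.

        # Hedged stand-in for a MaxEnt-style niche model: presence/background
        # classification on synthetic environmental predictors.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n_presence, n_background = 200, 2000

        def env(n, shift):
            # columns: precipitation seasonality, annual rainfall, NDVI amplitude (synthetic)
            return rng.normal(loc=[0.5 + shift, 1.0 + shift, 0.2 + shift], scale=0.3, size=(n, 3))

        X = np.vstack([env(n_presence, 0.4), env(n_background, 0.0)])
        y = np.r_[np.ones(n_presence), np.zeros(n_background)]

        model = LogisticRegression(max_iter=1000).fit(X, y)
        suitability = model.predict_proba(X)[:, 1]      # relative habitat suitability index
        print("coefficients (per predictor):", np.round(model.coef_[0], 2))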

  12. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    Science.gov (United States)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km² mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) and 6-hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the
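
    A minimal sketch of the split-sample evaluation step described above, using the Nash-Sutcliffe efficiency as the error metric. The "model" is a trivial placeholder; the SUPERFLEX structures themselves are not reproduced.

        # Split-sample check: calibrate on one period, evaluate the Nash-Sutcliffe
        # efficiency (NSE) of the simulated streamflow on the other period.
        import numpy as np

        def nse(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(3)
        t = np.arange(1000)
        observed = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, t.size)

        calib, valid = observed[:500], observed[500:]
        scale = calib.mean() / 5.0                            # "calibrated" parameter
        simulated_valid = scale * (5 + 3 * np.sin(2 * np.pi * t[500:] / 365))

        print(f"validation NSE = {nse(valid, simulated_valid):.2f}")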

  13. Chromatin extrusion explains key features of loop and domain formation in wild-type and engineered genomes

    Science.gov (United States)

    Sanborn, Adrian; Rao, Suhas; Huang, Su-Chen; Durand, Neva; Huntley, Miriam; Jewett, Andrew; Bochkov, Ivan; Chinnappan, Dharmaraj; Cutkosky, Ashok; Li, Jian; Geeting, Kristopher; McKenna, Doug; Stamenova, Elena; Gnirke, Andreas; Melnikov, Alexandre; Lander, Eric; Aiden, Erez

    Our recent kilobase-resolution genome-wide maps of DNA self-contacts demonstrated that mammalian genomes are organized into domains and loops demarcated by the DNA-binding protein CTCF. Here, we combine these maps with new Hi-C, microscopy, and genome-editing experiments to study the physical structure of chromatin fibers, domains, and loops. We find that domains are inconsistent with equilibrium and fractal models. Instead, we use physical simulations to study two models of genome folding. In one, intermonomer attraction during condensation leads to formation of an anisotropic “tension globule.” In the other, CTCF and cohesin act together to extrude unknotted loops. Both models are consistent with the observed domains and loops. However, the extrusion model explains a far wider array of observations, such as why the CTCF-binding motifs at pairs of loop anchors lie in the convergent orientation. Finally, we perform 13 genome-editing experiments examining the effect of altering CTCF-binding sites on chromatin folding. The extrusion model predicts in silico the experimental maps using only CTCF-binding sites. Thus, we show that it is possible to disrupt, restore, and move loops and domains using targeted mutations as small as a single base pair.
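
    A toy one-dimensional sketch of the extrusion mechanism (not the paper's polymer simulations): an extruder loads at a random monomer, its two sides walk outward, and each side halts at a CTCF site oriented against its direction of travel, so loops end up anchored at convergent motif pairs. Positions and orientations below are assumptions for illustration.

        # Toy 1D loop extrusion: the left-moving side stalls at forward-oriented (+1)
        # CTCF sites, the right-moving side at reverse-oriented (-1) sites.
        import numpy as np

        rng = np.random.default_rng(4)
        L = 1000
        ctcf = {100: +1, 400: -1, 600: +1, 900: -1}   # position -> motif orientation

        def extrude_once():
            left = right = rng.integers(0, L)
            while True:
                moved = False
                if left > 0 and not (left in ctcf and ctcf[left] == +1):
                    left -= 1; moved = True
                if right < L - 1 and not (right in ctcf and ctcf[right] == -1):
                    right += 1; moved = True
                if not moved:
                    return left, right            # loop anchored at convergent sites (or chain ends)

        loops = [extrude_once() for _ in range(5)]
        print("loop anchors (left, right):", loops)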

  14. Keys to the House: Unlocking Residential Savings With Program Models for Home Energy Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Grevatt, Jim [Energy Futures Group (United States); Hoffman, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hoffmeyer, Dale [US Department of Energy, Washington, DC (United States)

    2017-07-05

    After more than 40 years of effort, energy efficiency program administrators and associated contractors still find it challenging to penetrate the home retrofit market, especially at levels commensurate with state and federal goals for energy savings and emissions reductions. Residential retrofit programs further have not coalesced around a reliably successful model. They still vary in design, implementation and performance, and they remain among the more difficult and costly options for acquiring savings in the residential sector. If programs are to contribute fully to meeting resource and policy objectives, administrators need to understand what program elements are key to acquiring residential savings as cost effectively as possible. To that end, the U.S. Department of Energy (DOE) sponsored a comprehensive review and analysis of home energy upgrade programs with proven track records, focusing on those with robustly verified savings and constituting good examples for replication. The study team reviewed evaluations for the period 2010 to 2014 for 134 programs that are funded by customers of investor-owned utilities. All are programs that promote multi-measure retrofits or major system upgrades. We paid particular attention to useful design and implementation features, costs, and savings for nearly 30 programs with rigorous evaluations of performance. This meta-analysis describes program models and implementation strategies for (1) direct install retrofits; (2) heating, ventilating and air-conditioning (HVAC) replacement and early retirement; and (3) comprehensive, whole-home retrofits. We analyze costs and impacts of these program models, in terms of both energy savings and emissions avoided. These program models can be useful guides as states consider expanding their strategies for acquiring energy savings as a resource and for emissions reductions. We also discuss the challenges of using evaluations to create program models that can be confidently applied in

  15. Low cost, small form factor, and integration as the key features for the optical component industry takeoff

    Science.gov (United States)

    Schiattone, Francesco; Bonino, Stefano; Gobbi, Luigi; Groppi, Angelamaria; Marazzi, Marco; Musio, Maurizio

    2003-04-01

    In the past, the optical component market was driven mainly by performance. Today, as the number of competitors has drastically increased, system integrators have a wide range of possible suppliers and solutions, allowing them to focus more on cost and on footprint reduction. So, while performance is still essential, low cost and small form factor (SFF) are becoming ever more crucial in selecting components. Another evolution in the market is the current request from optical system companies to simplify the supply chain in order to reduce assembly and testing steps at the system level. This corresponds to a growing demand for subassemblies, modules or hybrid integrated components: integration will therefore be another area in which optical component companies compete to gain market share. As several examples from the electronics market show, combining low cost and SFF is a very challenging task, but integration can help in achieving both. In this work we present how these issues can be approached, giving examples of advanced solutions applied to LiNbO3 modulators. In particular, we describe progress on automation, new materials and low-cost fabrication methods for the parts. We also introduce an approach to integrating optical and electrical functionality on LiNbO3 modulators, with the RF driver, bias control loop, attenuator and photodiode integrated in a single device.

  16. A review of the surface features and properties, surfactant adsorption and floatability of four key minerals of diasporic bauxite resources.

    Science.gov (United States)

    Zhang, Ningning; Nguyen, Anh V; Zhou, Changchun

    2018-04-01

    Diasporic bauxite represents one of the major aluminum resources. Its upgrading for further processing involves a separation of diaspore (the valuable mineral) from aluminosilicates (the gangue minerals) such as kaolinite, illite, and pyrophyllite. Flotation is one of the most effective ways to realize the upgrading. Since flotation is a physicochemical process based on the difference in the surface hydrophobicity of different components, determining the adsorption characteristics of various flotation surfactants on the mineral surfaces is critical. The surfactant adsorption properties of the minerals, in turn, are controlled by the surface chemistry of the minerals, while the latter is related to the mineral crystal structures. In this paper, we first discuss the crystal structures of the four key minerals of diaspore, kaolinite, illite, and pyrophyllite as well as the broken bonds on their exposed surfaces after grinding. Next, we summarize the surface chemistry properties such as surface wettability and surface electrical properties of the four minerals, and the differences in these properties are explained from the perspective of mineral crystal structures. Then we review the adsorption mechanism and adsorption characteristics of surfactants such as collectors (cationic, anionic, and mixed surfactants), depressants (inorganic and organic), dispersants, and flocculants on these mineral surfaces. The separation of diaspore and aluminosilicates by direct flotation and reverse flotation is reviewed, and the collecting properties of different types of collectors are compared. Furthermore, the abnormal behavior of the cationic flotation of kaolinite is also explained. This review provides strong theoretical support for the optimization of the upgrading of diaspore bauxite ore by flotation and the early industrialization of the reverse flotation process. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Feature network models for proximity data : statistical inference, model selection, network representations and links with related models

    NARCIS (Netherlands)

    Frank, Laurence Emmanuelle

    2006-01-01

    Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor

  18. Riparian erosion vulnerability model based on environmental features.

    Science.gov (United States)

    Botero-Acosta, Alejandra; Chu, Maria L; Guzman, Jorge A; Starks, Patrick J; Moriasi, Daniel N

    2017-12-01

    Riparian erosion is one of the major causes of sediment and contaminant load to streams, degradation of riparian wildlife habitats, and land loss hazards. Land and soil management practices are implemented as conservation and restoration measures to mitigate the environmental problems brought about by riparian erosion. This, however, requires the identification of areas vulnerable to soil erosion. Because of the complex interactions between the different mechanisms that govern soil erosion and the inherent uncertainties involved in quantifying these processes, assessing erosion vulnerability at the watershed scale is challenging. The main objective of this study was to develop a methodology to identify areas along the riparian zone that are susceptible to erosion. The methodology was developed by integrating the physically-based watershed model MIKE-SHE, to simulate water movement, and a habitat suitability model, MaxEnt, to quantify the probability of presence of elevation changes (i.e., erosion) across the watershed. The presence of elevation changes was estimated based on two LiDAR-based elevation datasets taken in 2009 and 2012. The changes in elevation were grouped into four categories: low (0.5 - 0.7 m), medium (0.7 - 1.0 m), high (1.0 - 1.7 m) and very high (1.7 - 5.9 m), considering each category as a studied "species". The categories' locations were then used as the "species location" map in MaxEnt. The environmental features used as constraints to the presence of erosion were land cover, soil, stream power index, overland flow, lateral inflow, and discharge. The modeling framework was evaluated in the Fort Cobb Reservoir Experimental watershed in south-central Oklahoma. Results showed that the most vulnerable areas for erosion were located at the upper riparian zones of the Cobb and Lake sub-watersheds. The main waterways of these sub-watersheds were also found to be prone to streambank erosion. Approximately 80% of the riparian zone (streambank

  19. Key features for more successful place-based sustainability research on social-ecological systems: a Programme on Ecosystem Change and Society (PECS) perspective

    Directory of Open Access Journals (Sweden)

    Patricia Balvanera

    2017-03-01

    Full Text Available The emerging discipline of sustainability science is focused explicitly on the dynamic interactions between nature and society and is committed to research that spans multiple scales and can support transitions toward greater sustainability. Because a growing body of place-based social-ecological sustainability research (PBSESR) has emerged in recent decades, there is a growing need to understand better how to maximize the effectiveness of this work. The Programme on Ecosystem Change and Society (PECS) provides a unique opportunity for synthesizing insights gained from this research community on key features that may contribute to the relative success of PBSESR. We surveyed the leaders of PECS-affiliated projects using a combination of open, closed, and semistructured questions to identify which features of a research project are perceived to contribute to successful research design and implementation. We assessed six types of research features: problem orientation, research team, and contextual, conceptual, methodological, and evaluative features. We examined the desirable and undesirable aspects of each feature, the enabling factors and obstacles associated with project implementation, and asked respondents to assess the performance of their own projects in relation to these features. Responses were obtained from 25 projects working in 42 social-ecological study cases within 25 countries. Factors that contribute to the overall success of PBSESR included: explicitly addressing integrated social-ecological systems; a focus on solution- and transformation-oriented research; adaptation of studies to their local context; trusted, long-term, and frequent engagement with stakeholders and partners; and an early definition of the purpose and scope of research. Factors that hindered the success of PBSESR included: the complexities inherent to social-ecological systems, the imposition of particular epistemologies and methods on the wider research group

  20. Projecting biodiversity and wood production in future forest landscapes: 15 key modeling considerations.

    Science.gov (United States)

    Felton, Adam; Ranius, Thomas; Roberge, Jean-Michel; Öhman, Karin; Lämås, Tomas; Hynynen, Jari; Juutinen, Artti; Mönkkönen, Mikko; Nilsson, Urban; Lundmark, Tomas; Nordin, Annika

    2017-07-15

    A variety of modeling approaches can be used to project the future development of forest systems, and help to assess the implications of different management alternatives for biodiversity and ecosystem services. This diversity of approaches does however present both an opportunity and an obstacle for those trying to decide which modeling technique to apply and how to interpret the management implications of model output. Furthermore, the breadth of issues relevant to addressing key questions related to forest ecology, conservation biology, silviculture, and economics requires insights stemming from a number of distinct scientific disciplines. As forest planners, conservation ecologists, ecological economists and silviculturalists, experienced with modeling trade-offs and synergies between biodiversity and wood biomass production, we identified fifteen key considerations relevant to assessing the pros and cons of alternative modeling approaches. Specifically, we identified key considerations linked to study question formulation, modeling forest dynamics, forest processes, study landscapes, spatial and temporal aspects, and the key response metrics - biodiversity and wood biomass production, as well as dealing with trade-offs and uncertainties. We also provide illustrative examples from the modeling literature stemming from the key considerations assessed. We use our findings to reiterate the need for explicitly addressing and conveying the limitations and uncertainties of any modeling approach taken, and the need for interdisciplinary research efforts when addressing the conservation of biodiversity and sustainable use of environmental resources. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. The SSB-positive/SSA-negative antibody profile is not associated with key phenotypic features of Sjögren's syndrome

    DEFF Research Database (Denmark)

    Baer, Alan N; McAdams DeMarco, Mara; Shiboski, Stephen C

    2015-01-01

    phenotypic features. Among SICCA participants classified with SS on the basis of the American-European Consensus Group or American College of Rheumatology criteria, only 2% required the anti-SSB-alone test result to meet these criteria. CONCLUSIONS: The presence of anti-SSB, without anti-SSA antibodies, had...... participants, 2061 (63%) had negative anti-SSA/anti-SSB, 1162 (35%) had anti-SSA with or without anti-SSB, and 74 (2%) anti-SSB alone. Key SS phenotypic features were more prevalent and had measures indicative of greater disease activity in those participants with anti-SSA, either alone or with anti-SSB, than...... in those with anti-SSB alone or negative SSA/SSB serology. These between-group differences were highly significant and not explained by confounding by age, race/ethnicity or gender. Participants with anti-SSB alone were comparable to those with negative SSA/SSB serology in their association with these key...

  2. Replication of surface features from a master model to an amorphous metallic article

    Science.gov (United States)

    Johnson, William L.; Bakke, Eric; Peker, Atakan

    1999-01-01

    The surface features of an article are replicated by preparing a master model having a preselected surface feature thereon which is to be replicated, and replicating the preselected surface feature of the master model. The replication is accomplished by providing a piece of a bulk-solidifying amorphous metallic alloy, contacting the piece of the bulk-solidifying amorphous metallic alloy to the surface of the master model at an elevated replication temperature to transfer a negative copy of the preselected surface feature of the master model to the piece, and separating the piece having the negative copy of the preselected surface feature from the master model.

  3. Application of the Value Optimization Model of Key Factors Based on DSEM

    Directory of Open Access Journals (Sweden)

    Chao Su

    2016-01-01

    Full Text Available The key factors of the damping solvent extraction method (DSEM) for the analysis of the unbounded medium are the size of the bounded domain, the artificial damping ratio, and the finite element mesh density. To control the simulation accuracy and computational efficiency of the soil-structure interaction, this study establishes a value optimization model of key factors composed of the design variables, the objective function, and the constraint function system. The optimum solutions of the key factors are then obtained from the optimization model. Comparisons of the results obtained under different initial conditions show that the value optimization model of key factors is feasible for governing simulation accuracy and computational efficiency and for analyzing practical unbounded medium-structure interaction.

  4. Using different classification models in wheat grading utilizing visual features

    Science.gov (United States)

    Basati, Zahra; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-04-01

    Wheat is one of the most important strategic crops in Iran and in the world. The major component that distinguishes wheat from other grains is the gluten section. In Iran, sunn pest is one of the most important factors influencing the characteristics of wheat gluten and removing it from a balanced state. The existence of bug-damaged grains in wheat will reduce the quality and price of the product. In addition, damaged grains reduce the enrichment of wheat and the quality of bread products. In this study, after preprocessing and segmentation of images, 25 features including 9 colour features, 10 morphological features, and 6 textural statistical features were extracted so as to classify healthy and bug-damaged wheat grains of the Azar cultivar at four levels of moisture content (9, 11.5, 14 and 16.5% w.b.) and two lighting colours (yellow light, and a combination of yellow and white lights). Using feature selection methods in the WEKA software and the CfsSubsetEval evaluator, 11 features were chosen as inputs to artificial neural network, decision tree and discriminant analysis classifiers. The results showed that the decision tree with the J.48 algorithm had the highest classification accuracy, at 90.20%. This was followed by the artificial neural network classifier, with a topology of 11-19-2, and the discriminant analysis classifier, at 87.46% and 81.81%, respectively.
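
    A hedged sketch of the classification step, with synthetic data standing in for the 11 selected colour, morphological and textural features; scikit-learn's decision tree and MLP stand in for WEKA's J.48 and the 11-19-2 neural network reported above.

        # Compare a decision tree and a small neural network on synthetic stand-ins
        # for the 11 selected features of healthy vs bug-damaged grains.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(5)
        n, d = 600, 11
        X_healthy = rng.normal(0.0, 1.0, (n // 2, d))
        X_damaged = rng.normal(0.8, 1.0, (n // 2, d))     # damaged grains shifted in feature space
        X = np.vstack([X_healthy, X_damaged])
        y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                          ("neural net", MLPClassifier(hidden_layer_sizes=(19,), max_iter=2000, random_state=0))]:
            acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
            print(f"{name}: accuracy = {acc:.2%}")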

  5. Key features of wave energy.

    Science.gov (United States)

    Rainey, R C T

    2012-01-28

    For a weak point source or dipole, or a small body operating as either, we show that the power from a wave energy converter (WEC) is the product of the particle velocity in the waves, and the wave force (suitably defined). There is thus a strong analogy with a wind or tidal turbine, where the power is the product of the fluid velocity through the turbine, and the force on it. As a first approximation, the cost of a structure is controlled by the force it has to carry, which governs its strength, and the distance it has to be carried, which governs its size. Thus, WECs are at a disadvantage compared with wind and tidal turbines because the fluid velocities are lower, and hence the forces are higher. On the other hand, the distances involved are lower. As with turbines, the implication is also that a WEC must make the most of its force-carrying ability: ideally, to carry its maximum force all the time, the '100% sweating WEC'. It must be able to limit the wave force on it in larger waves, ultimately becoming near-transparent to them in the survival condition, just like a turbine in extreme conditions, which can stop and feather its blades. A turbine of any force rating can achieve its maximum force in low wind speeds, if its diameter is sufficiently large. This is not possible with a simple monopole or dipole WEC, however, because of the 'nλ/2π' capture width limits. To achieve reasonable 'sweating' in typical wave climates, the force is limited to about 1 MN for a monopole device, or 2 MN for a dipole. The conclusion is that the future of wave energy is in devices that are not simple monopoles or dipoles, but multi-body devices or other shapes equivalent to arrays.
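
    A back-of-envelope illustration of the capture-width argument, with an assumed sea state rather than figures from the paper: the incident power per metre of wave crest multiplied by the λ/2π capture width bounds what a simple monopole device can absorb.

        # Assumed sea state (Hs, Te chosen for illustration): incident wave power flux
        # per metre of crest, deep-water wavelength, and the monopole capture width.
        import math

        rho, g = 1025.0, 9.81        # sea water density (kg/m^3), gravity (m/s^2)
        Hs, Te = 2.0, 8.0            # significant wave height (m), energy period (s)

        flux = rho * g**2 * Hs**2 * Te / (64 * math.pi)   # W per metre of crest
        wavelength = g * Te**2 / (2 * math.pi)            # deep-water wavelength (m)
        capture_width = wavelength / (2 * math.pi)        # monopole limit (m)

        print(f"incident flux  ≈ {flux/1e3:.1f} kW/m")
        print(f"capture width  ≈ {capture_width:.1f} m")
        print(f"monopole bound ≈ {flux * capture_width / 1e3:.0f} kW")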

  6. A Novel DBN Feature Fusion Model for Cross-Corpus Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Zou Cairong

    2016-01-01

    Full Text Available Fusing features from separate sources is a key technical difficulty in cross-corpus speech emotion recognition. The purpose of this paper is to use Deep Belief Nets (DBN), a deep learning technique, to exploit the emotional information hidden in the speech spectrum diagram (spectrogram) as image features and then fuse these with the traditional emotion features. First, based on spectrogram analysis with the STB/Itti model, new spectrogram features are extracted from the color, the brightness, and the orientation, respectively; then two alternative DBN models fuse the traditional and the spectrogram features, which increases the scale of the feature subset and its ability to characterize emotion. In experiments on the ABC database and Chinese corpora, the new feature subset, compared with traditional speech emotion features, improves the cross-corpus recognition result distinctly, by 8.8%. The proposed method provides a new idea for feature fusion in emotion recognition.
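
    A hedged sketch of feature-level fusion on synthetic data: spectrogram-derived features are concatenated with traditional acoustic features and fed to a single classifier. A scikit-learn MLP stands in for the DBN, and the feature dimensions are assumptions for illustration.

        # Compare a classifier trained on traditional features alone with one trained
        # on the concatenation (fusion) of traditional and spectrogram-derived features.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n, n_classes = 300, 4
        y = rng.integers(0, n_classes, n)
        traditional = rng.normal(0, 1, (n, 20)) + y[:, None] * 0.3   # e.g. pitch/energy/MFCC statistics
        spectrogram = rng.normal(0, 1, (n, 30)) + y[:, None] * 0.3   # colour/brightness/orientation maps

        fused = np.hstack([traditional, spectrogram])
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
        for name, X in [("traditional only", traditional), ("fused", fused)]:
            print(name, f"accuracy = {cross_val_score(clf, X, y, cv=3).mean():.2%}")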

  7. Characteristics of evolving models of care for arthritis: A key informant study

    Directory of Open Access Journals (Sweden)

    Veinot Paula

    2008-07-01

    Full Text Available Abstract Background The burden of arthritis is increasing in the face of diminishing health human resources to deliver care. In response, innovative models of care delivery are developing to facilitate access to quality care. Most models have developed in response to local needs with limited evaluation. The primary objective of this study is to (a) examine the range of models of care that deliver specialist services using a medical/surgical specialist and at least one other health care provider and (b) document the strengths and challenges of the identified models. A secondary objective is to identify key elements of best practice models of care for arthritis. Methods Semi-structured interviews were conducted with a sample of key informants with expertise in arthritis from jurisdictions with primarily publicly-funded health care systems. Qualitative data were analyzed using a constant comparative approach to identify common types of models of care, strengths and challenges of models, and key components of arthritis care. Results Seventy-four key informants were interviewed from six countries. Five main types of models of care emerged. (1) Specialized arthritis programs deliver comprehensive, multidisciplinary team care for arthritis. Two models were identified using health care providers (e.g. nurses or physiotherapists) in expanded clinical roles: (2) triage of patients with musculoskeletal conditions to the appropriate services including specialists; and (3) ongoing management in collaboration with a specialist. Two models promoting rural access were (4) rural consultation support and (5) telemedicine. Key informants described important components of models of care including knowledgeable health professionals and patients. Conclusion A range of models of care for arthritis have been developed. This classification can be used as a framework for discussing care delivery. Areas for development include integration of care across the continuum, including primary

  8. Local and regional energy companies offering energy services: Key activities and implications for the business model

    International Nuclear Information System (INIS)

    Kindström, Daniel; Ottosson, Mikael

    2016-01-01

    Highlights: • Many companies providing energy services are experiencing difficulties. • This research identifies key activities for the provision of energy services. • Findings are aggregated to the business-model level providing managerial insights. • This research identifies two different business model innovation paths. • Energy companies may need to renew parts of, or the entire, business model. - Abstract: Energy services play a key role in increasing energy efficiency in the industry. The key actors in these services are the local and regional energy companies that are increasingly implementing energy services as part of their market offering and developing service portfolios. Although expectations for energy services have been high, progress has so far been limited, and many companies offering energy services, including energy companies, are experiencing difficulties in implementing energy services and providing them to the market. Overall, this research examines what is needed for local and regional energy companies to successfully implement energy services (and consequently provide them to the market). In doing this, a two-stage process is used: first, we identify key activities for the successful implementation of energy services, and second, we aggregate the findings to the business model level. This research demonstrates that to succeed in implementing energy services, an energy company may need to renew parts or all of its existing product-based business model, formulate a new business model, or develop coexisting multiple business models. By discussing two distinct business model innovation processes, this research demonstrates that there can be different paths to success.

  9. Grotoco@SLAM: Second Language Acquisition Modeling with Simple Features, Learners and Task-wise Models

    DEFF Research Database (Denmark)

    Klerke, Sigrid; Martínez Alonso, Héctor; Plank, Barbara

    2018-01-01

    We present our submission to the 2018 Duolingo Shared Task on Second Language Acquisition Modeling (SLAM). We focus on evaluating a range of features for the task, including user-derived measures, while examining how far we can get with a simple linear classifier. Our analysis reveals that errors...

  10. Password-only authenticated three-party key exchange with provable security in the standard model.

    Science.gov (United States)

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho

    2014-01-01

    Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.

  11. Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model

    Directory of Open Access Journals (Sweden)

    Junghyun Nam

    2014-01-01

    Full Text Available Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.

  12. Operational Details of the Five Domains Model and Its Key Applications to the Assessment and Management of Animal Welfare

    Science.gov (United States)

    Mellor, David J.

    2017-01-01

    Simple Summary The Five Domains Model is a focusing device to facilitate systematic, structured, comprehensive and coherent assessment of animal welfare; it is not a definition of animal welfare, nor is it intended to be an accurate representation of body structure and function. The purpose of each of the five domains is to draw attention to areas that are relevant to both animal welfare assessment and management. This paper begins by briefly describing the major features of the Model and the operational interactions between the five domains, and then it details seven interacting applications of the Model. These underlie its utility and increasing application to welfare assessment and management in diverse animal use sectors. Abstract In accord with contemporary animal welfare science understanding, the Five Domains Model has a significant focus on subjective experiences, known as affects, which collectively contribute to an animal’s overall welfare state. Operationally, the focus of the Model is on the presence or absence of various internal physical/functional states and external circumstances that give rise to welfare-relevant negative and/or positive mental experiences, i.e., affects. The internal states and external circumstances of animals are evaluated systematically by referring to each of the first four domains of the Model, designated “Nutrition”, “Environment”, “Health” and “Behaviour”. Then affects, considered carefully and cautiously to be generated by factors in these domains, are accumulated into the fifth domain, designated “Mental State”. The scientific foundations of this operational procedure, published in detail elsewhere, are described briefly here, and then seven key ways the Model may be applied to the assessment and management of animal welfare are considered. These applications have the following beneficial objectives—they (1) specify key general foci for animal welfare management; (2) highlight the foundations of

  13. Structural conceptual models of water-conducting features at Aespoe

    International Nuclear Information System (INIS)

    Bossart, P.; Mazurek, M.; Hermansson, Jan

    1998-01-01

    Within the framework of the Fracture Classification and Characterization Project (FCC), water conducting features (WCF) in the Aespoe tunnel system and on the surface of Aespoe Island are being characterized over a range of scales. The larger-scale hierarchies of WCF are mostly constituted of fault arrays, i.e. brittle structures that accommodated episodes of shear strain. The smaller-scale WCF (contained within blocks 1 m. Structural evidence indicates that the fractures within the TRUE-1 block constitute an interconnected system with a pronounced anisotropy

  14. Cluster regression model and level fluctuation features of Van Lake, Turkey

    Directory of Open Access Journals (Sweden)

    Z. Şen

    1999-02-01

    Full Text Available Lake water levels change under the influences of natural and/or anthropogenic environmental conditions. Among these influences are climate change, greenhouse effects and ozone layer depletion, which are reflected in the hydrological cycle features over the lake drainage basins. Lake levels are among the most significant hydrological variables that are influenced by different atmospheric and environmental conditions. Consequently, lake level time series in many parts of the world include nonstationarity components such as shifts in the mean value and apparent or hidden periodicities. On the other hand, many lake level modeling techniques have a stationarity assumption. The main purpose of this work is to develop a cluster regression model for dealing with nonstationarity, especially in the form of shifting means. The basis of this model is the combination of transition probability and classical regression technique. Both parts of the model are applied to monthly level fluctuations of Lake Van in eastern Turkey. It is observed that the cluster regression procedure preserves the statistical properties and transitional probabilities, which are indistinguishable from those of the original data. Key words: Hydrology (hydrologic budget; stochastic processes) · Meteorology and atmospheric dynamics (ocean-atmosphere interactions)
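
    A minimal sketch of the cluster-plus-regression idea on a synthetic level series (not the Lake Van data): assign each month to a level regime, estimate the transition probabilities between regimes, and fit a separate lag-one regression within each regime.

        # Cluster-regression sketch: regime assignment by level quantiles, a regime
        # transition probability matrix, and a per-regime lag-1 regression.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 360
        levels = np.cumsum(rng.normal(0, 1, n)) + 100     # synthetic monthly levels with drift

        n_regimes = 3
        edges = np.quantile(levels, [1/3, 2/3])
        regime = np.digitize(levels, edges)               # 0 = low, 1 = mid, 2 = high

        T = np.zeros((n_regimes, n_regimes))              # transitions between successive months
        for a, b in zip(regime[:-1], regime[1:]):
            T[a, b] += 1
        T /= T.sum(axis=1, keepdims=True)

        for r in range(n_regimes):                        # level(t) = intercept + slope * level(t-1)
            idx = np.where(regime[:-1] == r)[0]
            slope, intercept = np.polyfit(levels[idx], levels[idx + 1], 1)
            print(f"regime {r}: slope={slope:.2f}, intercept={intercept:.1f}, transitions={np.round(T[r], 2)}")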

  16. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Science.gov (United States)

    Quek, Francis

    2004-12-01

    The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to "whole gesture" recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  17. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Directory of Open Access Journals (Sweden)

    Francis Quek

    2004-09-01

    Full Text Available The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to “whole gesture” recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  18. Features of Balance Model Development of Exclave Region

    Directory of Open Access Journals (Sweden)

    Timur Rustamovich Gareev

    2015-06-01

    Full Text Available In this article, the authors build a balance model for an exclave region. The aim of the work is to explore the unique properties of exclaves in order to evaluate the possibility of developing a more complex model of a regional economy. Exclaves are unusual phenomena in both theoretical and applied regional economics. There is a lack of comparative models, so exclaves are typically quite challenging to study. At the same time, exclaves produce better statistics, which allows more careful consideration of cross-regional economic flows. The authors discuss methodologies of model-based regional development forecasting. They analyze the balance approach both at the more general level of regional governance and individually, using specific territories as examples. They thereby identify and explain the need to develop balance-approach models fitted to the special needs of certain territories. By combining regional modeling for an exclave with traditional balance and simulation-based methods and an event-based approach, they arrive at a more detailed model of a regional economy. Taking one Russian exclave as an example, the authors have developed an event-based simulation model of long-term sustainability. The article provides the general characteristics of the model and describes its components and simulation algorithm. The approach introduced here combines traditional balance models with the peculiarities of an exclave region to develop a holistic regional economy model (with the Kaliningrad region serving as an example). It is important to underline that the resulting model helps to evaluate the degree of influence of preferential economic regimes (such as the Free Customs Zone) on the economy of a region.

  19. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....

  20. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2014-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....

  1. The relationship between the key elements of Donabedian's conceptual model within the field of assistive technology

    DEFF Research Database (Denmark)

    Sund, Terje; Iwarsson, Susanne; Brandt, Åse

    2015-01-01

    Previous research has suggested that there is a relationship between the three key components of Donabedian's conceptual model for quality assessments: structure of care, process, and outcome of care. That is, structure predicted both process and outcome of care, and better processes predict better...

  2. Valuing snorkeling visits to the Florida Keys with stated and revealed preference models

    Science.gov (United States)

    Timothy Park; J. Michael Bowker; Vernon R. Leeworthy

    2002-01-01

    Coastal coral reefs, especially in the Florida Keys, are declining at a disturbing rate. Marine ecologists and reef scientists have emphasized the importance of establishing nonmarket values of coral reefs to assess the cost effectiveness of coral reef management and remediation programs. The purpose of this paper is to develop a travel cost--contingent valuation model...
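
    A hedged sketch of a count-data travel cost model on synthetic data (not the Florida Keys survey): trips are regressed on travel cost, and the standard result that per-trip consumer surplus is approximately -1 divided by the cost coefficient gives a nonmarket value estimate. All parameter values below are assumptions for illustration.

        # Poisson travel cost model: trips ~ exp(b0 + b_cost * travel_cost + b_inc * income),
        # with per-trip consumer surplus approximated by -1 / b_cost.
        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(8)
        n = 500
        travel_cost = rng.uniform(20, 300, n)             # dollars per trip
        income = rng.uniform(30, 120, n)                  # thousands of dollars
        true_beta_cost = -0.01
        mu = np.exp(1.5 + true_beta_cost * travel_cost + 0.005 * income)
        trips = rng.poisson(mu)

        X = np.column_stack([travel_cost, income])
        model = PoissonRegressor(alpha=1e-6, max_iter=1000).fit(X, trips)
        beta_cost = model.coef_[0]
        print(f"estimated cost coefficient: {beta_cost:.4f}")
        print(f"consumer surplus per trip ≈ ${-1 / beta_cost:.0f}")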

  3. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    Science.gov (United States)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics refers to pattern recognition systems used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task, and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement of ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and the texture of a specified object simultaneously. In this paper we give more details about the two algorithms and present experimental results testing their performance on one dataset of faces. We found that ASM is faster and yields more accurate feature point locations than AAM, but AAM achieves a better match to the texture.
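
    A minimal sketch of one building block of ASM, the similarity (Procrustes) alignment of a landmark shape to a reference shape; a full ASM or AAM additionally needs a point distribution model (PCA over aligned shapes) and an iterative image search, which are not shown.

        # Procrustes (similarity) alignment of a 2D landmark shape onto a reference.
        import numpy as np

        def align(shape, ref):
            """Translate, scale and rotate `shape` (n x 2) onto `ref` (n x 2)."""
            s = shape - shape.mean(axis=0)
            r = ref - ref.mean(axis=0)
            u, _, vt = np.linalg.svd(r.T @ s)
            rot = u @ vt                                  # optimal rotation (transposed form)
            scale = np.trace(rot @ s.T @ r) / np.trace(s.T @ s)
            return scale * s @ rot.T + ref.mean(axis=0)

        rng = np.random.default_rng(9)
        ref = rng.normal(0, 1, (68, 2))                   # reference landmark shape
        theta = 0.4
        R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        shape = 1.7 * ref @ R.T + np.array([5.0, -3.0]) + rng.normal(0, 0.01, (68, 2))

        aligned = align(shape, ref)
        print("residual after alignment:", np.linalg.norm(aligned - ref))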

  4. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    checking (BMC) and inductive reasoning, it is verified that the generated model instance satisfies the generated safety properties. Using this method, we are able to verify the safety properties for model instances corresponding to railway networks of industrial size. Experiments show that BMC is also...

  5. A feature-based approach to modeling protein-protein interaction hot spots.

    Science.gov (United States)

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-05-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π-related interactions, especially π···π interactions.
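
    A hedged sketch of the pipeline on synthetic data (not the authors' interface-residue dataset or their actual features): rank candidate features with a decision tree, keep the top-ranked ones, and train an SVM to separate hot spots from other interface residues.

        # Decision-tree feature ranking followed by an SVM classifier.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(10)
        n, d = 400, 54                                    # residues, candidate features
        X = rng.normal(0, 1, (n, d))
        # Let three synthetic features (standing in for packing density, buried area,
        # hydrophobicity) carry the signal.
        w = np.zeros(d); w[:3] = [1.5, 1.0, 0.8]
        y = (X @ w + rng.normal(0, 1, n)) > 1.0

        tree = DecisionTreeClassifier(random_state=0).fit(X, y)
        top = np.argsort(tree.feature_importances_)[::-1][:10]   # keep the top-ranked features (count chosen arbitrarily)
        svm = SVC(kernel="rbf", C=1.0)
        print("selected feature indices:", sorted(top.tolist()))
        print(f"cross-validated accuracy: {cross_val_score(svm, X[:, top], y, cv=5).mean():.2%}")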

  6. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    The positional error of line segments is usually described using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of confidence level on the error indicator, we introduce union entropy theory and propose an entropy error ellipse index for points, then extend it to line segments and polygons, establishing an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error indices can be determined uniquely, are not influenced by the confidence level, and are suitable for describing the positional uncertainty of planar geometry features.
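
    A worked example with assumed covariance values: the differential entropy of a bivariate normal positional error, H = ½ ln((2πe)² |Σ|), is the confidence-level-free scalar that an entropy-based error index can build on.

        # Differential entropy of a bivariate normal positional error (assumed covariance).
        import numpy as np

        sigma_x, sigma_y, rho = 0.30, 0.20, 0.4           # metres, metres, correlation
        cov = np.array([[sigma_x**2, rho * sigma_x * sigma_y],
                        [rho * sigma_x * sigma_y, sigma_y**2]])

        # H = 0.5 * ln((2*pi*e)^2 * |Sigma|), independent of any confidence-level choice.
        H = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))
        print(f"|Sigma| = {np.linalg.det(cov):.6f} m^4, entropy H = {H:.3f} nats")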

  7. Nine key principles to guide youth mental health: development of service models in New South Wales.

    Science.gov (United States)

    Howe, Deborah; Batchelor, Samantha; Coates, Dominiek; Cashman, Emma

    2014-05-01

    Historically, the Australian health system has failed to meet the needs of young people with mental health problems and mental illness. In 2006, New South Wales (NSW) Health allocated considerable funds to the reform agenda of mental health services in NSW to address this inadequacy. Children and Young People's Mental Health (CYPMH), a service that provides mental health care for young people aged 12-24 years, with moderate to severe mental health problems, was chosen to establish a prototype Youth Mental Health (YMH) Service Model for NSW. This paper describes nine key principles developed by CYPMH to guide the development of YMH Service Models in NSW. A literature review, numerous stakeholder consultations and consideration of clinical best practice were utilized to inform the development of the key principles. Subsequent to their development, the nine key principles were formally endorsed by the Mental Health Program Council to ensure consistency and monitor the progress of YMH services across NSW. As a result, between 2008 and 2012 YMH Services across NSW regularly reported on their activities against each of the nine key principles demonstrating how each principle was addressed within their service. The nine key principles provide mental health services a framework for how to reorient services to accommodate YMH and provide a high-quality model of care. [Corrections added on 29 November 2013, after first online publication: The last two sentences of the Results section have been replaced with "As a result, between 2008 and 2012 YMH Services across NSW regularly reported on their activities against each of the nine key principles demonstrating how each principle was addressed within their service."]. © 2013 Wiley Publishing Asia Pty Ltd.

  8. A Web-Based Data Collection Platform for Multisite Randomized Behavioral Intervention Trials: Development, Key Software Features, and Results of a User Survey.

    Science.gov (United States)

    Modi, Riddhi A; Mugavero, Michael J; Amico, Rivet K; Keruly, Jeanne; Quinlivan, Evelyn Byrd; Crane, Heidi M; Guzman, Alfredo; Zinski, Anne; Montue, Solange; Roytburd, Katya; Church, Anna; Willig, James H

    2017-06-16

    Meticulous tracking of study data must begin early in the study recruitment phase and must account for regulatory compliance, minimize missing data, and provide high information integrity and/or reduction of errors. In behavioral intervention trials, participants typically complete several study procedures at different time points. Among HIV-infected patients, behavioral interventions can favorably affect health outcomes. In order to empower newly diagnosed HIV positive individuals to learn skills to enhance retention in HIV care, we developed the behavioral health intervention Integrating ENGagement and Adherence Goals upon Entry (iENGAGE) funded by the National Institute of Allergy and Infectious Diseases (NIAID), where we deployed an in-clinic behavioral health intervention in 4 urban HIV outpatient clinics in the United States. To scale our intervention strategy homogenously across sites, we developed software that would function as a behavioral sciences research platform. This manuscript aimed to: (1) describe the design and implementation of a Web-based software application to facilitate deployment of a multisite behavioral science intervention; and (2) report on results of a survey to capture end-user perspectives of the impact of this platform on the conduct of a behavioral intervention trial. In order to support the implementation of the NIAID-funded trial iENGAGE, we developed software to deploy a 4-site behavioral intervention for new clinic patients with HIV/AIDS. We integrated the study coordinator into the informatics team to participate in the software development process. Here, we report the key software features and the results of the 25-item survey to evaluate user perspectives on research and intervention activities specific to the iENGAGE trial (N=13). The key features addressed are study enrollment, participant randomization, real-time data collection, facilitation of longitudinal workflow, reporting, and reusability. We found 100% user

  9. Main features of the proposed NCRP respiratory tract model

    International Nuclear Information System (INIS)

    Phalen, R.F.; Fisher, G.L.; Moss, O.R.; Schlesinger, R.B.; Swift, D.L.

    1991-01-01

    The proposed NCRP respiratory tract dosimetry model regions include the naso-oro-pharyngo-laryngeal (NOPL), the tracheobronchial (TB), the pulmonary (P), and the lymph nodes (LN). Input aerosol concentrations are derived from a consideration of particle-size-dependent inspirability. Particle deposition in the respiratory tract is modelled using the mechanisms of inertial impaction, sedimentation and diffusion. The rates of absorption of particles, and transport to the blood, have been derived from clearance data from people and laboratory animals. The effect of body growth on particle deposition is considered. Particle clearance rates are assumed to be independent of age. The proposed respiratory tract model differs significantly from the 1966 Task Group Model in that (1) inspirability is considered; (2) new sub-regions of the respiratory tract are considered; (3) absorption of materials by the blood is treated in a more sophisticated fashion; and (4) body size (and thus age) is taken into account. (author)

  10. Modelling the cognitive and neuropathological features of schizophrenia with phencyclidine.

    Science.gov (United States)

    Reynolds, Gavin P; Neill, Joanna C

    2016-11-01

    Here, Reynolds and Neill describe the studies that preceded and followed publication of this paper, which reported a deficit in parvalbumin (PV), a calcium-binding protein found in GABA interneurons known to be reduced in schizophrenia patients, in conjunction with a deficit in reversal learning in an animal model for schizophrenia. This publication resulted from common research interests: Reynolds in the neurotransmitter pathology of schizophrenia, and Neill in developing animal models for schizophrenia symptomatology. The animal model, using a sub-chronic dosing regimen (sc) with the non-competitive NMDA receptor antagonist PCP (phencyclidine), evolved from previous work in rats (for PCP) and primates (for cognition). The hypothesis of a PV deficit came from emerging evidence for a GABAergic dysfunction in schizophrenia, in particular a deficit in PV-containing GABA interneurons. Since this original publication, a PV deficit has been identified in other animal models for schizophrenia, and the PV field has expanded considerably. This includes mechanistic work attempting to identify the link between oxidative stress and GABAergic dysfunction using this scPCP model, and assessment of the potential of the PV neuron as a target for new antipsychotic drugs. The latter has included development of a molecule targeting KV3.1 channels located on PV-containing GABA interneurons which can restore both PV expression and cognitive deficits in the scPCP model. © The Author(s) 2016.

  11. Key Feature of the Catalytic Cycle of TNF-α Converting Enzyme Involves Communication Between Distal Protein Sites and the Enzyme Catalytic Core

    International Nuclear Information System (INIS)

    Solomon, A.; Akabayov, B.; Frenkel, A.; Millas, M.; Sagi, I.

    2007-01-01

    Despite their key roles in many normal and pathological processes, the molecular details by which zinc-dependent proteases hydrolyze their physiological substrates remain elusive. Advanced theoretical analyses have suggested reaction models for which there is limited and controversial experimental evidence. Here we report the structure, chemistry and lifetime of transient metal-protein reaction intermediates evolving during the substrate turnover reaction of a metalloproteinase, the tumor necrosis factor-α converting enzyme (TACE). TACE controls multiple signal transduction pathways through the proteolytic release of the extracellular domain of a host of membrane-bound factors and receptors. Using stopped-flow x-ray spectroscopy methods together with transient kinetic analyses, we demonstrate that TACE's catalytic zinc ion undergoes dynamic charge transitions before substrate binding to the metal ion. This indicates previously undescribed communication pathways taking place between distal protein sites and the enzyme catalytic core. The observed charge transitions are synchronized with distinct phases in the reaction kinetics and changes in metal coordination chemistry mediated by the binding of the peptide substrate to the catalytic metal ion and product release. Here we report key local charge transitions critical for proteolysis as well as long sought evidence for the proposed reaction model of peptide hydrolysis. This study provides a general approach for gaining critical insights into the molecular basis of substrate recognition and turnover by zinc metalloproteinases that may be used for drug design

  12. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

    Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer via utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a
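
    The evaluation step described here, an SVM scored with leave-one-patient-out cross-validation, can be sketched with scikit-learn as follows. This is an illustration only, not the authors' pipeline: the feature vectors and labels below are synthetic stand-ins for MP-MRI texture features and tissue-sample labels.

        # Illustrative sketch: leave-one-patient-out cross-validation of an SVM
        # trained on per-sample texture features (synthetic data).
        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Hypothetical data: 20 patients, 200 tissue samples each, 12 features.
        n_patients, n_per_patient, n_features = 20, 200, 12
        X = rng.normal(size=(n_patients * n_per_patient, n_features))
        y = rng.integers(0, 2, size=n_patients * n_per_patient)
        groups = np.repeat(np.arange(n_patients), n_per_patient)  # patient ID per sample

        # Each fold holds out all samples of one patient, so no patient
        # contributes to both training and testing.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(model, X, y, cv=LeaveOneGroupOut(), groups=groups)
        print("per-patient accuracies:", np.round(scores, 2))
        print("mean accuracy:", scores.mean())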

  13. Correlation between clinical and histological features in a pig model of choroidal neovascularization

    DEFF Research Database (Denmark)

    Lassota, Nathan; Kiilgaard, Jens Folke; Prause, Jan Ulrik

    2006-01-01

    To analyse the histological changes in the retina and the choroid in a pig model of choroidal neovascularization (CNV) and to correlate these findings with fundus photographic and fluorescein angiographic features.

  14. Improvements and new features in the IRI-2000 model

    International Nuclear Information System (INIS)

    Bilitza, D.

    2002-01-01

    This paper describes the changes that were implemented in the new version of the COSPAR/URSI International Reference Ionosphere (IRI-2000). These changes are: (1) two new options for the electron density in the D-region, (2) a better functional description of the electron density in the E-F merging region, (3) inclusion of the F1 layer occurrence probability as a new parameter, (4) a new model for the bottomside parameters B0 and B1 that greatly improves the representation at low and equatorial latitudes during high solar activities, (5) inclusion of a model for foF2 storm-time updating, (6) a new option for the electron temperature in the topside ionosphere, and (7) inclusion of a model for the equatorial F region ion drift. The main purpose of this paper is to provide the IRI users with examples of the effects of these changes. (author)

  15. Structural and Molecular Modeling Features of P2X Receptors

    Directory of Open Access Journals (Sweden)

    Luiz Anastacio Alves

    2014-03-01

    Currently, adenosine 5'-triphosphate (ATP) is recognized as the extracellular messenger that acts through P2 receptors. P2 receptors are divided into two subtypes: P2Y metabotropic receptors and P2X ionotropic receptors, both of which are found in virtually all mammalian cell types studied. Due to the difficulty in studying membrane protein structures by X-ray crystallography or NMR techniques, there is little information about these structures available in the literature. Two structures of the P2X4 receptor in truncated form have been solved by crystallography. Molecular modeling has proven to be an excellent tool for studying ionotropic receptors. Recently, modeling studies carried out on P2X receptors have advanced our knowledge of the P2X receptor structure-function relationships. This review presents a brief history of ion channel structural studies and shows how modeling approaches can be used to address relevant questions about P2X receptors.

  16. The features of modelling semiconductor lasers with a wide contact

    Directory of Open Access Journals (Sweden)

    Rzhanov Alexey

    2017-01-01

    The aspects of calculating the dynamics and statics of the radiation of high-power semiconductor laser diodes are investigated. The analysis takes into account the main physical mechanisms influencing the power, spectral composition, and far and near field of the laser radiation. The paper outlines a dynamic distributed model of a semiconductor laser with a wide contact and possible algorithms for its implementation.

  17. Features of optical modeling in educational and scientific activity ...

    African Journals Online (AJOL)

    The article discusses the functionality of existing software for the modeling, analysis and optimization of lighting systems and optical elements, with which their design stage can be fully automated. The use of these programs is illustrated with examples from scientific work and the educational activity of ...

  18. Energy Demand Modeling Methodology of Key State Transitions of Turning Processes

    Directory of Open Access Journals (Sweden)

    Shun Jia

    2017-04-01

    Energy demand modeling of machining processes is the foundation of energy optimization. The energy demand of machining state transitions is integral to the energy requirements of the machining process. However, research focusing on energy modeling of state transitions is scarce. To fill this gap, an energy demand modeling methodology for key state transitions of the turning process is proposed. The establishment of an energy demand model of state transitions could improve the accuracy of the energy model of the machining process, which also provides an accurate model and reliable data for energy optimization of the machining process. Finally, case studies were conducted on a CK6153i CNC lathe, the results demonstrating that predictive accuracy with the proposed method is generally above 90% for the state transition cases.

  19. Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.

    Science.gov (United States)

    Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S

    2018-01-01

    In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.
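
    For readers unfamiliar with the partitioned survival approach mentioned here, the following toy sketch shows its core accounting step: state occupancy is read directly from survival curves rather than from transition probabilities. The PFS/OS curves, utilities and discount rate are invented for illustration and are not drawn from the paper.

        # Minimal sketch of a three-state partitioned survival model.
        # Occupancy is read from (hypothetical) PFS and OS curves:
        #   progression-free = PFS(t); progressed = OS(t) - PFS(t); dead = 1 - OS(t).
        import numpy as np

        t = np.arange(0, 120)                      # monthly cycles over 10 years
        pfs = np.exp(-0.08 * t)                    # assumed progression-free survival
        os_ = np.exp(-0.04 * t)                    # assumed overall survival
        progressed = np.clip(os_ - pfs, 0, None)   # partitioned-survival assumption
        dead = 1.0 - os_

        u_pf, u_prog = 0.80, 0.60                  # assumed health-state utilities
        discount = 1.0 / (1.03 ** (t / 12.0))      # 3% annual discounting

        qalys = np.sum((u_pf * pfs + u_prog * progressed) * discount) / 12.0
        life_years = np.sum(os_ * discount) / 12.0
        print(f"discounted life-years: {life_years:.2f}, QALYs: {qalys:.2f}")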

  20. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP
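
    The modeling step described above, an L1-penalised (LASSO) logistic regression compared against a mean-lung-dose-only baseline by AUC, can be sketched as follows. The data are synthetic and the column standing in for MLD is an assumption; this is not the authors' code.

        # Sketch (synthetic data): mean-lung-dose-only logistic model versus an
        # L1-penalised (LASSO) logistic model on a wider feature set, both
        # scored by cross-validated AUC.
        import numpy as np
        from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n, p = 198, 60                       # 198 patients, 60 candidate predictors
        X = rng.normal(size=(n, p))          # column 0 plays the role of MLD
        y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)

        baseline = make_pipeline(StandardScaler(), LogisticRegression())
        auc_mld = cross_val_score(baseline, X[:, [0]], y, cv=5, scoring="roc_auc").mean()

        lasso = make_pipeline(
            StandardScaler(),
            LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5,
                                 scoring="roc_auc"),
        )
        auc_lasso = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()
        print(f"AUC (MLD only): {auc_mld:.3f}  AUC (LASSO, all features): {auc_lasso:.3f}")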

  1. Body Dysmorphic Disorder: Neurobiological Features and an Updated Model

    Science.gov (United States)

    Li, Wei; Arienzo, Donatello; Feusner, Jamie D.

    2013-01-01

    Body Dysmorphic Disorder (BDD) affects approximately 2% of the population and involves misperceived defects of appearance along with obsessive preoccupation and compulsive behaviors. There is evidence of neurobiological abnormalities associated with symptoms in BDD, although research to date is still limited. This review covers the latest neuropsychological, genetic, neurochemical, psychophysical, and neuroimaging studies and synthesizes these findings into an updated (yet still preliminary) neurobiological model of the pathophysiology of BDD. We propose a model in which visual perceptual abnormalities, along with frontostriatal and limbic system dysfunction, may combine to contribute to the symptoms of impaired insight and obsessive thoughts and compulsive behaviors expressed in BDD. Further research is necessary to gain a greater understanding of the etiological formation of BDD symptoms and their evolution over time. PMID:25419211

  2. Main features of nucleation in model solutions of oral cavity

    Science.gov (United States)

    Golovanova, O. A.; Chikanova, E. S.; Punin, Yu. O.

    2015-05-01

    The regularities of nucleation in model solutions of the oral cavity have been investigated, and the induction order and constants have been determined for two systems: saliva and dental plaque fluid (DPF). It is shown that an increase in the initial supersaturation leads to a transition from the heterogeneous nucleation of crystallites to a homogeneous one. Some additives are found to enhance nucleation: HCO3- > C6H12O6 > F-, while others hinder this process: protein (casein) > Mg2+. It is established that crystallization in DPF occurs more rapidly and that the DPF composition is favorable for the growth of small (52.6-26.1 μm) crystallites. On the contrary, the conditions implemented in the model saliva solution facilitate the formation of larger (198.4-41.8 μm) crystals.

  3. Boosting the discriminative power of color models for feature detection

    Science.gov (United States)

    Stokman, Harro M. G.; Gevers, Theo

    2005-01-01

    We consider the well-known problem of segmenting a color image into foreground and background pixels. Such a result can be obtained by segmenting the red, green and blue channels directly. Alternatively, the result may be obtained through the transformation of the color image into other color spaces, such as HSV or normalized colors. The problem then is how to select the color space or color channel that produces the best segmentation result. Furthermore, if more than one channel is an equally good candidate, the next problem is how to combine the results. In this article, we investigate whether the principles of Markowitz's (1952) formal model of diversification can be applied to solve the problem. We verify, in theory and in practice, that the proposed diversification model can be applied effectively to determine the most appropriate combination of color spaces for the application at hand.
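
    A minimal sketch of the underlying idea, under the assumption that each channel's foreground score can be treated as a noisy "asset" whose error covariance is estimable: minimum-variance (Markowitz-style) weights then combine the channels. The data, noise levels and the use of ground truth for the covariance estimate are all invented for illustration and are not the authors' formulation.

        # Markowitz-style channel combination: estimate the covariance C of the
        # channel errors and weight channels by w = C^{-1} 1 / (1^T C^{-1} 1).
        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical per-pixel foreground scores from three color channels,
        # each a noisy estimate of the same underlying segmentation.
        n_pixels = 10_000
        truth = rng.random(n_pixels)
        scores = np.stack([truth + rng.normal(scale=s, size=n_pixels)
                           for s in (0.05, 0.15, 0.30)])      # channel noise levels

        errors = scores - truth                                # in practice: residuals
        C = np.cov(errors)                                     # channel error covariance
        ones = np.ones(C.shape[0])
        w = np.linalg.solve(C, ones)
        w /= ones @ w                                          # minimum-variance weights

        combined = w @ scores
        print("weights:", np.round(w, 3))
        print("per-channel error std:", np.round(errors.std(axis=1), 3))
        print("combined error std:   ", round(float((combined - truth).std()), 3))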

  4. Feature Fusion Based Audio-Visual Speaker Identification Using Hidden Markov Model under Different Lighting Variations

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    The aim of the paper is to propose a feature-fusion-based Audio-Visual Speaker Identification (AVSI) system under varied illumination conditions. Among the different fusion strategies, feature-level fusion has been used for the proposed AVSI system, where a Hidden Markov Model (HMM) is used for learning and classification. Since the feature set contains richer information about the raw biometric data than any other level, integration at the feature level is expected to provide better authentication results. In this paper, Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to form the audio feature vectors, and Active Shape Model (ASM)-based appearance and shape facial features are concatenated to form the visual feature vectors. These combined audio and visual features are used for the feature fusion. To reduce the dimension of the audio and visual feature vectors, the Principal Component Analysis (PCA) method is used. The VALID audio-visual database, in which four different illumination levels are considered, is used to measure the performance of the proposed system. Experimental results demonstrate the significance of the proposed audio-visual speaker identification system with various combinations of audio and visual features.
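
    A minimal sketch of this pipeline class (feature-level fusion, PCA, one Gaussian HMM per speaker, identification by maximum log-likelihood), using the hmmlearn and scikit-learn libraries. The synthetic frames stand in for MFCC/LPCC and ASM features; this is not the authors' implementation.

        # Feature-level fusion + PCA + per-speaker Gaussian HMMs (synthetic data).
        import numpy as np
        from hmmlearn.hmm import GaussianHMM
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        n_frames = 300

        def make_utterance(offset):
            audio = rng.normal(loc=offset, size=(n_frames, 24))   # MFCC+LPCC stand-in
            visual = rng.normal(loc=-offset, size=(n_frames, 16)) # ASM stand-in
            return np.hstack([audio, visual])                     # feature-level fusion

        speakers = {sid: make_utterance(offset) for sid, offset in
                    [("spk0", 0.0), ("spk1", 0.8), ("spk2", -0.8)]}

        # Fit PCA on pooled training frames, then one HMM per enrolled speaker.
        pca = PCA(n_components=10).fit(np.vstack(list(speakers.values())))
        models = {}
        for sid, frames in speakers.items():
            models[sid] = GaussianHMM(n_components=3, covariance_type="diag",
                                      n_iter=20, random_state=0).fit(pca.transform(frames))

        # Identify a test utterance by the highest log-likelihood model.
        test = pca.transform(make_utterance(0.8))
        scores = {sid: m.score(test) for sid, m in models.items()}
        print("identified speaker:", max(scores, key=scores.get))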

  5. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

    In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  6. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
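
    The technique generalises readily: an ideal gap (a rectangular dip) convolved with a Gaussian point-spread function has a closed form in error functions, so the gap width can be recovered by nonlinear least squares. The sketch below uses invented numbers, not the original radiograph data.

        # Fit a blurred rectangular-gap model to a noisy 1D scan line and read
        # the gap width off the fitted parameters.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def gap_profile(x, base, depth, center, width, sigma):
            """Rectangular dip of given width/depth, blurred by a Gaussian PSF."""
            a, b = center - width / 2.0, center + width / 2.0
            rect = 0.5 * (erf((x - a) / (sigma * np.sqrt(2)))
                          - erf((x - b) / (sigma * np.sqrt(2))))
            return base - depth * rect

        # Synthetic "averaged scan line": true gap width 0.8, heavy noise.
        rng = np.random.default_rng(4)
        x = np.linspace(-5, 5, 400)
        y = gap_profile(x, base=1.0, depth=0.4, center=0.1, width=0.8, sigma=0.3)
        y += rng.normal(scale=0.05, size=x.size)

        p0 = [1.0, 0.3, 0.0, 1.0, 0.5]                   # rough initial guess
        popt, _ = curve_fit(gap_profile, x, y, p0=p0)
        print(f"estimated gap width: {popt[3]:.3f} (true 0.800)")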

  7. A decision support model for identification and prioritization of key performance indicators in the logistics industry

    OpenAIRE

    Kucukaltan, Berk; Irani, Zahir; Aktas, Emel

    2016-01-01

    Performance measurement of logistics companies is based upon various performance indicators. Yet, in the logistics industry, there are several ambiguities, such as deciding on key indicators and determining the interrelationships between performance indicators. In order to resolve these ambiguities, this paper first presents the stakeholder-informed Balanced Scorecard (BSC) model, by incorporating financial (e.g. cost) and non-financial (e.g. social media) performance indicators, with a comprehen...

  8. Key factors regulating the mass delivery of macromolecules to model cell membranes

    DEFF Research Database (Denmark)

    Campbell, Richard A.; Watkins, Erik B.; Jagalski, Vivien

    2014-01-01

    We show that both gravity and electrostatics are key factors regulating interactions between model cell membranes and self-assembled liquid crystalline aggregates of dendrimers and phospholipids. The system is a proxy for the trafficking of reservoirs of therapeutic drugs to cell membranes for slow...... of the aggregates to activate endocytosis pathways on specific cell types is discussed in the context of targeted drug delivery applications....

  9. An Investigation of Feature Models for Music Genre Classification using the Support Vector Classifier

    DEFF Research Database (Denmark)

    Meng, Anders; Shawe-Taylor, John

    2005-01-01

    In music genre classification the decision time is typically of the order of several seconds; however, most automatic music genre classification systems focus on short-time features derived from 10-50 ms. This work investigates two models, the multivariate Gaussian model and the multivariate autoregressive model, for modelling short-time features. Furthermore, it was investigated how these models can be integrated over a segment of short-time features into a kernel such that a support vector machine can be applied. Two kernels with this property were considered, the convolution kernel and the product probability kernel. In order to examine the different methods, an 11-genre music setup was utilized. In this setup the Mel Frequency Cepstral Coefficients (MFCC) were used as short-time features. The accuracy of the best performing model on this data set was 44%, as compared to a human performance of 52%.

  10. Gravitational wave background from Standard Model physics: qualitative features

    International Nuclear Information System (INIS)

    Ghiglieri, J.; Laine, M.

    2015-01-01

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  11. Identification of key residues for protein conformational transition using elastic network model.

    Science.gov (United States)

    Su, Ji Guo; Xu, Xian Jin; Li, Chun Hua; Chen, Wei Zu; Wang, Cun Xin

    2011-11-07

    Proteins usually undergo conformational transitions between structurally disparate states to fulfill their functions. The large-scale allosteric conformational transitions are believed to involve some key residues that mediate the conformational movements between different regions of the protein. In the present work, a thermodynamic method based on the elastic network model is proposed to predict the key residues involved in protein conformational transitions. In our method, the key functional sites are identified as the residues whose perturbations largely influence the free energy difference between the protein states before and after transition. Two proteins, nucleotide binding domain of the heat shock protein 70 and human/rat DNA polymerase β, are used as case studies to identify the critical residues responsible for their open-closed conformational transitions. The results show that the functionally important residues mainly locate at the following regions for these two proteins: (1) the bridging point at the interface between the subdomains that control the opening and closure of the binding cleft; (2) the hinge region between different subdomains, which mediates the cooperative motions between the corresponding subdomains; and (3) the substrate binding sites. The similarity in the positions of the key residues for these two proteins may indicate a common mechanism in their conformational transitions.
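
    The elastic-network machinery that such a thermodynamic analysis builds on can be sketched in a few lines: a Kirchhoff (connectivity) matrix built from C-alpha contacts, whose pseudo-inverse yields residue mean-square fluctuations. The free-energy perturbation step of the paper is not reproduced here, and the coordinates below are random stand-ins for a PDB structure.

        # Gaussian network model sketch: contact-based Kirchhoff matrix and
        # residue fluctuations from its pseudo-inverse.
        import numpy as np

        rng = np.random.default_rng(5)
        coords = rng.uniform(0, 30, size=(100, 3))     # stand-in C-alpha coordinates (Å)
        cutoff = 7.0                                   # typical GNM contact cutoff (Å)

        # Kirchhoff matrix: -1 for residue pairs within the cutoff, degrees on diagonal.
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
        kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)

        # Pseudo-inverse (drops the zero mode); diagonal ~ mean-square fluctuations.
        gamma_inv = np.linalg.pinv(kirchhoff)
        msf = np.diag(gamma_inv)

        # Residues with the lowest fluctuations often act as hinges/mediators;
        # a perturbation analysis would probe how each residue shifts the
        # open/closed free-energy difference.
        print("most constrained residues:", np.argsort(msf)[:5])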

  12. Toward a model for lexical access based on acoustic landmarks and distinctive features

    Science.gov (United States)

    Stevens, Kenneth N.

    2002-04-01

    This article describes a model in which the acoustic speech signal is processed to yield a discrete representation of the speech stream in terms of a sequence of segments, each of which is described by a set (or bundle) of binary distinctive features. These distinctive features specify the phonemic contrasts that are used in the language, such that a change in the value of a feature can potentially generate a new word. This model is a part of a more general model that derives a word sequence from this feature representation, the words being represented in a lexicon by sequences of feature bundles. The processing of the signal proceeds in three steps: (1) Detection of peaks, valleys, and discontinuities in particular frequency ranges of the signal leads to identification of acoustic landmarks. The type of landmark provides evidence for a subset of distinctive features called articulator-free features (e.g., [vowel], [consonant], [continuant]). (2) Acoustic parameters are derived from the signal near the landmarks to provide evidence for the actions of particular articulators, and acoustic cues are extracted by sampling selected attributes of these parameters in these regions. The selection of cues that are extracted depends on the type of landmark and on the environment in which it occurs. (3) The cues obtained in step (2) are combined, taking context into account, to provide estimates of ``articulator-bound'' features associated with each landmark (e.g., [lips], [high], [nasal]). These articulator-bound features, combined with the articulator-free features in (1), constitute the sequence of feature bundles that forms the output of the model. Examples of cues that are used, and justification for this selection, are given, as well as examples of the process of inferring the underlying features for a segment when there is variability in the signal due to enhancement gestures (recruited by a speaker to make a contrast more salient) or due to overlap of gestures from

  13. Impact of SLA assimilation in the Sicily Channel Regional Model: model skills and mesoscale features

    Directory of Open Access Journals (Sweden)

    A. Olita

    2012-07-01

    The impact of the assimilation of MyOcean sea level anomaly along-track data on the analyses of the Sicily Channel Regional Model was studied. The numerical model has a resolution of 1/32° and is capable of reproducing mesoscale and sub-mesoscale features. The impact of the SLA assimilation is studied by comparing a simulation (SIM), which does not assimilate data, with an analysis (AN) assimilating SLA along-track multi-mission data produced in the framework of the MyOcean project. The quality of the analysis was evaluated by computing the RMSE of the misfits between the analysis background and observations (sea level before assimilation). A qualitative evaluation of the ability of the analyses to reproduce mesoscale structures is accomplished by comparing model results with ocean colour and SST satellite data, which are able to detect such features on the ocean surface. CTD profiles allowed us to evaluate the impact of the SLA assimilation along the water column. We found a significant improvement for the AN solution in terms of SLA RMSE with respect to SIM (the averaged RMSE of AN SLA misfits over 2 years is about 0.5 cm smaller than that of SIM). Comparison with CTD data shows a questionable improvement produced by the assimilation process in terms of vertical features: AN is better in temperature, while for salinity it gets worse than SIM at the surface. This suggests that a better a-priori description of the vertical error covariances would be desirable. The qualitative comparison of the simulation and analyses with synoptic satellite independent data proves that SLA assimilation allows some dynamical features (above all the circulation in the Ionian portion of the domain) and mesoscale structures, otherwise misplaced or neglected by SIM, to be correctly reproduced. Such mesoscale changes also imply that the eddy momentum fluxes (i.e. Reynolds stresses) show major changes in the Ionian area. Changes in Reynolds stresses reflect a different pumping of eastward momentum from the eddy to

  14. Key Factors Influencing the Energy Absorption of Dual-Phase Steels: Multiscale Material Model Approach and Microstructural Optimization

    Science.gov (United States)

    Belgasam, Tarek M.; Zbib, Hussein M.

    2018-06-01

    The increase in the use of dual-phase (DP) steel grades by vehicle manufacturers to enhance crash resistance and reduce car body weight requires the development of a clear understanding of the effect of various microstructural parameters on the energy absorption in these materials. Accordingly, DP steelmakers are interested in predicting the effect of various microscopic factors as well as optimizing microstructural properties for application in crash-relevant components of vehicle bodies. This study presents a microstructure-based approach using a multiscale material and structure model. In this approach, Digimat and LS-DYNA software were coupled and employed to provide a full micro-macro multiscale material model, which is then used to simulate tensile tests. Microstructures with varied ferrite grain sizes, martensite volume fractions, and carbon content in DP steels were studied. The impact of these microstructural features at different strain rates on the energy absorption characteristics of DP steels is investigated numerically using an elasto-viscoplastic constitutive model. The model is implemented in a multiscale finite-element framework. A comprehensive statistical parametric study using response surface methodology is performed to determine the optimum microstructural features for a required tensile toughness at different strain rates. The simulation results are validated using experimental data found in the literature. The developed methodology proved to be effective for investigating the influence and interaction of key microscopic properties on the energy absorption characteristics of DP steels. Furthermore, it is shown that this method can be used to identify optimum microstructural conditions at different strain-rate conditions.

  15. Coupling process-based models and plant architectural models: A key issue for simulating crop production

    NARCIS (Netherlands)

    Reffye, de P.; Heuvelink, E.; Guo, Y.; Hu, B.G.; Zhang, B.G.

    2009-01-01

    Process-Based Models (PBMs) can successfully predict the impact of environmental factors (temperature, light, CO2, water and nutrients) on crop growth and yield. These models are used widely for yield prediction and optimization of water and nutrient supplies. Nevertheless, PBMs do not consider

  16. A feature-based approach to modeling protein–protein interaction hot spots

    Science.gov (United States)

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-01-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π–related interactions, especially π · · · π interactions. PMID:19273533
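
    The two-stage scheme described (feature selection with a decision tree, then an SVM on the selected subset) can be sketched as follows with synthetic residue features; the feature count of 54 matches the abstract, but everything else is invented.

        # Decision-tree feature ranking followed by an SVM on the selected subset.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(6)
        n_residues, n_features = 400, 54          # 54 candidate features per residue
        X = rng.normal(size=(n_residues, n_features))
        y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] + rng.normal(size=n_residues) > 0
             ).astype(int)                        # hot spot / non-hot spot label

        # Stage 1: decision-tree importances as a simple feature-selection proxy.
        tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
        top = np.argsort(tree.feature_importances_)[::-1][:8]
        print("selected feature indices:", sorted(top.tolist()))

        # Stage 2: SVM trained on the selected subset, assessed by cross-validation.
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        acc = cross_val_score(svm, X[:, top], y, cv=5).mean()
        print(f"cross-validated accuracy on selected features: {acc:.2f}")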

  17. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    International Nuclear Information System (INIS)

    J. J. Tappen

    2003-01-01

    The purpose of this revision of ''Evaluation of the Applicability of Biosphere-Related Features, Events, and Processes (FEPs)'' (BSC 2001) is to document the screening analysis of biosphere-related primary FEPs, as identified in ''The Development of Information Catalogued in REV00 of the YMP FEP Database'' (Freeze et al. 2001), in accordance with the requirements of the final U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR Part 63. This database is referred to as the Yucca Mountain Project (YMP) FEP Database throughout this document. Those biosphere-related primary FEPs that are screened as applicable will be used to develop the conceptual model portion of the biosphere model, which will in turn be used to develop the mathematical model portion of the biosphere model. As part of this revision, any reference to the screening guidance or criteria provided either by Dyer (1999) or by the proposed NRC regulations at 64 FR 8640 has been removed. The title of this revision has been changed to more accurately reflect the purpose of the analyses. In addition, this revision will address Item Numbers 19, 20, 21, 25, and 26 from Attachment 2 of ''U.S. Nuclear Regulatory Commission/U.S. Department of Energy Technical Exchange and Management Meeting on Total System Performance Assessment and Integration (August 6 through 10, 2001)'' (Reamer 2001). This Scientific Analysis Report (SAR) does not support the current revision to the YMP FEP Database (Freeze et al. 2001). Subsequent to the release of the YMP FEP Database (Freeze et al. 2001), a series of reviews was conducted on both the FEP processes used to support Total System Performance Assessment for Site Recommendation and to develop the YMP FEP Database. In response to observations and comments from these reviews, particularly the NRC/DOE TSPA Technical Exchange in August 2001 (Reamer 2001), several Key Technical Issue (KTI) Agreements were developed. ''The Enhanced Plan for Features, Events and Processes

  18. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    J. J. Tappen

    2003-02-16

    The purpose of this revision of ''Evaluation of the Applicability of Biosphere-Related Features, Events, and Processes (FEPs)'' (BSC 2001) is to document the screening analysis of biosphere-related primary FEPs, as identified in ''The Development of Information Catalogued in REV00 of the YMP FEP Database'' (Freeze et al. 2001), in accordance with the requirements of the final U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR Part 63. This database is referred to as the Yucca Mountain Project (YMP) FEP Database throughout this document. Those biosphere-related primary FEPs that are screened as applicable will be used to develop the conceptual model portion of the biosphere model, which will in turn be used to develop the mathematical model portion of the biosphere model. As part of this revision, any reference to the screening guidance or criteria provided either by Dyer (1999) or by the proposed NRC regulations at 64 FR 8640 has been removed. The title of this revision has been changed to more accurately reflect the purpose of the analyses. In addition, this revision will address Item Numbers 19, 20, 21, 25, and 26 from Attachment 2 of ''U.S. Nuclear Regulatory Commission/U.S. Department of Energy Technical Exchange and Management Meeting on Total System Performance Assessment and Integration (August 6 through 10, 2001)'' (Reamer 2001). This Scientific Analysis Report (SAR) does not support the current revision to the YMP FEP Database (Freeze et al. 2001). Subsequent to the release of the YMP FEP Database (Freeze et al. 2001), a series of reviews was conducted on both the FEP processes used to support Total System Performance Assessment for Site Recommendation and to develop the YMP FEP Database. In response to observations and comments from these reviews, particularly the NRC/DOE TSPA Technical Exchange in August 2001 (Reamer 2001), several Key Technical Issue (KTI) Agreements were developed

  19. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases and global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long-dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish the training, validation and test datasets, containing 72,532, 18,133, and 2679 mechanical patent documents, respectively, and then check the performance of the HFEM. Finally, we compare the results of the proposed HFEM with those of three other single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.
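
    A compact PyTorch sketch of this architecture class, a convolutional n-gram extractor feeding a BiLSTM with sigmoid outputs for multi-label prediction, is given below. Vocabulary size, dimensions and label count are invented, and this is not the authors' exact HFEM.

        # CNN n-gram features -> BiLSTM -> independent per-label probabilities.
        import torch
        import torch.nn as nn

        class CnnBiLstmClassifier(nn.Module):
            def __init__(self, vocab_size=30_000, emb_dim=128, n_filters=64,
                         kernel_size=3, hidden=128, n_labels=96):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
                self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
                self.bilstm = nn.LSTM(n_filters, hidden, batch_first=True,
                                      bidirectional=True)
                self.out = nn.Linear(2 * hidden, n_labels)

            def forward(self, token_ids):                     # (batch, seq_len)
                x = self.embed(token_ids)                     # (batch, seq_len, emb_dim)
                x = torch.relu(self.conv(x.transpose(1, 2)))  # local n-gram features
                x = x.transpose(1, 2)                         # (batch, seq_len, filters)
                _, (h, _) = self.bilstm(x)                    # long-range dependencies
                h = torch.cat([h[-2], h[-1]], dim=1)          # final fwd+bwd hidden states
                return torch.sigmoid(self.out(h))             # per-label probabilities

        model = CnnBiLstmClassifier()
        dummy_batch = torch.randint(1, 30_000, (4, 200))      # 4 patents, 200 tokens each
        print(model(dummy_batch).shape)                       # torch.Size([4, 96])
        # Training for multi-label targets would use nn.BCELoss on these outputs.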

  20. A model of biological neuron with terminal chaos and quantum-like features

    International Nuclear Information System (INIS)

    Conte, Elio; Pierri, GianPaolo; Federici, Antonio; Mendolicchio, Leonardo; Zbilut, Joseph P.

    2006-01-01

    A model of a biological neuron is proposed combining terminal dynamics with quantum-like mechanical features, assuming the spin to be an important entity in neurodynamics and, in particular, in synaptic transmission.

  1. Novel personalized pathway-based metabolomics models reveal key metabolic pathways for breast cancer diagnosis

    DEFF Research Database (Denmark)

    Huang, Sijia; Chong, Nicole; Lewis, Nathan

    2016-01-01

    diagnosis. We applied this method to predict breast cancer occurrence, in combination with correlation feature selection (CFS) and classification methods. Results: The resulting all-stage and early-stage diagnosis models are highly accurate in two sets of testing blood samples, with average AUCs (Area Under.......993. Moreover, important metabolic pathways, such as taurine and hypotaurine metabolism and the alanine, aspartate, and glutamate pathway, are revealed as critical biological pathways for early diagnosis of breast cancer. Conclusions: We have successfully developed a new type of pathway-based model to study...... metabolomics data for disease diagnosis. Applying this method to blood-based breast cancer metabolomics data, we have discovered crucial metabolic pathway signatures for breast cancer diagnosis, especially early diagnosis. Further, this modeling approach may be generalized to other omics data types for disease...

  2. The Assessment of Patient Clinical Outcome: Advantages, Models, Features of an Ideal Model

    Directory of Open Access Journals (Sweden)

    Mou’ath Hourani

    2016-06-01

    Background: The assessment of patient clinical outcome focuses on measuring various aspects of the health status of a patient who is under healthcare intervention. Patient clinical outcome assessment is a very significant process in the clinical field, as it allows health care professionals to better understand the effectiveness of their health care programs and thus to enhance health care quality in general. It is thus vital that a high-quality, informative review of current issues regarding the assessment of patient clinical outcome be conducted. Aims & Objectives: 1) summarizes the advantages of the assessment of patient clinical outcome; 2) reviews some of the existing patient clinical outcome assessment models, namely: Simulation, Markov, Bayesian belief networks, Bayesian statistics and conventional statistics, and Kaplan-Meier analysis models; and 3) demonstrates the desired features that should be fulfilled by a well-established ideal patient clinical outcome assessment model. Material & Methods: An integrative review of the literature has been performed using Google Scholar to explore the field of patient clinical outcome assessment. Conclusion: This paper will directly support researchers, clinicians and health care professionals in their understanding of developments in the domain of the assessment of patient clinical outcome, thus enabling them to propose ideal assessment models.

  4. Main modelling features of the ASTEC V2.1 major version

    International Nuclear Information System (INIS)

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC's strong link with the on-going EC CESAM FP7 project is emphasized. • Main remaining modelling issues (on which IRSN efforts are now directed) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the ASTEC worldwide community. The main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium-concrete interaction and source term evaluation. Moreover, this V2.1 version constitutes the backbone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in Severe Accident Management analysis of the Gen.II–III nuclear power plants presently under operation or foreseen in the near future in Europe. As part of this European project, IRSN efforts to continuously improve both code numerical robustness and computing performance at plant scale as well as users' tools are being intensified. Besides, ASTEC will continue to capitalise on the whole body of knowledge on severe accident phenomenology by progressively keeping physical models at the state of the art through regular feedback from the interpretation of the current and

  5. Multilevel binomial logistic prediction model for malignant pulmonary nodules based on texture features of CT image

    International Nuclear Information System (INIS)

    Wang Huan; Guo Xiuhua; Jia Zhongwei; Li Hongkai; Liang Zhigang; Li Kuncheng; He Qian

    2010-01-01

    Purpose: To introduce a multilevel binomial logistic prediction model-based computer-aided diagnostic (CAD) method for the diagnosis of small solitary pulmonary nodules (SPNs) that combines patient characteristics with textural features of the CT image. Materials and methods: Fourteen gray-level co-occurrence matrix textural features were obtained from 2171 benign and malignant small solitary pulmonary nodules belonging to 185 patients. A multilevel binomial logistic model is applied to gain these initial insights. Results: Five texture features (Inertia, Entropy, Correlation, Difference-mean, and Sum-Entropy) together with patient age show an aggregating character at the patient level and are statistically different (P < 0.05) between benign and malignant small solitary pulmonary nodules. Conclusion: Some gray-level co-occurrence matrix textural features are efficient descriptive features of the CT image of small solitary pulmonary nodules and, combined with patient-level characteristics, can to some extent benefit the diagnosis of early-stage lung cancer.
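
    The GLCM feature step can be sketched with scikit-image (the graycomatrix/graycoprops API of version 0.19 and later); a plain logistic regression stands in for the paper's multilevel (patient-level) model, and the nodule patches below are synthetic.

        # GLCM texture features + logistic regression (illustrative only).
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)

        def glcm_features(patch):
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=16, symmetric=True, normed=True)
            feats = [graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "correlation", "energy", "homogeneity")]
            p = glcm.astype(float)
            p /= p.sum(axis=(0, 1), keepdims=True)
            entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1)).mean()  # GLCM entropy
            return feats + [entropy]

        # Synthetic 16-level "CT patches"; class 1 patches are made slightly coarser.
        def make_patch(label):
            noise = rng.integers(0, 16, size=(32, 32))
            patch = noise if label == 0 else np.repeat(noise[:, ::2], 2, axis=1)
            return patch.astype(np.uint8)

        y = rng.integers(0, 2, size=200)
        X = np.array([glcm_features(make_patch(lbl)) for lbl in y])

        auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                              cv=5, scoring="roc_auc").mean()
        print(f"cross-validated AUC: {auc:.2f}")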

  6. Genetically engineered rat gliomas: PDGF-driven tumor initiation and progression in tv-a transgenic rats recreate key features of human brain cancer.

    Directory of Open Access Journals (Sweden)

    Nina P Connolly

    Previously, rodent preclinical research in gliomas frequently involved implantation of cell lines such as C6 and 9L into the rat brain. More recently, mouse models have taken over: the genetic manipulability of the mouse, allowing the creation of genetically accurate models, outweighed the disadvantage of its smaller brain size, which limits the time allowed for tumor progression. Here we illustrate a method that allows glioma formation in the rat using the replication-competent avian-like sarcoma (RCAS) virus / tumor virus receptor-A (tv-a) transgenic system of post-natal cell type-specific gene transfer. The RCAS/tv-a model has emerged as a particularly versatile and accurate modeling technology by enabling spatial, temporal, and cell type-specific control of individual gene transformations and providing de novo formed glial tumors with distinct molecular subtypes mirroring human GBM. Nestin promoter-driven tv-a (Ntv-a) transgenic Sprague-Dawley rat founder lines were created, and RCAS PDGFA and p53 shRNA constructs were used to initiate intracranial brain tumor formation. Tumor formation and progression were confirmed and visualized by magnetic resonance imaging (MRI) and spectroscopy. The tumors were analyzed using histopathological and immunofluorescent techniques. All experimental animals developed large, heterogeneous brain tumors that closely resembled human GBM. Median survival was 92 days from tumor initiation and 62 days from the first point of tumor visualization on MRI. Each tumor-bearing animal showed time-dependent evidence of malignant progression to high-grade glioma by MRI and neurological examination. Post-mortem tumor analysis demonstrated the presence of several key characteristics of human GBM, including high levels of tumor cell proliferation, pseudopalisading necrosis, microvascular proliferation, invasion of tumor cells into surrounding tissues, peri-tumoral reactive astrogliosis, lymphocyte infiltration, presence of numerous tumor

  7. Choosing preclinical study models of diabetic retinopathy: key problems for consideration

    Science.gov (United States)

    Mi, Xue-Song; Yuan, Ti-Fei; Ding, Yong; Zhong, Jing-Xiang; So, Kwok-Fai

    2014-01-01

    Diabetic retinopathy (DR) is the most common complication of diabetes mellitus in the eye. Although the clinical treatment for DR has already developed to a relative high level, there are still many urgent problems that need to be investigated in clinical and basic science. Currently, many in vivo animal models and in vitro culture systems have been applied to solve these problems. Many approaches have also been used to establish different DR models. However, till now, there has not been a single study model that can clearly and exactly mimic the developmental process of the human DR. Choosing the suitable model is important, not only for achieving our research goals smoothly, but also, to better match with different experimental proposals in the study. In this review, key problems for consideration in choosing study models of DR are discussed. These problems relate to clinical relevance, different approaches for establishing models, and choice of different species of animals as well as of the specific in vitro culture systems. Attending to these considerations will deepen the understanding on current study models and optimize the experimental design for the final goal of preventing DR. PMID:25429204

  8. Key Issues in Modeling of Complex 3D Structures from Video Sequences

    Directory of Open Access Journals (Sweden)

    Shengyong Chen

    2012-01-01

    Construction of three-dimensional structures from video sequences has wide applications for intelligent video analysis. This paper summarizes the key issues of the theory and surveys the recent advances in the state of the art. Reconstruction of a scene object from video sequences often relies on the basic principle of structure from motion with an uncalibrated camera. This paper lists the typical strategies and summarizes the typical solutions or algorithms for the modeling of complex three-dimensional structures. Difficult open problems are also suggested for further study.
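
    A minimal two-view structure-from-motion sketch with OpenCV illustrates the basic principle mentioned above: estimate the essential matrix from point correspondences, recover the relative pose, and triangulate. The intrinsics, scene points and second camera pose are synthetic stand-ins for features tracked across video frames.

        # Two-view reconstruction: essential matrix -> pose -> triangulation.
        import cv2
        import numpy as np

        rng = np.random.default_rng(8)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

        # Synthetic scene: 3D points seen by camera 1 (at the origin) and camera 2
        # (translated along x and slightly rotated about y).
        pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))
        angle = 0.1
        R = np.array([[np.cos(angle), 0, np.sin(angle)],
                      [0, 1, 0],
                      [-np.sin(angle), 0, np.cos(angle)]])
        t = np.array([[-0.5], [0.0], [0.0]])

        def project(P, X):
            x = (K @ P @ np.c_[X, np.ones(len(X))].T).T
            return (x[:, :2] / x[:, 2:]).astype(np.float64)

        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([R, t])
        pts1, pts2 = project(P1, pts3d), project(P2, pts3d)

        # Reconstruction from the point correspondences alone (translation up to scale).
        E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
        _, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)
        X_h = cv2.triangulatePoints(K @ P1, K @ np.hstack([R_est, t_est]), pts1.T, pts2.T)
        X = (X_h[:3] / X_h[3]).T
        print("recovered rotation matches the synthetic truth:",
              np.allclose(R_est, R, atol=1e-2))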

  9. Backup key generation model for one-time password security protocol

    Science.gov (United States)

    Jeyanthi, N.; Kundu, Sourav

    2017-11-01

    The use of one-time passwords (OTP) has breathed new life into the existing authentication protocols used by the software industry. It introduced a second layer of security to the traditional username-password authentication, thus coining the term two-factor authentication. One of the drawbacks of this protocol is the unreliability of the hardware token at the time of authentication. This paper proposes a simple backup key model that can be associated with a real-world application's user database, which would allow a user to circumvent the second authentication stage in the event of unavailability of the hardware token.
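
    A standard-library sketch of the idea: normal TOTP verification (per RFC 6238) with a fallback to single-use backup codes when the hardware token is unavailable. Storage, rate limiting and code distribution are deliberately simplified; this is not the paper's protocol.

        # TOTP second factor with a single-use backup-code fallback (stdlib only).
        import base64, hashlib, hmac, secrets, struct, time

        def totp(secret_b32, for_time=None, digits=6, period=30):
            key = base64.b32decode(secret_b32)
            counter = int((for_time if for_time is not None else time.time()) // period)
            digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = digest[-1] & 0x0F
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        def make_backup_codes(n=5):
            return {secrets.token_hex(4) for _ in range(n)}      # 8-hex-digit codes

        def second_factor_ok(submitted, secret_b32, backup_codes):
            if hmac.compare_digest(submitted, totp(secret_b32)):
                return True                                      # normal OTP path
            if submitted in backup_codes:
                backup_codes.discard(submitted)                  # single use only
                return True                                      # backup-key path
            return False

        secret = base64.b32encode(secrets.token_bytes(20)).decode()
        backups = make_backup_codes()
        print(second_factor_ok(totp(secret), secret, backups))          # True (OTP)
        print(second_factor_ok(next(iter(backups)), secret, backups))   # True (backup)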

  10. Key Process Uncertainties in Soil Carbon Dynamics: Comparing Multiple Model Structures and Observational Meta-analysis

    Science.gov (United States)

    Sulman, B. N.; Moore, J.; Averill, C.; Abramoff, R. Z.; Bradford, M.; Classen, A. T.; Hartman, M. D.; Kivlin, S. N.; Luo, Y.; Mayes, M. A.; Morrison, E. W.; Riley, W. J.; Salazar, A.; Schimel, J.; Sridhar, B.; Tang, J.; Wang, G.; Wieder, W. R.

    2016-12-01

    Soil carbon (C) dynamics are crucial to understanding and predicting C cycle responses to global change and soil C modeling is a key tool for understanding these dynamics. While first order model structures have historically dominated this area, a recent proliferation of alternative model structures representing different assumptions about microbial activity and mineral protection is providing new opportunities to explore process uncertainties related to soil C dynamics. We conducted idealized simulations of soil C responses to warming and litter addition using models from five research groups that incorporated different sets of assumptions about processes governing soil C decomposition and stabilization. We conducted a meta-analysis of published warming and C addition experiments for comparison with simulations. Assumptions related to mineral protection and microbial dynamics drove strong differences among models. In response to C additions, some models predicted long-term C accumulation while others predicted transient increases that were counteracted by accelerating decomposition. In experimental manipulations, doubling litter addition did not change soil C stocks in studies spanning as long as two decades. This result agreed with simulations from models with strong microbial growth responses and limited mineral sorption capacity. In observations, warming initially drove soil C loss via increased CO2 production, but in some studies soil C rebounded and increased over decadal time scales. In contrast, all models predicted sustained C losses under warming. The disagreement with experimental results could be explained by physiological or community-level acclimation, or by warming-related changes in plant growth. In addition to the role of microbial activity, assumptions related to mineral sorption and protected C played a key role in driving long-term model responses. In general, simulations were similar in their initial responses to perturbations but diverged over
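
    The first-order structural assumption being compared can be made concrete with a toy two-pool model with Q10 temperature sensitivity, run under control, warming and doubled-litter scenarios. All parameters are invented; microbial-explicit or mineral-protection models would replace the linear decay terms with nonlinear ones.

        # Toy two-pool first-order soil C model with Q10 warming response.
        from scipy.integrate import solve_ivp

        K_FAST, K_SLOW = 0.5, 0.02     # base decay rates (1/yr)
        Q10, T_REF = 2.0, 15.0         # temperature sensitivity
        TRANSFER = 0.3                 # fraction of fast-pool loss entering the slow pool

        def dcdt(t, c, litter, temp):
            fast, slow = c
            f = Q10 ** ((temp - T_REF) / 10.0)           # warming scales decomposition
            d_fast = litter - K_FAST * f * fast
            d_slow = TRANSFER * K_FAST * f * fast - K_SLOW * f * slow
            return [d_fast, d_slow]

        def run(litter, temp, years=50, c0=(2.0, 60.0)):
            sol = solve_ivp(dcdt, (0, years), c0, args=(litter, temp))
            return float(sol.y[:, -1].sum())              # total soil C after `years`

        print("control        :", round(run(litter=1.0, temp=15.0), 1))
        print("+2 degC warming:", round(run(litter=1.0, temp=17.0), 1))
        print("doubled litter :", round(run(litter=2.0, temp=15.0), 1))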

  11. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    Science.gov (United States)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel ship detection method that aims to make full use of both the spatial and spectral information in hyperspectral images is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing single features against different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method stably detects ships against complex backgrounds and effectively improves detection accuracy.
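
    A rough Python sketch of such a pipeline, assuming scikit-image and scikit-learn are available; the chip-based framing, band choices, and the placeholder names (chips, labels, all_pixels) are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.decomposition import PCA
        from sklearn.ensemble import RandomForestClassifier

        def sea_mask(nir_band):
            # Otsu threshold on a high-SNR near-infrared band separates land from sea.
            return nir_band < threshold_otsu(nir_band)

        def chip_features(chip, pca):
            # chip: (h, w, bands) hyperspectral patch around a candidate detection.
            spectral = pca.transform(chip.reshape(-1, chip.shape[-1])).mean(axis=0)
            gray = (255 * chip.mean(axis=-1) / chip.max()).astype(np.uint8)
            glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            texture = [graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "homogeneity", "energy", "correlation")]
            return np.concatenate([spectral, texture])

        # Training sketch: 'chips' and 'labels' (1 = ship, 0 = sea clutter) are assumed given.
        # pca = PCA(n_components=5).fit(all_pixels)                 # spectral features
        # X = np.array([chip_features(c, pca) for c in chips])
        # clf = RandomForestClassifier(n_estimators=200).fit(X, labels)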

  12. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  13. Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features

    Directory of Open Access Journals (Sweden)

    Sirinoot Boonsuk

    2014-01-01

    Full Text Available Spoken language recognition (SLR has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has become of greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in applying linguistic knowledge; it was only used as supplementary features, while the current state-of-the-art system assumes independency among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF model using phonological features (PFs. We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model was employed to capture the dynamics of the PFs sequences for language classification. Baseline systems were conducted to evaluate the features and methods including Gaussian mixture model (GMM based systems using PFs, GMM using cepstral features, and the CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems. Additionally, it showed comparable result with the acoustic system based on i-vector. This research demonstrates that utilizing PFs can enhance the performance.

  14. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    Science.gov (United States)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
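
    A compressed sketch of the emulator-plus-MCMC idea using NumPy and scikit-learn; the toy design matrix, the single summary statistic, and the uniform prior box are stand-ins for the UVic ESCM runs and the temperature and ocean heat content constraints described above, not the study's actual setup.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Assumed inputs: 'design' holds the parameter settings of the model runs,
        # 'sim_out' a scalar summary of each run, and 'obs', 'obs_sigma' the
        # corresponding observation and its uncertainty.
        rng = np.random.default_rng(0)
        design = rng.uniform(0.0, 1.0, size=(40, 3))            # placeholder design
        sim_out = design @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(40)
        obs, obs_sigma = 1.2, 0.2

        emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
        emulator.fit(design, sim_out)

        def log_posterior(theta):
            if np.any(theta < 0.0) or np.any(theta > 1.0):      # uniform prior box
                return -np.inf
            mean, std = emulator.predict(theta.reshape(1, -1), return_std=True)
            var = obs_sigma ** 2 + std[0] ** 2                   # data + emulator uncertainty
            return -0.5 * ((obs - mean[0]) ** 2 / var + np.log(2 * np.pi * var))

        # Random-walk Metropolis over the parameter space.
        theta = np.full(3, 0.5)
        chain, logp = [], log_posterior(theta)
        for _ in range(5000):
            prop = theta + 0.05 * rng.standard_normal(3)
            logp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < logp_prop - logp:
                theta, logp = prop, logp_prop
            chain.append(theta.copy())
        posterior = np.array(chain)                               # samples approximating the pdf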

  15. Feature selection model based on clustering and ranking in pipeline for microarray data

    Directory of Open Access Journals (Sweden)

    Barnali Sahu

    2017-01-01

    Full Text Available Most of the available feature selection techniques in the literature are classifier bound: the selected group of features is tied to the performance of a specific classifier, as in wrapper and hybrid approaches. Our objective in this study is to select a set of generic features not tied to any classifier, based on the proposed framework. This framework uses attribute clustering and feature ranking techniques in a pipeline in order to remove redundant features. Within each discovered cluster, the signal-to-noise ratio, t-statistics, and significance analysis of microarrays are applied independently to select the top-ranked features. Both filter and evolutionary wrapper approaches have been considered for feature selection, and the data set with the selected features is given to an ensemble of predefined, statistically different classifiers. The class labels of the test data are determined using a majority voting technique. Moreover, with the aforesaid objectives, this paper focuses on obtaining a stable result across various classification models. Further, a comparative analysis has been performed to study the classification accuracy and computational time of the current approach and of evolutionary wrapper techniques. The result gives better insight into the features and further enhances classification accuracy with less computational time.
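
    A small Python sketch of the cluster-then-rank idea followed by a majority-vote ensemble, using NumPy and scikit-learn; the k-means clustering of feature columns, the signal-to-noise score, and the particular classifiers are illustrative stand-ins for the attribute-clustering, ranking, and ensemble methods named above.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier

        def snr_score(x, y):
            # Signal-to-noise ratio of one feature for a two-class problem.
            m1, m0 = x[y == 1].mean(), x[y == 0].mean()
            s1, s0 = x[y == 1].std(), x[y == 0].std()
            return abs(m1 - m0) / (s1 + s0 + 1e-12)

        def cluster_and_rank(X, y, n_clusters=10, top_per_cluster=2):
            # Cluster features (columns) by their profiles, then keep the top-ranked
            # features of each cluster to reduce redundancy.
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X.T)
            selected = []
            for c in range(n_clusters):
                idx = np.where(labels == c)[0]
                scores = [snr_score(X[:, j], y) for j in idx]
                selected.extend(idx[np.argsort(scores)[::-1][:top_per_cluster]])
            return np.array(selected)

        def majority_vote(X_train, y_train, X_test):
            # Ensemble of statistically different classifiers; class labels (assumed to
            # be small non-negative integers) are decided by majority vote.
            clfs = [LogisticRegression(max_iter=1000), GaussianNB(), KNeighborsClassifier()]
            votes = np.array([c.fit(X_train, y_train).predict(X_test) for c in clfs])
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)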

  16. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  17. Key transmission parameters of an institutional outbreak during the 1918 influenza pandemic estimated by mathematical modelling

    Directory of Open Access Journals (Sweden)

    Nelson Peter

    2006-11-01

    Full Text Available Abstract Aim To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand, 1918). Methods Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered epidemiological model was solved numerically to find a range of best-fit estimates for key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by performing a convolution of the incidence distribution with a best-fit incidence-mortality lag distribution. Results Basic reproduction number (R0) values for three possible scenarios ranged between 1.3 and 3.1, and the corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days, respectively. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population. Conclusion These R0 estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged then it could potentially be controlled through the prompt use of major public health measures.
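
    For readers who want to experiment with the forward problem, the following is a minimal SEIR sketch in Python using SciPy; the camp size, initial conditions, and parameter values are illustrative placements within the ranges reported above, not the fitted values from the study.

        import numpy as np
        from scipy.integrate import odeint

        def seir(y, t, beta, sigma, gamma):
            # Susceptible-exposed-infectious-recovered dynamics for a closed camp population.
            s, e, i, r = y
            n = s + e + i + r
            return [-beta * s * i / n,
                    beta * s * i / n - sigma * e,
                    sigma * e - gamma * i,
                    gamma * i]

        # Illustrative values within the reported ranges: latent period 1/sigma ~ 1 day,
        # infectious period 1/gamma ~ 0.3 day, and R0 = beta / gamma ~ 2.
        sigma, gamma = 1.0, 1.0 / 0.3
        beta = 2.0 * gamma

        t = np.linspace(0, 30, 301)                      # days
        y0 = [7999, 0, 1, 0]                             # one initial case in a camp of 8000
        s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
        daily_incidence = sigma * e                      # new infectious cases per day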

  18. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  19. Development of generic key performance indicators for PMBOK® using a 3D project integration model

    Directory of Open Access Journals (Sweden)

    Craig Langston

    2013-12-01

    Full Text Available Since Martin Barnes’ so-called ‘iron triangle’ circa 1969, much debate has occurred over how best to describe the fundamental constraints that underpin project success. This paper develops a 3D project integration model for PMBOK® comprising core constraints of scope, cost, time and risk as a basis to propose six generic key performance indicators (KPIs) that articulate successful project delivery. These KPIs are defined as value, efficiency, speed, innovation, complexity and impact and can each be measured objectively as ratios of the core constraints. An overall KPI (denoted as s3/ctr) is also derived. The aim in this paper is to set out the case for such a model and to demonstrate how it can be employed to assess the performance of project teams in delivering successful outcomes at various stages in the project life cycle. As part of the model’s development, a new PMBOK® knowledge area concerning environmental management is advanced.
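
    As a purely illustrative sketch of the ratio idea in Python, the snippet below computes the overall indicator s3/ctr named in the abstract from scope, cost, time, and risk scores; the individual ratios shown are hypothetical assignments, since the abstract does not spell out which ratio defines which of the six KPIs.

        def project_kpis(scope, cost, time, risk):
            # Only the overall indicator s^3/(c*t*r) comes from the abstract;
            # the individual ratios below are hypothetical examples of the idea.
            return {
                "value": scope / cost,          # assumed: delivered scope per unit cost
                "speed": scope / time,          # assumed: delivered scope per unit time
                "innovation": scope / risk,     # assumed: delivered scope per unit risk
                "overall": scope ** 3 / (cost * time * risk),
            }

        baseline = project_kpis(scope=1.0, cost=1.0, time=1.0, risk=1.0)
        current = project_kpis(scope=1.1, cost=1.2, time=0.9, risk=1.0)
        # Ratios above the baseline values indicate performance better than planned.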

  20. Research on Degeneration Model of Neural Network for Deep Groove Ball Bearing Based on Feature Fusion

    Directory of Open Access Journals (Sweden)

    Lijun Zhang

    2018-02-01

    Full Text Available Aiming at the pitting fault of deep groove ball bearings during service, this paper uses the vibration signals of five different states of a deep groove ball bearing and extracts the relevant features, then uses a neural network to model the degradation for identifying and classifying the fault type. By comparing the effects of training samples with different capacities through performance indexes such as the accuracy and convergence speed, it is proven that an increase in the sample size can improve the performance of the model. Based on the polynomial fitting principle and the Pearson correlation coefficient, fusion features based on the skewness index are proposed, and the performance improvement of the model after incorporating the fusion features is also validated. A comparison of the performance of the support vector machine (SVM) model and the neural network model on this dataset is given. The research shows that neural networks have more potential for complex and high-volume datasets.

  1. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.

  2. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-09

    The purpose of this report is to document the evaluation of biosphere features, events, and processes (FEPs) that relate to the license application (LA) process as required by the U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. The evaluation determines whether specific biosphere-related FEPs should be included or excluded from consideration in the Total System Performance Assessment (TSPA). This analysis documents the technical basis for screening decisions as required at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. For FEPs that are included in the TSPA, this analysis provides a TSPA disposition, which summarizes how the FEP has been included and addressed in the TSPA model, and cites the analysis reports and model reports that provide the technical basis and description of its disposition. For FEPs that are excluded from the TSPA, this analysis report provides a screening argument, which identifies the basis for the screening decision (i.e., low probability, low consequence, or by regulation) and discusses the technical basis that supports that decision. In cases, where a FEP covers multiple technical areas and is shared with other FEP analysis reports, this analysis may provide only a partial technical basis for the screening of the FEP. The full technical basis for these shared FEPs is addressed collectively by all FEP analysis reports that cover technical disciplines sharing a FEP. FEPs must be included in the TSPA unless they can be excluded by low probability, low consequence, or regulation. A FEP can be excluded from the TSPA by low probability per 10 CFR 63.114(d) [DIRS 156605], by showing that it has less than one chance in 10,000 of occurring over 10,000 years (or an approximately equivalent annualized probability of 10{sup -8}). A FEP can be excluded from the TSPA by low consequence per 10 CFR 63.114 (e or f) [DIRS 156605], by showing that omitting the FEP would not significantly change the magnitude and

  4. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    The purpose of this report is to document the evaluation of biosphere features, events, and processes (FEPs) that relate to the license application (LA) process as required by the U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. The evaluation determines whether specific biosphere-related FEPs should be included or excluded from consideration in the Total System Performance Assessment (TSPA). This analysis documents the technical basis for screening decisions as required at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. For FEPs that are included in the TSPA, this analysis provides a TSPA disposition, which summarizes how the FEP has been included and addressed in the TSPA model, and cites the analysis reports and model reports that provide the technical basis and description of its disposition. For FEPs that are excluded from the TSPA, this analysis report provides a screening argument, which identifies the basis for the screening decision (i.e., low probability, low consequence, or by regulation) and discusses the technical basis that supports that decision. In cases where a FEP covers multiple technical areas and is shared with other FEP analysis reports, this analysis may provide only a partial technical basis for the screening of the FEP. The full technical basis for these shared FEPs is addressed collectively by all FEP analysis reports that cover technical disciplines sharing a FEP. FEPs must be included in the TSPA unless they can be excluded by low probability, low consequence, or regulation. A FEP can be excluded from the TSPA by low probability per 10 CFR 63.114(d) [DIRS 156605], by showing that it has less than one chance in 10,000 of occurring over 10,000 years (or an approximately equivalent annualized probability of 10^-8). A FEP can be excluded from the TSPA by low consequence per 10 CFR 63.114 (e or f) [DIRS 156605], by showing that omitting the FEP would not significantly change the magnitude and

  5. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  6. Choosing preclinical study models of diabetic retinopathy: key problems for consideration

    Directory of Open Access Journals (Sweden)

    Mi XS

    2014-11-01

    Full Text Available Diabetic retinopathy (DR) is the most common complication of diabetes mellitus in the eye. Although the clinical treatment for DR has already developed to a relatively high level, there are still many urgent problems that need to be investigated in clinical and basic science. Currently, many in vivo animal models and in vitro culture systems have been applied to solve these problems. Many approaches have also been used to establish different DR models. However, till now, there has not been a single study model that can clearly and exactly mimic the developmental process of the human DR. Choosing the suitable model is important, not only for achieving our research goals smoothly, but also to better match the different experimental proposals in the study. In this review, key problems for consideration in choosing study models of DR are discussed. These problems relate to clinical relevance, different approaches for establishing models, and choice of different species of animals as well as of the specific in vitro culture systems. Attending to these considerations will deepen the understanding of current study models and optimize the experimental design for the final goal of preventing DR. Keywords: animal model, in vitro culture, ex vivo culture, neurovascular dysfunction

  7. The giant Jiaodong gold province: The key to a unified model for orogenic gold deposits?

    Directory of Open Access Journals (Sweden)

    David I. Groves

    2016-05-01

    Full Text Available Although the term orogenic gold deposit has been widely accepted for all gold-only lode-gold deposits, with the exception of Carlin-type deposits and rare intrusion-related gold systems, there has been continuing debate on their genesis. Early syngenetic models and hydrothermal models dominated by meteoric fluids are now clearly unacceptable. Magmatic-hydrothermal models fail to explain the genesis of orogenic gold deposits because of the lack of consistent spatially associated granitic intrusions and inconsistent temporal relationships. The most plausible, and widely accepted, models involve metamorphic fluids, but the source of these fluids is hotly debated. Sources within deeper segments of the supracrustal successions hosting the deposits, the underlying continental crust, and subducted oceanic lithosphere and its overlying sediment wedge all have their proponents. The orogenic gold deposits of the giant Jiaodong gold province of China, in the delaminated North China Craton, contain ca. 120 Ma gold deposits in Precambrian crust that was metamorphosed over 2000 million years prior to gold mineralization. The only realistic source of fluid and gold is a subducted oceanic slab with its overlying sulfide-rich sedimentary package, or the associated mantle wedge. This could be viewed as an exception to a general metamorphic model where orogenic gold has been derived during greenschist- to amphibolite-facies metamorphism of supracrustal rocks: basaltic rocks in the Precambrian and sedimentary rocks in the Phanerozoic. Alternatively, if a holistic view is taken, Jiaodong can be considered the key orogenic gold province for a unified model in which gold is derived from late-orogenic metamorphic devolatilization of stalled subduction slabs and oceanic sediments throughout Earth history. The latter model satisfies all geological, geochronological, isotopic and geochemical constraints but the precise mechanisms of auriferous fluid release, like many

  8. Optimization of an individual re-identification modeling process using biometric features

    Energy Technology Data Exchange (ETDEWEB)

    Heredia-Langner, Alejandro; Amidan, Brett G.; Matzner, Shari; Jarman, Kristin H.

    2014-09-24

    We present results from the optimization of a re-identification process using two sets of biometric data obtained from the Civilian American and European Surface Anthropometry Resource Project (CAESAR) database. The datasets contain real measurements of features for 2378 individuals in a standing (43 features) and seated (16 features) position. A genetic algorithm (GA) was used to search a large combinatorial space where different features are available between the probe (seated) and gallery (standing) datasets. Results show that optimized model predictions obtained using less than half of the 43 gallery features and data from roughly 16% of the individuals available produce better re-identification rates than two other approaches that use all the information available.
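
    A compact sketch of a GA-driven feature-subset search in Python with NumPy and scikit-learn; the bit-string encoding, truncation selection, and the cross-validated 1-NN fitness are generic stand-ins, since reproducing the seated-to-standing re-identification fitness would require both CAESAR datasets.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(1)

        def fitness(mask, X, y):
            # Placeholder fitness: cross-validated accuracy of a 1-NN matcher on the
            # selected columns; the study's fitness is the re-identification rate
            # between the seated (probe) and standing (gallery) measurements.
            if mask.sum() == 0:
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=1)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        def genetic_feature_selection(X, y, pop_size=30, generations=40, p_mut=0.05):
            n = X.shape[1]
            pop = rng.random((pop_size, n)) < 0.5                 # random bit-string population
            for _ in range(generations):
                scores = np.array([fitness(ind, X, y) for ind in pop])
                parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n)                      # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child ^= rng.random(n) < p_mut                # bit-flip mutation
                    children.append(child)
                pop = np.vstack([parents] + children)
            scores = np.array([fitness(ind, X, y) for ind in pop])
            return np.where(pop[np.argmax(scores)])[0]            # indices of selected features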

  9. Identifying Key Features, Cutting Edge Cloud Resources, and Artificial Intelligence Tools to Achieve User-Friendly Water Science in the Cloud

    Science.gov (United States)

    Pierce, S. A.

    2017-12-01

    Decision making for groundwater systems is becoming increasingly important, as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and augment the actual timeframe for hydrological response. Yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management, or IWRM. IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately, cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure to enable interactive computing and data analysis resources on demand. The web-based interfaces allow researchers to rapidly customize virtual machines, modify computing architecture, and increase the usability of, and access to, advanced compute environments for broader audiences. The result enables dexterous configurations and opens up opportunities for IWRM modelers to expand the reach of analyses, the number of case studies, and the quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions, paired with advanced computational resources, refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management. This presentation will provide an overview of advanced computing services in the cloud using integrated groundwater management case

  10. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    Science.gov (United States)

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
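
    A bare-bones sketch of the model comparison in Python with scikit-learn; the random matrices below stand in for the 31 multilevel linguistic features and the graded texts, and the hyperparameters are illustrative choices, not those tuned in the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Assumed inputs: X is an (n_texts x 31) matrix of word-, semantic-, syntax- and
        # cohesion-level features; y holds the grade level of each text.
        X = np.random.default_rng(0).random((200, 31))      # placeholder feature matrix
        y = np.random.default_rng(1).integers(1, 7, 200)    # placeholder grade levels 1-6

        svm_model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        glm_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))

        svm_acc = cross_val_score(svm_model, X, y, cv=5).mean()
        glm_acc = cross_val_score(glm_model, X, y, cv=5).mean()
        # On real corpora the study reports that the SVM with multilevel features
        # classifies grade levels more accurately than GLM-style baselines.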

  11. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data these algorithms outputted many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  12. Construction Method of the Topographical Features Model for Underwater Terrain Navigation

    Directory of Open Access Journals (Sweden)

    Wang Lihui

    2015-09-01

    Full Text Available The terrain database is the reference basis for an autonomous underwater vehicle (AUV) to implement underwater terrain navigation (UTN) functions, and it is an important part of building a topographical features model for UTN. To investigate the feasibility and correlation of a variety of terrain parameters as terrain navigation information metrics, this paper describes and analyzes underwater terrain features and the calculation of topography parameters. A comprehensive evaluation method for terrain navigation information is proposed, and an underwater navigation information analysis model associated with topographic features is constructed. Simulation results show that the underwater terrain features are directly or indirectly associated with UTN information and directly affect the terrain matching capture probability and the positioning accuracy.

  13. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to previous swallowing sound detection methods.
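
    A small NumPy sketch of the delay embedding and of two common recurrence-plot features (recurrence rate and determinism); the embedding dimension, delay, distance threshold, and minimum line length are illustrative, and the exact three features used in the paper are not reproduced here.

        import numpy as np

        def delay_embed(x, dim=3, tau=5):
            # Takens' method of delays: reconstruct the state-space trajectory.
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

        def recurrence_plot(traj, eps=0.5):
            # Pairwise distances thresholded at eps give the binary recurrence matrix.
            d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
            return (d < eps).astype(int)

        def recurrence_features(rp, lmin=2):
            n = rp.shape[0]
            rr = rp.sum() / n ** 2                       # recurrence rate
            diag_points, det_points = 0, 0
            for k in range(-(n - 1), n):
                if k == 0:
                    continue                             # skip the line of identity
                run = 0
                for v in list(np.diagonal(rp, k)) + [0]: # split each diagonal into runs of 1s
                    if v:
                        run += 1
                    else:
                        diag_points += run
                        if run >= lmin:
                            det_points += run
                        run = 0
            det = det_points / max(diag_points, 1)       # determinism
            return rr, det

        # Toy usage: sequences of such features over sliding signal windows would then
        # be modeled, e.g. with per-class hidden Markov models, as described above.
        x = np.sin(np.linspace(0, 20 * np.pi, 500))
        rr, det = recurrence_features(recurrence_plot(delay_embed(x)))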

  14. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to previous swallowing sound detection methods.

  15. Interpretive Structural Model of Key Performance Indicators for Sustainable Maintenance Evaluation in Rubber Industry

    Science.gov (United States)

    Amrina, E.; Yulianto, A.

    2018-03-01

    Sustainable maintenance is a new challenge for manufacturing companies seeking to realize sustainable development. In this paper, an interpretive structural model is developed to evaluate sustainable maintenance in the rubber industry. The initial key performance indicators (KPIs) are identified and derived from the literature and then validated by academic and industry experts. As a result, three factors of economic, social, and environmental performance, divided into a total of thirteen indicators, are proposed as the KPIs for sustainable maintenance evaluation in the rubber industry. The interpretive structural modeling (ISM) methodology is applied to develop a network structure model of the KPIs consisting of three levels. The results show that the economic factor is regarded as the basic factor, the social factor as the intermediate factor, and the environmental factor as the leading factor. Two indicators of the social factor, i.e. labor relationship and training and education, have both high driver and dependence power and are thus categorized as unstable indicators that need further attention. All the indicators of the environmental factor and one indicator of the social factor are identified as the most influential indicators. It is hoped that the interpretive structural model can aid rubber companies in evaluating sustainable maintenance performance.
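
    A minimal NumPy sketch of the two mechanical ISM steps, reachability (transitive closure of the pairwise influence matrix) and level partitioning; the four-indicator adjacency matrix and its labels are a made-up example, not the thirteen validated KPIs from the study.

        import numpy as np

        def reachability(adjacency):
            # Boolean transitive closure of the self-interaction matrix (Warshall).
            n = adjacency.shape[0]
            r = adjacency.astype(bool) | np.eye(n, dtype=bool)
            for k in range(n):
                r = r | (r[:, [k]] & r[[k], :])
            return r

        def ism_levels(adjacency):
            # Level partitioning: an element whose reachability set (within the
            # remaining elements) is contained in its antecedent set sits at the
            # current (top) level of the hierarchy.
            r = reachability(adjacency)
            remaining, levels = set(range(r.shape[0])), []
            while remaining:
                level = [i for i in remaining
                         if {j for j in remaining if r[i, j]} <=
                            {j for j in remaining if r[j, i]}]
                levels.append(level)
                remaining -= set(level)
            return levels

        # Hypothetical example with four indicators (0 = maintenance cost,
        # 1 = labor relationship, 2 = training and education, 3 = emission performance).
        adj = np.array([[0, 1, 1, 1],
                        [0, 0, 1, 1],
                        [0, 1, 0, 1],
                        [0, 0, 0, 0]])
        print(ism_levels(adj))   # lists of indicator indices, top level first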

  16. Computational modeling identifies key gene regulatory interactions underlying phenobarbital-mediated tumor promotion

    Science.gov (United States)

    Luisier, Raphaëlle; Unterberger, Elif B.; Goodman, Jay I.; Schwarz, Michael; Moggs, Jonathan; Terranova, Rémi; van Nimwegen, Erik

    2014-01-01

    Gene regulatory interactions underlying the early stages of non-genotoxic carcinogenesis are poorly understood. Here, we have identified key candidate regulators of phenobarbital (PB)-mediated mouse liver tumorigenesis, a well-characterized model of non-genotoxic carcinogenesis, by applying a new computational modeling approach to a comprehensive collection of in vivo gene expression studies. We have combined our previously developed motif activity response analysis (MARA), which models gene expression patterns in terms of computationally predicted transcription factor binding sites with singular value decomposition (SVD) of the inferred motif activities, to disentangle the roles that different transcriptional regulators play in specific biological pathways of tumor promotion. Furthermore, transgenic mouse models enabled us to identify which of these regulatory activities was downstream of constitutive androstane receptor and β-catenin signaling, both crucial components of PB-mediated liver tumorigenesis. We propose novel roles for E2F and ZFP161 in PB-mediated hepatocyte proliferation and suggest that PB-mediated suppression of ESR1 activity contributes to the development of a tumor-prone environment. Our study shows that combining MARA with SVD allows for automated identification of independent transcription regulatory programs within a complex in vivo tissue environment and provides novel mechanistic insights into PB-mediated hepatocarcinogenesis. PMID:24464994

  17. Featuring Multiple Local Optima to Assist the User in the Interpretation of Induced Bayesian Network Models

    DEFF Research Database (Denmark)

    Dalgaard, Jens; Pena, Jose; Kocka, Tomas

    2004-01-01

    We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...

  18. A product feature-based user-centric product search model

    OpenAIRE

    Ben Jabeur, Lamjed; Soulier, Laure; Tamine, Lynda; Mousset, Paul

    2016-01-01

    During the online shopping process, users would search for interesting products and quickly access those that fit with their needs among a long tail of similar or closely related products. Our contribution addresses head queries that are frequently submitted on e-commerce Web sites. Head queries usually target featured products with several variations, accessories, and complementary products. We present in this paper a product feature-based user-centric model for product search involving in a...

  19. Does Your Terrestrial Model Capture Key Arctic-Boreal Relationships?: Functional Benchmarks in the ABoVE Model Benchmarking System

    Science.gov (United States)

    Stofferahn, E.; Fisher, J. B.; Hayes, D. J.; Schwalm, C. R.; Huntzinger, D. N.; Hantson, W.

    2017-12-01

    The Arctic-Boreal Region (ABR) is a major source of uncertainties for terrestrial biosphere model (TBM) simulations. These uncertainties are precipitated by a lack of observational data from the region, affecting the parameterizations of cold environment processes in the models. Addressing these uncertainties requires a coordinated effort of data collection and integration of the following key indicators of the ABR ecosystem: disturbance, vegetation / ecosystem structure and function, carbon pools and biogeochemistry, permafrost, and hydrology. We are continuing to develop the model-data integration framework for NASA's Arctic Boreal Vulnerability Experiment (ABoVE), wherein data collection is driven by matching observations and model outputs to the ABoVE indicators via the ABoVE Grid and Projection. The data are used as reference datasets for a benchmarking system which evaluates TBM performance with respect to ABR processes. The benchmarking system utilizes two types of performance metrics to identify model strengths and weaknesses: standard metrics, based on the International Land Model Benchmarking (ILaMB) system, which relate a single observed variable to a single model output variable, and functional benchmarks, wherein the relationship of one variable to one or more variables (e.g. the dependence of vegetation structure on snow cover, the dependence of active layer thickness (ALT) on air temperature and snow cover) is ascertained in both observations and model outputs. This in turn provides guidance to model development teams for reducing uncertainties in TBM simulations of the ABR.

  20. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.

  1. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Directory of Open Access Journals (Sweden)

    Junbao Zheng

    2012-03-01

    Full Text Available Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) are increased. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with principal component feature vector values of at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
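
    A short scikit-learn sketch of the 90% cumulative-variance rule used above for choosing the number of principal components; the variable X stands for an electronic-nose response matrix (samples x sensors) and is an assumed input, not data from the study.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_features(X, target_variance=0.90):
            # Keep the smallest number of principal components whose cumulative
            # explained variance reaches the target (90% as used above).
            pca = PCA().fit(X)
            cum = np.cumsum(pca.explained_variance_ratio_)
            k = int(np.searchsorted(cum, target_variance) + 1)
            return PCA(n_components=k).fit_transform(X), k

        # The reduced vectors would then feed the parallel channels of the olfactory model.
        # reduced, n_components = pca_features(X)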

  2. Robustness of digitally modulated signal features against variation in HF noise model

    Directory of Open Access Journals (Sweden)

    Shoaib Mobien

    2011-01-01

    Full Text Available Abstract The high frequency (HF) band has both military and civilian uses. It can be used either as a primary or a backup communication link. Automatic modulation classification (AMC) is of utmost importance in this band for the purpose of communications monitoring, e.g., signal intelligence and spectrum management. A widely used method for AMC is based on pattern recognition (PR). Such a method has two main steps: feature extraction and classification. The first step is generally performed in the presence of channel noise. Recent studies show that HF noise can be modeled by Gaussian or bi-kappa distributions, depending on the time of day. Therefore, it is anticipated that a change in the noise model will have an impact on the feature extraction stage. In this article, we investigate the robustness of well-known digitally modulated signal features against variation in HF noise. Specifically, we consider temporal time domain (TTD) features, higher order cumulants (HOC), and wavelet-based features. In addition, we propose new features extracted from the constellation diagram and evaluate their robustness against the change in noise model. This study targets 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, and 64QAM modulations, as they are commonly used in HF communications.
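
    By way of illustration, the NumPy snippet below computes two standard fourth-order cumulant (HOC) features for a simulated QPSK burst; swapping the additive Gaussian noise term for a heavier-tailed sample is how the robustness question above could be probed. The signal model and normalisation are generic textbook choices, not the paper's exact feature set.

        import numpy as np

        def hoc_features(x):
            # Standard fourth-order cumulants of a zero-mean complex baseband signal,
            # commonly used as AMC features alongside TTD and wavelet-based features.
            x = x - x.mean()
            c20 = np.mean(x ** 2)
            c21 = np.mean(np.abs(x) ** 2)
            c40 = np.mean(x ** 4) - 3 * c20 ** 2
            c42 = np.mean(np.abs(x) ** 4) - np.abs(c20) ** 2 - 2 * c21 ** 2
            return np.abs(c40) / c21 ** 2, np.abs(c42) / c21 ** 2   # normalised magnitudes

        rng = np.random.default_rng(0)
        qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
        noisy = qpsk + 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
        print(hoc_features(noisy))   # robustness can be probed by swapping the noise model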

  3. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2014-01-01

    Full Text Available To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray level co-occurrence matrix, and the shape characteristics based on geometric theory are extracted from the flotation froth images as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.

  4. Plant modeling as a key tool for nuclear I and C design and V and V

    International Nuclear Information System (INIS)

    Krasnov, V.; Sokolov, O.; Symkin, B.

    2006-01-01

    This paper summarizes the extensive experience of LvivORGRES in the design and implementation of digital control systems at VVER-1000 and VVER-440 nuclear power plants in Ukraine and Bulgaria. This experience is applicable to digital I and C upgrade projects for other types of reactor equipment as well as to the design and testing of new I and C systems for new construction. LvivORGRES was recently involved in several modernization projects as a functional designer and also provided technical support and supervision during factory and site acceptance testing. It is widely accepted, and proven by industry practice, that the level and quality of system validation at all design and implementation phases are key to the successful future operation of I and C systems. Plant control systems have some additional validation requirements compared with information and monitoring systems. According to the Ukrainian nuclear regulation standards, the scope of control system projects should include closed-loop stability analysis for all unit modes of operation. Besides verification and validation of the control system algorithms, it was necessary to determine the tuning parameters for the system and use them initially during system commissioning. LvivORGRES has developed the Adaptive Plant Modeling process, which was used as a key tool in all design stages of control system upgrade projects: software engineering tests, integrated system validation tests, and factory acceptance tests. The Plant Model was developed on a modular basis, which allowed the testing of all primary and secondary side regulators for all unit modes of operation, including transients and unit start-up and shutdown. The Plant Model has been adapted to each project's requirements. The use of the plant simulation provided technical bases for important project decisions and documents including, among others: system test strategy, initial tuning parameters, training plan, etc. The Plant

  5. A Labeling Model Based on the Region of Movability for Point-Feature Label Placement

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-09-01

    Full Text Available Automatic point-feature label placement (PFLP) is a fundamental task for map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are set to certain fixed positions or a specified track line for sliding. Thus, the whole surrounding space of a point feature is not sufficiently used for labeling. Hence, this paper proposes a novel label model based on the region of movability, which comes from plane collision detection theory. The model defines a complete conflict-free search space for label placement. On the premise of no conflict with the point, line, and area features, the proposed model utilizes the surrounding zone of the point feature to generate candidate label positions. By combining the model with a heuristic search method, high-quality label placement is achieved. In addition, the flexibility of the proposed model enables placing arbitrarily shaped labels.

  6. Feature selection, statistical modeling and its applications to universal JPEG steganalyzer

    Energy Technology Data Exchange (ETDEWEB)

    Jalan, Jaikishan [Iowa State Univ., Ames, IA (United States)]

    2009-01-01

    Steganalysis deals with identifying the instances of a medium which carry a message for communication by concealing their existence. This research focuses on steganalysis of JPEG images, because of their ubiquitous nature and low bandwidth requirement for storage and transmission. JPEG image steganalysis is generally addressed by representing an image with lower-dimensional features such as statistical properties, and then training a classifier on the feature set to differentiate between an innocent and a stego image. Our approach is twofold: first, we propose a new feature reduction technique by applying the Mahalanobis distance to rank the features for steganalysis. Many successful steganalysis algorithms use a large number of features relative to the size of the training set and suffer from a "curse of dimensionality": a large number of feature values relative to training data size. We apply this technique to the state-of-the-art steganalyzer proposed by Tomáš Pevný (54) to understand the feature space complexity and the effectiveness of features for steganalysis. We show that using our approach, reduced-feature steganalyzers can be obtained that perform as well as the original steganalyzer. Based on our experimental observation, we then propose a new modeling technique for steganalysis by developing a Partially Ordered Markov Model (POMM) (23) for JPEG images and using its properties to train a Support Vector Machine. POMM generalizes the concept of local neighborhood directionality by using a partial order underlying the pixel locations. We show that the proposed steganalyzer outperforms a state-of-the-art steganalyzer by testing our approach with many different image databases, having a total of 20000 images. Finally, we provide a software package with a Graphical User Interface that has been developed to make this research accessible to local state forensic departments.
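
    A simplified NumPy sketch of Mahalanobis-style feature ranking for steganalysis; it scores each feature separately with a pooled-variance distance between the cover and stego classes, which is a simplification of applying the Mahalanobis distance in the full feature space, and the threshold of 200 retained features is only an example.

        import numpy as np

        def mahalanobis_rank(X, y):
            # Rank features by a two-class, pooled-variance separation score.
            cover, stego = X[y == 0], X[y == 1]
            diff = cover.mean(axis=0) - stego.mean(axis=0)
            pooled_var = 0.5 * (cover.var(axis=0) + stego.var(axis=0)) + 1e-12
            scores = diff ** 2 / pooled_var
            return np.argsort(scores)[::-1]          # most discriminative features first

        # Keeping only the top-ranked columns before training the classifier, e.g.
        # X_reduced = X[:, mahalanobis_rank(X, y)[:200]], yields the kind of
        # reduced-feature steganalyzer discussed above.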

  7. Antimicrobial Nanoplexes meet Model Bacterial Membranes: the key role of Cardiolipin

    Science.gov (United States)

    Marín-Menéndez, Alejandro; Montis, Costanza; Díaz-Calvo, Teresa; Carta, Davide; Hatzixanthis, Kostas; Morris, Christopher J.; McArthur, Michael; Berti, Debora

    2017-01-01

    Antimicrobial resistance to traditional antibiotics is a crucial challenge of medical research. Oligonucleotide therapeutics, such as antisense or Transcription Factor Decoys (TFDs), have the potential to circumvent current resistance mechanisms by acting on novel targets. However, their full translation into clinical application requires efficient delivery strategies and fundamental comprehension of their interaction with target bacterial cells. To address these points, we employed a novel cationic bolaamphiphile that binds TFDs with high affinity to form self-assembled complexes (nanoplexes). Confocal microscopy revealed that nanoplexes efficiently transfect bacterial cells, consistently with biological efficacy on animal models. To understand the factors affecting the delivery process, liposomes with varying compositions, taken as model synthetic bilayers, were challenged with nanoplexes and investigated with Scattering and Fluorescence techniques. Thanks to the combination of results on bacteria and synthetic membrane models we demonstrate for the first time that the prokaryotic-enriched anionic lipid Cardiolipin (CL) plays a key-role in the TFDs delivery to bacteria. Moreover, we can hypothesize an overall TFD delivery mechanism, where bacterial membrane reorganization with permeability increase and release of the TFD from the nanoplexes are the main factors. These results will be of great benefit to boost the development of oligonucleotides-based antimicrobials of superior efficacy.

  8. Cadmium-induced immune abnormality is a key pathogenic event in human and rat models of preeclampsia.

    Science.gov (United States)

    Zhang, Qiong; Huang, Yinping; Zhang, Keke; Huang, Yanjun; Yan, Yan; Wang, Fan; Wu, Jie; Wang, Xiao; Xu, Zhangye; Chen, Yongtao; Cheng, Xue; Li, Yong; Jiao, Jinyu; Ye, Duyun

    2016-11-01

    With increased industrial development, cadmium is an increasingly important environmental pollutant. Studies have identified various adverse effects of cadmium on human beings. However, the relationships between cadmium pollution and the pathogenesis of preeclampsia remain elusive. The objective of this study is to explore the effects of cadmium on the immune system in preeclamptic patients and rats. The results showed that the cadmium levels in the peripheral blood of preeclamptic patients were significantly higher than those observed in normal pregnancy. Based on this finding, a novel rat model of preeclampsia was established by the intraperitoneal administration of cadmium chloride (CdCl2) (0.125 mg of Cd/kg body weight) on gestational days 9-14. Key features of preeclampsia, including hypertension, proteinuria, placental abnormalities and small foetal size, appeared in pregnant rats after the administration of low-dose CdCl2. Cadmium increased immunoglobulin production, mainly angiotensin II type 1-receptor-agonistic autoantibodies (AT1-AA), by increasing the expression of activation-induced cytidine deaminase (AID) in B cells. AID is critical for the maturation of antibody and autoantibody responses. In addition, AT1-AA, which has recently emerged as a potential pathogenic contributor to preeclampsia, was responsible for the deposition of complement component 5 (C5) in the kidneys of pregnant rats via angiotensin II type 1 receptor (AT1R) activation. C5a is a fragment of C5 that is released during C5 activation. Selectively interfering with C5a signalling using a complement C5a receptor-specific antagonist significantly attenuated hypertension and proteinuria in Cd-injected pregnant rats. Our results suggest that cadmium induces immune abnormalities that may be a key pathogenic contributor to preeclampsia and provide new insights into treatment strategies for preeclampsia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model based on the ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various trade-offs among the basic plasma parameters. Effective methods for extending the operational space are also presented. As shown by this study of the EAST operational space, it is evident that the C-S-D model is a useful tool for understanding qualitatively the overall features of the plasma operational space. (author)

  10. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. A watermark bit is embedded into a feature point by changing the vector length of the feature point in OXY space based on a reference length. The x and y coordinates of the feature point are then recomputed from the modified vector length. Experimental results verified that the embedded watermark is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than conventional works, and its accuracy is much higher than that of previous methods.
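    A minimal sketch of the embedding step, assuming a simple quantization of the feature point's vector length in the OXY plane; the quantization step and reference length are illustrative values, not taken from the paper.

```python
# Hedged sketch: a watermark bit is embedded by quantizing the length of the
# feature point's vector relative to a reference length, then the x and y
# coordinates are recomputed from the modified length.  `delta` and `ref` are
# assumptions for illustration.
import math

def embed_bit(x, y, bit, ref_length, delta=0.01):
    """Embed one watermark bit (0/1) into a feature point (x, y)."""
    length = math.hypot(x, y)
    ratio = length / ref_length
    q = math.floor(ratio / delta)        # quantization cell index
    if q % 2 != bit:                     # even/odd cells encode 0/1
        q += 1
    new_length = (q + 0.5) * delta * ref_length
    scale = new_length / length
    return x * scale, y * scale          # recomputed coordinates

x, y = 12.30, 7.85          # feature point from one slice (hypothetical)
ref = 10.0                  # reference length (hypothetical)
print(embed_bit(x, y, 1, ref))
```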

  11. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    Science.gov (United States)

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs), which are hard to diagnose with the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistic regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. Overall, the highest accuracy rate was achieved with SVM using the dimensionality-reduced curvelet-based textural features together with the clinical predictors. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
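    The reported pipeline (PCA reduction of curvelet features, fusion with clinical predictors, SVM evaluated by 10-fold cross validation) could be sketched as follows; the arrays are random placeholders standing in for the real CT-derived data.

```python
# Illustrative sketch of the described pipeline, not the authors' code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
texture = rng.normal(size=(120, 60))     # curvelet-based textural features (placeholder)
clinical = rng.normal(size=(120, 11))    # 11 clinical predictors (placeholder)
labels = rng.integers(0, 2, size=120)    # benign (0) vs malignant (1)

texture_pc = PCA(n_components=12).fit_transform(texture)   # 12 principal components
X = np.hstack([texture_pc, clinical])                       # fused feature vector

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, labels, cv=10)           # 10-fold cross validation
print("10-fold CV accuracy: %.3f" % scores.mean())
```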

  12. Endogenous superoxide is a key effector of the oxygen sensitivity of a model obligate anaerobe.

    Science.gov (United States)

    Lu, Zheng; Sethu, Ramakrishnan; Imlay, James A

    2018-04-03

    It has been unclear whether superoxide and/or hydrogen peroxide play important roles in the phenomenon of obligate anaerobiosis. This question was explored using Bacteroides thetaiotaomicron, a major fermentative bacterium in the human gastrointestinal tract. Aeration inactivated two enzyme families, [4Fe-4S] dehydratases and nonredox mononuclear iron enzymes, whose homologs, in contrast, remain active in aerobic Escherichia coli. Inactivation-rate measurements of one such enzyme, B. thetaiotaomicron fumarase, showed that it is no more intrinsically sensitive to oxidants than is an E. coli fumarase. Indeed, when the E. coli enzymes were expressed in B. thetaiotaomicron, they no longer could tolerate aeration; conversely, the B. thetaiotaomicron enzymes maintained full activity when expressed in aerobic E. coli. Thus, the aerobic inactivation of the B. thetaiotaomicron enzymes is a feature of their intracellular environment rather than of the enzymes themselves. B. thetaiotaomicron possesses superoxide dismutase and peroxidases, and it can repair damaged enzymes. However, measurements confirmed that the rate of reactive oxygen species production inside aerated B. thetaiotaomicron is far higher than in E. coli. Analysis of the damaged enzymes recovered from aerated B. thetaiotaomicron suggested that they had been inactivated by superoxide rather than by hydrogen peroxide. Accordingly, overproduction of superoxide dismutase substantially protected the enzymes from aeration. We conclude that when this anaerobe encounters oxygen, its internal superoxide levels rise high enough to inactivate key catabolic and biosynthetic enzymes. Superoxide thus comprises a major element of the oxygen sensitivity of this anaerobe. The extent to which molecular oxygen exerts additional direct effects remains to be determined.

  13. Business models of sharing economy companies: exploring features responsible for sharing economy companies’ internationalization

    OpenAIRE

    Kosintceva, Aleksandra

    2016-01-01

    This paper is dedicated to sharing economy business models and the features responsible for their internationalization. The study proposes derived definitions for the concepts of “sharing economy” and “business model”, and the first generic typology of sharing economy business models. The typology was created through a qualitative analysis of secondary data on twenty sharing economy companies from nine different industries. The outlined categories of sharing economy business models a...

  14. Test and lower bound modeling of keyed shear connections in RC shear walls

    DEFF Research Database (Denmark)

    Sørensen, Jesper Harrild; Herfelt, Morten Andersen; Hoang, Linh Cao

    2018-01-01

    This paper presents an investigation into the ultimate behavior of a recently developed design for keyed shear connections. The influence of the key depth on the failure mode and ductility of the connection has been studied by push-off tests. The tests showed that connections with larger key...

  15. Representing Microbial Dormancy in Soil Decomposition Models Improves Model Performance and Reveals Key Ecosystem Controls on Microbial Activity

    Science.gov (United States)

    He, Y.; Yang, J.; Zhuang, Q.; Wang, G.; Liu, Y.

    2014-12-01

    Climate feedbacks from soils can result from environmental change and subsequent responses of plant and microbial communities and nutrient cycling. Explicit consideration of microbial life history traits and strategies may be necessary to predict climate feedbacks due to microbial physiology and community changes and their associated effects on carbon cycling. In this study, we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of dormancy at six temperate forest sites, with observed soil efflux records ranging from 4 to 10 years across different forest types. We then extrapolated the model to all temperate forests in the Northern Hemisphere (25-50°N) to investigate spatial controls on microbial and soil C dynamics. Both models captured the observed soil heterotrophic respiration (RH), yet the no-dormancy model consistently exhibited a large seasonal amplitude and overestimation of microbial biomass. Spatially, the total RH from temperate forests amounts to 6.88 Pg C/yr based on the dormancy model and 7.99 Pg C/yr based on the no-dormancy model. However, the no-dormancy model notably overestimated the ratio of microbial biomass to SOC. Spatial correlation analysis revealed key controls of the soil C:N ratio on the active proportion of microbial biomass, whereas local dormancy is primarily controlled by soil moisture and temperature, indicating scale-dependent environmental and biotic controls on microbial and SOC dynamics. These developments should provide essential support for modeling future soil carbon dynamics and enhance the avenue for collaboration between empirical soil experiments and modeling, in the sense that more microbial physiological measurements are needed to better constrain and evaluate the models.
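    The active/dormant pool idea can be illustrated with a deliberately simplified sketch; the switching and respiration rates and their moisture dependence are invented placeholders, not the published model equations.

```python
# Simplified two-pool sketch: microbial biomass is split into active and
# dormant pools with moisture-driven switching, and heterotrophic respiration
# (RH) comes only from the active pool.  All parameters are assumptions.
import numpy as np

def simulate(moisture, active0=1.0, dormant0=1.0, dt=1.0):
    active, dormant, rh = active0, dormant0, []
    for m in moisture:
        to_dormant = 0.05 * (1.0 - m) * active     # dry soil -> dormancy
        to_active = 0.05 * m * dormant             # moist soil -> reactivation
        resp = 0.02 * m * active                   # respiration by active pool only
        active += dt * (to_active - to_dormant - resp)
        dormant += dt * (to_dormant - to_active)
        rh.append(resp)
    return np.array(rh)

moisture = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, 365))   # seasonal moisture proxy
rh = simulate(moisture)
print("annual RH: %.3f, seasonal amplitude: %.3f" % (rh.sum(), rh.max() - rh.min()))
```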

  16. Phospholipase A₂: the key to reversing long-term memory impairment in a gastropod model of aging.

    Science.gov (United States)

    Watson, Shawn N; Wright, Natasha; Hermann, Petra M; Wildering, Willem C

    2013-02-01

    Memory failure associated with changes in neuronal circuit function rather than cell death is a common feature of normal aging in diverse animal species. The (neuro)biological foundations of this phenomenon are not well understood, although oxidative stress, particularly in the guise of lipid peroxidation, is suspected to play a key role. Using an invertebrate model system of age-associated memory impairment that supports direct correlation between behavioral deficits and changes in the underlying neural substrate, we show that inhibition of phospholipase A(2) (PLA(2)) abolishes both the long-term memory (LTM) deficits and the neural defects observed in senescent subjects and in subjects exposed to experimental oxidative stress. Using a combination of behavioral assessments and electrophysiological techniques, we provide evidence for a close link between lipid peroxidation, provocation of phospholipase A(2)-dependent free fatty acid release, decline of neuronal excitability, and age-related long-term memory impairments. This supports the view that these processes suspend rather than irreversibly extinguish the aging nervous system's intrinsic capacity for plasticity. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Metamorphic rock-hosted orogenic gold deposit style at Bombana (Southeast Sulawesi) and Buru Island (Maluku): Their key features and significances for gold exploration in Eastern Indonesia

    Directory of Open Access Journals (Sweden)

    Arifudin Idrus

    2017-06-01

    are identified. Early quartz veins are segmented, sigmoidal, discontinuous, and parallel to the foliation of the host rock. This generation of quartz veins is characterized by relatively clear crystalline quartz and is weakly mineralized, with low sulfide and gold contents. The second type of quartz veins occurs within the 'mineralized zone' of about 100 m in width and ~1,000 m in length. Gold mineralization is intensely overprinted by argillic alteration. The mineralization-alteration zone is probably parallel to the mica schist foliation and strongly controlled by N-S or NE-SW-trending structures. Gold-bearing quartz veins are characterized by banded texture, particularly following host rock foliation and sulphide banding, brecciated texture, and rare bladed-like texture. Alteration types consist of propylitic (chlorite, calcite, sericite), argillic, and carbonation represented by graphite banding and carbon flakes. Ore minerals comprise pyrite, native gold, pyrrhotite, and arsenopyrite. Cinnabar and stibnite are present in association with gold. Ore chemistry indicates that 11 out of 15 samples yielded more than 1 g/t Au, of which 6 graded in excess of 3 g/t Au. All high-grade samples are composed of limonite or partly contain limonitic material, suggesting a process of supergene enrichment. Interestingly, most of the high-grade samples also contain high concentrations of As (up to 991 ppm), Sb (up to 885 ppm), and Hg (up to 75 ppm). Fluid inclusions in both quartz vein types consist of 4 phases, including L-rich, V-rich, L-V-rich, and L1-L2-V (CO2-rich) phases. The mineralizing hydrothermal fluid is typically CO2-rich, of moderate temperature (300-400 ºC), and of low salinity (0.36 to 0.54 wt.% NaCl eq.). Based on these key features, gold mineralization in Bombana and Buru Island tends to meet the characteristics of orogenic, mesothermal gold deposits. Metamorphic rock-hosted gold deposits could represent new targets for gold exploration particularly in Eastern

  18. Key data elements for use in cost-utility modeling of biological treatments for rheumatoid arthritis.

    Science.gov (United States)

    Ganz, Michael L; Hansen, Brian Bekker; Valencia, Xavier; Strandberg-Larsen, Martin

    2015-05-01

    Economic evaluation is becoming more common and important as new biologic therapies for rheumatoid arthritis (RA) are developed. While much has been published about how to design cost-utility models for RA to conduct these evaluations, less has been written about the sources of data populating those models. The goal is to review the literature and to provide recommendations for future data collection efforts. This study reviewed RA cost-utility models published between January 2006 and February 2014 focusing on five key sources of data (health-related quality-of-life and utility, clinical outcomes, disease progression, course of treatment, and healthcare resource use and costs). It provided recommendations for collecting the appropriate data during clinical and other studies to support modeling of biologic treatments for RA. Twenty-four publications met the selection criteria. Almost all used two steps to convert clinical outcomes data to utilities rather than more direct methods; most did not use clinical outcomes measures that captured absolute levels of disease activity and physical functioning; one-third of them, in contrast with clinical reality, assumed zero disease progression for biologic-treated patients; little more than half evaluated courses of treatment reflecting guideline-based or actual clinical care; and healthcare resource use and cost data were often incomplete. Based on these findings, it is recommended that future studies collect clinical outcomes and health-related quality-of-life data using appropriate instruments that can convert directly to utilities; collect data on actual disease progression; be designed to capture real-world courses of treatment; and collect detailed data on a wide range of healthcare resources and costs.

  19. Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies (Final Report)

    Science.gov (United States)

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...

  20. A new discrete dynamic model of ABA-induced stomatal closure predicts key feedback loops.

    Directory of Open Access Journals (Sweden)

    Réka Albert

    2017-09-01

    Full Text Available Stomata, microscopic pores in leaf surfaces through which water loss and carbon dioxide uptake occur, are closed in response to drought by the phytohormone abscisic acid (ABA). This process is vital for drought tolerance and has been the topic of extensive experimental investigation in the last decades. Although a core signaling chain has been elucidated consisting of ABA binding to receptors, which alleviates negative regulation by protein phosphatases 2C (PP2Cs) of the protein kinase OPEN STOMATA 1 (OST1) and ultimately results in activation of anion channels, osmotic water loss, and stomatal closure, over 70 additional components have been identified, yet their relationships with each other and the core components are poorly elucidated. We integrated and processed hundreds of disparate observations regarding ABA signal transduction responses underlying stomatal closure into a network of 84 nodes and 156 edges and, as a result, established those relationships, including identification of a 36-node, strongly connected (feedback-rich) component as well as its in- and out-components. The network's domination by a feedback-rich component may reflect a general feature of rapid signaling events. We developed a discrete dynamic model of this network and elucidated the effects of ABA plus knockout or constitutive activity of 79 nodes on both the outcome of the system (closure) and the status of all internal nodes. The model, with more than 10^24 system states, is far from fully determined by the available data, yet model results agree with existing experiments in 82 cases and disagree in only 17 cases, a validation rate of 75%. Our results reveal nodes that could be engineered to impact stomatal closure in a controlled fashion and also provide over 140 novel predictions for which experimental data are currently lacking. Noting the paucity of wet-bench data regarding combinatorial effects of ABA and internal node activation, we experimentally confirmed

  1. A new discrete dynamic model of ABA-induced stomatal closure predicts key feedback loops.

    Science.gov (United States)

    Albert, Réka; Acharya, Biswa R; Jeon, Byeong Wook; Zañudo, Jorge G T; Zhu, Mengmeng; Osman, Karim; Assmann, Sarah M

    2017-09-01

    Stomata, microscopic pores in leaf surfaces through which water loss and carbon dioxide uptake occur, are closed in response to drought by the phytohormone abscisic acid (ABA). This process is vital for drought tolerance and has been the topic of extensive experimental investigation in the last decades. Although a core signaling chain has been elucidated consisting of ABA binding to receptors, which alleviates negative regulation by protein phosphatases 2C (PP2Cs) of the protein kinase OPEN STOMATA 1 (OST1) and ultimately results in activation of anion channels, osmotic water loss, and stomatal closure, over 70 additional components have been identified, yet their relationships with each other and the core components are poorly elucidated. We integrated and processed hundreds of disparate observations regarding ABA signal transduction responses underlying stomatal closure into a network of 84 nodes and 156 edges and, as a result, established those relationships, including identification of a 36-node, strongly connected (feedback-rich) component as well as its in- and out-components. The network's domination by a feedback-rich component may reflect a general feature of rapid signaling events. We developed a discrete dynamic model of this network and elucidated the effects of ABA plus knockout or constitutive activity of 79 nodes on both the outcome of the system (closure) and the status of all internal nodes. The model, with more than 10^24 system states, is far from fully determined by the available data, yet model results agree with existing experiments in 82 cases and disagree in only 17 cases, a validation rate of 75%. Our results reveal nodes that could be engineered to impact stomatal closure in a controlled fashion and also provide over 140 novel predictions for which experimental data are currently lacking. Noting the paucity of wet-bench data regarding combinatorial effects of ABA and internal node activation, we experimentally confirmed several predictions
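    A toy sketch of a discrete (Boolean) dynamic model in the spirit of this study; the miniature ABA-PP2C-OST1 chain and its update rules are invented for illustration and stand in for the 84-node published network.

```python
# Minimal Boolean-network sketch: each node is ON/OFF, updated synchronously
# by logical rules, and the attractor reached under an ABA stimulus is read
# out.  The network and rules are an invented three-step toy, not the model.
def step(state):
    """One synchronous update of a toy ABA -> OST1 -> anion-channel chain."""
    return {
        "ABA": state["ABA"],                       # input node, held fixed
        "PP2C": not state["ABA"],                  # ABA relieves PP2C activity
        "OST1": not state["PP2C"],                 # PP2C inhibits OST1
        "AnionChannel": state["OST1"],             # OST1 activates anion channels
        "Closure": state["AnionChannel"],          # channel activity drives closure
    }

state = {"ABA": True, "PP2C": True, "OST1": False,
         "AnionChannel": False, "Closure": False}
for _ in range(6):                                 # iterate to the fixed point
    state = step(state)
print(state)                                       # Closure becomes True under ABA
```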

  2. The Importance of Representing Certain Key Vegetation Canopy Processes Explicitly in a Land Surface Model

    Science.gov (United States)

    Napoly, A.; Boone, A. A.; Martin, E.; Samuelsson, P.

    2015-12-01

    Land surface models are moving toward more detailed vegetation canopy descriptions in order to better represent certain key processes, such as carbon dynamics and snowpack evolution. Since such models are usually applied within coupled numerical weather prediction or spatially distributed hydrological models, these improvements must strike a balance between computational cost and complexity. The consequences of simplified or composite canopy approaches can be manifested in terms of increased errors with respect to soil temperatures, estimates of the diurnal cycle of the turbulent fluxes, or snow canopy interception and melt. Vegetated areas, and particularly forests, are modeled in a quite simplified manner in the ISBA land surface model. However, continuing developments of surface processes now require a more accurate description of the canopy. A new version of the model now includes a multi-energy balance (MEB) option to explicitly represent the canopy and the forest floor. It will be shown that certain newly included processes, such as the shading effect of the vegetation, the explicit heat capacity of the canopy, and the insulating effect of the forest floor, turn out to be essential. A detailed study has been done for four French forested sites. It was found that the MEB option significantly improves the ground heat flux (RMSE decreases from 50 W/m2 to 10 W/m2 on average) and soil temperatures when compared against measurements. The sensible heat flux calculation was also improved, primarily owing to better phasing with the solar insolation due to the lower vegetation heat capacity. However, the total latent heat flux is less modified compared to the classical ISBA simulation, since it is more related to water uptake and the formulation of the stomatal resistance (which are unchanged). Next, a benchmark over 40 Fluxnet sites (116 cumulative years) was performed and compared with results from the default composite soil-vegetation version of ISBA. The results show

  3. Predictive model identifies key network regulators of cardiomyocyte mechano-signaling.

    Directory of Open Access Journals (Sweden)

    Philip M Tan

    2017-11-01

    Full Text Available Mechanical strain is a potent stimulus for growth and remodeling in cells. Although many pathways have been implicated in stretch-induced remodeling, the control structures by which signals from distinct mechano-sensors are integrated to modulate hypertrophy and gene expression in cardiomyocytes remain unclear. Here, we constructed and validated a predictive computational model of the cardiac mechano-signaling network in order to elucidate the mechanisms underlying signal integration. The model identifies calcium, actin, Ras, Raf1, PI3K, and JAK as key regulators of cardiac mechano-signaling and characterizes crosstalk logic imparting differential control of transcription by AT1R, integrins, and calcium channels. We find that while these regulators maintain mostly independent control over distinct groups of transcription factors, synergy between multiple pathways is necessary to activate all the transcription factors necessary for gene transcription and hypertrophy. We also identify a PKG-dependent mechanism by which valsartan/sacubitril, a combination drug recently approved for treating heart failure, inhibits stretch-induced hypertrophy, and predict further efficacious pairs of drug targets in the network through a network-wide combinatorial search.

  4. Developmental programming: the concept, large animal models, and the key role of uteroplacental vascular development.

    Science.gov (United States)

    Reynolds, L P; Borowicz, P P; Caton, J S; Vonnahme, K A; Luther, J S; Hammer, C J; Maddock Carlin, K R; Grazul-Bilska, A T; Redmer, D A

    2010-04-01

    Developmental programming refers to the programming of various bodily systems and processes by a stressor of the maternal system during pregnancy or during the neonatal period. Such stressors include nutritional stress, multiple pregnancy (i.e., increased numbers of fetuses in the gravid uterus), environmental stress (e.g., high environmental temperature, high altitude, prenatal steroid exposure), gynecological immaturity, and maternal or fetal genotype. Programming refers to impaired function of numerous bodily systems or processes, leading to poor growth, altered body composition, metabolic dysfunction, and poor productivity (e.g., poor growth, reproductive dysfunction) of the offspring throughout their lifespan and even across generations. A key component of developmental programming seems to be placental dysfunction, leading to altered fetal growth and development. We discuss various large animal models of developmental programming and how they have and will continue to contribute to our understanding of the mechanisms underlying altered placental function and developmental programming, and, further, how large animal models also will be critical to the identification and application of therapeutic strategies that will alleviate the negative consequences of developmental programming to improve offspring performance in livestock production and human medicine.

  5. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    Science.gov (United States)

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
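    The fusion idea can be sketched as follows, assuming a Gaussian data-field potential and a single linear fusion weight; both choices, and the random input arrays, are illustrative assumptions rather than the paper's formulation.

```python
# Hedged sketch: spectral and spatial descriptors are mapped into a common
# "radiation" (potential) form, fused with a linear weight, and passed with
# the spectra to a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def data_field_potential(values, sigma=1.0):
    """Potential at each sample generated by all others (Gaussian decay)."""
    d = np.linalg.norm(values[:, None, :] - values[None, :, :], axis=-1)
    return np.exp(-(d / sigma) ** 2).sum(axis=1)

rng = np.random.default_rng(2)
spectral = rng.normal(size=(300, 20))      # per-pixel spectra (placeholder)
spatial = rng.normal(size=(300, 2))        # per-pixel spatial coordinates (placeholder)
labels = rng.integers(0, 3, size=300)      # land-cover classes

alpha = 0.5                                # linear fusion weight (assumed)
fused = alpha * data_field_potential(spectral) + \
        (1 - alpha) * data_field_potential(spatial)

X = np.column_stack([spectral, fused])     # spectra plus fused feature
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```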

  6. Key parameters of the sediment surface morphodynamics in an estuary - An assessment of model solutions

    Science.gov (United States)

    Sampath, D. M. R.; Boski, T.

    2018-05-01

    Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are the bed friction (f), the current velocity power of the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, the latter especially marked in the mudflats between mean sea level and low tide level, while accretion occurred mainly in the subtidal channel. The average SSE change depended on both the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the states of the intertidal and subtidal compartments of the estuarine system vary differently according to the dominant processes (erosion and accretion). As the Guadiana estuarine system shows dominantly erosional behaviour in the context of sea-level rise and sediment supply reduction after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of the simulation cells in the studied area. Such a level of accuracy can be acceptable for idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost
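    A very simplified sketch of how the two key parameters f and N could enter an erosion-rate law of the kind discussed above; the coefficients, the accretion term, and the tidal velocity series are invented placeholders, not the HESM analytical solutions.

```python
# Toy illustration: local erosion rate taken proportional to a friction-scaled
# power N of the current speed; SSE change is accretion minus total erosion.
import numpy as np

def sse_change(u, f, N, accretion=0.01, k=1e-4, dt=1.0):
    """Annual SSE change (m) for a cell given hourly current speeds u (m/s)."""
    erosion = k * f * np.power(u, N) * dt          # erosion per time step (assumed law)
    return accretion - erosion.sum()

hours = 365 * 24
u = 0.4 + 0.3 * np.abs(np.sin(np.linspace(0, 705 * np.pi, hours)))  # tidal-like speeds
f0 = 0.02                                           # empirical friction value (assumed)
for N, f in [(1.8, 0.8 * f0), (2.0, f0)]:           # the two plausible parameter sets
    print(f"N={N}, f={f}: dSSE = {sse_change(u, f, N):+.3f} m/yr")
```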

  7. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    Science.gov (United States)

    Chen, Y.

    2017-12-01

    Urbanization has been the global development trend for the past century, and developing countries have experienced much more rapid urbanization in the past decades. Urbanization brings many benefits to human beings, but it also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but quantitatively studying this effect still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. In the Pearl River Delta area, the most rapid urbanization in China has been observed over the past decades, and dozens of highly urbanized watersheds have appeared. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the hydrological processes of highly urbanized watershed floods in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and into parameter optimization. On this basis, the model is set up for the Songmushan watershed, where observed hydrological data are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with the PSO algorithm and updates them based on the changed LUC types. The model parameters in the Songmushan watershed are regionalized to other watersheds in the Pearl River Delta area based on their LUC types. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta area were studied for the flood response changes due to

  8. At-line monitoring of key parameters of nisin fermentation by near infrared spectroscopy, chemometric modeling and model improvement.

    Science.gov (United States)

    Guo, Wei-Liang; Du, Yi-Ping; Zhou, Yong-Can; Yang, Shuang; Lu, Jia-Hui; Zhao, Hong-Yu; Wang, Yao; Teng, Li-Rong

    2012-03-01

    An analytical procedure has been developed for at-line (fast off-line) monitoring of 4 key parameters, including nisin titer (NT), the concentration of reducing sugars, cell concentration, and pH, during a nisin fermentation process. This procedure is based on near infrared (NIR) spectroscopy and Partial Least Squares (PLS). Samples without any preprocessing were collected at intervals of 1 h during fifteen batch fermentations. These fermentation processes were carried out in three different 5 L fermentors under various conditions. NIR spectra of the samples were collected within 10 min. PLS was then used for modeling the relationship between the NIR spectra and the key parameters, which were determined by reference methods. Monte Carlo Partial Least Squares (MCPLS) was applied to identify outliers and to select the most effective spectral preprocessing methods, wavelengths, and the suitable number of latent variables (nLV). The optimum models for determining NT, the concentration of reducing sugars, cell concentration, and pH were then established. The correlation coefficients of the calibration set (Rc) were 0.8255, 0.9000, 0.9883 and 0.9581, respectively. These results demonstrate that this method can be successfully applied to at-line monitoring of NT, the concentration of reducing sugars, cell concentration, and pH during nisin fermentation processes.
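    The at-line calibration step could be sketched as a PLS regression from NIR spectra to one monitored parameter; the spectra, reference titers, and number of latent variables below are placeholders rather than the study's data.

```python
# Illustrative PLS calibration sketch (not the authors' MCPLS procedure).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
spectra = rng.normal(size=(90, 700))     # 90 broth samples x 700 wavelengths (placeholder)
titer = rng.normal(50, 10, size=90)      # reference nisin titer values (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, titer, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)   # n_LV = 8 (assumed)
pred = pls.predict(X_te).ravel()
r = np.corrcoef(pred, y_te)[0, 1]
print("correlation coefficient on held-out samples: %.3f" % r)
```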

  9. Short-Term Solar Irradiance Forecasting Model Based on Artificial Neural Network Using Statistical Feature Parameters

    Directory of Open Access Journals (Sweden)

    Hongshan Zhao

    2012-05-01

    Full Text Available Short-term solar irradiance forecasting (STSIF) is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. The Artificial Neural Network (ANN) is suitable for STSIF modeling and many research works on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are figured out. A novel ANN model using statistical feature parameters (ANN-SFP) for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and the ambient temperature. Thus sufficient information can be effectively extracted from relatively few inputs and the model complexity is reduced. The model structure is determined by cross-validation (CV), and the Levenberg-Marquardt algorithm (LMA) is used for the network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS), and the results indicate that the forecast accuracy is obviously improved under variable weather conditions.
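    The idea of feeding statistical feature parameters instead of a raw history can be sketched as follows; the synthetic irradiance series, window length, and network size are assumptions for illustration, and a standard multilayer perceptron stands in for the ANN-SFP architecture.

```python
# Sketch: network inputs are a few statistics (mean, std, max) of recent
# irradiance and ambient temperature rather than the full historical series.
import numpy as np
from sklearn.neural_network import MLPRegressor

def feature_vector(irr_window, temp_window):
    """Statistical feature parameters of one historical window."""
    return [irr_window.mean(), irr_window.std(), irr_window.max(),
            temp_window.mean(), temp_window.std()]

rng = np.random.default_rng(4)
irr = np.clip(800 * np.sin(np.linspace(0, 40 * np.pi, 4000)) +
              rng.normal(0, 50, 4000), 0, None)     # synthetic irradiance (W/m2)
temp = 20 + 10 * np.sin(np.linspace(0, 40 * np.pi, 4000)) + rng.normal(0, 1, 4000)

window, horizon = 24, 1                             # assumed window and forecast step
X = np.array([feature_vector(irr[i:i + window], temp[i:i + window])
              for i in range(len(irr) - window - horizon)])
y = irr[window + horizon - 1:-1]                    # irradiance one step ahead

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                     random_state=0).fit(X[:3000], y[:3000])
print("test R^2:", model.score(X[3000:], y[3000:]))
```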

  10. Finite element modeling of small-scale tapered wood-laminated composite poles with biomimicry features

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; R.C. Tang; Chung Y. Hse

    2008-01-01

    Tapered composite poles with biomimicry features as in bamboo are a new generation of wood laminated composite poles that may some day be considered as an alternative to solid wood poles that are widely used in the transmission and telecommunication fields. Five finite element models were developed with ANSYS to predict and assess the performance of five types of...

  11. Comparison of the Features of EPUB E-Book and SCORM E-Learning Content Model

    Science.gov (United States)

    Chang, Hsuan-Pu; Hung, Jason C.

    2018-01-01

    E-books nowadays have greatly evolved in their presentation and functions; however, their features for education need to be investigated and inspired, because people who are accustomed to using printed books may consider and approach e-books in the same way as they do printed ones. Therefore, the authors compared the EPUB e-book content model with the SCORM…

  12. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...

  13. Observations and models of star formation in the tidal features of interacting galaxies

    International Nuclear Information System (INIS)

    Wallin, J.F.; Schombert, J.M.; Struck-Marcell, C.

    1990-01-01

    Multi-color surface photometry (BVri) is presented for the tidal features in a sample of interacting galaxies. Large color variations are found between the morphological components and within the individual components. The blue colors in the primary and the tidal features are most dramatic in B-V, and not in V-i, indicating that star formation, rather than metallicity or age, dominates the colors. Color variation between components is larger in systems shortly after the interaction begins and diminishes to a very low level in systems which have merged. Photometric models for interacting systems are presented which suggest that a weak burst of star formation in the tidal features could cause the observed color distributions. Dynamical models indicate that compression occurs during the development of tidal features, causing an increase in the local density by a factor of between 1.5 and 5. Assuming this density increase can be related to the star formation rate by a Schmidt law, the density increases observed in the dynamical models may be responsible for the variations in color seen in some of the interacting systems. Limitations of the dynamical models are also discussed

  14. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    Science.gov (United States)

    Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.

    2015-04-01

    The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system, starting from the configuration where all cultures are equally likely, show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
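    A minimal Monte Carlo sketch of the one-dimensional Axelrod model with F = 2 and q = 2 illustrates the absorbing configurations discussed above; the chain length and update scheme are standard choices, not values taken from the paper.

```python
# Toy one-dimensional Axelrod model: a random neighbor pair interacts with
# probability equal to its fraction of shared features, copying one differing
# feature; the run ends after a fixed number of attempted updates.
import random

def axelrod_1d(L=100, F=2, q=2, steps=200000, seed=0):
    random.seed(seed)
    sites = [[random.randrange(q) for _ in range(F)] for _ in range(L)]
    for _ in range(steps):
        i = random.randrange(L)
        j = i + random.choice((-1, 1))
        if not 0 <= j < L:
            continue
        shared = [f for f in range(F) if sites[i][f] == sites[j][f]]
        differ = [f for f in range(F) if sites[i][f] != sites[j][f]]
        if differ and random.random() < len(shared) / F:
            f = random.choice(differ)
            sites[i][f] = sites[j][f]
    return sites

final = axelrod_1d()
domains = sum(1 for a, b in zip(final, final[1:]) if a != b) + 1
print("number of cultural domains at the end of the run:", domains)
```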

  15. Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data

    Science.gov (United States)

    Abdolghafoorian, A.; Farhadi, L.

    2017-12-01

    Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply a variational data assimilation approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) observations (hereafter the VDA model). The VDA model is focused on the estimation of three key parameters: 1) the neutral bulk heat transfer coefficient (CHN), 2) the evaporative fraction from soil and canopy (EF), and 3) the saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profiles of SM and LST at multiple depths are estimated using the moisture and heat diffusion equations, respectively. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. Analysis of uncertainty provides valuable information about the accuracy of the estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and

  16. A Public-key based Information Management Model for Mobile Agents

    OpenAIRE

    Rodriguez, Diego; Sobrado, Igor

    2000-01-01

    Mobile code based computing requires the development of protection schemes that allow digital signature and encryption of data collected by the agents in untrusted hosts. These algorithms cannot rely on carrying encryption keys if these keys could be stolen or used to counterfeit data by hostile hosts and agents. As a consequence, both information and keys must be protected in a way that only authorized hosts, that is the host that provides information and the server that has sent the mobile a...

  17. Brain Transcriptome Profiles in Mouse Model Simulating Features of Post-traumatic Stress Disorder

    Science.gov (United States)

    2015-02-28

    analyses of DEGs suggested possible roles in anxiety-related behavioral responses, synaptic plasticity, neurogenesis, inflammation, obesity... Behavioral evaluation of mouse model: We established [29] a rodent model manifesting PTSD-like behavioral features. We believe that, because the stressor... hippocampus (HC) and medial prefrontal cortex (MPFC) play primary roles in fear learning and memory, and thus may contribute to the behavioral

  18. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of the accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of the length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  19. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of the accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of the length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
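    The vectorial composition of the length error by axis can be illustrated with a hedged sketch; the per-axis scale-error coefficients below are invented and would in practice come from CMM verification tests, not from this paper.

```python
# Toy illustration: a per-axis length error (here a simple linear scale error)
# is composed along the measured displacement vector to correct a probed length.
import numpy as np

axis_scale_error = np.array([8e-6, 5e-6, 12e-6])   # error per unit length for X, Y, Z (assumed)

def corrected_length(p_start, p_end):
    """Length between two probed points after per-axis error compensation."""
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    d_corrected = d * (1.0 - axis_scale_error)      # remove per-axis length errors
    return np.linalg.norm(d_corrected)

p1 = (10.000, 20.000, 5.000)     # probed coordinates in mm (hypothetical)
p2 = (110.020, 80.010, 45.030)
print("raw length      :", np.linalg.norm(np.subtract(p2, p1)))
print("corrected length:", corrected_length(p1, p2))
```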

  20. Modelling management process of key drivers for economic sustainability in the modern conditions of economic development

    Directory of Open Access Journals (Sweden)

    Pishchulina E.S.

    2017-01-01

    Full Text Available The text addresses issues concerning the management of drivers of manufacturing enterprise economic sustainability, and the assessment of manufacturing enterprise sustainability as the key aspect of the management of enterprise economic sustainability. These issues become topical as new requirements arise for the methods of manufacturing enterprise management under modern market economy conditions. The economic sustainability model considered in the article is an integration of enterprise economic growth, economic balance between the external and internal environment, and economic sustainability. The method for assessing the economic sustainability of a manufacturing enterprise proposed in the study makes it possible to reveal weaknesses in enterprise performance, as well as untapped reserves that can be further used to improve the economic sustainability and efficiency of the enterprise. The management of manufacturing enterprise economic sustainability is one of the most important factors of business functioning and development in a modern market economy. The relevance of this trend is increasing in accordance with the objective requirements of growing volumes of production and sales, the increasing complexity of economic relations, and the changing external environment of an enterprise.

  1. iPSC-Based Models to Unravel Key Pathogenetic Processes Underlying Motor Neuron Disease Development

    Directory of Open Access Journals (Sweden)

    Irene Faravelli

    2014-10-01

    Full Text Available Motor neuron diseases (MNDs) are neuromuscular disorders affecting rather exclusively upper motor neurons (UMNs) and/or lower motor neurons (LMNs). The clinical phenotype is characterized by muscular weakness and atrophy leading to paralysis and, almost invariably, death due to respiratory failure. Adult MNDs include sporadic and familial amyotrophic lateral sclerosis (sALS-fALS), while the most common infantile MND is spinal muscular atrophy (SMA). No effective treatment is currently available for MNDs, as for the vast majority of neurodegenerative disorders, and care is limited to supportive treatment and symptom relief. The lack of a deep understanding of MND pathogenesis accounts for the difficulties in finding a cure, together with the scarcity of reliable in vitro models. Recent progress in the stem cell field, in particular in the generation of induced Pluripotent Stem Cells (iPSCs), has made it possible for the first time to obtain substantial amounts of human cells to recapitulate in vitro some of the key pathogenetic processes underlying MNDs. In the present review, recently published studies involving the use of iPSCs to unravel aspects of ALS and SMA pathogenesis are discussed, with an overview of their implications in the process of finding a cure for these still orphan disorders.

  2. A neural network model of semantic memory linking feature-based object representation and words.

    Science.gov (United States)

    Cuppini, C; Magosso, E; Ursino, M

    2009-06-01

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words and segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).

  3. Maximum Key Size and Classification Performance of Fuzzy Commitment for Gaussian Modeled Biometric Sources

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Breebaart, J.; Buhan, I.R.; Veldhuis, Raymond N.J.

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from, or binding a key to the binary vector derived from the

  4. Analytical template protection performance and maximum key size given a Gaussian-modeled biometric source

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Breebaart, Jeroen; Buhan, I.R.; Veldhuis, Raymond N.J.; Vijaya Kumar, B.V.K.; Prabhakar, Salil; Ross, Arun A.

    2010-01-01

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A great portion of template protection techniques are based on extracting a key from or binding a key to a biometric sample. The achieved

  5. Different developmental trajectories across feature types support a dynamic field model of visual working memory development.

    Science.gov (United States)

    Simmering, Vanessa R; Miller, Hilary E; Bohache, Kevin

    2015-05-01

    Research on visual working memory has focused on characterizing the nature of capacity limits as "slots" or "resources" based almost exclusively on adults' performance with little consideration for developmental change. Here we argue that understanding how visual working memory develops can shed new light onto the nature of representations. We present an alternative model, the Dynamic Field Theory (DFT), which can capture effects that have been previously attributed either to "slot" or "resource" explanations. The DFT includes a specific developmental mechanism to account for improvements in both resolution and capacity of visual working memory throughout childhood. Here we show how development in the DFT can account for different capacity estimates across feature types (i.e., color and shape). The current paper tests this account by comparing children's (3, 5, and 7 years of age) performance across different feature types. Results showed that capacity for colors increased faster over development than capacity for shapes. A second experiment confirmed this difference across feature types within subjects, but also showed that the difference can be attenuated by testing memory for less familiar colors. Model simulations demonstrate how developmental changes in connectivity within the model-purportedly arising through experience-can capture differences across feature types.

  6. Model-independent phenotyping of C. elegans locomotion using scale-invariant feature transform.

    Directory of Open Access Journals (Sweden)

    Yelena Koren

    Full Text Available To uncover the genetic basis of behavioral traits in the model organism C. elegans, a common strategy is to study locomotion defects in mutants. Despite efforts to introduce (semi-)automated phenotyping strategies, current methods overwhelmingly depend on worm-specific features that must be hand-crafted and as such are not generalizable for phenotyping motility in other animal models. Hence, there is an ongoing need for robust algorithms that can automatically analyze and classify motility phenotypes quantitatively. To this end, we have developed a fully automated approach to characterize C. elegans' phenotypes that does not require the definition of nematode-specific features. Rather, we make use of the popular computer vision Scale-Invariant Feature Transform (SIFT), from which we construct histograms of commonly observed SIFT features to represent nematode motility. We first evaluated our method on a synthetic dataset simulating a range of nematode crawling gaits. Next, we evaluated our algorithm on two distinct datasets of crawling C. elegans with mutants affecting neuromuscular structure and function. Not only is our algorithm able to detect differences between strains, the results capture similarities in locomotory phenotypes that lead to clustering that is consistent with expectations based on genetic relationships. Our proposed approach generalizes directly and should be applicable to other animal models. Such applicability holds promise for computational ethology as more groups collect high-resolution image data of animal behavior.

  7. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Olivier Aycard

    2004-12-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantage over other methods (such as neural networks) is their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
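
    As an illustration of the kind of pipeline the abstract describes (a first-order sketch, not the authors' second-order implementation), the code below fits a Gaussian HMM to variable-length sensor sequences with the hmmlearn package; the number of hidden states and the placeholder sensor data are assumptions.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      # Hypothetical training data: one row per time step (e.g. range-sensor readings),
      # with `lengths` giving the size of each recorded sequence.
      sequences = [np.random.rand(50, 4), np.random.rand(80, 4)]
      X = np.concatenate(sequences)
      lengths = [len(s) for s in sequences]

      # One HMM would typically be trained per feature class (e.g. "open door",
      # "T-intersection"); a new sequence is assigned to the class whose model scores it highest.
      model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
      model.fit(X, lengths)
      print(model.score(np.random.rand(60, 4)))   # log-likelihood of an unseen sequence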

  8. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI. In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences, and structure information (protein and RNA secondary structures. This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.
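
    A rough sketch of the sequence side of such a feature-based approach: k-mer composition vectors of an RNA and a protein sequence are concatenated into one pair descriptor that a standard classifier could consume. The k values and alphabets are illustrative assumptions, and the secondary-structure information used in the paper is omitted here.

      from itertools import product
      import numpy as np

      def kmer_vector(sequence, alphabet, k):
          """Normalized k-mer composition of a sequence over the given alphabet."""
          kmers = ["".join(p) for p in product(alphabet, repeat=k)]
          index = {kmer: i for i, kmer in enumerate(kmers)}
          counts = np.zeros(len(kmers))
          for i in range(len(sequence) - k + 1):
              kmer = sequence[i:i + k]
              if kmer in index:
                  counts[index[kmer]] += 1
          return counts / max(counts.sum(), 1.0)

      def pair_features(rna_seq, protein_seq):
          rna = kmer_vector(rna_seq, "ACGU", k=3)                          # 64 features
          protein = kmer_vector(protein_seq, "ACDEFGHIKLMNPQRSTVWY", k=2)  # 400 features
          return np.concatenate([rna, protein])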

  9. The Key Lake project

    International Nuclear Information System (INIS)

    1991-01-01

    Key Lake is located in the Athabasca sandstone basin, 640 kilometers north of Saskatoon, Saskatchewan, Canada. The three sources of ore at Key Lake contain 70,100 tonnes of uranium. Features of the Key Lake Project were described under the key headings: work force, mining, mill process, tailings storage, permanent camp, environmental features, worker health and safety, and economic benefits. Appendices covering the historical background, construction projects, comparisons of western world mines, mining statistics, Northern Saskatchewan surface lease, and Key Lake development and regulatory agencies were included.

  10. From conceptual model to remediation: bioavailability, a key to clean up heavy metal contaminated soils.

    Science.gov (United States)

    Petruzzelli, Gianniantonio; Pedron, Francesca; Pezzarossa, Beatrice

    2013-04-01

    that aim to increase the bioavailability of pollutants are used in technologies which remove or destroy the solubilized contaminants. These procedures can increase mass transfer from the absorbed phase by means of sieving in order to decrease the diffusion processes (soil washing), by increasing the temperature (low temperature thermal desorption), or through the addition of chemical additives, such as chelating agents (phytoextraction, electrokinetic remediation). Concluding remarks: bioavailability should be a key component of the exposure evaluation in order to develop the conceptual model and to select the technology, in particular when: • only some chemical forms of contaminants are a source of risk for the site; • default assumptions regarding bioavailability are not suitable because of the site's specific characteristics; • the final destination of the site will not be modified, at least in the near future.

  11. Habitat features and predictive habitat modeling for the Colorado chipmunk in southern New Mexico

    Science.gov (United States)

    Rivieccio, M.; Thompson, B.C.; Gould, W.R.; Boykin, K.G.

    2003-01-01

    Two subspecies of Colorado chipmunk (state threatened and federal species of concern) occur in southern New Mexico: Tamias quadrivittatus australis in the Organ Mountains and T. q. oscuraensis in the Oscura Mountains. We developed a GIS model of potentially suitable habitat based on vegetation and elevation features, evaluated site classifications of the GIS model, and determined vegetation and terrain features associated with chipmunk occurrence. We compared GIS model classifications with actual vegetation and elevation features measured at 37 sites. At 60 sites we measured 18 habitat variables regarding slope, aspect, tree species, shrub species, and ground cover. We used logistic regression to analyze habitat variables associated with chipmunk presence/absence. All (100%) 37 sample sites (28 predicted suitable, 9 predicted unsuitable) were classified correctly by the GIS model regarding elevation and vegetation. For 28 sites predicted suitable by the GIS model, 18 sites (64%) appeared visually suitable based on habitat variables selected from logistic regression analyses, of which 10 sites (36%) were specifically predicted as suitable habitat via logistic regression. We detected chipmunks at 70% of sites deemed suitable via the logistic regression models. Shrub cover, tree density, plant proximity, presence of logs, and presence of rock outcrop were retained in the logistic model for the Oscura Mountains; litter, shrub cover, and grass cover were retained in the logistic model for the Organ Mountains. Evaluation of predictive models illustrates the need for multi-stage analyses to best judge performance. Microhabitat analyses indicate prospective needs for different management strategies between the subspecies. Sensitivities of each population of the Colorado chipmunk to natural and prescribed fire suggest that partial burnings of areas inhabited by Colorado chipmunks in southern New Mexico may be beneficial. These partial burnings may later help avoid a fire
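
    The presence/absence modelling step lends itself to a compact sketch; the code below fits a logistic regression to hypothetical site-level habitat variables (shrub cover, tree density, rock outcrop) and is not the authors' exact variable set or software.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical survey table: one row per site, columns = habitat variables.
      X = np.array([[35.0, 120.0, 1.0],   # shrub cover (%), tree density (stems/ha), rock outcrop (0/1)
                    [10.0,  40.0, 0.0],
                    [55.0, 200.0, 1.0],
                    [ 5.0,  30.0, 0.0]])
      y = np.array([1, 0, 1, 0])          # chipmunk detected at the site?

      model = LogisticRegression().fit(X, y)
      print(model.predict_proba([[40.0, 150.0, 1.0]])[0, 1])   # predicted probability of presence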

  12. Prediction of hot spots in protein interfaces using a random forest model with hybrid features.

    Science.gov (United States)

    Wang, Lin; Liu, Zhi-Ping; Zhang, Xiang-Sun; Chen, Luonan

    2012-03-01

    Prediction of hot spots in protein interfaces provides crucial information for the research on protein-protein interaction and drug design. Existing machine learning methods generally judge whether a given residue is likely to be a hot spot by extracting features only from the target residue. However, hot spots usually form a small cluster of residues which are tightly packed together at the center of protein interface. With this in mind, we present a novel method to extract hybrid features which incorporate a wide range of information of the target residue and its spatially neighboring residues, i.e. the nearest contact residue in the other face (mirror-contact residue) and the nearest contact residue in the same face (intra-contact residue). We provide a novel random forest (RF) model to effectively integrate these hybrid features for predicting hot spots in protein interfaces. Our method can achieve accuracy (ACC) of 82.4% and Matthew's correlation coefficient (MCC) of 0.482 in Alanine Scanning Energetics Database, and ACC of 77.6% and MCC of 0.429 in Binding Interface Database. In a comparison study, performance of our RF model exceeds other existing methods, such as Robetta, FOLDEF, KFC, KFC2, MINERVA and HotPoint. Of our hybrid features, three physicochemical features of target residues (mass, polarizability and isoelectric point), the relative side-chain accessible surface area and the average depth index of mirror-contact residues are found to be the main discriminative features in hot spots prediction. We also confirm that hot spots tend to form large contact surface areas between two interacting proteins. Source data and code are available at: http://www.aporc.org/doc/wiki/HotSpot.
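
    In the spirit of the random-forest step described above, the sketch below trains a forest on a hypothetical hot-spot feature table and reports the two metrics quoted in the abstract (accuracy and Matthews correlation coefficient); the feature matrix is a placeholder, not the authors' hybrid feature set.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score, matthews_corrcoef
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 20))     # placeholder features of target and neighbouring residues
      y = rng.integers(0, 2, size=300)   # 1 = hot spot, 0 = non-hot spot

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      pred = clf.predict(X_te)
      print(accuracy_score(y_te, pred), matthews_corrcoef(y_te, pred))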

  13. Features of microscopic pedestrian movement in a panic situation based on cellular automata model

    Science.gov (United States)

    Ibrahim, Najihah; Hassan, Fadratul Hafinaz

    2017-10-01

    Pedestrian movement is one of the subsets of crowd management addressed through simulation. In a panic situation, pedestrians typically produce microscopic movements that lead towards self-organization. During self-organization, behavioral and physical factors exert a mass effect on pedestrian movement. The basic CA model creates a movement path for each pedestrian over a time step; however, because of the factors that emerge, the CA model needs enhancements to establish a realistic simulation state. Hence, this concept paper discusses the enhanced features of the CA model for microscopic pedestrian movement during panic situations, towards a better pedestrian simulation.
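
    A minimal cellular-automaton step of the kind such models build on: pedestrians occupy grid cells and, once per time step, move to the free neighbouring cell that brings them closest to the exit. The behavioural and physical factors discussed in the paper are not modelled here; this only illustrates the basic CA update.

      import numpy as np

      def step(occupied, pedestrians, exit_cell):
          """One sequential CA update on a boolean occupancy grid."""
          for idx, (r, c) in enumerate(pedestrians):
              best = (r, c)
              best_dist = np.hypot(r - exit_cell[0], c - exit_cell[1])
              for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # von Neumann neighbourhood
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < occupied.shape[0] and 0 <= nc < occupied.shape[1] and not occupied[nr, nc]:
                      dist = np.hypot(nr - exit_cell[0], nc - exit_cell[1])
                      if dist < best_dist:
                          best, best_dist = (nr, nc), dist
              if best != (r, c):
                  occupied[r, c] = False
                  occupied[best] = True
                  pedestrians[idx] = best
          return occupied, pedestrians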

  14. Dome effect of black carbon and its key influencing factors: a one-dimensional modelling study

    Science.gov (United States)

    Wang, Zilin; Huang, Xin; Ding, Aijun

    2018-02-01

    Black carbon (BC) has been identified to play a critical role in aerosol-planetary boundary layer (PBL) interaction and further deterioration of near-surface air pollution in megacities, which has been referred to as the dome effect. However, the impacts of key factors that influence this effect, such as the vertical distribution and aging processes of BC, as well as the underlying land surface, have not been quantitatively explored yet. Here, based on available in situ measurements of meteorology and atmospheric aerosols together with the meteorology-chemistry online coupled model WRF-Chem, we conduct a set of parallel simulations to quantify the roles of these factors in influencing the BC dome effect and surface haze pollution. Furthermore, we discuss the main implications of the results to air pollution mitigation in China. We found that the impact of BC on the PBL is very sensitive to the altitude of aerosol layer. The upper-level BC, especially that near the capping inversion, is more essential in suppressing the PBL height and weakening the turbulent mixing. The dome effect of BC tends to be significantly intensified as BC mixed with scattering aerosols during winter haze events, resulting in a decrease in PBL height by more than 15 %. In addition, the dome effect is more substantial (up to 15 %) in rural areas than that in the urban areas with the same BC loading, indicating an unexpected regional impact of such an effect to air quality in countryside. This study indicates that China's regional air pollution would greatly benefit from BC emission reductions, especially those from elevated sources from chimneys and also domestic combustion in rural areas, through weakening the aerosol-boundary layer interactions that are triggered by BC.

  15. Solid images for geostructural mapping and key block modeling of rock discontinuities

    Science.gov (United States)

    Assali, Pierre; Grussenmeyer, Pierre; Villemin, Thierry; Pollet, Nicolas; Viguier, Flavien

    2016-04-01

    Rock mass characterization is obviously a key element in rock fall hazard analysis. Managing risk and determining the most adapted reinforcement method require a proper understanding of the considered rock mass. Description of discontinuity sets is therefore a crucial first step in the reinforcement work design process. The on-field survey is then followed by a structural modeling in order to extrapolate the data collected at the rock surface to the inner part of the massif. Traditional compass surveys and manual observations can undoubtedly be surpassed by dense 3D data such as LiDAR or photogrammetric point clouds. However, although the acquisition phase is quite fast and highly automated, managing, handling and exploiting such a great amount of collected data is an arduous task, especially for non-specialist users. In this study, we propose a combined approach using both 3D point clouds (from LiDAR or image matching) and 2D digital images, gathered into the concept of the "solid image". This product combines the advantages of classical true-color 2D digital images, accessibility and interpretability, with the particular strengths of dense 3D point clouds, i.e. geometrical completeness and accuracy. The solid image can be considered as the information support for carrying out a digital survey at the surface of the outcrop without being affected by traditional deficiencies (lack of data and sampling difficulties due to inaccessible areas, safety risk in steep sectors, etc.). The computational tools presented in this paper have been implemented in a single standalone software package whose graphical user interface helps operators complete a digital geostructural survey and analysis. 3D coordinate extraction, 3D distance and area measurement, planar best fit for discontinuity orientation, directional roughness profiles, block size estimation, and other tools have been tested on a calcareous quarry in the French Alps.
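
    The planar best-fit step for discontinuity orientation can be sketched with a least-squares plane fitted by singular value decomposition; the axis convention (x = east, y = north, z = up) is an assumption made only for this example.

      import numpy as np

      def discontinuity_orientation(points):
          """Fit a plane to an (N, 3) patch of the point cloud and return dip angle and dip direction."""
          centered = points - points.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          normal = vt[-1]                        # direction of least variance = plane normal
          if normal[2] < 0:                      # force an upward-pointing normal
              normal = -normal
          dip = np.degrees(np.arccos(normal[2]))                           # 0 deg = horizontal plane
          dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
          return dip, dip_direction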

  16. Features that contribute to the usefulness of low-fidelity models for surgical skills training

    DEFF Research Database (Denmark)

    Langebæk, Rikke; Berendt, Mette; Pedersen, Lene Tanggaard

    2012-01-01

    of models were developed to be used in a basic surgical skills course for veterinary students. The models were low fidelity, having limited resemblance to real animals. The aim of the present study was to describe the students' learning experience with the models and to report their perception...... of the usefulness of the models in applying the trained skills to live animal surgery. One hundred and forty-six veterinary fourth-year students evaluated the models on a four-point Likert scale. Of these, 26 additionally participated in individual semistructured interviews. The survey results showed that 75 per...... educational tools in preparation for live animal surgery. However, there are specific features to take into account when developing models in order for students to perceive them as useful....

  17. Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.

    Directory of Open Access Journals (Sweden)

    Kyriaki Sidiropoulou

    Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC, which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type specific differences in persistent activity properties, via implementing a regular spiking (RS and an intrinsic bursting (IB model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP latency and the first few inter-spike-intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity were distinct. Collectively, our results pinpoint to specific features of the neuronal response to a given

  18. Laboratory infrastructure driven key performance indicator development using the smart grid architecture model

    DEFF Research Database (Denmark)

    Syed, Mazheruddin H.; Guillo-Sansano, Efren; Blair, Steven M.

    2017-01-01

    This study presents a methodology for collaboratively designing laboratory experiments and developing key performance indicators for the testing and validation of novel power system control architectures in multiple laboratory environments. The contribution makes use of the smart grid architecture...

  19. Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models.

    Science.gov (United States)

    Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George

    2015-09-01

    To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel hole lead collimator, with an image resolution of 3.54 × 3.54 mm(2). Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was
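
    Part of this texture analysis can be reproduced with scikit-image's grey-level co-occurrence utilities (spelled graycomatrix/graycoprops in recent releases, greycomatrix/greycoprops in older ones); the quantisation to 16 levels mirrors one of the settings above, the input slice is a placeholder, and the run-length features mentioned in the abstract would need a different matrix.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def texture_features(slice_2d, levels=16):
          """A few Haralick-style features from one reconstructed slice (2D array)."""
          img = slice_2d.astype(float)
          img = np.uint8((levels - 1) * (img - img.min()) / (np.ptp(img) + 1e-12))
          glcm = graycomatrix(img, distances=[1],
                              angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                              levels=levels, symmetric=True, normed=True)
          return {prop: graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")}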

  20. Hum-mPLoc 3.0: prediction enhancement of human protein subcellular localization through modeling the hidden correlations of gene ontology and functional domain features.

    Science.gov (United States)

    Zhou, Hang; Yang, Yang; Shen, Hong-Bin

    2017-03-15

    Protein subcellular localization prediction has been an important research topic in computational biology over the last decade. Various automatic methods have been proposed to predict locations for large scale protein datasets, where statistical machine learning algorithms are widely used for model construction. A key step in these predictors is encoding the amino acid sequences into feature vectors. Many studies have shown that features extracted from biological domains, such as gene ontology and functional domains, can be very useful for improving the prediction accuracy. However, domain knowledge usually results in redundant features and high-dimensional feature spaces, which may degenerate the performance of machine learning models. In this paper, we propose a new amino acid sequence-based human protein subcellular location prediction approach, Hum-mPLoc 3.0, which covers 12 human subcellular localizations. The sequences are represented by multi-view complementary features, i.e. context vocabulary annotation-based gene ontology (GO) terms, peptide-based functional domains, and residue-based statistical features. To systematically reflect the structural hierarchy of the domain knowledge bases, we propose a novel feature representation protocol denoted as HCM (Hidden Correlation Modeling), which creates more compact and discriminative feature vectors by modeling the hidden correlations between annotation terms. Experimental results on four benchmark datasets show that HCM improves prediction accuracy by 5-11% and F1 by 8-19% compared with conventional GO-based methods. A large-scale application of Hum-mPLoc 3.0 on the whole human proteome reveals protein co-localization preferences in the cell. www.csbio.sjtu.edu.cn/bioinf/Hum-mPLoc3/. hbshen@sjtu.edu.cn.

  1. Modeling and Detecting Feature Interactions among Integrated Services of Home Network Systems

    Science.gov (United States)

    Igaki, Hiroshi; Nakamura, Masahide

    This paper presents a framework for formalizing and detecting feature interactions (FIs) in the emerging smart home domain. We first establish a model of home network system (HNS), where every networked appliance (or the HNS environment) is characterized as an object consisting of properties and methods. Then, every HNS service is defined as a sequence of method invocations of the appliances. Within the model, we next formalize two kinds of FIs: (a) appliance interactions and (b) environment interactions. An appliance interaction occurs when two method invocations conflict on the same appliance, whereas an environment interaction arises when two method invocations conflict indirectly via the environment. Finally, we propose offline and online methods that detect FIs before service deployment and during execution, respectively. Through a case study with seven practical services, it is shown that the proposed framework is generic enough to capture feature interactions in HNS integrated services. We also discuss several FI resolution schemes within the proposed framework.
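
    To make the two interaction types concrete, here is a toy sketch (appliance names and services invented for illustration) in which a method invocation is reduced to the property it sets; two services show an appliance interaction when they set the same appliance property to different values, and an environment interaction when they push a shared environment property in conflicting directions.

      # Each service is a sequence of method invocations, here reduced to their effects:
      # (appliance, property, value) for appliance effects, ("ENV", property, value) for the environment.
      air_conditioning = [("aircon", "power", "on"), ("ENV", "temperature", "lower")]
      bath_heating     = [("boiler", "power", "on"), ("ENV", "temperature", "raise")]
      movie_mode       = [("aircon", "power", "off"), ("light", "level", "dim")]

      def interactions(service_a, service_b):
          found = []
          for target_a, prop_a, val_a in service_a:
              for target_b, prop_b, val_b in service_b:
                  if target_a == target_b and prop_a == prop_b and val_a != val_b:
                      kind = "environment" if target_a == "ENV" else "appliance"
                      found.append((kind, target_a, prop_a, val_a, val_b))
          return found

      print(interactions(air_conditioning, movie_mode))    # appliance interaction on "aircon"
      print(interactions(air_conditioning, bath_heating))  # environment interaction on temperature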

  2. AUTOMATED FEATURE BASED TLS DATA REGISTRATION FOR 3D BUILDING MODELING

    OpenAIRE

    K. Kitamura; N. Kochi; S. Kaneko

    2012-01-01

    In this paper we present a novel method for the registration of point cloud data obtained using a terrestrial laser scanner (TLS). The final goal of our investigation is the automated reconstruction of CAD drawings and the 3D modeling of objects surveyed by TLS. Because objects are scanned from multiple positions, individual point clouds need to be registered to the same coordinate system. We propose in this paper an automated feature-based registration procedure. Our proposed method does not re...

  3. A model of how features of construction projects influence accident occurrence

    OpenAIRE

    Manu, P.

    2017-01-01

    This book chapter in "Valuing People in Construction" (edited by Emuze, F. and Smallwood, J.) presents a study which sought empirical verification of a model of how construction project features (CPFs) influence accident occurrence. A qualitative strategy, in particular phenomenology, involving a range of in-depth interviews with practitioners was used and the findings were subsequently validated using a credibility check involving a survey. Altogether, the findings of the interviews and cred...

  4. Assessing impact of changes in human resources features on enterprise activities: simulation model

    Directory of Open Access Journals (Sweden)

    Kalmykova Svetlana

    2017-01-01

    Full Text Available The need to create human resources development (HRD) programs is shown, taking into account the impact of these programs on organizational effectiveness. The stages of developing tools and HRD programs on the basis of cognitive modelling are described; these stages help assess the impact of HR practices on the key indicators of organizational activity already at the design stage. A method for pre-selecting HR practices for employees' professional development is presented.

  5. Elysium region, mars: Tests of lithospheric loading models for the formation of tectonic features

    International Nuclear Information System (INIS)

    Hall, J.L.; Solomon, S.C.; Head, J.W.

    1986-01-01

    The second largest volcanic province on Mars lies in the Elysium region. Like the larger Tharsis province, Elysium is marked by a topographic rise and a broad free air gravity anomaly and also exhibits a complex assortment of tectonic and volcanic features. We test the hypothesis that the tectonic features in the Elysium region are the product of stresses produced by loading of the Martian lithosphere. We consider loading at three different scales: local loading by individual volcanoes, regional loading of the lithosphere from above or below, and quasi-global loading by Tharsis. A comparison of flexural stresses with lithospheric strength and with the inferred maximum depth of faulting confirms that concentric graben around Elysium Mons can be explained as resulting from local flexure of an elastic lithosphere about 50 km thick in response to the volcano load. Volcanic loading on a regional scale, however, leads to predicted stresses inconsistent with all observed tectonic features, suggesting that loading by widespread emplacement of thick plains deposits was not an important factor in the tectonic evolution of the Elysium region. A number of linear extensional features oriented generally NW-SE may have been the result of flexural uplift of the lithosphere on the scale of the Elysium rise. The global stress field associated with the support of the Tharsis rise appears to have influenced the development of many of the tectonic features in the Elysium region, including Cerberus Rupes and the systems of ridges in eastern and western Elysium. The comparisons of stress models for Elysium with the preserved tectonic features support a succession of stress fields operating at different times in the region

  6. Silver nanoparticles as a key feature of a plasma polymer composite layer in mitigation of charge injection into polyethylene under dc stress

    International Nuclear Information System (INIS)

    Milliere, L; Makasheva, K; Laurent, C; Despax, B; Boudou, L; Teyssedre, G

    2016-01-01

    The aim of this work is to limit charge injection from a semi-conducting electrode into low density polyethylene (LDPE) under dc field by tailoring the polymer surface using a silver nanoparticles-containing layer. The layer is composed of a plane of silver nanoparticles embedded in a semi-insulating organosilicon matrix deposited on the polyethylene surface by a plasma process. Size, density and surface coverage of the nanoparticles are controlled through the plasma process. Space charge distribution in 300 μm thick LDPE samples is measured by the pulsed-electroacoustic technique following short-term (step-wise voltage increase up to 50 kV mm⁻¹, 20 min in duration each, followed by a polarity inversion) and longer-term (up to 12 h under 40 kV mm⁻¹) protocols for voltage application. A comparative study of space charge distribution between a reference polyethylene sample and the tailored samples is presented. It is shown that the barrier effect depends on the size distribution and the surface area covered by the nanoparticles: 15 nm (average size) silver nanoparticles with a high surface density, but still not percolating, form an efficient barrier layer that suppresses charge injection. It is worth noting that charge injection is detected for samples tailored with (i) percolating nanoparticles embedded in the organosilicon layer; (ii) the organosilicon layer only, without nanoparticles; and (iii) smaller silver particles (<10 nm) embedded in the organosilicon layer. The amount of injected charge in the tailored samples increases gradually in the sample ranking given above. The mechanism of charge injection mitigation is discussed on the basis of complementary experiments carried out on the nanocomposite layer, such as surface potential measurements. The ability of silver clusters to stabilize electrical charges close to the electrode, thereby counterbalancing the applied field, appears to be a key factor in explaining the charge injection

  7. Ceramic coatings: A phenomenological modeling for damping behavior related to microstructural features

    International Nuclear Information System (INIS)

    Tassini, N.; Patsias, S.; Lambrinou, K.

    2006-01-01

    Recent research has shown that both stiffness and damping of ceramic coatings exhibit different non-linearities. These properties strongly depend on the microstructure, which is characterized by heterogeneous sets of elastic elements with mesoscopic sizes and shapes, as in non-linear mesoscopic elastic materials. To predict the damping properties of this class of materials, we have implemented a phenomenological model that characterizes their elastic properties. The model is capable of reproducing the basic features of the observed damping behavior for zirconia coatings prepared by air plasma spraying and electron-beam physical-vapor-deposition

  8. Hidden discriminative features extraction for supervised high-order time series modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named as Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) reduces dimensionality while robustly mining the underlying discriminative features, ii) results in effective interpretable features that lead to an improved classification and visualization, and iii) reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue issue at each alternation step. Two real third-order tensor-structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray data that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice.
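
    A stripped-down illustration of Tucker-based feature extraction using the tensorly package, under the simplifying assumption that the per-trial channel x frequency x time tensors are stacked along a sample mode; unlike TDFE it ignores class labels (no between-/within-class scatter), so it is closer to multilinear PCA than to the supervised method described above, and all sizes are invented.

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import tucker

      # Hypothetical data: 40 trials, 8 channels, 16 frequency bins, 50 time frames.
      X = tl.tensor(np.random.rand(40, 8, 16, 50))

      core, factors = tucker(X, rank=[10, 4, 8, 10])
      trial_embedding = factors[0]   # (40 x 10): one low-dimensional row per trial
      # trial_embedding can now be fed to any standard classifier.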

  9. Analyzing surface features on icy satellites using a new two-layer analogue model

    Science.gov (United States)

    Morales, K. M.; Leonard, E. J.; Pappalardo, R. T.; Yin, A.

    2017-12-01

    The appearance of similar surface morphologies across many icy satellites suggests potentially unified formation mechanisms. Constraining the processes that shape the surfaces of these icy worlds is fundamental to understanding their rheology and thermal evolution—factors that have implications for potential habitability. Analogue models have proven useful for investigating and quantifying surface structure formation on Earth, but have only been sparsely applied to icy bodies. In this study, we employ an innovative two-layer analogue model that simulates a warm, ductile ice layer overlain by brittle surface ice on satellites such as Europa and Enceladus. The top, brittle layer is composed of fine-grained sand while the ductile, lower viscosity layer is made of putty. These materials were chosen because they scale up reasonably to the conditions on Europa and Enceladus. Using this analogue model, we investigate the role of the ductile layer in forming contractional structures (e.g. folds) that would compensate for the over-abundance of extensional features observed on icy satellites. We do this by simulating different compressional scenarios in the analogue model and analyzing whether the resulting features resemble those on icy bodies. If the resulting structures are similar, then the model can be used to quantify the deformation by calculating strain. These values can then be scaled up to Europa or Enceladus and used to quantify the observed surface morphologies and the amount of extensional strain accommodated by certain features. This presentation will focus on the resulting surface morphologies and the calculated strain values from several analogue experiments. The methods and findings from this work can then be expanded and used to study other icy bodies, such as Triton, Miranda, Ariel, and Pluto.

  10. Representation of the Kolmogorov model having all distinguishing features of quantum probabilistic model

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2003-01-01

    The contextual approach to the Kolmogorov probability model gives the possibility to represent this conventional model as a quantum structure, i.e., by using complex amplitudes of probabilities (or in the abstract approach - in a Hilbert space). Classical (Kolmogorovian) random variables are represented by in general noncommutative operators in the Hilbert space. The existence of such a contextual representation of the Kolmogorovian model looks very surprising in the view of the orthodox quantum tradition. However, our model can peacefully coexist with various 'no-go' theorems (e.g., von Neumann, Kochen and Specker, Bell, ...)

  11. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
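
    The model-update idea, adjusting the correlation-filter learning rate by a keypoint-matching score, can be sketched as below; the score definition and the base learning rate are placeholders rather than the paper's exact formulation.

      import numpy as np

      def update_model(old_model, new_estimate, match_score, base_rate=0.02):
          """Blend the correlation-filter model with its new estimate.

          match_score in [0, 1]: fraction of the stored target's keypoints matched in the
          current frame. A low score (occlusion, drift) freezes the model; a high score
          lets it adapt."""
          eta = base_rate * np.clip(match_score, 0.0, 1.0)
          return (1.0 - eta) * old_model + eta * new_estimate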

  12. An Approach for Automatically Deriving Key Performance Indicators from Ontological Enterprise Models

    NARCIS (Netherlands)

    Aksu, U.A.; Schunselaar, D.M.M.; Reijers, H.A.

    2017-01-01

    Organizations use Key Performance Indicators (KPIs) to monitor whether they attain their goals. Software vendors that supply generic software provide predefined KPIs in their software products for these organizations. However, each organization wants KPIs to be tailored to its specific goals.

  13. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    Science.gov (United States)

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose for EMG signals a bilinear model that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t -test at a significance level of 1%.
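
    A simplified numpy sketch of the bilinear idea, assuming each trial is already summarised as a feature vector x that factorises as x ≈ A_user @ b_motion; the calibration routine below estimates a new user's factor from a few labelled trials and then projects unseen trials into the user-independent (motion-dependent) space. The dimensions and the least-squares estimator are illustrative assumptions, not the authors' exact algorithm.

      import numpy as np

      def estimate_user_factor(X_calib, B_calib):
          """X_calib: (d, n) EMG feature vectors of n calibration trials;
          B_calib: (r, n) known motion-dependent factors of those trials.
          Returns the (d, r) user-dependent factor minimising ||X - A B||_F."""
          return X_calib @ B_calib.T @ np.linalg.inv(B_calib @ B_calib.T)

      def motion_features(x, A_user):
          """User-independent representation of a new trial x (shape (d,))."""
          return np.linalg.pinv(A_user) @ x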

  14. Simulation on scattering features of biological tissue based on generated refractive-index model

    International Nuclear Information System (INIS)

    Wang Baoyong; Ding Zhihua

    2011-01-01

    Important information on the morphology of biological tissue can be deduced from elastic scattering spectra, and such analyses are based on a known refractive-index model of the tissue. In this paper, a new numerical refractive-index model is put forward, and its scattering properties are studied in detail. Spectral decomposition [1] is a widely used method for generating random media in geology, but it has not been used in biology. Biological tissue differs from geological media as a random medium: whereas the autocorrelation function describes almost all features in geology, biological tissue is not as random; its structure is regular in the sense of fractal geometry [2], and a fractal dimension can be used to describe its regularity within randomness. Firstly, scattering theories for such fractal media are reviewed. Secondly, the detailed generation process of the refractive-index model is presented. Finally, the scattering features are simulated in the FDTD (Finite Difference Time Domain) Solutions software. From the simulation results, we find that the autocorrelation length and the fractal dimension control the scattering features of biological tissue.
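
    A small numpy sketch of the spectral-decomposition idea: a 2D refractive-index map is generated by shaping white noise with a power-law spectrum in the Fourier domain, so that the spectral exponent plays the role of the fractal control discussed above. The mean index and fluctuation amplitude are assumed values for soft tissue, not parameters from the paper.

      import numpy as np

      def fractal_index_map(n_pix=256, beta=3.0, n_mean=1.38, dn=0.04, seed=0):
          """Random refractive-index map with spatial power spectrum ~ k^(-beta)."""
          rng = np.random.default_rng(seed)
          kx = np.fft.fftfreq(n_pix)
          ky = np.fft.fftfreq(n_pix)
          k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
          k[0, 0] = np.min(k[k > 0])                 # avoid dividing by zero at the DC term
          amplitude = k ** (-beta / 2.0)             # amplitude spectrum shaping
          phase = rng.uniform(0.0, 2.0 * np.pi, size=(n_pix, n_pix))
          field = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
          field = (field - field.mean()) / field.std()
          return n_mean + dn * field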

  15. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, electricity price is a complex, volatile signal with many spikes. Most electricity price forecast techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique evaluating both relevancy and redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed with the candidate inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
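
    One way to operationalise a relevancy/redundancy filter (not the paper's exact criterion) is a greedy selection that rewards mutual information with the spike-occurrence label and penalises correlation with the already-selected inputs:

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      def select_inputs(X, y, n_select=10):
          """X: (samples, candidate inputs); y: binary spike-occurrence labels."""
          relevance = mutual_info_classif(X, y, random_state=0)
          corr = np.abs(np.corrcoef(X, rowvar=False))
          selected = [int(np.argmax(relevance))]
          while len(selected) < n_select:
              remaining = [j for j in range(X.shape[1]) if j not in selected]
              scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
              selected.append(remaining[int(np.argmax(scores))])
          return selected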

  16. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found in the Web, in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor; the most crucial factor for millions of query results is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that have been trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.

  17. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    Science.gov (United States)

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergoes additional extensive tests before a treatment plan is finalized.

  18. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    International Nuclear Information System (INIS)

    Biral, Elias J P; Tilles, Paulo F C; Fontanari, José F

    2015-01-01

    The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains. (paper)
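
    The finite-chain Monte Carlo experiment is easy to reproduce in a few lines; the sketch below simulates the F = q = 2 ring discussed above (the system size and number of updates are arbitrary choices) and reports whether the chain has reached a monocultural configuration.

      import numpy as np

      def axelrod_1d(n_sites=200, n_updates=2_000_000, seed=0):
          rng = np.random.default_rng(seed)
          culture = rng.integers(0, 2, size=(n_sites, 2))   # F = 2 features, q = 2 states each
          for _ in range(n_updates):
              i = rng.integers(n_sites)
              j = (i + rng.choice((-1, 1))) % n_sites       # random neighbour on a ring
              shared = np.count_nonzero(culture[i] == culture[j])
              # interact with probability shared/F, only if the neighbours differ in one feature
              if 0 < shared < 2 and rng.random() < shared / 2.0:
                  f = int(np.flatnonzero(culture[i] != culture[j])[0])
                  culture[i, f] = culture[j, f]
          return culture

      final = axelrod_1d()
      print("consensus reached:", bool((final == final[0]).all()))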

  19. RELAP5-3D Code Includes ATHENA Features and Models

    International Nuclear Information System (INIS)

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-01-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF 6 , xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  20. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only the spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. There are many kinds of textural information which can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information has generally been a time-consuming process, especially for the large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran, earthquake was used to identify the earthquake damage. Not only the spectral information but also textural information was used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), giving the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input

  1. Characterization of reproductive, metabolic, and endocrine features of polycystic ovary syndrome in female hyperandrogenic mouse models.

    Science.gov (United States)

    Caldwell, A S L; Middleton, L J; Jimenez, M; Desai, R; McMahon, A C; Allan, C M; Handelsman, D J; Walters, K A

    2014-08-01

    Polycystic ovary syndrome (PCOS) affects 5-10% of women of reproductive age, causing a range of reproductive, metabolic and endocrine defects including anovulation, infertility, hyperandrogenism, obesity, hyperinsulinism, and an increased risk of type 2 diabetes and cardiovascular disease. Hyperandrogenism is the most consistent feature of PCOS, but its etiology remains unknown, and ethical and logistic constraints limit definitive experimentation in humans to determine mechanisms involved. In this study, we provide the first comprehensive characterization of reproductive, endocrine, and metabolic PCOS traits in 4 distinct murine models of hyperandrogenism, comprising prenatal dihydrotestosterone (DHT, potent nonaromatizable androgen) treatment during days 16-18 of gestation, or long-term treatment (90 days from 21 days of age) with DHT, dehydroepiandrosterone (DHEA), or letrozole (aromatase inhibitor). Prenatal DHT-treated mature mice exhibited irregular estrous cycles, oligo-ovulation, reduced preantral follicle health, hepatic steatosis, and adipocyte hypertrophy, but lacked overall changes in body-fat composition. Long-term DHT treatment induced polycystic ovaries displaying unhealthy antral follicles (degenerate oocyte and/or > 10% pyknotic granulosa cells), as well as anovulation and acyclicity in mature (16-week-old) females. Long-term DHT also increased body and fat pad weights and induced adipocyte hypertrophy and hypercholesterolemia. Long-term letrozole-treated mice exhibited absent or irregular cycles, oligo-ovulation, polycystic ovaries containing hemorrhagic cysts atypical of PCOS, and displayed no metabolic features of PCOS. Long-term dehydroepiandrosterone treatment produced no PCOS features in mature mice. Our findings reveal that long-term DHT treatment replicated a breadth of ovarian, endocrine, and metabolic features of human PCOS and provides the best mouse model for experimental studies of PCOS pathogenesis.

  2. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach and is applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance- and information-based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution with desirable characteristics.

  3. CONSTRUCTION OF MECHANICAL MODEL OF THE DIESEL-TRAIN DTKR-2 CAR AND ITS FEATURES

    Directory of Open Access Journals (Sweden)

    A. Y. Kuzyshyn

    2017-12-01

    Full Text Available Purpose. The article aims to construct a mechanical model of the diesel train DTKr-2 car of the Kryukivsk Railway Car Building Works based on an analysis of the undercarriage construction. This model will be used in the study of the dynamic properties of the vehicle. When constructing the model, the design features and loading methods should be represented as faithfully as possible. Methodology. When constructing the mechanical model of the diesel train DTKr-2 car, the pneumatic spring, which is the main element of the central spring suspension, was modeled using a Kelvin-Voigt node. This node includes an elastic and a viscous element. Hydraulic shock absorbers, which are used both in the central and in the axle-box spring suspension, were modeled as viscous elements. During the research, the rigidity of the pneumatic spring, which is associated with the change in its effective area under deformation, was assumed to be zero. Findings. This article analyzed the undercarriage design of the diesel train DTKr-2 car. The mathematical models of its main units were presented, namely, for the central spring suspension, the model of the pneumatic spring. Taking into account the design features of the diesel train DTKr-2 undercarriage, its mechanical model was developed; it will be used in future studies of the dynamic properties. Originality. For the first time, a mechanical model of the diesel train DTKr-2 car was developed taking into account the interaction of the individual elements of its design. It has been proposed to model the pneumatic spring with a Kelvin-Voigt node, which includes elastic and viscous elements arranged in parallel. Practical value. On the basis of the proposed mechanical model, a system of ordinary differential equations of car undercarriage movement of the diesel train DTKr-2 (mathematical model) will be compiled. This model is further planned to be used when studying the dynamic interaction of the diesel train car undercarriage wheel
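
    For readers unfamiliar with the Kelvin-Voigt idealisation used for the pneumatic spring, the force it transmits is simply the sum of an elastic and a viscous term acting in parallel; the sketch below uses illustrative stiffness and damping values, not parameters of the DTKr-2.

      def kelvin_voigt_force(deflection, deflection_rate, stiffness=4.0e5, damping=2.0e4):
          """Force of a Kelvin-Voigt node: spring and damper acting in parallel.

          deflection      [m]     relative displacement across the suspension element
          deflection_rate [m/s]   relative velocity across the element
          stiffness       [N/m]   elastic constant (illustrative value)
          damping         [N*s/m] viscous constant (illustrative value)"""
          return stiffness * deflection + damping * deflection_rate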

  4. Key features of palliative care service delivery to Indigenous peoples in Australia, New Zealand, Canada and the United States: a comprehensive review.

    Science.gov (United States)

    Shahid, Shaouli; Taylor, Emma V; Cheetham, Shelley; Woods, John A; Aoun, Samar M; Thompson, Sandra C

    2018-05-08

    Indigenous peoples in developed countries have reduced life expectancies, particularly from chronic diseases. The lack of access to and take up of palliative care services of Indigenous peoples is an ongoing concern. To examine and learn from published studies on provision of culturally safe palliative care service delivery to Indigenous people in Australia, New Zealand (NZ), Canada and the United States of America (USA); and to compare Indigenous peoples' preferences, needs, opportunities and barriers to palliative care. A comprehensive search of multiple databases was undertaken. Articles were included if they were published in English from 2000 onwards and related to palliative care service delivery for Indigenous populations; papers could use quantitative or qualitative approaches. Common themes were identified using thematic synthesis. Studies were evaluated using Daly's hierarchy of evidence-for-practice in qualitative research. Of 522 articles screened, 39 were eligible for inclusion. Despite diversity in Indigenous peoples' experiences across countries, some commonalities were noted in the preferences for palliative care of Indigenous people: to die close to or at home; involvement of family; and the integration of cultural practices. Barriers identified included inaccessibility, affordability, lack of awareness of services, perceptions of palliative care, and inappropriate services. Identified models attempted to address these gaps by adopting the following strategies: community engagement and ownership; flexibility in approach; continuing education and training; a whole-of-service approach; and local partnerships among multiple agencies. Better engagement with Indigenous clients, an increase in number of palliative care patients, improved outcomes, and understanding about palliative care by patients and their families were identified as positive achievements. The results provide a comprehensive overview of identified effective practices with regards to

  5. Towards a Unified Business Model Vocabulary: A Proposition of Key Constructs

    OpenAIRE

    Mettler, Tobias

    2014-01-01

    The design of business models is of decisive importance and as such it has been a major research theme in service and particularly electronic markets. Today, different definitions of the term and ideas of core constructs of business models exist. In this paper we present a unified vocabulary for business models that builds upon the elementary perception of three existing, yet very dissimilar ontologies for modeling the essence of a business. The resulting unified business model vocabulary not...

  6. Passage Key Inlet, Florida; CMS Modeling and Borrow Site Impact Analysis

    Science.gov (United States)

    2016-06-01

    Impact Analysis by Kelly R. Legault and Sirisha Rayaprolu PURPOSE: This Coastal and Hydraulics Engineering Technical Note (CHETN) describes the...driven sediment transport at Passage Key Inlet. This analysis resulted in issuing a new Florida Department of Environmental Protection (FDEP) permit to...Funding for this study was provided by the USACE Regional Sediment Management (RSM) Program, a Navigation Research, Development, and Technology Portfolio

  7. Quantum key management

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, Richard John; Thrasher, James Thomas; Nordholt, Jane Elizabeth

    2016-11-29

    Innovations for quantum key management harness quantum communications to form a cryptography system within a public key infrastructure framework. In example implementations, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a Merkle signature scheme (using Winternitz one-time digital signatures or other one-time digital signatures, and Merkle hash trees) to constitute a cryptography system. More generally, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a hash-based signature scheme. This provides a secure way to identify, authenticate, verify, and exchange secret cryptographic keys. Features of the quantum key management innovations further include secure enrollment of users with a registration authority, as well as credential checking and revocation with a certificate authority, where the registration authority and/or certificate authority can be part of the same system as a trusted authority for quantum key distribution.
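    A central building block of the hash-based signature component mentioned above is the Merkle hash tree, which commits to many one-time public keys under a single root. The Python sketch below shows only this generic tree construction; the leaf values are hypothetical placeholders and no attempt is made to reproduce the patented key-management protocol itself.

        import hashlib

        def sha256(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves):
            """Compute the Merkle root over a list of leaf values (bytes)."""
            if not leaves:
                raise ValueError("at least one leaf is required")
            level = [sha256(leaf) for leaf in leaves]
            while len(level) > 1:
                if len(level) % 2 == 1:           # duplicate the last node on odd-sized levels
                    level.append(level[-1])
                level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        # Example: a root committing to four (placeholder) one-time public keys.
        one_time_keys = [b"otp-key-0", b"otp-key-1", b"otp-key-2", b"otp-key-3"]
        print(merkle_root(one_time_keys).hex())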

  8. Latent Feature Models for Uncovering Human Mobility Patterns from Anonymized User Location Traces with Metadata

    KAUST Repository

    Alharbi, Basma Mohammed

    2017-04-10

    In the mobile era, data capturing individuals’ locations have become unprecedentedly available. Data from Location-Based Social Networks is one example of large-scale user-location data. Such data provide a valuable source for understanding patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of data (at the user level), the imbalance of data with power-law users and locations check-ins degree (at the global level), and more importantly the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as an input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users privacy, the proposed models are designed to learn from anonymized location data where only IDs - not geophysical positioning or category - of locations are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns, timestamps and social ties. Time adds context to the inferred patterns, while social ties amplifies incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users’ locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different times scales. The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume
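    The dissertation's models are Bayesian latent feature models; purely as a much simpler stand-in, the sketch below factorises a toy user-by-location check-in count matrix with non-negative matrix factorisation (scikit-learn) to obtain low-dimensional user and location representations from anonymized location IDs. The matrix, its dimensions and the number of latent features are synthetic assumptions.

        import numpy as np
        from sklearn.decomposition import NMF

        # Toy anonymized check-in matrix: rows are users, columns are location IDs,
        # entries are check-in counts (sparse and power-law-like in real data).
        rng = np.random.default_rng(0)
        checkins = rng.poisson(0.3, size=(50, 200)).astype(float)

        model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
        user_features = model.fit_transform(checkins)      # low-dimensional user representation
        location_features = model.components_.T            # low-dimensional location representation

        print(user_features.shape, location_features.shape)    # (50, 10) (200, 10)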

  9. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    Science.gov (United States)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering., has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply, the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface

  10. A framework for treating DSM-5 alternative model for personality disorder features.

    Science.gov (United States)

    Hopwood, Christopher J

    2018-04-15

    Despite its demonstrated empirical superiority over the DSM-5 Section 2 categorical model of personality disorders for organizing the features of personality pathology, limitations remain with regard to the translation of the DSM-5 Section 3 alternative model of personality disorders (AMPD) to clinical practice. The goal of this paper is to outline a general and preliminary framework for approaching treatment from the perspective of the AMPD. Specific techniques are discussed for the assessment and treatment of both Criterion A personality dysfunction and Criterion B maladaptive traits. A concise and step-by-step model is presented for clinical decision making with the AMPD, in the hopes of offering clinicians a framework for treating personality pathology and promoting further research on the clinical utility of the AMPD. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence utilizing a variety of possible methods and pathways. To dissect the factors that result in effective siRNA sequences a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results Eight overall feature mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position specific nucleotide compositions. The secondary factors are position independent sequence motifs (N-grams) and guide strand to passenger strand sequence thermodynamics. Finally, the factors that are least contributory but are still predictive of efficacy are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5' most base of the guide strand is the most informative. Conclusion The capacity of specific feature mapping methods and their ability to build predictive models of RNAi activity suggests a relative biological importance of these features. Some feature mapping methods are more informative in building predictive models and overall t-test filtering provides a method to remove some noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased predictive accuracy between predicted and observed activities both within datasets by cross validation, and between independently collected RNAi activity datasets. Feature filtering to remove features should be approached carefully in that it is possible to reduce feature set size without substantially reducing predictive models, but the features retained in the candidate models become increasingly distinct. Software to perform feature prediction and SVM training and testing on nucleic acid
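    A minimal sketch of the primary factor named above, position-specific nucleotide composition, fed to a kernel SVM regressor is given below (Python, scikit-learn). The guide-strand sequences and activity values are invented for illustration, and the feature mapping is deliberately reduced to a one-hot position encoding.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        BASES = "ACGU"

        def position_features(seq: str) -> np.ndarray:
            """One-hot encoding of position-specific nucleotide composition for a 19-nt guide strand."""
            feats = np.zeros(len(seq) * len(BASES))
            for i, base in enumerate(seq):
                feats[i * len(BASES) + BASES.index(base)] = 1.0
            return feats

        # Hypothetical training data: siRNA guide sequences and measured knockdown activities.
        seqs = ["AUGGCUACGUUAGCCAUGA", "GGCAUUACGGAUCCGAUAC", "UUAGCGAUCCGGAUUACGC"]
        activities = [0.85, 0.40, 0.62]

        X = np.array([position_features(s) for s in seqs])
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.05))
        model.fit(X, activities)
        print(model.predict(X[:1]))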

  12. Thermodynamic model of social influence on two-dimensional square lattice: Case for two features

    Science.gov (United States)

    Genzor, Jozef; Bužek, Vladimír; Gendiar, Andrej

    2015-02-01

    We propose a thermodynamic multi-state spin model in order to describe equilibrial behavior of a society. Our model is inspired by the Axelrod model used in social network studies. In the framework of the statistical mechanics language, we analyze phase transitions of our model, in which the spin interaction J is interpreted as a mutual communication among individuals forming a society. The thermal fluctuations introduce a noise T into the communication, which suppresses long-range correlations. Below a certain phase transition point T_t, large-scale clusters of the individuals, who share a specific dominant property, are formed. The measure of the cluster sizes is an order parameter after spontaneous symmetry breaking. By means of the Corner transfer matrix renormalization group algorithm, we treat our model in the thermodynamic limit and classify the phase transitions with respect to inherent degrees of freedom. Each individual is chosen to possess two independent features f = 2 and each feature can assume one of q traits (e.g. interests). Hence, each individual is described by q^2 degrees of freedom. A single first-order phase transition is detected in our model if q > 2, whereas two distinct continuous phase transitions are found if q = 2 only. Evaluating the free energy, order parameters, specific heat, and the entanglement von Neumann entropy, we classify the phase transitions T_t(q) in detail. The permanent existence of the ordered phase (the large-scale cluster formation with a non-zero order parameter) is conjectured below a non-zero transition point T_t(q) ≈ 0.5 in the asymptotic regime q → ∞.
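    The study itself works in the thermodynamic limit with a corner transfer matrix renormalization group; purely to illustrate the structure of the underlying Hamiltonian, the Python sketch below runs a finite-lattice Metropolis simulation in which every site carries f = 2 features, each taking one of q traits, and the coupling J lowers the energy for every trait shared with a neighbour. Lattice size, temperature and step count are arbitrary choices, not values from the paper.

        import numpy as np

        L, f, q = 16, 2, 3              # lattice size, number of features, traits per feature
        J, T, steps = 1.0, 0.4, 200_000
        rng = np.random.default_rng(1)
        lattice = rng.integers(0, q, size=(L, L, f))

        def site_energy(lat, x, y):
            """Interaction energy of one site with its four neighbours (periodic boundaries)."""
            e = 0.0
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = lat[(x + dx) % L, (y + dy) % L]
                e -= J * np.sum(lat[x, y] == nb)    # each shared trait lowers the energy
            return e

        for _ in range(steps):                      # Metropolis single-site updates
            x, y = rng.integers(0, L, size=2)
            old = lattice[x, y].copy()
            e_old = site_energy(lattice, x, y)
            lattice[x, y, rng.integers(0, f)] = rng.integers(0, q)
            de = site_energy(lattice, x, y) - e_old
            if de > 0 and rng.random() >= np.exp(-de / T):
                lattice[x, y] = old                 # reject the move

        majority = np.bincount(lattice[:, :, 0].ravel()).argmax()
        print("fraction of sites carrying the majority trait of feature 0:",
              float(np.mean(lattice[:, :, 0] == majority)))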

  13. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are providing large numbers of service instances which are a combination of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate research on systematic discovery and selection methods have made it difficult for users to discover and choose services. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on this description, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  14. Multiscale Feature Model for Terrain Data Based on Adaptive Spatial Neighborhood

    Directory of Open Access Journals (Sweden)

    Huijie Zhang

    2013-01-01

    Full Text Available Multiresolution hierarchy based on features (FMRH) has been applied in the field of terrain modeling and has obtained significant results in real engineering. However, it is difficult to schedule multiresolution data in FMRH from external memory. This paper proposes a new multiscale feature model and related strategies to cluster spatial data blocks and solve the scheduling problems of FMRH using spatial neighborhood. In the model, the nodes with similar error in the different layers should be in one cluster. On this basis, a space index algorithm for each cluster guided by the Hilbert curve is proposed. It ensures that multiresolution terrain data can be loaded without traversing the whole FMRH; therefore, the efficiency of data scheduling is improved. Moreover, a spatial closeness theorem for clusters is put forward and proved. It guarantees that the union of data blocks composes a whole terrain without any data loss. Finally, experiments have been carried out on many different large-scale data sets, and the results demonstrate that the scheduling time is shortened and the efficiency of I/O operation is clearly improved, which is important in real engineering.

  15. A food recognition system for diabetic patients based on an optimized bag-of-features model.

    Science.gov (United States)

    Anthimopoulos, Marios M; Gianola, Lauro; Scarnato, Luca; Diem, Peter; Mougiakakou, Stavroula G

    2014-07-01

    Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words by using hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset.
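    A compressed sketch of the bag-of-features pipeline (visual dictionary, word histograms, linear SVM) is shown below in Python. Random arrays stand in for the dense SIFT descriptors computed on the HSV colour space, plain mini-batch k-means replaces the hierarchical k-means of the paper, and the dictionary is far smaller than the 10,000 words reported; the images, labels and sizes are all synthetic.

        import numpy as np
        from sklearn.cluster import MiniBatchKMeans
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        # Placeholder local descriptors: each "image" is an (n_descriptors x 128) array.
        train_images = [rng.normal(size=(300, 128)) for _ in range(20)]
        labels = np.array([0] * 10 + [1] * 10)       # two food classes for the sketch

        # 1) Build the visual dictionary from all training descriptors.
        dictionary = MiniBatchKMeans(n_clusters=50, random_state=0)
        dictionary.fit(np.vstack(train_images))

        # 2) Encode each image as a normalised histogram of visual words.
        def bof_histogram(descriptors):
            words = dictionary.predict(descriptors)
            hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
            return hist / hist.sum()

        X = np.array([bof_histogram(d) for d in train_images])

        # 3) Classify the histograms with a linear SVM.
        clf = LinearSVC(C=1.0).fit(X, labels)
        print("training accuracy:", clf.score(X, labels))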

  16. Stargardt disease: clinical features, molecular genetics, animal models and therapeutic options

    Science.gov (United States)

    Tanna, Preena; Strauss, Rupert W; Fujinami, Kaoru; Michaelides, Michel

    2017-01-01

    Stargardt disease (STGD1; MIM 248200) is the most prevalent inherited macular dystrophy and is associated with disease-causing sequence variants in the gene ABCA4. Significant advances have been made over the last 10 years in our understanding of both the clinical and molecular features of STGD1, and also the underlying pathophysiology, which has culminated in ongoing and planned human clinical trials of novel therapies. The aims of this review are to describe the detailed phenotypic and genotypic characteristics of the disease, conventional and novel imaging findings, current knowledge of animal models and pathogenesis, and the multiple avenues of intervention being explored. PMID:27491360

  17. Investigation of the blockchain systems’ scalability features using the agent based modelling

    OpenAIRE

    Šulnius, Aleksas

    2017-01-01

    Investigation of the BlockChain Systems’ Scalability Features using the Agent Based Modelling. BlockChain is currently in the spotlight of the whole FinTech industry. This technology is being called revolutionary, ground breaking, disruptive and even the WEB 3.0. On the other hand it is widely agreed that the BlockChain is in its early stages of development. In its current state BlockChain is in a similar position to that of the Internet in the early nineties. In order for this technology to gain m...

  18. Design Models as Emergent Features: An Empirical Study in Communication and Shared Mental Models in Instructional

    Science.gov (United States)

    Botturi, Luca

    2006-01-01

    This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs).…

  19. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

    Full Text Available To assist physicians in quickly finding the required 3D model from the mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to address the problem of noise sensitivity. The experiment results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
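    The DR step, keeping only the top M low-frequency discrete Fourier transform coefficients of a shape descriptor, can be sketched in a few lines of Python; the descriptors, their length and the value of M below are hypothetical, and the FVT step is omitted.

        import numpy as np

        def reduce_dft(feature_vector: np.ndarray, m: int) -> np.ndarray:
            """Keep only the top-M low-frequency DFT coefficients of a feature vector."""
            return np.fft.rfft(feature_vector)[:m]      # low frequencies carry the coarse shape

        def distance(a: np.ndarray, b: np.ndarray) -> float:
            """Distance between two reduced descriptors, used to rank retrieved 3D models."""
            return float(np.linalg.norm(a - b))

        query = np.random.default_rng(0).normal(size=256)                 # hypothetical descriptor
        database = [np.random.default_rng(i).normal(size=256) for i in range(1, 6)]

        q = reduce_dft(query, m=16)
        ranking = sorted(range(len(database)), key=lambda i: distance(q, reduce_dft(database[i], 16)))
        print("retrieval order:", ranking)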

  20. An overview of mice models: a key for understanding subtypes of mania

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Cuartas Arias

    2016-09-01

    Full Text Available Animal models have been broadly used in the study of pathophysiology and molecular and neurochemical pathways in neuropsychiatric diseases. Different approaches have used both consanguineous and non-consanguineous mice models to model behavioral patterns associated with the manic spectrum. However, the disadvantages of validating clinical and experimental protocols have hindered the replication of these studies. In this article, the advantages and disadvantages of using consanguineous lines and non-consanguineous stocks in mice animal models for the study of mania and its subtypes are discussed. Additionally, new experimental alternatives to advance the pathogenesis and pharmacogenetics of mania using animal models are proposed and analyzed.

  1. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    Science.gov (United States)

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to learning algorithm and open to all structural based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen as there is no change in the prediction; the interpretation is produced directly on the model's behaviour for the specific query. Models have been built using multiple learning algorithms including support vector machine and random forest. The models were built on public Ames mutagenicity data and a variety of fingerprint descriptors were used. These models produced a good performance in both internal and external validation with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretation revealed links that agree closely with understood mechanisms for Ames mutagenicity. This methodology allows for a greater utilisation of the predictions made by black box models and can expedite further study based on the output for a (quantitative) structure activity model. Additionally the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.

  2. A Modeling methodology for NoSQL Key-Value databases

    Directory of Open Access Journals (Sweden)

    Gerardo ROSSEL

    2017-08-01

    Full Text Available In recent years, there has been an increasing interest in the field of non-relational databases. However, far too little attention has been paid to design methodology. Key-value data stores are an important component of a class of non-relational technologies that are grouped under the name of NoSQL databases. The aim of this paper is to propose a design methodology for this type of database that allows overcoming the limitations of the traditional techniques. The proposed methodology leads to a clean design that also allows for better data management and consistency.
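    The paper's methodology is not reproduced here, but the toy sketch below illustrates the query-first, denormalised key design that key-value modelling methodologies typically formalise: one entry per access pattern, accepting duplication in exchange for single-key reads. The store, key names and records are invented and merely stand in for systems such as Redis or Riak.

        # A toy in-memory key-value store standing in for a real NoSQL key-value database.
        store = {}

        def put(key: str, value: dict) -> None:
            store[key] = value

        # Query-first design: one denormalised entry per access pattern instead of joins.
        put("customer:42", {"name": "Ada", "city": "London"})
        put("customer:42:orders", {"order_ids": [1001, 1002]})     # duplicated data ...
        put("order:1001", {"customer_id": 42, "total": 99.5})      # ... to avoid joins at read time
        put("orders:by_city:London", {"order_ids": [1001]})        # secondary access path

        print(store["customer:42:orders"])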

  3. What are the key drivers of MAC curves? A partial-equilibrium modelling approach for the UK

    International Nuclear Information System (INIS)

    Kesicki, Fabian

    2013-01-01

    Marginal abatement cost (MAC) curves are widely used for the assessment of costs related to CO2 emissions reduction in environmental economics, as well as domestic and international climate policy. Several meta-analyses and model comparisons have previously been performed that aim to identify the causes for the wide range of MAC curves. Most of these concentrate on general equilibrium models with a focus on aspects such as specific model type and technology learning, while other important aspects remain almost unconsidered, including the availability of abatement technologies and level of discount rates. This paper addresses the influence of several key parameters on MAC curves for the United Kingdom and the year 2030. A technology-rich energy system model, UK MARKAL, is used to derive the MAC curves. The results of this study show that MAC curves are robust even to extreme fossil fuel price changes, while uncertainty around the choice of the discount rate, the availability of key abatement technologies and the demand level were singled out as the most important influencing factors. By using a different model type and studying a wider range of influencing factors, this paper contributes to the debate on the sensitivity of MAC curves. - Highlights: ► A partial-equilibrium model is employed to test key sensitivities of MAC curves. ► MAC curves are found to be robust to wide-ranging changes in fossil fuel prices. ► Most influencing factors are the discount rate, availability of key technologies. ► Further important uncertainty in MAC curves is related to demand changes

  4. Application of a three-feature dispersed-barrier hardening model to neutron-irradiated Fe-Cr model alloys

    Science.gov (United States)

    Bergner, F.; Pareige, C.; Hernández-Mayoral, M.; Malerba, L.; Heintze, C.

    2014-05-01

    An attempt is made to quantify the contributions of different types of defect-solute clusters to the total irradiation-induced yield stress increase in neutron-irradiated (300 °C, 0.6 dpa), industrial-purity Fe-Cr model alloys (target Cr contents of 2.5, 5, 9 and 12 at.% Cr). Former work based on the application of transmission electron microscopy, atom probe tomography, and small-angle neutron scattering revealed the formation of dislocation loops, NiSiPCr-enriched clusters and α′-phase particles, which act as obstacles to dislocation glide. The values of the dimensionless obstacle strength are estimated in the framework of a three-feature dispersed-barrier hardening model. Special attention is paid to the effect of measuring errors, experimental details and model details on the estimates. The three families of obstacles and the hardening model are well capable of reproducing the observed yield stress increase as a function of Cr content, suggesting that the nanostructural features identified experimentally are the main, if not the only, causes of irradiation hardening in these model alloys.
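    The dispersed-barrier hardening contribution of one obstacle family is commonly written as Δσ = M·α·μ·b·√(N·d); the Python sketch below evaluates this expression for three hypothetical obstacle families and combines them by a root-sum-square superposition. All obstacle strengths, densities and sizes are placeholders rather than the values derived in the paper, and the paper's exact superposition rule may differ.

        import numpy as np

        M, mu, b = 3.06, 82e9, 0.248e-9     # Taylor factor, shear modulus (Pa), Burgers vector (m), bcc Fe

        def dbh_increment(alpha, number_density, diameter):
            """Yield stress increase (Pa) of one obstacle family: M * alpha * mu * b * sqrt(N * d)."""
            return M * alpha * mu * b * np.sqrt(number_density * diameter)

        # Hypothetical families: (obstacle strength, number density in m^-3, mean diameter in m).
        families = {
            "dislocation loops":      (0.30, 5e22, 3e-9),
            "NiSiPCr clusters":       (0.10, 2e23, 2e-9),
            "alpha-prime particles":  (0.05, 1e23, 4e-9),
        }

        increments = {name: dbh_increment(*p) for name, p in families.items()}
        total = np.sqrt(sum(v ** 2 for v in increments.values()))    # root-sum-square superposition
        for name, v in increments.items():
            print(f"{name}: {v / 1e6:.0f} MPa")
        print(f"total (RSS): {total / 1e6:.0f} MPa")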

  5. Scaling up spike-and-slab models for unsupervised feature learning.

    Science.gov (United States)

    Goodfellow, Ian J; Courville, Aaron; Bengio, Yoshua

    2013-08-01

    We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM) and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedented large problem sizes, and demonstrate that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.

  6. A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features

    Directory of Open Access Journals (Sweden)

    Chen Li

    2018-03-01

    Full Text Available To increase the prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, original DO datasets are decomposed by EEMD and we get several components. Secondly, these components are used to reconstruct four terms including a high frequency term, an intermediate frequency term, a low frequency term and a trend term. Thirdly, according to the characteristics of the high and intermediate frequency terms, which fluctuate violently, the least squares support vector machine (LSSVR) is used to predict the two terms. The fluctuation of the low frequency term is gentle and periodic, so it can be modeled by a BP neural network with an optimal mind evolutionary computation (MEC-BP). Then, the trend term is predicted using a grey model (GM) because it is nearly linear. Finally, the prediction values of the DO datasets are calculated by the sum of the forecasting values of all terms. The experimental results demonstrate that our hybrid model outperforms EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models based on the mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is proven to be an effective approach to predict aquaculture DO.
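    A stripped-down version of the decompose-predict-sum workflow is sketched below in Python. The four reconstructed terms are assumed to be already available from an EEMD decomposition (here they are synthetic series); LSSVR is replaced by scikit-learn's SVR, MEC-BP by a plain multilayer perceptron and the grey model by linear regression, so the sketch mirrors only the structure of the hybrid model, not its exact components.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR

        def lagged(series, n_lags=6):
            """Turn a 1-D series into (lagged inputs, next value) pairs for one-step-ahead prediction."""
            X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
            return X, series[n_lags:]

        # Synthetic stand-ins for the four terms reconstructed from the EEMD components.
        rng = np.random.default_rng(0)
        t = np.arange(500)
        terms = {
            "high":  (rng.normal(0, 0.1, 500), SVR(kernel="rbf", C=10.0)),    # stand-in for LSSVR
            "inter": (0.3 * np.sin(t / 5.0),   SVR(kernel="rbf", C=10.0)),
            "low":   (0.8 * np.sin(t / 50.0),  MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)),
            "trend": (0.002 * t + 6.0,         LinearRegression()),           # stand-in for the grey model
        }

        forecast = 0.0
        for name, (series, model) in terms.items():
            X, y = lagged(series)
            model.fit(X[:-1], y[:-1])
            forecast += model.predict(X[-1:])[0]    # one-step-ahead prediction of each term

        print(f"predicted next DO value: {forecast:.3f} mg/L")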

  7. Features of genomic organization in a nucleotide-resolution molecular model of the Escherichia coli chromosome.

    Science.gov (United States)

    Hacker, William C; Li, Shuxiang; Elcock, Adrian H

    2017-07-27

    We describe structural models of the Escherichia coli chromosome in which the positions of all 4.6 million nucleotides of each DNA strand are resolved. Models consistent with two basic chromosomal orientations, differing in their positioning of the origin of replication, have been constructed. In both types of model, the chromosome is partitioned into plectoneme-abundant and plectoneme-free regions, with plectoneme lengths and branching patterns matching experimental distributions, and with spatial distributions of highly-transcribed chromosomal regions matching recent experimental measurements of the distribution of RNA polymerases. Physical analysis of the models indicates that the effective persistence length of the DNA and relative contributions of twist and writhe to the chromosome's negative supercoiling are in good correspondence with experimental estimates. The models exhibit characteristics similar to those of 'fractal globules,' and even the most genomically-distant parts of the chromosome can be physically connected, through paths combining linear diffusion and inter-segmental transfer, by an average of only ∼10 000 bp. Finally, macrodomain structures and the spatial distributions of co-expressed genes are analyzed: the latter are shown to depend strongly on the overall orientation of the chromosome. We anticipate that the models will prove useful in exploring other static and dynamic features of the bacterial chromosome. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Development of the information model for consumer assessment of key quality indicators by goods labelling

    Science.gov (United States)

    Koshkina, S.; Ostrinskaya, L.

    2018-04-01

    An information model for the “key” quality indicators of goods has been developed. The model is based on an assessment of the existing state of standardization and of the quality of product labeling. In the authors’ opinion, the proposed “key” indicators are the most significant for purchasing decision making. Customers will be able to use this model through their mobile devices. The developed model allows existing processes to be decomposed into data flows and reveals the levels of possible architectural solutions. In further research, in-depth analysis of the decomposition levels of the presented information model will make it possible to determine the stages of its improvement and to reveal additional indicators of goods quality that are of interest to customers. Examining the architectural solutions for the functioning of the customer’s information environment when integrating existing databases will allow us to determine the boundaries of the model’s flexibility and customizability.

  9. Key Features in Knowledge-Driven Companies

    Directory of Open Access Journals (Sweden)

    Dana NICULESCU

    2015-03-01

    Full Text Available In today’s Romanian business environment, increasing attention should be paid to clarifying the organizational strategy, structures and systems within corporate governance, and moreover a critical eye must be focused on the organizational culture, leadership, and alignment to ethical values. Organizational transparency makes sense rationally and ethically, therefore business leaders and top management must learn how to effectively build trust, communicate, and foster true values in a transparent manner. This paper proposes an introspective view of a Romanian entrepreneurial organization within the SME sector, where many sudden changes at governance level took place in a relatively short span of time, changes that influenced over time many critical aspects of human capital, including aspects of knowledge management. In our case, short-term gains prevailed over long-term interest and people were not prepared for these changes. By analyzing the effect of the lack of transparency, communication and alignment on the employees’ engagement and involvement, the article draws on insights from different employees’ points of view, demonstrating that culture is capable of altering strategy and structure. The role of a leader of an organization is no longer simply to run the business on a daily basis, but to create the right culture based on ethics and values. It becomes essential to cultivate such a framework, based on leadership processes, toward achieving business objectives and related growth, profit, and return goals, outlining the future of corporate governance and what’s needed for continued effectiveness in a framework where knowledge is continually enhanced.

  10. Schizotypy: Key feature of Klinefelter's syndrome?

    NARCIS (Netherlands)

    Verhoeven, W.M.A.; Egger, J.I.M.

    2011-01-01

    Klinefelter's syndrome (KS; karyotype 47,XXY) is associated with specific neurocognitive impairments, especially delayed language development and impaired socioemotional evolution. There is an increased risk for psychiatric disturbances, particularly schizophrenia and affective spectrum disorders. A

  11. Upscaling key ecosystem functions across the conterminous United States by a water‐centric ecosystem model

    Science.gov (United States)

    Ge Sun; Peter Caldwell; Asko Noormets; Steven G. McNulty; Erika Cohen; al. et.

    2011-01-01

    We developed a water‐centric monthly scale simulation model (WaSSI‐C) by integrating empirical water and carbon flux measurements from the FLUXNET network and an existing water supply and demand accounting model (WaSSI). The WaSSI‐C model was evaluated with basin‐scale evapotranspiration (ET), gross ecosystem productivity (GEP), and net ecosystem exchange (NEE)...

  12. Research on oral test modeling based on multi-feature fusion

    Science.gov (United States)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrum of the speech signal is taken as the input for feature extraction. The advantage of PCNN in image segmentation and related processing is exploited to process the speech spectrogram and extract features, and a new method combining speech signal processing and image processing is explored. In addition to the spectrogram features, MFCC-based spectral features are extracted and fused with them to further improve the accuracy of spoken-language standard recognition. Considering that the input features are relatively complex and discriminative, we use a Support Vector Machine (SVM) to construct the classifier, and then compare the extracted test voice features with the standard voice features to achieve spoken-standard detection. Experiments show that the method of extracting features from spectrograms using PCNN is feasible, and the fusion of image features and spectral features can improve the detection accuracy.
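    The PCNN-based spectrogram features are not reproduced here, but the MFCC-plus-SVM part of such a pipeline can be sketched as below, assuming librosa is available for feature extraction; the audio file names and labels are hypothetical.

        import numpy as np
        import librosa                      # assumed available for MFCC extraction
        from sklearn.svm import SVC

        def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
            """Mean and standard deviation of MFCCs over time: a fixed-length utterance descriptor."""
            y, sr = librosa.load(path, sr=16000)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        # Hypothetical recordings judged "standard" vs "non-standard" pronunciation.
        standard_files = ["std_01.wav", "std_02.wav"]
        nonstandard_files = ["nonstd_01.wav", "nonstd_02.wav"]

        X = np.array([mfcc_features(f) for f in standard_files + nonstandard_files])
        y = np.array([1] * len(standard_files) + [0] * len(nonstandard_files))

        clf = SVC(kernel="rbf", C=1.0).fit(X, y)
        print(clf.predict(X[:1]))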

  13. Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism.

    Science.gov (United States)

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-05-28

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. We describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model and automatically generates a set of simpler models compatible with observational data. As a proof of principle, we analyzed the dynamic control of the transcription factor Msn2 in Saccharomyces cerevisiae, specifically the short-term mechanisms mediating the cells' recovery after release from starvation stress. Our method determined that 12 of 192 possible models were compatible with available Msn2 localization data. Iterations between model predictions and rationally designed phosphoproteomics and imaging experiments identified a single-circuit topology with a relative probability of 99% among the 192 models. Model analysis revealed that the coupling of dynamic phenomena in Msn2 phosphorylation and transport could lead to efficient stress response signaling by establishing a rate-of-change sensor. Similar principles could apply to mammalian stress response pathways. Systematic construction of dynamic models may yield detailed insight into nonobvious molecular mechanisms.

  14. Modeling the Formation of Giant Planet Cores I: Evaluating Key Processes

    OpenAIRE

    Levison, H. F.; Thommes, E.; Duncan, M. J.

    2009-01-01

    One of the most challenging problems we face in our understanding of planet formation is how Jupiter and Saturn could have formed before the solar nebula dispersed. The most popular model of giant planet formation is the so-called 'core accretion' model. In this model a large planetary embryo formed first, mainly by two-body accretion. This is then followed by a period of inflow of nebular gas directly onto the growing planet. The core accretion model has an Achilles heel, namely the very...

  15. Feature-opinion pair identification of product reviews in Chinese: a domain ontology modeling method

    Science.gov (United States)

    Yin, Pei; Wang, Hongwei; Guo, Kaiqiang

    2013-03-01

    With the emergence of the new economy based on social media, a great amount of consumer feedback on particular products is conveyed through widely spread online product reviews, making opinion mining a growing interest for both academia and industry. According to the characteristic modes of expression in Chinese, this research proposes an ontology-based linguistic model to identify the basic appraisal expression in Chinese product reviews, the “feature-opinion pair (FOP).” The product-oriented domain ontology is constructed automatically at first, then algorithms to identify FOPs are designed by mapping product features and opinions to the conceptual space of the domain ontology, and finally comparative experiments are conducted to evaluate the model. Experimental results indicate that the proposed approach is efficient in obtaining more accurate results compared to state-of-the-art algorithms. Furthermore, through identifying and analyzing FOPs, unstructured product reviews are converted into structured and machine-sensible expressions, which provide valuable information for business applications. This paper contributes to the related research in opinion mining by developing a solid foundation for further sentiment analysis at a fine-grained level and proposing a general way for automatic ontology construction.

  16. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition Shannon entropy and an improved kernel principal component analysis model is proposed. First, the features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weighting factor. The fused characteristic features were input to a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, by which the bearing running state was identified. Both test and actual operating cases were analysed.
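    The sketch below illustrates two of the stages in Python, the Shannon entropy of each separated product function and kernel PCA fusion, on synthetic signals standing in for the LMD output of real bearing vibrations; the weighting factor of the improved KPCA and the Morlet wavelet kernel SVM classifier are omitted.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        def shannon_entropy(signal: np.ndarray, bins: int = 32) -> float:
            """Shannon entropy of the amplitude distribution of one product function (PF)."""
            hist, _ = np.histogram(signal, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        # Synthetic product functions standing in for the LMD of one vibration record.
        rng = np.random.default_rng(0)
        def fake_pfs():
            t = np.linspace(0, 1, 2048)
            return [np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=t.size) for f in (35, 120, 480)]

        records = [fake_pfs() for _ in range(40)]
        X = np.array([[shannon_entropy(pf) for pf in pfs] for pfs in records])   # entropy feature vectors

        # Fuse the entropy features with kernel PCA (the paper additionally weights the components).
        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
        fused = kpca.fit_transform(X)
        print(fused.shape)      # (40, 2) fused features, ready for an SVM classifier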

  17. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    Science.gov (United States)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was tested to evaluate the superiority of its performance in predicting the customer's credit status. The algorithm is adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.

  18. Estimating Fallout Building Attributes from Architectural Features and Global Earthquake Model (GEM) Building Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Staci R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-01

    A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building’s protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.

  19. Controls on the spatial variability of key soil properties: comparing field data with a mechanistic soilscape evolution model

    Science.gov (United States)

    Vanwalleghem, T.; Román, A.; Giraldez, J. V.

    2016-12-01

    There is a need for better understanding of the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m^-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain the variability in soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
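    A minimal version of the random forest step, predicting carbon stocks from solar radiation, NDVI and slope, is sketched below with scikit-learn on synthetic plot data; the covariate ranges and the underlying relationship are invented for illustration only.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # Hypothetical plot-level data: covariates and soil organic carbon stocks.
        rng = np.random.default_rng(0)
        n = 67
        radiation = rng.uniform(0.8, 1.8, n)          # relative annual solar radiation
        ndvi = rng.uniform(0.2, 0.7, n)
        slope = rng.uniform(2.0, 35.0, n)             # degrees
        carbon = 6.0 - 1.5 * radiation + 3.0 * ndvi + rng.normal(0, 0.8, n)    # kg m^-2, synthetic

        X = np.column_stack([radiation, ndvi, slope])
        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        print("cross-validated R^2:", cross_val_score(rf, X, carbon, cv=5).mean().round(2))
        rf.fit(X, carbon)
        print("importances (radiation, NDVI, slope):", rf.feature_importances_.round(2))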

  20. Computational intelligence models to predict porosity of tablets using minimum features

    Directory of Open Access Journals (Sweden)

    Khalid MH

    2017-01-01

    behavior when presented with a challenging external validation data set (best achieved by symbolic regression: NRMSE = 3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hint at the most important variables within this factor space. Keywords: computational intelligence, artificial neural network, symbolic regression, feature selection, die compaction, porosity

  1. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    Science.gov (United States)

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in the final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy; of the segmentation methods studied, the clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. The AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  2. Protein single-model quality assessment by feature-based probability density functions.

    Science.gov (United States)

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.

  3. Data warehouse model for monitoring key performance indicators (KPIs) using goal oriented approach

    Science.gov (United States)

    Abdullah, Mohammed Thajeel; Ta'a, Azman; Bakar, Muhamad Shahbani Abu

    2016-08-01

    The growth and development of universities, just as other organizations, depend on their abilities to strategically plan and implement development blueprints which are in line with their vision and mission statements. The actualization of these statements, which are often designed into goals and sub-goals and linked to their respective actors, is better measured by defining key performance indicators (KPIs) of the university. This paper proposes ReGADaK, an extension of the GRAnD approach, which highlights the facts, dimensions, attributes, measures and KPIs of the organization. The measures from the goal analysis of this unit serve as the basis for developing the related university's KPIs. The proposed data warehouse schema is evaluated through expert review, prototyping and usability evaluation. The findings from the evaluation processes suggest that the proposed data warehouse schema is suitable for monitoring the university's KPIs.

  4. Impacts of Changing Climatic Drivers and Land use features on Future Stormwater Runoff in the Northwest Florida Basin: A Large-Scale Hydrologic Modeling Assessment

    Science.gov (United States)

    Khan, M.; Abdul-Aziz, O. I.

    2017-12-01

    Potential changes in climatic drivers and land cover features can significantly influence the stormwater budget in the Northwest Florida Basin. We investigated the hydro-climatic and land use sensitivities of stormwater runoff by developing a large-scale process-based rainfall-runoff model for the basin using the EPA Storm Water Management Model (SWMM 5.1). Climatic and hydrologic variables, as well as land use/cover features, were incorporated into the model to account for the key processes of coastal hydrology and its dynamic interactions with groundwater and sea levels. We calibrated and validated the model against historical daily streamflow observations during 2009-2012 at four major rivers in the basin. Downscaled climatic drivers (precipitation, temperature, solar radiation) projected by twenty GCMs-RCMs under CMIP5, along with projected future land use/cover features, were also incorporated into the model. The basin storm runoff was then simulated for the historical (2000s = 1976-2005) and two future periods (2050s = 2030-2059, and 2080s = 2070-2099). Comparative evaluation of the historical and future scenarios leads to important guidelines for stormwater management in Northwest Florida and similar regions under a changing climate and environment.

  5. Evaluation of Features, Events, and Processes (FEP) for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek; P. Rogers

    2004-10-27

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of biosphere features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for the license application (LA). A screening decision, either "Included" or "Excluded", is given for each FEP along with the corresponding technical basis for the excluded FEPs and the descriptions of how the included FEPs were incorporated in the biosphere model. This information is required by the U.S. Nuclear Regulatory Commission (NRC) regulations at 10 CFR 63.114 (d, e, and f) [DIRS 156605]. The FEPs addressed in this report concern characteristics of the reference biosphere, the receptor, and the environmental transport and receptor exposure pathways for the groundwater and volcanic ash exposure scenarios considered in biosphere modeling. This revision provides the summary of the implementation of included FEPs in TSPA-LA (i.e., how the FEP is included); for excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded). This report is one of the 10 documents constituting the biosphere model documentation suite. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling. The "Biosphere Model Report" describes in detail the biosphere conceptual model and mathematical model. The input parameter reports shown to the right of the "Biosphere Model Report" contain detailed descriptions of the model input parameters and their development. Outputs from these six reports are used in the "Nominal Performance Biosphere Dose Conversion Factor Analysis and Disruptive Event Biosphere Dose Conversion Factor Analysis

  6. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  7. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs

  8. Key West, Florida 1/3 Arc-second NAVD 88 Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  9. Models of hourly dry bulb temperature and relative humidity of key

    African Journals Online (AJOL)

    user

    Fig. 3: Worst cases of MFE for dry bulb temperature and relative humidity. Fig. 4: Best cases of ... the Second Joint International Conference of the University of Ilorin, Ilorin, Nigeria and University ... Erbs, D. G., “Models and Applications for Weather ...”.

  10. Key West, Florida 1/3 Arc-second MHW Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  11. Turbulence modeling for flows around convex features giving rapid eddy distortion

    International Nuclear Information System (INIS)

    Tucker, P.G.; Liu, Y.

    2007-01-01

    Reynolds averaged Navier-Stokes model performances in the stagnation and wake regions are explored for turbulent flows with relatively large Lagrangian length scales (generally larger than the scale of geometrical features) approaching small cylinders (both square and circular). The effective cylinder (or wire) diameter based Reynolds number is Re_W ≤ 2.5 x 10³. The following turbulence models are considered: a mixing-length model; standard Spalart and Allmaras (SA) and streamline curvature (and rotation) corrected SA (SARC); Secundov's ν_t-92; Secundov et al.'s two-equation ν_t-L; Wolfshtein's k-l model; the Explicit Algebraic Stress Model (EASM) of Abid et al.; the cubic model of Craft et al.; various linear k-ε models including those with wall distance based damping functions; Menter SST; k-ω; and Spalding's LVEL model. The use of differential equation distance functions (Poisson and Hamilton-Jacobi equation based) for palliative turbulence modeling purposes is explored. The performance of SA with these distance functions is also considered in the sharp convex geometry region of an airfoil trailing edge. For the cylinder, with Re_W ∼ 2.5 x 10³ the mixing-length and k-l models give strong turbulence production in the wake region. However, in agreement with eddy viscosity estimates, the LVEL and Secundov ν_t-92 models show relatively little cylinder influence on turbulence. On the other hand, two-equation models (as does the one-equation SA) suggest the cylinder gives a strong turbulence deficit in the wake region. Also, for SA, an order of magnitude cylinder diameter decrease from Re_W = 2500 to 250 surprisingly strengthens the cylinder's disruptive influence. Importantly, results for Re_W below 250 remain essentially the same as at Re_W = 250, i.e. no matter how small the cylinder/wire, its influence does not, as it should, vanish. Similar tests for the Launder-Sharma k-ε, Menter SST and k-ω show, in accordance with physical reality, the cylinder's influence diminishing, albeit slowly, with size. Results

  12. Modelling the exposure of wildlife to radiation: key findings and activities of IAEA working groups

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A. [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi; Vandenhove, Hildegarde [Belgian Nuclear Research Centre, Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Beaugelin-Seiller, Karine [Institut de Radioprotection et de Surete Nucleaire (IRSN), PRP-ENV, SERIS, LM2E, Cadarache (France); Johansen, Mathew P. [ANSTO Australian Nuclear Science and Technology Organisation, New Illawarra Rd, Menai, NSW (Australia); Goulet, Richard [Canadian Nuclear Safety Commission, Environmental Risk Assessment Division, 280 Slater, Ottawa, K1A0H3 (Canada); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Ruedig, Elizabeth [Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins (United States); Stark, Karolina; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Andersson, Pal [Swedish Radiation Safety Authority, SE-171 16, Stockholm (Sweden); Copplestone, David [Biological and Environmental Sciences, University of Stirling, Stirling, FK9 4LA (United Kingdom); Yankovich, Tamara L.; Fesenko, Sergey [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)

    2014-07-01

    In total, participants from 14 countries, representing 19 organisations, actively participated in the model application/inter-comparison activities of the IAEA's EMRAS II programme Biota Modelling Group. A range of models/approaches were used by participants (e.g. the ERICA Tool, RESRAD-BIOTA, the ICRP Framework). The agreed objectives of the group were: 'To improve Member States' capabilities for protection of the environment by comparing and validating models being used, or developed, for biota dose assessment (that may be used) as part of the regulatory process of licensing and compliance monitoring of authorised releases of radionuclides.' The activities of the group, the findings of which will be described, included: - An assessment of the predicted unweighted absorbed dose rates for 74 radionuclides estimated by 10 approaches for five of the ICRP's Reference Animal and Plant geometries, assuming 1 Bq per unit organism or media. - Modelling the effect of heterogeneous distributions of radionuclides in sediment profiles on the estimated exposure of organisms. - Model prediction - field data comparisons for freshwater ecosystems in a uranium mining area and a number of wetland environments. - An evaluation of the application of available models to a scenario considering radioactive waste buried in shallow trenches. - Estimating the contribution of ²³⁵U to dose rates in freshwater environments. - Evaluation of the factors contributing to variation in modelling results. The work of the group continues within the framework of the IAEA's MODARIA programme, which was initiated in 2012. The work plan of the MODARIA working group has largely been defined by the findings of the previous EMRAS programme. On-going activities of the working group, which will be described, include the development of a database of dynamic parameters for wildlife dose assessment and exercises involving modelling the exposure of organisms in the marine coastal

  13. Key Characteristics of Combined Accident including TLOFW accident for PSA Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bo Gyung; Kang, Hyun Gook [KAIST, Daejeon (Korea, Republic of); Yoon, Ho Joon [Khalifa University of Science, Technology and Research, Abu Dhabi (United Arab Emirates)

    2015-05-15

    Conventional PSA techniques cannot adequately evaluate all events. Conventional PSA models usually focus on single internal events such as DBAs, or on external hazards such as fires and earthquakes. However, the Fukushima accident in Japan in 2011 revealed that very rare events need to be considered in the PSA model, both to prevent radioactive releases to the environment caused by inadequate accident management under a lack of information and to improve emergency operating procedures. In particular, PSA results can be used by regulators for decision making. Moreover, designers can identify weaknesses in plant safety from the quantified results and understand accident sequences in terms of human actions and system availability. This study addresses PSA modeling of combined accidents including the total loss of feedwater (TLOFW) accident. The TLOFW accident is a representative accident involving the failure of cooling through the secondary side. If heat transfer is insufficient due to failure of the secondary side, heat accumulates in the primary side from continuous core decay heat. Transients with loss of feedwater include the total loss of feedwater accident, the loss of condenser vacuum accident, and closure of all MSIVs. When residual heat removal by the secondary side is terminated, safety injection into the RCS with direct primary depressurization would provide alternative heat removal. This operation is called feed and bleed (F and B) operation. Combined accidents including the TLOFW accident are very rare events and are only partially considered in conventional PSA models. Since the necessity of F and B operation depends on plant conditions, PSA modeling of combined accidents including the TLOFW accident is necessary to identify design and operational vulnerabilities. The PSA is significant for assessing the risk of NPPs and for identifying design and operational vulnerabilities. Even though the combined accident is a very rare event, the consequence of combined

  14. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs using low-cost IC-cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC-cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. It evaluates and utilizes only reliable features among the trained ones during each authentication, and achieves high recognition performance rates. The field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a constantly high recognition performance rate independent of face image quality, that is, about four times lower EER (Equal Error Rate) under a variety of image conditions than a system without any prior probability distributions. In contrast, using image-difference features without any prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance rates because of its general optimization over all data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to a performance comparable to PCA under poor image conditions.

  15. Climatic features of the Red Sea from a regional assimilative model

    KAUST Repository

    Viswanadhapalli, Yesubabu

    2016-08-16

    The Advanced Research version of Weather Research and Forecasting (WRF-ARW) model was used to generate a downscaled, 10-km resolution regional climate dataset over the Red Sea and adjacent region. The model simulations are performed based on two, two-way nested domains of 30- and 10-km resolutions assimilating all conventional observations using a cyclic three-dimensional variational approach over an initial 12-h period. The improved initial conditions are then used to generate regional climate products for the following 24 h. We combined the resulting daily 24-h datasets to construct a 15-year Red Sea atmospheric downscaled product from 2000 to 2014. This 15-year downscaled dataset is evaluated via comparisons with various in situ and gridded datasets. Our analysis indicates that the assimilated model successfully reproduced the spatial and temporal variability of temperature, wind, rainfall, relative humidity and sea level pressure over the Red Sea region. The model also efficiently simulated the seasonal and monthly variability of wind patterns, the Red Sea Convergence Zone and associated rainfall. Our results suggest that dynamical downscaling and assimilation of available observations improve the representation of regional atmospheric features over the Red Sea compared to global analysis data from the National Centers for Environmental Prediction. We use the dataset to describe the atmospheric climatic conditions over the Red Sea region. © 2016 Royal Meteorological Society.

  16. Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking

    Directory of Open Access Journals (Sweden)

    Katsuya Hashino

    2016-01-01

    Full Text Available Classical scale invariance (CSI) may be one of the solutions for the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman–Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction on the Higgs–photon–photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼+70%) from the SM prediction is universally predicted, which is independent of masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.
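
    To see where a deviation of this size comes from, a generic one-loop (Gildener–Weinberg type) argument can be sketched; this is a standard CSI estimate, not a reproduction of the paper's own derivation. Along the flat direction the effective potential, written so that its minimum sits at the electroweak VEV v and with B the loop-generated quartic coefficient, is (in LaTeX notation)

    V(\varphi) = B\,\varphi^{4}\left[\ln\frac{\varphi^{2}}{v^{2}} - \frac{1}{2}\right],
    \qquad
    m_h^{2} = V''(v) = 8\,B\,v^{2},
    \qquad
    \lambda_{hhh} = V'''(v) = 40\,B\,v = \frac{5}{3}\cdot\frac{3\,m_h^{2}}{v},

    so the triple Higgs coupling exceeds the tree-level SM value 3 m_h²/v by a universal factor of 5/3, i.e. roughly +70%, independently of the details of the additional scalars.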

  17. CHARACTERISTIC FEATURES OF MUELLER MATRIX PATTERNS FOR POLARIZATION SCATTERING MODEL OF BIOLOGICAL TISSUES

    Directory of Open Access Journals (Sweden)

    E DU

    2014-01-01

    Full Text Available We developed a model to describe polarized photon scattering in biological tissues. In this model, tissues are simplified to a mixture of scatterers and surrounding medium. There are two types of scatterers in the model: solid spheres and infinitely long solid cylinders. Variables related to the scatterers include: the densities and sizes of the spheres and cylinders, the orientation and angular distribution of cylinders. Variables related to the surrounding medium include: the refractive index, absorption coefficient and birefringence. In this paper, as a development we introduce an optical activity effect to the model. By comparing experiments and Monte Carlo simulations, we analyze the backscattering Mueller matrix patterns of several tissue-like media, and summarize the different effects coming from anisotropic scattering and optical properties. In addition, we propose a possible method to extract the optical activity values for tissues. Both the experimental and simulated results show that, by analyzing the Mueller matrix patterns, the microstructure and optical properties of the medium can be obtained. The characteristic features of Mueller matrix patterns are potentially powerful tools for studying the contrast mechanisms of polarization imaging for medical diagnosis.

  18. Animal Models of Diabetic Macrovascular Complications: Key Players in the Development of New Therapeutic Approaches

    Directory of Open Access Journals (Sweden)

    Suvi E. Heinonen

    2015-01-01

    Full Text Available Diabetes mellitus is a lifelong, incapacitating metabolic disease associated with chronic macrovascular complications (coronary heart disease, stroke, and peripheral vascular disease and microvascular disorders leading to damage of the kidneys (nephropathy and eyes (retinopathy. Based on the current trends, the rising prevalence of diabetes worldwide will lead to increased cardiovascular morbidity and mortality. Therefore, novel means to prevent and treat these complications are needed. Under the auspices of the IMI (Innovative Medicines Initiative, the SUMMIT (SUrrogate markers for Micro- and Macrovascular hard end points for Innovative diabetes Tools consortium is working on the development of novel animal models that better replicate vascular complications of diabetes and on the characterization of the available models. In the past years, with the high level of genomic information available and more advanced molecular tools, a very large number of models has been created. Selecting the right model for a specific study is not a trivial task and will have an impact on the study results and their interpretation. This review gathers information on the available experimental animal models of diabetic macrovascular complications and evaluates their pros and cons for research purposes as well as for drug development.

  19. TRI Microspheres prevent key signs of dry eye disease in a murine, inflammatory model.

    Science.gov (United States)

    Ratay, Michelle L; Balmert, Stephen C; Acharya, Abhinav P; Greene, Ashlee C; Meyyappan, Thiagarajan; Little, Steven R

    2017-12-13

    Dry eye disease (DED) is a highly prevalent ocular disorder characterized by an abnormal tear film and ocular surface. Recent experimental data have suggested that the underlying pathology of DED involves inflammation of the lacrimal functional unit (LFU), comprising the cornea, conjunctiva, lacrimal gland and interconnecting innervation. This inflammation of the LFU ultimately results in tissue deterioration and the symptoms of DED. Moreover, an increase in pathogenic lymphocyte infiltration and the secretion of pro-inflammatory cytokines are involved in the propagation of DED-associated inflammation. Studies have demonstrated that the adoptive transfer of regulatory T cells (Tregs) can mediate the inflammation caused by pathogenic lymphocytes. Thus, as an approach to treating the inflammation associated with DED, we hypothesized that it was possible to enrich the body's own endogenous Tregs by locally delivering a specific combination of Treg-inducing factors through degradable polymer microspheres (TRI microspheres; TGF-β1, Rapamycin (Rapa), and IL-2). This local controlled release system is capable of shifting the Treg/T effector balance and, in turn, preventing key signs of dry eye disease, preserving aqueous tear secretion, conjunctival goblet cells and corneal epithelial integrity, and reducing the pro-inflammatory cytokine milieu in the tissue.

  20. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    Science.gov (United States)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of: characteristic grain size, input water and sediment discharges and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations create a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data is extracted from a DEM from 2015. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision. The model's framework is generic, but inclusion of individual

  1. Emporium Model: The Key to Content Retention in Secondary Math Courses

    Directory of Open Access Journals (Sweden)

    Sandra Wilder

    2016-07-01

    Full Text Available The math emporium model was first developed by Virginia Tech in 1999. In the emporium model students use computer-based learning resources, engage in active learning, and work toward mastery of concepts. This approach to teaching and learning mathematics was piloted in a rural STEM high school. The purpose of this experimental study was to compare the impact of the emporium model and the traditional approach to instruction on student achievement and retention of algebra. The results indicated that both approaches to instruction were equally effective in improving student mathematics knowledge. However, the findings revealed that the students in the emporium section had significantly higher retention of the content knowledge.

  2. Explaining electric conductivity using the particle-in-a-box model: quantum superposition is the key

    Science.gov (United States)

    Sivanesan, Umaseh; Tsang, Kin; Izmaylov, Artur F.

    2017-12-01

    Most of the textbooks explaining electric conductivity in the context of quantum mechanics provide either incomplete or semi-classical explanations that are not connected with the elementary concepts of quantum mechanics. We illustrate the conduction phenomena using the simplest model system in quantum dynamics, a particle in a box (PIB). To induce the particle dynamics, a linear potential tilting the bottom of the box is introduced, which is equivalent to imposing a constant electric field for a charged particle. Although the PIB model represents a closed system that cannot have a flow of electrons through the system, we consider the oscillatory dynamics of the particle probability density as the analogue of the electric current. Relating the amplitude and other parameters of the particle oscillatory dynamics with the gap between the ground and excited states of the PIB model allows us to demonstrate one of the most basic dependencies of electric conductivity on the valence-conduction band gap of the material.
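
    A minimal numerical sketch of the analogy (not the authors' code; all parameters are illustrative, with ħ = m = 1) can be written in a few lines of Python: a particle in a box prepared in an equal superposition of the two lowest eigenstates has a probability density, and hence a mean position, that oscillates at the Bohr frequency ω = E₂ − E₁, the quantity that plays the role of the "current" and slows down as the gap widens.

    import numpy as np

    # Particle in a box of width L (natural units: hbar = m = 1).
    L = 1.0
    x = np.linspace(0.0, L, 500)
    dx = x[1] - x[0]

    def psi(n, x):
        """Stationary state n of the infinite square well."""
        return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

    def energy(n):
        return (n * np.pi) ** 2 / (2.0 * L ** 2)

    # Equal superposition of the ground and first excited states.
    E1, E2 = energy(1), energy(2)
    omega = E2 - E1  # Bohr frequency: sets the oscillation of the density

    def mean_position(t):
        """<x>(t) for the superposition state; oscillates at frequency omega."""
        psi_t = (psi(1, x) * np.exp(-1j * E1 * t) +
                 psi(2, x) * np.exp(-1j * E2 * t)) / np.sqrt(2.0)
        density = np.abs(psi_t) ** 2
        return np.sum(x * density) * dx

    for t in np.linspace(0.0, 2.0 * np.pi / omega, 5):
        print(f"t = {t:6.3f}  <x> = {mean_position(t):.4f}")

    Because the oscillation period is 2π/(E₂ − E₁), widening the "band gap" analogue slows the charge-density sloshing, which is the qualitative connection to conductivity the abstract describes.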

  3. Physiologically-based toxicokinetic models help identifying the key factors affecting contaminant uptake during flood events

    Energy Technology Data Exchange (ETDEWEB)

    Brinkmann, Markus; Eichbaum, Kathrin [Department of Ecosystem Analysis, Institute for Environmental Research,ABBt – Aachen Biology and Biotechnology, RWTH Aachen University, Worringerweg 1, 52074 Aachen (Germany); Kammann, Ulrike [Thünen-Institute of Fisheries Ecology, Palmaille 9, 22767 Hamburg (Germany); Hudjetz, Sebastian [Department of Ecosystem Analysis, Institute for Environmental Research,ABBt – Aachen Biology and Biotechnology, RWTH Aachen University, Worringerweg 1, 52074 Aachen (Germany); Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52056 Aachen (Germany); Cofalla, Catrina [Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52056 Aachen (Germany); Buchinger, Sebastian; Reifferscheid, Georg [Federal Institute of Hydrology (BFG), Department G3: Biochemistry, Ecotoxicology, Am Mainzer Tor 1, 56068 Koblenz (Germany); Schüttrumpf, Holger [Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52056 Aachen (Germany); Preuss, Thomas [Department of Environmental Biology and Chemodynamics, Institute for Environmental Research,ABBt- Aachen Biology and Biotechnology, RWTH Aachen University, Worringerweg 1, 52074 Aachen (Germany); and others

    2014-07-01

    Highlights: • A PBTK model for trout was coupled with a sediment equilibrium partitioning model. • The influence of physical exercise on pollutant uptake was studied using the model. • Physical exercise during flood events can increase the level of biliary metabolites. • Cardiac output and effective respiratory volume were identified as relevant factors. • These confounding factors need to be considered also for bioconcentration studies. - Abstract: As a consequence of global climate change, we will likely be facing an increasing frequency and intensity of flood events. Thus, the ecotoxicological relevance of sediment re-suspension is of growing concern. It is vital to understand contaminant uptake from suspended sediments and relate it to effects in aquatic biota. Here we report on a computational study that utilizes a physiologically based toxicokinetic model to predict uptake, metabolism and excretion of sediment-borne pyrene in rainbow trout (Oncorhynchus mykiss). To this end, data from two experimental studies were compared with the model predictions: (a) batch re-suspension experiments with constant concentration of suspended particulate matter at two different temperatures (12 and 24 °C), and (b) simulated flood events in an annular flume. The model predicted both the final concentrations and the kinetics of 1-hydroxypyrene secretion into the gall bladder of exposed rainbow trout well. We were able to show that exhaustive exercise during exposure in simulated flood events can lead to increased levels of biliary metabolites and identified cardiac output and effective respiratory volume as the two most important factors for contaminant uptake. The results of our study clearly demonstrate the relevance and the necessity to investigate uptake of contaminants from suspended sediments under realistic exposure scenarios.
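
    The role of respiratory volume can be illustrated with a deliberately crude one-compartment toxicokinetic caricature (not the authors' multi-compartment PBTK model; parameter names and values below are invented for illustration): uptake scales with ventilation, so doubling the effective respiratory volume as a proxy for exhaustive exercise raises the internal burden available for biotransformation.

    import numpy as np

    def internal_burden(ventilation, c_water=1.0, k_e=0.2, t_end=48.0, dt=0.01):
        """Integrate dC/dt = ventilation * c_water - k_e * C with forward Euler.

        ventilation : effective respiratory volume (illustrative units)
        c_water     : dissolved contaminant concentration (arbitrary units)
        k_e         : first-order elimination/biotransformation rate (1/h)
        """
        c = 0.0
        for _ in np.arange(0.0, t_end, dt):
            c += (ventilation * c_water - k_e * c) * dt
        return c

    rest = internal_burden(ventilation=0.05)      # resting fish
    exercise = internal_burden(ventilation=0.10)  # exhaustive exercise (doubled)
    print(f"burden at rest: {rest:.2f}, during exercise: {exercise:.2f} "
          f"({exercise / rest:.1f}x higher)")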

  4. Physiologically-based toxicokinetic models help identifying the key factors affecting contaminant uptake during flood events

    International Nuclear Information System (INIS)

    Brinkmann, Markus; Eichbaum, Kathrin; Kammann, Ulrike; Hudjetz, Sebastian; Cofalla, Catrina; Buchinger, Sebastian; Reifferscheid, Georg; Schüttrumpf, Holger; Preuss, Thomas

    2014-01-01

    Highlights: • A PBTK model for trout was coupled with a sediment equilibrium partitioning model. • The influence of physical exercise on pollutant uptake was studies using the model. • Physical exercise during flood events can increase the level of biliary metabolites. • Cardiac output and effective respiratory volume were identified as relevant factors. • These confounding factors need to be considered also for bioconcentration studies. - Abstract: As a consequence of global climate change, we will be likely facing an increasing frequency and intensity of flood events. Thus, the ecotoxicological relevance of sediment re-suspension is of growing concern. It is vital to understand contaminant uptake from suspended sediments and relate it to effects in aquatic biota. Here we report on a computational study that utilizes a physiologically based toxicokinetic model to predict uptake, metabolism and excretion of sediment-borne pyrene in rainbow trout (Oncorhynchus mykiss). To this end, data from two experimental studies were compared with the model predictions: (a) batch re-suspension experiments with constant concentration of suspended particulate matter at two different temperatures (12 and 24 °C), and (b) simulated flood events in an annular flume. The model predicted both the final concentrations and the kinetics of 1-hydroxypyrene secretion into the gall bladder of exposed rainbow trout well. We were able to show that exhaustive exercise during exposure in simulated flood events can lead to increased levels of biliary metabolites and identified cardiac output and effective respiratory volume as the two most important factors for contaminant uptake. The results of our study clearly demonstrate the relevance and the necessity to investigate uptake of contaminants from suspended sediments under realistic exposure scenarios

  5. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Muhammad Sohaib

    2017-12-01

    Full Text Available Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).
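
    A minimal sketch of the kind of hybrid time- and frequency-domain feature pool described above (not the authors' exact feature set; the sampling rate and the synthetic signal are placeholders) might look as follows; in a full pipeline such per-segment vectors would be fed to an SAE-based DNN or another classifier.

    import numpy as np
    from scipy.stats import kurtosis, skew

    def hybrid_features(segment, fs=12000):
        """Time- and frequency-domain statistics for one vibration segment."""
        rms = np.sqrt(np.mean(segment ** 2))
        feats = {
            "rms": rms,
            "peak": np.max(np.abs(segment)),
            "crest_factor": np.max(np.abs(segment)) / rms,
            "kurtosis": kurtosis(segment),
            "skewness": skew(segment),
        }
        spectrum = np.abs(np.fft.rfft(segment))
        freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
        feats["spectral_centroid"] = np.sum(freqs * spectrum) / np.sum(spectrum)
        feats["spectral_energy"] = np.sum(spectrum ** 2)
        return feats

    # Illustrative use on a synthetic signal (a real pipeline would use bearing data).
    rng = np.random.default_rng(0)
    signal = (np.sin(2 * np.pi * 157 * np.arange(2048) / 12000)
              + 0.3 * rng.standard_normal(2048))
    print(hybrid_features(signal))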

  6. Green Infrastructure Design Based on Spatial Conservation Prioritization and Modeling of Biodiversity Features and Ecosystem Services.

    Science.gov (United States)

    Snäll, Tord; Lehtomäki, Joona; Arponen, Anni; Elith, Jane; Moilanen, Atte

    2016-02-01

    There is high-level political support for the use of green infrastructure (GI) across Europe, to maintain viable populations and to provide ecosystem services (ES). Even though GI is inherently a spatial concept, modern tools for spatial planning have not been recognized, for example in the recent European Environment Agency (EEA) report. We outline a toolbox of methods useful for GI design that explicitly accounts for biodiversity and ES. Data on species occurrence, habitats, and environmental variables are increasingly available via open-access internet platforms. Such data can be synthesized by statistical species distribution modeling, producing maps of biodiversity features. These, together with maps of ES, can form the basis for GI design. We argue that spatial conservation prioritization (SCP) methods are effective tools for GI design, as the overall SCP goal is cost-effective allocation of conservation efforts. Corridors are currently promoted by the EEA as the means for implementing GI design, but they typically target the needs of only a subset of the regional species pool. SCP methods would help to ensure that GI provides a balanced solution for the requirements of many biodiversity features (e.g., species, habitat types) and ES simultaneously in a cost-effective manner. Such tools are necessary to make GI into an operational concept for combating biodiversity loss and promoting ES.

  7. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis.

    Science.gov (United States)

    Sohaib, Muhammad; Kim, Cheol-Hong; Kim, Jong-Myon

    2017-12-11

    Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).

  8. Vascular dynamics aid a coupled neurovascular network learn sparse independent features: A computational model

    Directory of Open Access Journals (Sweden)

    Ryan Thomas Philips

    2016-02-01

    Full Text Available Cerebral vascular dynamics are generally thought to be controlled by neural activity in a unidirectional fashion. However, both computational modeling and experimental evidence point to the feedback effects of vascular dynamics on neural activity. Vascular feedback in the form of glucose and oxygen controls neuronal ATP, either directly or via the agency of astrocytes, which in turn modulates neural firing. Recently, a detailed model of the neuron-astrocyte-vessel system has shown how vasomotion can modulate neural firing. Similarly, arguing from known cerebrovascular physiology, an approach known as `hemoneural hypothesis' postulates functional modulation of neural activity by vascular feedback. To instantiate this perspective, we present a computational model in which a network of `vascular units' supplies energy to a neural network. The complex dynamics of the vascular network, modeled by a network of oscillators, turns neurons ON and OFF randomly. The informational consequence of such dynamics is explored in the context of an auto-encoder network. In the proposed model, each vascular unit supplies energy to a subset of hidden neurons of an autoencoder network, which constitutes its `projective field'. Neurons that receive adequate energy in a given trial have reduced threshold, and thus are prone to fire. Dynamics of the vascular network are governed by changes in the reconstruction error of the auto-encoder network, interpreted as the neuronal demand. Vascular feedback causes random inactivation of a subset of hidden neurons in every trial. We observe that, under conditions of desynchronized vascular dynamics, the output reconstruction error is low and the feature vectors learnt are sparse and independent. Our earlier modeling study highlighted the link between desynchronized vascular dynamics and efficient energy delivery in skeletal muscle. We now show that desynchronized vascular dynamics leads to efficient training in an auto
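
    The core computational idea — trial-by-trial gating of hidden units by an external "energy" signal while an autoencoder learns to reconstruct its input — can be caricatured in a short numpy script. This is a schematic sketch, not the published model: the oscillator network is replaced here by simple random inactivation, and the architecture, gating probability and learning rate are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_samples = 20, 10, 500
    # Low-rank synthetic data standing in for sensory input.
    X = rng.standard_normal((n_samples, 5)) @ rng.standard_normal((5, n_in))

    W1 = 0.1 * rng.standard_normal((n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = 0.1 * rng.standard_normal((n_hid, n_in)); b2 = np.zeros(n_in)
    lr = 0.01

    for epoch in range(200):
        # "Vascular" gating: each trial a random subset of hidden units gets no energy.
        gate = (rng.random(n_hid) > 0.3).astype(float)
        A = np.tanh(X @ W1 + b1)
        H = A * gate                      # gated hidden activity
        Xhat = H @ W2 + b2                # linear reconstruction
        err = Xhat - X                    # reconstruction error ("neuronal demand")
        # Backpropagation through the gated network.
        gW2 = H.T @ err / n_samples; gb2 = err.mean(0)
        dpre = (err @ W2.T) * gate * (1.0 - A ** 2)
        gW1 = X.T @ dpre / n_samples; gb1 = dpre.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("final reconstruction MSE:", np.mean((Xhat - X) ** 2))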

  9. Evaluating predictive models for solar energy growth in the US states and identifying the key drivers

    Science.gov (United States)

    Chakraborty, Joheen; Banerji, Sugata

    2018-03-01

    Driven by a desire to control climate change and reduce the dependence on fossil fuels, governments around the world are increasing the adoption of renewable energy sources. However, among the US states, we observe a wide disparity in renewable penetration. In this study, we have identified and cleaned over a dozen datasets representing solar energy penetration in each US state, and the potentially relevant socioeconomic and other factors that may be driving the growth in solar. We have applied a number of predictive modeling approaches - including machine learning and regression - on these datasets over a 17-year period and evaluated the relative performance of the models. Our goals were: (1) identify the most important factors that are driving the growth in solar, (2) choose the most effective predictive modeling technique for solar growth, and (3) develop a model for predicting next year’s solar growth using this year’s data. We obtained very promising results with random forests (about 90% efficacy) and varying degrees of success with support vector machines and regression techniques (linear, polynomial, ridge). We also identified states with solar growth slower than expected and representing a potential for stronger growth in future.
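
    As a rough illustration of the best-performing approach reported above (random forests predicting next year's solar growth from this year's data), the sketch below uses scikit-learn on a synthetic state-by-year panel; the column names and numbers are placeholders, not the study's actual datasets or results.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 850  # e.g. 50 states x 17 years
    df = pd.DataFrame({
        "median_income": rng.normal(55, 10, n),
        "electricity_price": rng.normal(12, 3, n),
        "solar_irradiance": rng.normal(4.5, 1.0, n),
        "incentive_score": rng.uniform(0, 10, n),
        "solar_capacity": rng.gamma(2.0, 50.0, n),
    })
    # Target: next year's capacity (synthesized here from the current features).
    df["capacity_next_year"] = (df["solar_capacity"] * 1.1
                                + 5 * df["incentive_score"]
                                + rng.normal(0, 10, n))

    X = df.drop(columns="capacity_next_year")
    y = df["capacity_next_year"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_tr, y_tr)
    print("R^2 on held-out rows:", r2_score(y_te, model.predict(X_te)))
    print("feature importances:",
          dict(zip(X.columns, model.feature_importances_.round(3))))

    The feature importances reported by the forest are what would be inspected to identify the key drivers of growth.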

  10. Integrating semantics and procedural generation: key enabling factors for declarative modeling of virtual worlds

    NARCIS (Netherlands)

    Bidarra, R.; Kraker, K.J. de; Smelik, R.M.; Tutenel, T.

    2010-01-01

    Manual content creation for virtual worlds can no longer satisfy the increasing demand arising from areas as entertainment and serious games, simulations, movies, etc. Furthermore, currently deployed modeling tools basically do not scale up: while they become more and more specialized and complex,

  11. Thermodynamic modeling of the Ge-Ti system supported by key experiment

    International Nuclear Information System (INIS)

    Liu, Dandan; Yan, Huanli; Yuan, Xiaoming; Chung, Yoonsung; Du, Yong; Xu, Honghui; Liu, Libin; Nash, Philip

    2011-01-01

    Highlights: → All of the experimental phase diagram and thermodynamic data available for the Ge-Ti system have been critically evaluated. → The general features of the Ge-Ti system and the enthalpy of formation of Ti₅Ge₃ have been checked via experiment. The annealed samples are characterized by X-ray diffraction, scanning electron microscopy and differential thermal analysis. → An optimum thermodynamic data set for the Ge-Ti system was obtained. The comprehensive comparison shows that the calculated phase diagram and thermodynamic properties are in good agreement with the experimental data. - Abstract: A complete thermodynamic investigation of the Ge-Ti system was performed in this study. Seven samples were prepared by arc-melting the pure elements in order to check the literature data on the phase diagram and the enthalpy of formation of Ti₅Ge₃. The samples were annealed at certain temperatures for extended periods of time, and then quenched. Both the as-cast and annealed samples were examined by X-ray diffraction (XRD) analysis and scanning electron microscopy (SEM). The phase transformation temperatures were measured by differential thermal analysis (DTA). The measurement of the enthalpy of formation for Ti₅Ge₃ was performed using the Kleppa-type HTRC with the calorimeter temperature set at 1100 ± 2 °C. Based upon the literature data and current experimental results, the Ge-Ti system was critically assessed by means of the CALPHAD approach. The calculated phase diagram and thermodynamic properties agree well with the literature data and the present experimental results.
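
    For reference, the standard CALPHAD description of a binary substitutional phase (e.g. the Ge–Ti liquid) expresses the molar Gibbs energy with a Redlich–Kister polynomial for the excess term; the interaction parameters ⁱL are what such an assessment optimizes against the measured phase boundaries and calorimetric enthalpies. The expression below (in LaTeX notation) is the generic form, not the assessed parameter values of this particular study.

    G_{m}^{\phi} = x_{\mathrm{Ge}}\,{}^{0}G_{\mathrm{Ge}}^{\phi}
                 + x_{\mathrm{Ti}}\,{}^{0}G_{\mathrm{Ti}}^{\phi}
                 + RT\left(x_{\mathrm{Ge}}\ln x_{\mathrm{Ge}} + x_{\mathrm{Ti}}\ln x_{\mathrm{Ti}}\right)
                 + {}^{\mathrm{ex}}G_{m}^{\phi},
    \qquad
    {}^{\mathrm{ex}}G_{m}^{\phi} = x_{\mathrm{Ge}}\,x_{\mathrm{Ti}}
                 \sum_{i \ge 0} {}^{i}L^{\phi}\left(x_{\mathrm{Ge}} - x_{\mathrm{Ti}}\right)^{i}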

  12. Stargardt disease: clinical features, molecular genetics, animal models and therapeutic options.

    Science.gov (United States)

    Tanna, Preena; Strauss, Rupert W; Fujinami, Kaoru; Michaelides, Michel

    2017-01-01

    Stargardt disease (STGD1; MIM 248200) is the most prevalent inherited macular dystrophy and is associated with disease-causing sequence variants in the gene ABCA4. Significant advances have been made over the last 10 years in our understanding of both the clinical and molecular features of STGD1, and also the underlying pathophysiology, which has culminated in ongoing and planned human clinical trials of novel therapies. The aims of this review are to describe the detailed phenotypic and genotypic characteristics of the disease, conventional and novel imaging findings, current knowledge of animal models and pathogenesis, and the multiple avenues of intervention being explored. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  13. Primordial inhomogeneities in the expanding universe. II - General features of spherical models at late times

    Science.gov (United States)

    Olson, D. W.; Silk, J.

    1979-01-01

    This paper studies the density profile that forms around a spherically symmetric bound central core immersed in a homogeneous-background k = 0 or k = -1 Friedmann-Robertson-Walker cosmological model, with zero pressure. Although the density profile in the linearized regime is almost arbitrary, in the nonlinear regime certain universal features of the density profile are obtained that are independent of the details of the initial conditions. The formation of 'halos' ('holes') with densities greater than (less than) the average cosmological density is discussed. It is shown that in most regions 'halos' form, and universal values are obtained for the slope of the ln (density)-ln (radius) profile in those 'halos' at late times, independently of the shape of the initial density profile. Restrictions are derived on where it is possible for 'holes' to exist at late times and on how such 'holes' must have evolved.

  14. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    International Nuclear Information System (INIS)

    Stroev, N E; Iosilevskiy, I L

    2016-01-01

    Non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of NCPT in an improved version of the BIM(∼) model for the same mixture on a background of non-ideal electronic Fermi-gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the equation of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions 0 < X < 1. A strong “distillation” effect was found for NCPT in the present BIM(∼) model. Just similar distillation was obtained in the variant of NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to an explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications. (paper)

  15. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    Science.gov (United States)

    Stroev, N. E.; Iosilevskiy, I. L.

    2016-11-01

    Non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of NCPT in an improved version of the BIM(∼) model for the same mixture on a background of non-ideal electronic Fermi-gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the equation of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions 0 < X < 1. A strong “distillation” effect was found for NCPT in the present BIM(∼) model. Just similar distillation was obtained in the variant of NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to an explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications.

  16. [Attributes and features of a community health model from the perspective of practitioners].

    Science.gov (United States)

    Dois, Angelina; Bravo, Paulina; Soto, Gabriela

    2017-07-01

    The Family and Community Health Model is based on three essential principles: user-centered care, comprehensive care and continuity of care. The aim was to describe the attributes and characteristics of the guiding principles of the Family and Community Health Model (FHM) from the perspective of primary care experts. This was a qualitative study. An electronic Delphi was conducted with 29 national experts on primary care. The experts agree that user-centered care must be based on a psycho-social model integrating the multiple factors that influence health problems. It also must integrate patients' individual features, family and environmental issues. The proposed actions promote shared decision making. To promote integral care, anticipatory guidelines should be expanded and the health care of patients with chronic conditions should be improved. Continuity of care should be promoted by increasing the working hours of medical centers and easing access to integrated electronic medical records, thereby generating efficient links between the different care levels. The results of the study can guide the clinical and administrative management of health teams, allowing the strengthening of primary health care according to local realities.

  17. Model features as the basis for individualization of the preparation of top-level (elite) boxers

    Directory of Open Access Journals (Sweden)

    O.J. Pavelec

    2013-10-01

    Full Text Available Purpose: to improve the system of training of boxers of the highest categories (elite) through individualization of the training process using model characteristics of special physical preparedness. Materials: The study was conducted during 2000-2010 with 43 boxers of the national team of Ukraine: 6 honored masters of sport, 16 masters of sports of international class, and 21 masters of sports. The average age of the athletes was 23.5 years. Results: a specially designed model of the special physical fitness of elite boxers is substantiated and its features are described. It is established that boxers of the middle weight classes (64-75 kg) have an advantage over boxers of the other weight categories in the development of speed and strength endurance. The presented model characteristics can guide the special physical preparation of elite boxers as representatives of the sport. Conclusions: It is established that the structure of the special physical training of boxers depends on many components, such as weight category, tactical fighter role, skill level, and stage of preparation.

  18. Cultural Branding as a Key in Positioning Schools: A Conceptual Model

    Directory of Open Access Journals (Sweden)

    Hidayatun

    2017-08-01

    Full Text Available The increase in people's prosperity and education changes their view of education and their needs regarding it. Consequently, their choice of educational institutions becomes more selective. On the other hand, competition in this field becomes stronger due to the growth in the number of educational institutions, so management strategies should be re-evaluated. This paper discusses the interfaces between culture and school, especially those related to branding. The study was carried out on the premise that creating a bond between the school and the community is possible by adopting the culture into a formal education environment. This effort is expected to help schools obtain a certain position in the community. Therefore, this study attempts to promote a conceptual model of cultural branding in schools and to reveal the reasons why the model is an effective marketing strategy in this era.

  19. A lock-and-key model for protein–protein interactions

    OpenAIRE

    Morrison, Julie L.; Breitling, Rainer; Higham, Desmond J.; Gilbert, David R.

    2006-01-01

    Motivation: Protein–protein interaction networks are one of the major post-genomic data sources available to molecular biologists. They provide a comprehensive view of the global interaction structure of an organism’s proteome, as well as detailed information on specific interactions. Here we suggest a physical model of protein interactions that can be used to extract additional information at an intermediate level: It enables us to identify proteins which share biological interaction motifs,...
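
    The general flavor of a lock-and-key interaction model can be conveyed with a toy sketch (purely illustrative of the idea, not the published model or its parameters): each protein carries a few random "lock" and "key" labels, and an edge is drawn whenever one protein's key fits another's lock, so proteins sharing labels end up sharing interaction partners, i.e. interaction motifs.

    import itertools
    import random

    random.seed(0)
    N_PROTEINS, N_LABELS = 30, 8

    # Each protein gets a small random set of lock labels and key labels.
    proteins = {
        p: {"locks": set(random.sample(range(N_LABELS), 2)),
            "keys": set(random.sample(range(N_LABELS), 2))}
        for p in range(N_PROTEINS)
    }

    # Undirected interaction if either protein's key matches the other's lock.
    edges = {
        (a, b)
        for a, b in itertools.combinations(proteins, 2)
        if proteins[a]["keys"] & proteins[b]["locks"]
        or proteins[b]["keys"] & proteins[a]["locks"]
    }
    print(f"{len(edges)} interactions among {N_PROTEINS} proteins")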

  20. Scenario-Led Habitat Modelling of Land Use Change Impacts on Key Species.

    Directory of Open Access Journals (Sweden)

    Matthew Geary

    Full Text Available Accurate predictions of the impacts of future land use change on species of conservation concern can help to inform policy-makers and improve conservation measures. If predictions are spatially explicit, predicted consequences of likely land use changes could be accessible to land managers at a scale relevant to their working landscape. We introduce a method, based on open source software, which integrates habitat suitability modelling with scenario-building, and illustrate its use by investigating the effects of alternative land use change scenarios on landscape suitability for black grouse Tetrao tetrix. Expert opinion was used to construct five near-future (twenty years) scenarios for the 800 km² study site in upland Scotland. For each scenario, the cover of different land use types was altered by 5-30% from 20 random starting locations and changes in habitat suitability assessed by projecting a MaxEnt suitability model onto each simulated landscape. A scenario converting grazed land to moorland and open forestry was the most beneficial for black grouse, and 'increased grazing' (the opposite conversion) the most detrimental. Positioning of new landscape blocks was shown to be important in some situations. Increasing the area of open-canopy forestry caused a proportional decrease in suitability, but suitability gains for the 'reduced grazing' scenario were nonlinear. 'Scenario-led' landscape simulation models can be applied in assessments of the impacts of land use change both on individual species and also on diversity and community measures, or ecosystem services. A next step would be to include landscape configuration more explicitly in the simulation models, both to make them more realistic, and to examine the effects of habitat placement more thoroughly. In this example, the recommended policy would be incentives on grazing reduction to benefit black grouse.
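
    The scenario-projection step can be sketched schematically in Python (illustrative only; the study itself used expert-defined scenarios and a fitted MaxEnt model rather than the placeholder per-class scores used here): land-cover cells are converted from random starting locations, and the simulated landscape is re-scored to compare scenarios against the baseline.

    import numpy as np

    rng = np.random.default_rng(7)
    GRAZED, MOOR, FOREST = 0, 1, 2
    landscape = rng.choice([GRAZED, MOOR, FOREST], size=(100, 100), p=[0.5, 0.3, 0.2])

    # Placeholder suitability scores per land-cover class (a real application would
    # project a fitted MaxEnt model onto the environmental layers instead).
    suitability = {GRAZED: 0.2, MOOR: 0.8, FOREST: 0.5}

    def mean_suitability(grid):
        return np.mean([suitability[c] for c in grid.ravel()])

    def convert(grid, source, target, fraction=0.15, n_seeds=20):
        """Convert a fraction of 'source' cells to 'target', spreading from random seeds."""
        new = grid.copy()
        src = np.argwhere(new == source)
        n_convert = int(fraction * len(src))
        seeds = src[rng.choice(len(src), size=min(n_seeds, len(src)), replace=False)]
        # Simple proxy for block placement: convert the source cells nearest to any seed.
        dists = np.min([np.abs(src - s).sum(axis=1) for s in seeds], axis=0)
        chosen = src[np.argsort(dists)[:n_convert]]
        new[chosen[:, 0], chosen[:, 1]] = target
        return new

    baseline = mean_suitability(landscape)
    scenario = mean_suitability(convert(landscape, GRAZED, MOOR))
    print(f"baseline suitability {baseline:.3f} -> 'reduced grazing' scenario {scenario:.3f}")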

  1. MODELING THE FORMATION OF GIANT PLANET CORES. I. EVALUATING KEY PROCESSES

    International Nuclear Information System (INIS)

    Levison, Harold F.; Thommes, Edward; Duncan, Martin J.

    2010-01-01

    One of the most challenging problems we face in our understanding of planet formation is how Jupiter and Saturn could have formed before the solar nebula dispersed. The most popular model of giant planet formation is the so-called core accretion model. In this model a large planetary embryo formed first, mainly by two-body accretion. This is then followed by a period of inflow of nebular gas directly onto the growing planet. The core accretion model has an Achilles heel, namely the very first step. We have undertaken the most comprehensive study of this process to date. In this study, we numerically integrate the orbits of a number of planetary embryos embedded in a swarm of planetesimals. In these experiments, we have included a large number of physical processes that might enhance accretion. In particular, we have included (1) aerodynamic gas drag, (2) collisional damping between planetesimals, (3) enhanced embryo cross sections due to their atmospheres, (4) planetesimal fragmentation, and (5) planetesimal-driven migration. We find that the gravitational interaction between the embryos and the planetesimals leads to the wholesale redistribution of material: regions are cleared of material and gaps open near the embryos. Indeed, in 90% of our simulations without fragmentation, the region near those embryos is cleared of planetesimals before much growth can occur. Thus, the widely used assumption that the surface density distribution of planetesimals is smooth can lead to misleading results. In the remaining 10% of our simulations, the embryos undergo a burst of outward migration that significantly increases growth. On timescales of ∼10⁵ years, the outer embryo can migrate ∼6 AU and grow to roughly 30 M⊕. This represents a largely unexplored mode of core formation. We also find that the inclusion of planetesimal fragmentation tends to inhibit growth except for a narrow range of fragment migration rates.

  2. A Hybrid Network Model to Extract Key Criteria and Its Application for Brand Equity Evaluation

    Directory of Open Access Journals (Sweden)

    Chin-Yi Chen

    2012-01-01

    Full Text Available Making a decision implies that there are alternative choices to be considered, and a major challenge of decision-making is to identify adequate criteria for program planning or problem evaluation. The decision-makers' criteria consist of the characteristics or requirements each alternative must possess, and the alternatives are rated on how well they possess each criterion. We often use criteria developed and used by different researchers and institutions; these criteria have similar meanings and can be substituted for one another. Choosing from existing criteria offers a practical method for engineers hoping to derive a set of criteria for evaluating objects or programs. We have developed a hybrid model for extracting evaluation criteria which considers substitutions between the criteria. The model is developed based on Social Network Analysis and Maximum Mean De-Entropy algorithms. In this paper, the introduced methodology is also applied to analyze the criteria for assessing brand equity as an application example. The proposed model demonstrates that it is useful in planning feasibility criteria and has applications for other evaluation-planning purposes.

  3. Simultaneous detection of landmarks and key-frame in cardiac perfusion MRI using a joint spatial-temporal context model

    Science.gov (United States)

    Lu, Xiaoguang; Xue, Hui; Jolly, Marie-Pierre; Guetter, Christoph; Kellman, Peter; Hsu, Li-Yueh; Arai, Andrew; Zuehlsdorff, Sven; Littmann, Arne; Georgescu, Bogdan; Guehring, Jens

    2011-03-01

    Cardiac perfusion magnetic resonance imaging (MRI) has proven clinical significance in diagnosis of heart diseases. However, analysis of perfusion data is time-consuming, where automatic detection of anatomic landmarks and key-frames from perfusion MR sequences is helpful for anchoring structures and functional analysis of the heart, leading toward fully automated perfusion analysis. Learning-based object detection methods have demonstrated their capabilities to handle large variations of the object by exploring a local region, i.e., context. Conventional 2D approaches take into account spatial context only. Temporal signals in perfusion data present a strong cue for anchoring. We propose a joint context model to encode both spatial and temporal evidence. In addition, our spatial context is constructed not only based on the landmark of interest, but also the landmarks that are correlated in the neighboring anatomies. A discriminative model is learned through a probabilistic boosting tree. A marginal space learning strategy is applied to efficiently learn and search in a high dimensional parameter space. A fully automatic system is developed to simultaneously detect anatomic landmarks and key frames in both RV and LV from perfusion sequences. The proposed approach was evaluated on a database of 373 cardiac perfusion MRI sequences from 77 patients. Experimental results of a 4-fold cross validation show superior landmark detection accuracies of the proposed joint spatial-temporal approach to the 2D approach that is based on spatial context only. The key-frame identification results are promising.

  4. Exploring Secondary Students' Epistemological Features Depending on the Evaluation Levels of the Group Model on Blood Circulation

    Science.gov (United States)

    Lee, Shinyoung; Kim, Heui-Baik

    2014-01-01

    The purpose of this study is to identify the epistemological features and model qualities depending on model evaluation levels and to explore the reasoning process behind high-level evaluation through small group interaction about blood circulation. Nine groups of three to four students in the eighth grade participated in the modeling practice.…

  5. Intrusion detection model using fusion of chi-square feature selection and multi class SVM

    Directory of Open Access Journals (Sweden)

    Ikram Sumaiya Thaseen

    2017-10-01

    Full Text Available Intrusion detection is a promising area of research in the domain of security with the rapid development of the internet in everyday life. Many intrusion detection systems (IDS) employ a sole classifier algorithm for classifying network traffic as normal or abnormal. Due to the large amount of data, these sole classifier models fail to achieve a high attack detection rate with a reduced false alarm rate. However, by applying dimensionality reduction, data can be efficiently reduced to an optimal set of attributes without loss of information and then classified accurately using a multi class modeling technique for identifying the different network attacks. In this paper, we propose an intrusion detection model using chi-square feature selection and a multi class support vector machine (SVM). A parameter tuning technique is adopted for optimization of the Radial Basis Function kernel parameter gamma ('γ') and the overfitting constant 'C', the two important parameters required for the SVM model. The main idea behind this model is to construct a multi class SVM, which has not been adopted for IDS so far, to decrease the training and testing time and to increase the individual classification accuracy for the different network attacks. The investigational results on the NSL-KDD dataset, an enhanced version of the KDDCup 1999 dataset, show that our proposed approach results in a better detection rate and a reduced false alarm rate. The computational time required for training and testing is also evaluated with a view to time-critical applications.
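
    A minimal sketch of the pipeline the abstract describes, assuming scikit-learn is available: chi-square feature selection followed by a multi-class RBF SVM, with a grid search over gamma and C. The synthetic array stands in for the NSL-KDD data, and the parameter grid is illustrative only.

```python
# Sketch: chi-square feature selection + multi-class RBF SVM with gamma/C tuning.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline

X, y = np.abs(np.random.randn(1000, 41)), np.random.randint(0, 5, 1000)  # stand-in for NSL-KDD
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative features
    ("select", SelectKBest(chi2, k=20)),  # keep the 20 highest-scoring attributes
    ("svm", SVC(kernel="rbf", decision_function_shape="ovr")),
])
# Tune the RBF width (gamma) and the constant C, as in the abstract
grid = GridSearchCV(pipe, {"svm__gamma": [0.01, 0.1, 1.0], "svm__C": [1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
print("test accuracy:", grid.score(X_test, y_test))
```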

  6. Key issues

    International Nuclear Information System (INIS)

    Cook, N.G.W.

    1980-01-01

    Successful modeling of the thermo-mechanical and hydrochemical behavior of radioactive waste repositories in hard rock is possible in principle. Because such predictions lie outside the realm of experience, their adequacy depends entirely upon a thorough understanding of three fundamental questions: an understanding of the chemical and physical processes that determine the behavior of rock in all its complexity; accurate and realistic numerical models of the geologic media within which a repository may be built; and sufficient in-situ data covering the entire geologic region affected by, or affecting the behavior of, a repository. At present, enough is known to identify most of the areas that require further attention. These areas extend all the way from a complete understanding of the chemical and physical processes determining the behavior of rock through to the exploration, mapping and testing that must be done during the development of any potential repository. Many of the techniques, laboratory equipment, field instrumentation, and numerical methods needed to accomplish this do not exist at present. Therefore it is necessary to accept that a major investment in scientific research is required to generate this information over the next few years. The spectrum of scientific and engineering activities is wide, extending from laboratory measurements through the development of numerical models to the measurement of data in-situ, but there is every prospect that enough can be done to resolve these key issues. However, to do so requires overt recognition of the many gaps which exist in our knowledge and abilities today, of the need to bridge these gaps, and of the significant costs involved in doing so.

  7. A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infection

    Directory of Open Access Journals (Sweden)

    Simeone Marino

    2016-10-01

    Full Text Available Tuberculosis (TB) is a world-wide health problem, with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb), the causative bacterium of TB. The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in the lungs, called granulomas. Infection occurs after inhalation of bacteria into the lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs), where they prime T cells to recognize Mtb. These T cells, circulating back through blood, migrate back to the lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM), labeled GranSim, describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection, and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung-draining lymph node (LN, site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function are captured by a spatio-temporal model (i.e., the ABM), while the LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, the current study incorporates the role of dendritic cells (DCs) in a computational fashion into GranSim. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection. Conclusion: The main conclusion of this study is

  8. A mouse model of harlequin ichthyosis delineates a key role for Abca12 in lipid homeostasis.

    Directory of Open Access Journals (Sweden)

    Ian Smyth

    2008-09-01

    Full Text Available Harlequin Ichthyosis (HI) is a severe and often lethal hyperkeratotic skin disease caused by mutations in the ABCA12 transport protein. In keratinocytes, ABCA12 is thought to regulate the transfer of lipids into small intracellular trafficking vesicles known as lamellar bodies. However, the nature and scope of this regulation remain unclear. As part of an original recessive mouse ENU mutagenesis screen, we have identified and characterised an animal model of HI and shown that it displays many of the hallmarks of the disease, including hyperkeratosis, loss of barrier function, and defects in lipid homeostasis. We have used this model to follow disease progression in utero and present evidence that loss of Abca12 function leads to premature differentiation of basal keratinocytes. A comprehensive analysis of lipid levels in mutant epidermis demonstrated profound defects in lipid homeostasis, illustrating for the first time the extent to which Abca12 plays a pivotal role in maintaining lipid balance in the skin. To further investigate the scope of Abca12's activity, we have utilised cells from the mutant mouse to ascribe direct transport functions to the protein and, in doing so, we demonstrate activities independent of its role in lamellar body function. These cells have severely impaired lipid efflux, leading to intracellular accumulation of neutral lipids. Furthermore, we identify Abca12 as a mediator of Abca1-regulated cellular cholesterol efflux, a finding that may have significant implications for other diseases of lipid metabolism and homeostasis, including atherosclerosis.

  9. Feature scale modeling for etching and deposition processes in semiconductor manufacturing

    International Nuclear Information System (INIS)

    Pyka, W.

    2000-04-01

    Concerning the modeling of ballistic-transport-determined low-pressure processes, the equations for the calculation of local etching and deposition rates have been revised. New extensions, such as the full relation between angular and radial target emission characteristics and the particle distributions resulting at different positions on the wafer, have been added, and results from reactor scale simulations have been linked to the feature scale profile evolution. Moreover, a fitting model has been implemented which reduces the number of parameters for particle distributions, scattering mechanisms, and angle-dependent surface interactions. Concerning diffusion-determined high-pressure CVD processes, a continuum transport and reaction model has been implemented in three dimensions for the first time. It comprises a flexible interface for the formulation of the involved process chemistry and derives the local deposition rate from a finite element diffusion calculation carried out on the three-dimensional mesh of the gas domain above the feature. For each time-step of the deposition simulation the mesh is automatically generated as the counterpart to the surface of the three-dimensional structure evolving with time. The CVD model has also been coupled with equipment simulations. (author)

  10. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    Science.gov (United States)

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral techniques have become a basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage contains the complex information of fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of the reflecting films on reflectance was obvious, so the sample trees with ground reflecting films were analyzed separately from those without ground films. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of converted data, was analyzed. The results showed that the correlation between reflectance and the ratio of fruit to leaf was the best, with a maximum coefficient of 0.815, and that the correlation between reflectance and the ratio of leaf was slightly better than that between reflectance and the density of fruit. Then correlation analysis, linear regression, BP neural network and support vector regression models were used to quantify the relationship between the hyperspectral reflectance and the ratio of fruit to leaf, using the DPS and LIBSVM software. All four models in the 611-680 nm characteristic band were feasible for prediction, while the accuracy of the BP neural network and support vector regression models was better than that of one-variable linear regression and multi-variable regression, and the accuracy of the support vector regression model was the best. This study serves as a reliable theoretical reference for the yield estimation of apples based on remote sensing data.
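
    A hedged sketch of the best-performing model mentioned above: support vector regression of the fruit-to-leaf ratio on reflectance in the 611-680 nm band, evaluated by cross-validation. The reflectance matrix and ratios below are synthetic placeholders rather than the field measurements.

```python
# Sketch: SVR linking band reflectance to the fruit-to-leaf ratio.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
reflectance = rng.uniform(0.05, 0.45, size=(44, 70))   # 44 trees, 70 bands in 611-680 nm
fruit_leaf_ratio = reflectance.mean(axis=1) * 2 + rng.normal(0, 0.02, 44)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
scores = cross_val_score(model, reflectance, fruit_leaf_ratio, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```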

  11. The Electric Vehicles Ecosystem Model: Construct, Analysis and Identification of Key Challenges

    Directory of Open Access Journals (Sweden)

    Zulkarnain

    2014-09-01

    Full Text Available This paper builds a conceptual model of the electric vehicle (EV) ecosystem and value chain build-up. Based on the literature, the research distinguishes the most critical challenges on the way to the electrification of mobility systems. Consumers still have questions that call for answers before they are ready to adopt EVs. With regard to technical aspects, some challenges come from the vehicles, charging infrastructure, battery technology, and standardization. The use of batteries in EVs brings additional environmental challenges, stemming from the life cycle of used batteries, from manufacturing, and from some of the materials used and treated in the manufacturing process. The policy aspects mostly concern taxation strategies. For the most part, established market conditions are still lacking and there are a number of unresolved challenges on both the supply and demand sides of the EV market.

  12. Probing molecular mechanisms of the Hsp90 chaperone: biophysical modeling identifies key regulators of functional dynamics.

    Directory of Open Access Journals (Sweden)

    Anshuman Dixit

    Full Text Available Deciphering functional mechanisms of the Hsp90 chaperone machinery is an important objective in cancer biology aiming to facilitate discovery of targeted anti-cancer therapies. Despite significant advances in understanding structure and function of molecular chaperones, organizing molecular principles that control the relationship between conformational diversity and functional mechanisms of the Hsp90 activity lack a sufficient quantitative characterization. We combined molecular dynamics simulations, principal component analysis, the energy landscape model and structure-functional analysis of Hsp90 regulatory interactions to systematically investigate functional dynamics of the molecular chaperone. This approach has identified a network of conserved regions common to the Hsp90 chaperones that could play a universal role in coordinating functional dynamics, principal collective motions and allosteric signaling of Hsp90. We have found that these functional motifs may be utilized by the molecular chaperone machinery to act collectively as central regulators of Hsp90 dynamics and activity, including the inter-domain communications, control of ATP hydrolysis, and protein client binding. These findings have provided support to a long-standing assertion that allosteric regulation and catalysis may have emerged via common evolutionary routes. The interaction networks regulating functional motions of Hsp90 may be determined by the inherent structural architecture of the molecular chaperone. At the same time, the thermodynamics-based "conformational selection" of functional states is likely to be activated based on the nature of the binding partner. This mechanistic model of Hsp90 dynamics and function is consistent with the notion that allosteric networks orchestrating cooperative protein motions can be formed by evolutionary conserved and sparsely connected residue clusters. Hence, allosteric signaling through a small network of distantly connected

  13. Using probabilistic model as feature descriptor on a smartphone device for autonomous navigation of unmanned ground vehicles

    Science.gov (United States)

    Desai, Alok; Lee, Dah-Jye

    2013-12-01

    There has been significant research on the development of feature descriptors in the past few years. Most of it does not emphasize real-time applications. This paper presents the development of an affine invariant feature descriptor for low-resource applications such as UAVs and UGVs that are equipped with an embedded system with a small microprocessor, a field programmable gate array (FPGA), or a smartphone device. UAVs and UGVs have proven suitable for many promising applications such as unknown environment exploration and search and rescue operations. These applications require on-board image processing for obstacle detection, avoidance and navigation. All these real-time vision applications require a camera to grab images and match features using a feature descriptor. A good feature descriptor will uniquely describe a feature point, thus allowing it to be correctly identified and matched with its corresponding feature point in another image. Only a few feature description algorithms are available for resource-limited systems; they either demand too much of the device's resources or simplify the algorithm so much that performance is reduced. This research is aimed at meeting the needs of these systems without sacrificing accuracy. This paper introduces a new feature descriptor called PRObabilistic model (PRO) for UGV navigation applications. It is a compact and efficient binary descriptor that is hardware-friendly and easy to implement.
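
    PRO itself is not specified in this record, but the matching step that any binary descriptor relies on can be sketched: descriptors are bit strings compared by Hamming distance, which maps to cheap XOR/popcount operations on embedded hardware. The random descriptors below stand in for features extracted from two images.

```python
# Sketch: brute-force matching of binary descriptors by Hamming distance.
import numpy as np

rng = np.random.default_rng(1)
desc_a = rng.integers(0, 2, size=(100, 256), dtype=np.uint8)  # 100 features, 256-bit
desc_b = rng.integers(0, 2, size=(120, 256), dtype=np.uint8)

# Hamming distance between every descriptor pair (XOR, then count differing bits)
dist = (desc_a[:, None, :] ^ desc_b[None, :, :]).sum(axis=2)
matches = dist.argmin(axis=1)                  # nearest descriptor in the second image
good = dist[np.arange(100), matches] < 64      # accept only sufficiently close matches
print("tentative matches:", good.sum())
```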

  14. Human Skeleton Model Based Dynamic Features for Walking Speed Invariant Gait Recognition

    Directory of Open Access Journals (Sweden)

    Jure Kovač

    2014-01-01

    Full Text Available Humans are able to recognize a small number of people they know well by the way they walk. This ability is the basic motivation for using human gait as a means of biometric identification. Such biometrics can be captured in public places, from a distance, without the subject's collaboration, awareness, or even consent. Although current approaches give encouraging results, we are still far from effective use in real-life applications. In general, methods set various constraints to circumvent the influence of covariate factors such as changes of walking speed, view, clothing, footwear, and object carrying, which have a negative impact on recognition performance. In this paper we propose a skeleton model based gait recognition system that focuses on modelling gait dynamics and eliminating the influence of the subjects' appearance on recognition. Furthermore, we tackle the problem of walking speed variation and propose a space transformation and feature fusion that mitigate its influence on recognition performance. With an evaluation on the OU-ISIR gait dataset, we demonstrate state-of-the-art performance of the proposed methods.

  15. The construction features of the deformation and force model of concrete and reinforced concrete resistance

    Directory of Open Access Journals (Sweden)

    Romashko Vasyl

    2017-01-01

    Full Text Available The main features of the deformation and force model of reinforced concrete elements and structures, based on generalized diagrams of their state, are considered in this article. Particular attention is focused on the basic methodological problems and shortcomings of modern "deformation" models. It is shown that in most cases these problems can be solved by generalized diagrams of the real state of reinforced concrete elements and structures. Thanks to these diagrams, the developed method provides a single methodological approach to the limit state calculation of the normal sections of reinforced concrete elements and structures; allows revealing the internal static indeterminacy of heterogeneously deformable elements and structures in their ultimate limit state calculation; justifies the application of the basic and derived criteria for the exhaustion of the bearing capacity of reinforced concrete elements and structures; and retains the essence of the physical processes of concrete and reinforced concrete structure deformation. The defining positions of the generalized (universal) methodology for calculating reinforced concrete elements and structures are stated.

  16. A neonatal mouse model of intermittent hypoxia associated with features of apnea in premature infants.

    Science.gov (United States)

    Cai, Jun; Tuong, Chi Minh; Gozal, David

    2011-09-15

    A neonatal mouse model of intermittent hypoxia (IH) simulating the recurring hypoxia/reoxygenation episodes of apnea of prematurity (AOP) was developed. C57BL/6 P2 pups were culled for exposure to either intermittent hypoxia or intermittent air as control. The IH paradigms consisted of alternating cycles of 20.9% O2 and either 8.0% or 5.7% O2 every 120 or 140 s for 6 h a day during daylight hours from day 2 to day 10 postnatally, i.e., roughly equivalent to human brain development in the perinatal period. IH exposures elicited modest to severe decreases in oxygen saturation along with bradycardia in neonatal mice, which were severity-dependent. Hypomyelination in both the central and peripheral nervous systems was observed despite the absence of visible growth retardation. The neonatal mouse model of IH in this study partially fulfills the current diagnostic criteria with features of AOP, and provides opportunities to reproduce in rodents some of the pathophysiological changes associated with this disorder, such as alterations in myelination. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Key intermediates in nitrogen transformation during microwave pyrolysis of sewage sludge: a protein model compound study.

    Science.gov (United States)

    Zhang, Jun; Tian, Yu; Cui, Yanni; Zuo, Wei; Tan, Tao

    2013-03-01

    The nitrogen transformations, with attention to NH3 and HCN, were investigated at temperatures of 300-800°C during microwave pyrolysis of a protein model compound. The evolution of nitrogenated compounds in the char, tar and gas products was followed. Amine-N, heterocyclic-N and nitrile-N compounds were identified as three important intermediates during the pyrolysis. NH3 and HCN were formed with comparable activation energies and competed to consume the same reactive substances at temperatures of 300-800°C. The deamination and dehydrogenation of amine-N compounds from protein cracking contributed to the formation of NH3 (about 8.9% of Soy-N) and HCN (6.6%) from 300 to 500°C. The cracking of nitrile-N and heterocyclic-N compounds formed by the dehydrogenation and polymerization of amine-N generated HCN (13.4%) and NH3 (31.3%) between 500 and 800°C. It might be possible to reduce HCN and NH3 emissions by controlling the production of these intermediates at temperatures of 500-800°C. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Energy Technology Data Exchange (ETDEWEB)

    Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)

    2017-07-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, which are characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions at small hidden targets that trigger vesicular release.
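
    A minimal sketch of the second (Gillespie-style) approach described above: once narrow escape theory has coarse-grained target encounters into Poissonian rates, arrival times of diffusing ions at a small target can be sampled directly. The rate constant and particle number below are assumed for illustration, not taken from the paper.

```python
# Sketch: Gillespie sampling of arrival times for a single first-order binding reaction.
import numpy as np

rng = np.random.default_rng(2)
k_bind = 0.05        # assumed per-particle Poissonian binding rate to the target (1/s)
n_ions = 200         # assumed number of free calcium ions in the microdomain

def arrival_times():
    """Simulate the ordered sequence of binding events for one realization."""
    t, n, times = 0.0, n_ions, []
    while n > 0:
        t += rng.exponential(1.0 / (n * k_bind))   # waiting time to the next binding event
        times.append(t)
        n -= 1
    return times

first_arrivals = [arrival_times()[0] for _ in range(5000)]
print("mean time to the first arrival (s):", float(np.mean(first_arrivals)))
```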

  19. HIGHLY QUALIFIED WORKING FORCE – KEY ELEMENT OF INNOVATIVE DEVELOPMENT MODEL

    Directory of Open Access Journals (Sweden)

    M. Avksientiev

    2014-12-01

    Full Text Available A highly qualified workforce is a central element of the intensive development model in modern society. The article surveys the experience of countries that have managed to transform their economies into innovative ones. The Ukrainian economy cannot stand aside from the processes that dominate world economic trends, so this experience should be used to succeed in the future. Today, governments around the world face challenges arising from the transformation of the economy into an informational one. This type of economy requires a shift from an extensive to an intensive model, mainly because of the limits of natural resources and material factors of production; such an approach therefore depends heavily on the quality of the workforce. Unfortunately, in Ukraine there is an imbalance in specialist preparation, which puts additional pressure on the educational sphere. To relieve this pressure, reforms in the education sphere are needed. Nowadays, views and concepts of the government's role in social development are changing worldwide. This is why, even in times of economic recession, educational spending is not reduced under the new economic doctrine in the EU. Highly qualified specialists, creating new products and services, play the role of the engineers of the twenty-first century; they are expected to lead their industries to world-leading positions. From an economic point of view, highly qualified specialists benefit society through higher incomes and tax revenues and thus raise living standards. Accordingly, most modern scholars confirm the importance of a highly trained workforce for more effective economic development.

  20. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    International Nuclear Information System (INIS)

    Guerrier, C.; Holcman, D.

    2017-01-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, which are characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions at small hidden targets that trigger vesicular release.

  1. Building Analysis for Urban Energy Planning Using Key Indicators on Virtual 3d City Models - the Energy Atlas of Berlin

    Science.gov (United States)

    Krüger, A.; Kolbe, T. H.

    2012-07-01

    In the context of increasing greenhouse gas emissions and global demographic change with a simultaneous trend towards urbanization, it is a big challenge for cities around the world to modify their energy supply chains and building characteristics so as to reduce energy consumption and mitigate carbon dioxide emissions. Sound knowledge of energy resource demand and supply, including its spatial distribution within urban areas, is of great importance for planning strategies addressing greater energy efficiency. Understanding the city as a complex energy system affects several areas of urban living, e.g. energy supply, urban texture, human lifestyle, and climate protection. With the growing availability of 3D city models around the world based on the standard language and format CityGML, energy system modelling, analysis and simulation can be incorporated into these models. Both domains profit from that interaction: official and accurate building models, including building geometries, semantics and locations that form a realistic image of the urban structure, are brought together with systemic energy simulation models. A holistic view of the impacts of energy planning scenarios can then be modelled and analyzed, including side effects on urban texture and human lifestyle. This paper focuses on the identification, classification, and integration of energy-related key indicators of buildings and neighbourhoods within 3D building models. Consistent application of 3D city models conforming to CityGML serves the purpose of deriving indicators for this topic. These are set in the context of urban energy planning within the Energy Atlas Berlin. The generation of indicator objects covering the indicator values and related processing information is presented for the sample scenario of estimating heating energy consumption in buildings and neighbourhoods. In their entirety the key indicators will form an adequate image of the local energy situation for
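
    A toy illustration of a building-level key indicator of the kind discussed above: heating energy demand estimated from CityGML-style attributes (storeys, footprint and a use-type-specific demand value). Both the demand values and the sample buildings are assumptions, not figures from the Energy Atlas Berlin.

```python
# Sketch: deriving a per-building heating energy demand indicator from simple attributes.
specific_demand_kwh_per_m2 = {"residential": 120.0, "office": 90.0, "retail": 140.0}  # assumed

buildings = [
    {"id": "B1", "use": "residential", "storeys": 5, "footprint_m2": 300.0},
    {"id": "B2", "use": "office", "storeys": 8, "footprint_m2": 450.0},
]

for b in buildings:
    heated_area = b["storeys"] * b["footprint_m2"]                    # crude floor-area proxy
    demand_kwh = heated_area * specific_demand_kwh_per_m2[b["use"]]
    print(b["id"], round(demand_kwh / 1000, 1), "MWh/a")              # indicator per building
```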

  2. Where is the competitive advantage going?: a management model that incorporates people as a key element of the business strategy

    Directory of Open Access Journals (Sweden)

    Emilio García Vega

    2015-09-01

    Full Text Available Competitive advantage is a concept that has evolved rapidly in recent years. Some scholars and executives claim that people are a fundamental element of its construction. In this line, business management has shown an inclination towards human resources management – also called "talent" management – as the key element of organizational success. Along this journey, ideas, paradigms and conceptions have been modified in interesting ways. This paper presents these new conceptions in light of the organizational management challenge, and proposes a management model based on the importance of people in the administration of competitive advantage.

  3. Green sturgeon distribution in the Pacific Ocean estimated from modeled oceanographic features and migration behavior.

    Science.gov (United States)

    Huff, David D; Lindley, Steven T; Wells, Brian K; Chai, Fei

    2012-01-01

    The green sturgeon (Acipenser medirostris), which is found in the eastern Pacific Ocean from Baja California to the Bering Sea, tends to be highly migratory, moving long distances among estuaries, spawning rivers, and distant coastal regions. Factors that determine the oceanic distribution of green sturgeon are unclear, but broad-scale physical conditions interacting with migration behavior may play an important role. We estimated the distribution of green sturgeon by modeling species-environment relationships using oceanographic and migration behavior covariates with maximum entropy modeling (MaxEnt) of species geographic distributions. The primary concentration of green sturgeon was estimated from approximately 41-51.5° N latitude in the coastal waters of Washington, Oregon, and Vancouver Island and in the vicinity of San Francisco and Monterey Bays from 36-37° N latitude. Unsuitably cold water temperatures in the far north and energetic efficiencies associated with prevailing water currents may provide the best explanation for the range-wide marine distribution of green sturgeon. Independent trawl records, fisheries observer records, and tagging studies corroborated our findings. However, our model also delineated patchily distributed habitat south of Monterey Bay, though there are few records of green sturgeon from this region. Green sturgeon are likely influenced by countervailing pressures governing their dispersal. They are behaviorally directed to revisit natal freshwater spawning rivers and persistent overwintering grounds in coastal marine habitats, yet they are likely physiologically bounded by abiotic and biotic environmental features. Impacts of human activities on green sturgeon or their habitat in coastal waters, such as bottom-disturbing trawl fisheries, may be minimized through marine spatial planning that makes use of high-quality species distribution information.

  4. Hyperbrain features of team mental models within a juggling paradigm: a proof of concept

    Directory of Open Access Journals (Sweden)

    Edson Filho

    2016-09-01

    Full Text Available Background Research on cooperative behavior and the social brain exists, but little research has focused on real-time motor cooperative behavior and its neural correlates. In this proof of concept study, we explored the conceptual notion of shared and complementary mental models through EEG mapping of two brains performing a real-world interactive motor task of increasing difficulty. We used the recently introduced participative "juggling paradigm," and collected neuro-physiological and psycho-social data. We were interested in analyzing the between-brains coupling during a dyadic juggling task, and in exploring the relationship between the motor task execution, the jugglers' skill level and the task difficulty. We also investigated how this relationship could be mirrored in the coupled functional organization of the interacting brains. Methods To capture the neural schemas underlying the notion of shared and complementary mental models, we examined the functional connectivity patterns and hyperbrain features of a juggling dyad involved in cooperative motor tasks of increasing difficulty. Jugglers' cortical activity was measured using two synchronized 32-channel EEG systems during dyadic juggling performed with 3, 4, 5 and 6 balls. Individual and hyperbrain functional connections were quantified through coherence maps calculated across all electrode pairs in the theta and alpha bands (4–8 and 8–12 Hz). Graph metrics were used to typify the global topology and efficiency of the functional networks for the four difficulty levels in the theta and alpha bands. Results Results indicated that, as task difficulty increased, the cortical functional organization of the more skilled juggler became progressively more segregated in both frequency bands, with a small-world organization in the theta band during easier tasks, indicative of a flow-like state in line with the neural efficiency hypothesis. Conversely, more integrated functional patterns
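
    A minimal sketch of the coherence mapping step, assuming SciPy is available: magnitude-squared coherence between one channel from each juggler, averaged within the theta and alpha bands; repeated over all electrode pairs, this yields the hyperbrain coherence maps. The signals below are synthetic stand-ins for the 32-channel recordings.

```python
# Sketch: band-averaged EEG coherence between two channels (one per juggler).
import numpy as np
from scipy.signal import coherence

fs = 256                                   # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
chan_juggler1 = np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)
chan_juggler2 = np.sin(2 * np.pi * 6 * t + 0.5) + rng.normal(0, 1, t.size)

f, cxy = coherence(chan_juggler1, chan_juggler2, fs=fs, nperseg=512)
theta = cxy[(f >= 4) & (f < 8)].mean()     # 4-8 Hz band average
alpha = cxy[(f >= 8) & (f <= 12)].mean()   # 8-12 Hz band average
print("theta coherence:", round(theta, 2), "alpha coherence:", round(alpha, 2))
```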

  5. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    Science.gov (United States)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

    A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve the simulation of the MJO in climate models, we propose that this delay of the circulation response to convection needs to be corrected in the cumulus parameterization scheme.
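
    A hedged sketch of the diagnostic named above: the vertically integrated intraseasonal zonal wind convergence, -(1/g) times the pressure integral of du'/dx, where u' is the intraseasonally filtered zonal wind. The array below is synthetic, and the exact filtering and averaging conventions of the study may differ.

```python
# Sketch: vertically integrated zonal wind convergence from filtered winds on pressure levels.
import numpy as np

g = 9.81
plev = np.array([200, 300, 500, 700, 850, 1000]) * 100.0   # pressure levels (Pa), ascending
lon = np.linspace(0, 357.5, 144)                            # longitudes (degrees)
dx = np.deg2rad(2.5) * 6.371e6                              # grid spacing at the equator (m)

rng = np.random.default_rng(4)
u_intraseasonal = rng.normal(0, 3, size=(plev.size, lon.size))  # stand-in for filtered u' (m/s)

dudx = np.gradient(u_intraseasonal, dx, axis=1)             # zonal derivative du'/dx
zc = -np.trapz(dudx, plev, axis=0) / g                      # mass-weighted convergence (kg m^-2 s^-1)
print("maximum vertically integrated convergence:", zc.max())
```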

  6. The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making.

    Science.gov (United States)

    Hamker, Fred H

    2008-07-15

    Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result appears of specific interest since the model was designed for a completely different purpose than to explain feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception.

  7. Modelling efforts needed to advance herpes simplex virus (HSV) vaccine development: Key findings from the World Health Organization Consultation on HSV Vaccine Impact Modelling.

    Science.gov (United States)

    Gottlieb, Sami L; Giersing, Birgitte; Boily, Marie-Claude; Chesson, Harrell; Looker, Katharine J; Schiffer, Joshua; Spicknall, Ian; Hutubessy, Raymond; Broutet, Nathalie

    2017-06-21

    Development of a vaccine against herpes simplex virus (HSV) is an important goal for global sexual and reproductive health. In order to more precisely define the health and economic burden of HSV infection and the theoretical impact and cost-effectiveness of an HSV vaccine, in 2015 the World Health Organization convened an expert consultation meeting on HSV vaccine impact modelling. The experts reviewed existing model-based estimates and dynamic models of HSV infection to outline critical future modelling needs to inform development of a comprehensive business case and preferred product characteristics for an HSV vaccine. This article summarizes key findings and discussions from the meeting on modelling needs related to HSV burden, costs, and vaccine impact, essential data needs to carry out those models, and important model components and parameters. Copyright © 2017. Published by Elsevier Ltd.

  8. Stable isotopes of fossil teeth corroborate key general circulation model predictions for the Last Glacial Maximum in North America

    Science.gov (United States)

    Kohn, Matthew J.; McKay, Moriah

    2010-11-01

    Oxygen isotope data provide a key test of general circulation models (GCMs) for the Last Glacial Maximum (LGM) in North America, which have otherwise proved difficult to validate. High δ18O pedogenic carbonates in central Wyoming have been interpreted to indicate increased summer precipitation sourced from the Gulf of Mexico. Here we show that tooth enamel δ18O of large mammals, which is strongly correlated with local water and precipitation δ18O, is lower during the LGM in Wyoming, not higher. Similar data from Texas, California, Florida and Arizona indicate higher δ18O values than in the Holocene, which is also predicted by GCMs. Tooth enamel data closely validate some recent models of atmospheric circulation and precipitation δ18O, including an increase in the proportion of winter precipitation for central North America, and summer precipitation in the southern US, but suggest aridity can bias pedogenic carbonate δ18O values significantly.

  9. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with both realistic and uncomfortable viewing experiences, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most of the attention when humans observe stereoscopic images, this paper proposes a new foreground object based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the biggest average disparity. In the second place, three visual features, namely the average disparity, average width and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, the object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, in the third place we divide the images into four categories on the basis of different disparities and widths, and apply four different models to more precisely predict visual comfort. Experimental results show that the proposed VCA metric outperforms other existing metrics and achieves high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
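
    A quick sketch of the two agreement measures quoted above, assuming SciPy is available: Pearson linear correlation (PLCC) and Spearman rank-order correlation (SROCC) between objective predictions and subjective comfort scores. The score vectors are placeholders.

```python
# Sketch: computing PLCC and SROCC between objective and subjective comfort scores.
from scipy.stats import pearsonr, spearmanr

objective = [3.1, 2.4, 4.0, 1.8, 3.6, 2.9, 4.3, 1.5]   # predicted comfort scores
subjective = [3.3, 2.2, 3.8, 2.0, 3.5, 3.1, 4.4, 1.4]  # mean opinion scores

plcc, _ = pearsonr(objective, subjective)
srocc, _ = spearmanr(objective, subjective)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```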

  10. Photovoltaic-thermal (PV/T) solar collectors: Features and performance modelling

    International Nuclear Information System (INIS)

    Atienza-Márquez, Antonio; Bruno, Joan Carles; Coronas, Alberto; Korolija, Ivan; Greenough, Richard; Wright, Andy

    2017-01-01

    Currently, the electrical efficiency of photovoltaic (PV) solar cells ranges between 5% and 25%. One of the most important parameters affecting the electrical efficiency of a PV collector is the temperature of its cells: the higher the temperature, the lower the efficiency. Photovoltaic/thermal (PV/T) technology is a potential solution to ensure an acceptable solar energy conversion. PV/T technology produces electrical and thermal energy simultaneously. It is suitable for low temperature applications (25–40 °C) and its overall efficiency increases compared to individual collectors. This paper describes an installation in a single-family house where PV/T collectors are coupled with a ground heat exchanger and a heat pump for domestic hot water and space heating purposes. The aim of this work is twofold. First, the features of the PV/T technology are analyzed. Second, a model of a flat-plate PV/T water collector was developed in TRNSYS in order to analyze collector performance. (author)
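
    A commonly used first-order relation (not specific to this paper) between cell temperature and electrical efficiency illustrates why extracting heat in a PV/T collector helps: efficiency falls roughly linearly as the cells warm. The parameter values below are typical crystalline-silicon figures, assumed for illustration.

```python
# Sketch: linear dependence of PV electrical efficiency on cell temperature.
def pv_efficiency(t_cell_c, eta_ref=0.18, beta=0.0045, t_ref_c=25.0):
    """Efficiency = eta_ref * (1 - beta * (T_cell - T_ref)); parameters are assumed values."""
    return eta_ref * (1.0 - beta * (t_cell_c - t_ref_c))

for t in (25, 45, 65):
    print(f"T_cell = {t} degC -> efficiency = {pv_efficiency(t):.3f}")
```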

  11. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods.

    Science.gov (United States)

    Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming

    2014-12-01

    Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including the genetic algorithm (GA), simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized for selecting an important subset of features from a total of 16 clinical features. These feature selection methods were combined with a support vector machine (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation had average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can help identify patients at high risk of recurrence, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
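
    A hedged sketch of one of the simpler pipelines mentioned above, assuming scikit-learn: random-forest-based feature selection followed by an SVM, evaluated with 5-fold cross-validation. The GA/SA hybrid selection is not reproduced, and the data are synthetic stand-ins for the 16 clinical features of the 83 patients.

```python
# Sketch: RF feature selection + SVM classifier evaluated by 5-fold cross-validated AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(83, 16))                                            # 16 clinical features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 83) > 0).astype(int)    # recurrence label

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf", probability=True),
)
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print("mean cross-validated AUC:", auc.mean().round(2))
```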

  12. Modelling Feature Interaction Patterns in Nokia Mobile Phones using Coloured Petri Nets and Design/CPN

    DEFF Research Database (Denmark)

    Lorentsen, Louise; Tuovinen, Antti-Pekka; Xu, Jianli

    2002-01-01

    ), WAP browsing, games, etc. All these features are packaged into a handset with a small screen and a special purpose keypad. The limited user interface and the seamless intertwining of logically separate features cause many problems in the software development of the user interface of mobile phones...

  13. A Modified Feature Selection and Artificial Neural Network-Based Day-Ahead Load Forecasting Model for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmad

    2015-12-01

    Full Text Available In the operation of a smart grid (SG), day-ahead load forecasting (DLF) is an important task. The SG can enhance the management of its conventional and renewable resources with a more accurate DLF model. However, DLF model development is highly challenging due to the non-linear characteristics of load time series in SGs. DLF models do exist in the literature; however, these models trade off between execution time and forecast accuracy. The newly proposed DLF model is able to accurately predict the load of the next day within a reasonable execution time. Our proposed model consists of three modules: the data preparation module, the feature selection module and the forecast module. The first module makes the historical load curve compatible with the feature selection module. The second module removes redundant and irrelevant features from the input data. The third module, which consists of an artificial neural network (ANN), predicts future load on the basis of the selected features. Moreover, the forecast module uses a sigmoid function for activation and a multi-variate auto-regressive model for weight updating during the training process. Simulations are conducted in MATLAB to validate the performance of our newly proposed DLF model in terms of accuracy and execution time. Results show that our proposed modified feature selection and modified ANN (m(FS + ANN))-based model for SGs is able to capture the non-linearities in the history load curve with 97.11% accuracy. Moreover, this accuracy is achieved at the cost of a reasonable execution time, i.e., we have decreased the average execution time of the existing FS + ANN-based model by 38.50%.
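
    The paper's simulations are in MATLAB; the Python sketch below only illustrates the structure of the forecast module: a small neural network with sigmoid (logistic) activation mapping lagged load features to next-day load. The lag choices and synthetic series are placeholders, and the paper's auto-regressive weight-update scheme is not reproduced.

```python
# Sketch: day-ahead load forecasting with a small sigmoid-activation neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
hours = np.arange(24 * 400)
load = 500 + 80 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 10, hours.size)  # synthetic load

# Features: load at the same hour on the three previous days; target: load 24 h ahead
X = np.stack([load[i - 72:i:24] for i in range(72, load.size - 24)], axis=0)
y = load[np.arange(72, load.size - 24) + 24]

model = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic", max_iter=2000)
model.fit(X[:-24], y[:-24])                                   # hold out the last day
print("forecast for the last day (first 5 hours):", model.predict(X[-24:])[:5].round(1))
```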

  14. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    Full Text Available This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models is yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the accuracy of the model. This problem was also tackled in the present work; our results indicate that we are capable of recognizing a human activity with an accuracy of 85% using the proposed HAR model. This means that minimum computational costs are needed, thus allowing portable devices to identify human activities using audio as an information source.
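
    A sketch of the audio feature extraction step, assuming librosa is available: MFCC summary statistics plus a few simple waveform statistics form the per-window feature vector fed to a HAR classifier. A synthetic tone stands in for a real recording; in practice the signal would come from a microphone or audio file.

```python
# Sketch: building a compact per-window audio feature vector (MFCC + simple statistics).
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
y = 0.1 * np.sin(2 * np.pi * 440 * t).astype(np.float32)    # placeholder audio window

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)           # shape: (13, frames)
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),                     # MFCC summary statistics
    [y.mean(), y.std(), np.abs(y).max()],                    # simple waveform statistics
])
print("feature vector length:", features.size)               # e.g., 29 values per window
```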

  15. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools applied in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study aims to investigate the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to the bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighborhood (KNN) algorithm based machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 in predicting breast cancer detected in the next subsequent mammography screening.
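
    A hedged sketch of the idea, not the authors' code: project the 44 features onto a few directions found by a generic locality preserving projection and classify with k-nearest neighbours. The data below are synthetic stand-ins.

```python
# Sketch: generic LPP projection (heat-kernel weights, generalized eigenproblem) + KNN.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph, KNeighborsClassifier

def lpp(X, n_components=4, n_neighbors=10, t=1.0):
    d = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    W = np.exp(-d ** 2 / t) * (d > 0)             # heat-kernel weights on the k-NN graph
    W = np.maximum(W, W.T)                        # symmetrise
    D = np.diag(W.sum(axis=1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # small ridge for numerical stability
    _, vecs = eigh(A, B)                          # eigenvalues in ascending order
    return vecs[:, :n_components]                 # projection matrix (44 -> n_components)

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 44))                    # stand-in for the 44 mammographic features
y = rng.integers(0, 2, 500)                       # stand-in risk labels

P = lpp(X)
knn = KNeighborsClassifier(n_neighbors=7).fit(X @ P, y)
print("training accuracy:", knn.score(X @ P, y))
```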

  16. Using Range-Wide Abundance Modeling to Identify Key Conservation Areas for the Micro-Endemic Bolson Tortoise (Gopherus flavomarginatus).

    Directory of Open Access Journals (Sweden)

    Cinthya A Ureña-Aranda

    Full Text Available A widespread biogeographic pattern in nature is that population abundance is not uniform across the geographic range of a species: most occurrence sites hold relatively low numbers, whereas a few places contain orders of magnitude more individuals. The Bolson tortoise Gopherus flavomarginatus is endemic to a small region of the Chihuahuan Desert in Mexico, where habitat deterioration threatens this species with extinction. In this study we combined field burrow counts with an approach for modeling species abundance based on calculating the distance to the niche centroid to obtain range-wide abundance estimates. For the Bolson tortoise, we found a robust, negative relationship between observed burrow abundance and distance to the niche centroid, with a predictive capacity of 71%. Based on these results we identified four priority areas for the conservation of this micro-endemic and threatened tortoise. We conclude that this approach may be a useful approximation for identifying key areas for sampling and conservation efforts in elusive and rare species.
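
    A small sketch of the distance-to-niche-centroid approach: abundance is related to the Mahalanobis distance of each site's environmental conditions from the centroid of conditions at occupied sites. The environmental variables and burrow counts below are synthetic placeholders, not the tortoise field data.

```python
# Sketch: relating abundance to Mahalanobis distance from the niche centroid.
import numpy as np

rng = np.random.default_rng(8)
env_occupied = rng.normal(size=(60, 3))                  # climate/soil variables at occurrences
centroid = env_occupied.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(env_occupied, rowvar=False))

def dist_to_centroid(site_env):
    diff = site_env - centroid
    return float(np.sqrt(diff @ cov_inv @ diff))         # Mahalanobis distance

sites = rng.normal(size=(20, 3))
burrow_counts = np.array([max(0.0, 30 - 8 * dist_to_centroid(s)) + rng.normal(0, 2) for s in sites])
d = np.array([dist_to_centroid(s) for s in sites])
slope, intercept = np.polyfit(d, burrow_counts, 1)        # expected: negative slope
print("fitted slope (abundance vs. distance):", round(slope, 2))
```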

  17. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features

  18. A touch-probe path generation method through similarity analysis between the feature vectors in new and old models

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hye Sung; Lee, Jin Won; Yang, Jeong Sam [Dept. of Industrial Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    On-machine measurement (OMM), which measures a workpiece during or after the machining process in the machining center, has the advantage of measuring the workpiece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target workpiece requires time-consuming tasks to generate the measuring points and relies largely on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted feature vectors that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points caused by minute geometry differences between the two models and generates the final touch-probe path.

  19. Improving ART programme retention and viral suppression are key to maximising impact of treatment as prevention - a modelling study.

    Science.gov (United States)

    McCreesh, Nicky; Andrianakis, Ioannis; Nsubuga, Rebecca N; Strong, Mark; Vernon, Ian; McKinley, Trevelyan J; Oakley, Jeremy E; Goldstein, Michael; Hayes, Richard; White, Richard G

    2017-08-09

    UNAIDS calls for fewer than 500,000 new HIV infections/year by 2020, with treatment-as-prevention being a key part of their strategy for achieving the target. A better understanding of the contribution to transmission of people at different stages of the care pathway can help focus intervention services at populations where they may have the greatest effect. We investigate this using Uganda as a case study. An individual-based HIV/ART model was fitted using history matching. 100 model fits were generated to account for uncertainties in sexual behaviour, HIV epidemiology, and ART coverage up to 2015 in Uganda. A number of different ART scale-up intervention scenarios were simulated between 2016 and 2030. The incidence and proportion of transmission over time from people with primary infection, post-primary ART-naïve infection, and people currently or previously on ART was calculated. In all scenarios, the proportion of transmission by ART-naïve people decreases, from 70% (61%-79%) in 2015 to between 23% (15%-40%) and 47% (35%-61%) in 2030. The proportion of transmission by people on ART increases from 7.8% (3.5%-13%) to between 14% (7.0%-24%) and 38% (21%-55%). The proportion of transmission by ART dropouts increases from 22% (15%-33%) to between 31% (23%-43%) and 56% (43%-70%). People who are currently or previously on ART are likely to play an increasingly large role in transmission as ART coverage increases in Uganda. Improving retention on ART, and ensuring that people on ART remain virally suppressed, will be key in reducing HIV incidence in Uganda.

  20. A bank-fund projection framework with CGE features

    DEFF Research Database (Denmark)

    Jensen, Henning Tarp; Tarp, Finn

    2006-01-01

    In this paper, we present a SAM-based methodology for integrating standard CGE features with a macroeconomic World Bank–International Monetary Fund (IMF) modelling framework. The resulting macro–micro framework is based on optimising agents, but it retains key features from the macroeconomic model...

  1. A Standard Bank-Fund Projection Framework with CGE Features

    DEFF Research Database (Denmark)

    Jensen, Henning Tarp; Tarp, Finn

    2006-01-01

    In this paper, we present a SAM-based methodology for integrating standard CGE features with a macroeconomic World Bank–International Monetary Fund (IMF) modelling framework. The resulting macro–micro framework is based on optimising agents, but it retains key features from the macroeconomic model...

  2. A prototype framework for models of socio-hydrology: identification of key feedback loops and parameterisation approach

    Science.gov (United States)

    Elshafei, Y.; Sivapalan, M.; Tonts, M.; Hipsey, M. R.

    2014-06-01

    It is increasingly acknowledged that, in order to sustainably manage global freshwater resources, it is critical that we better understand the nature of human-hydrology interactions at the broader catchment system scale. Yet to date, a generic conceptual framework for building models of catchment systems that include adequate representation of socioeconomic systems - and the dynamic feedbacks between human and natural systems - has remained elusive. In an attempt to work towards such a model, this paper outlines a generic framework for models of socio-hydrology applicable to agricultural catchments, made up of six key components that combine to form the coupled system dynamics: namely, catchment hydrology, population, economics, environment, socioeconomic sensitivity and collective response. The conceptual framework posits two novel constructs: (i) a composite socioeconomic driving variable, termed the Community Sensitivity state variable, which seeks to capture the perceived level of threat to a community's quality of life, and acts as a key link tying together one of the fundamental feedback loops of the coupled system, and (ii) a Behavioural Response variable as the observable feedback mechanism, which reflects land and water management decisions relevant to the hydrological context. The framework makes a further contribution through the introduction of three macro-scale parameters that enable it to normalise for differences in climate, socioeconomic and political gradients across study sites. In this way, the framework provides for both macro-scale contextual parameters, which allow for comparative studies to be undertaken, and catchment-specific conditions, by way of tailored "closure rel