WorldWideScience

Sample records for model results based

  1. Stress Resultant Based Elasto-Viscoplastic Thick Shell Model

    Directory of Open Access Journals (Sweden)

    Pawel Woelke

    2012-01-01

    The current paper presents enhancements introduced to the elasto-viscoplastic shell formulation, which serves as the theoretical basis for the finite element code EPSA (Elasto-Plastic Shell Analysis) [1–3]. The shell equations used in EPSA are modified to account for transverse shear deformation, which is important in the analysis of thick plates and shells, as well as composite laminates. Transverse shear forces calculated from transverse shear strains are introduced into a rate-dependent yield function, which is similar to Iliushin's yield surface expressed in terms of stress resultants and stress couples [12]. The hardening rule defined by Bieniek and Funaro [4], which allows for representation of the Bauschinger effect on a moment-curvature plane, was previously adopted in EPSA and is used here in the same form. Viscoplastic strain rates are calculated taking into account the transverse shears. Only non-layered shells are considered in this work.

  2. The Evaluation Model About the Result of Enterprise Technological Innovation Based on DAGF Algorithm

    Institute of Scientific and Technical Information of China (English)

    Like Mao; Zigang Zhang

    2004-01-01

    Based on the DAGF algorithm, an evaluation model for the results of enterprise technological innovation is proposed. The establishment of its system of evaluation indicators and the DAGF algorithm are discussed in detail. A case study shows that the model is well suited to evaluating the results of an enterprise's technological innovation.

  3. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    Science.gov (United States)

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized on a specific test bed. A mathematical model of a small-scale PV system, intended mainly for residential use, has been developed and its potential results simulated. The proposed PV model is based on three parameters: the photocurrent, IL, the reverse diode saturation current, Io, and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with higher accuracy than the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions. PMID:27035575
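
    The three parameters named in the abstract (IL, Io, n) are those of the standard single-diode PV model. As a hedged sketch only (this is the textbook relation, not necessarily the authors' exact formulation; the series cell count and parameter values below are assumptions for illustration):

```python
import numpy as np

def pv_current(v, i_l, i_o, n, t_cell=298.15, n_series=36):
    """Single-diode PV model (series/shunt resistances neglected):
        I = IL - Io * (exp(V / (n * Ns * Vt)) - 1)
    v: terminal voltage [V]; i_l: photocurrent [A];
    i_o: reverse diode saturation current [A]; n: diode ideality factor;
    n_series: number of series-connected cells (assumed)."""
    k = 1.380649e-23      # Boltzmann constant [J/K]
    q = 1.602176634e-19   # elementary charge [C]
    v_t = k * t_cell / q  # thermal voltage, about 25.7 mV at 298 K
    return i_l - i_o * (np.exp(v / (n * n_series * v_t)) - 1.0)

# Sweep an illustrative I-V curve for assumed parameter values
v = np.linspace(0.0, 21.0, 200)
i = pv_current(v, i_l=5.0, i_o=1e-9, n=1.3)
```

    At short circuit (V = 0) the model returns the photocurrent IL, and the current falls off exponentially as the voltage approaches open circuit, which is the behaviour the I-V characteristic curve in the abstract refers to.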

  4. Evaluation model for the implementation results of mine law based on neural network

    Science.gov (United States)

    Gu, Tao; Li, Xu

    2010-04-01

    To evaluate the implementation results of the mine safety production law, an evaluation model based on a neural network is presented. In this model, 63 indicators that effectively describe the mine law are proposed. The evaluation system is developed using the model and the 63 indicators. The evaluation results show that the proposed method has high accuracy: we can effectively estimate the score of a mine for its implementation of the safety law, and the estimates are scientifically credible and impartial.
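
    The abstract does not specify the network architecture, so the following is only an illustrative sketch of scoring 63 normalized indicators with a small feedforward network; the layer sizes and the random weights stand in for a network trained on expert-rated examples:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_mine(indicators, w1, b1, w2, b2):
    """Score a mine from its 63 normalized indicators via one hidden
    layer (architecture and weights are placeholders, not the paper's)."""
    h = np.tanh(indicators @ w1 + b1)  # hidden-layer activations
    return float(h @ w2 + b2)          # scalar implementation score

x = rng.random(63)                     # indicator values scaled to [0, 1]
w1 = rng.normal(size=(63, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=8);       b2 = 0.0
score = evaluate_mine(x, w1, b1, w2, b2)
```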

  5. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
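
    The Ebers-Moll currents the abstract builds on follow the classic npn transistor equations. Below is a sketch of the generic (unilluminated) form with illustrative parameter values; the paper's illuminated-TJC expressions add photogenerated terms that are not reproduced here:

```python
import numpy as np

def ebers_moll_npn(v_be, v_bc, i_es=1e-12, i_cs=1e-12,
                   alpha_f=0.99, alpha_r=0.5, v_t=0.02585):
    """Classic Ebers-Moll npn equations (saturation currents and alphas
    are assumed illustrative values):
        IE = IES*(exp(VBE/Vt) - 1) - alpha_r*ICS*(exp(VBC/Vt) - 1)
        IC = alpha_f*IES*(exp(VBE/Vt) - 1) - ICS*(exp(VBC/Vt) - 1)"""
    d_e = np.expm1(v_be / v_t)  # forward-diode term
    d_c = np.expm1(v_bc / v_t)  # reverse-diode term
    i_e = i_es * d_e - alpha_r * i_cs * d_c
    i_c = alpha_f * i_es * d_e - i_cs * d_c
    return i_e, i_c
```

    In the forward-active regime (VBC = 0) the collector current reduces to alpha_f times the emitter current, which is the transistor action the TJC conceptual model exploits.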

  6. A Tower Model for Lightning Overvoltage Studies Based on the Result of an FDTD Simulation

    Science.gov (United States)

    Noda, Taku

    This paper describes a method for deriving a transmission tower model for EMTP lightning overvoltage studies from a numerical electromagnetic simulation result obtained by the FDTD (Finite Difference Time Domain) method. The FDTD simulation carried out in this paper takes into account the following items, which have been ignored or over-simplified in previously presented simulations: (i) the resistivity of the ground soil; (ii) the arms, major slant elements, and foundations of the tower; (iii) the development speed of the lightning return stroke. For validation purposes, a pulse test of a 500-kV transmission tower is simulated, and a comparison with the measured result shows that the present FDTD simulation gives a sufficiently accurate result. Using this validated FDTD-based simulation method, the insulator-string voltages of a tower for a lightning stroke are calculated, and based on the simulation result the parameter values of the proposed tower model for EMTP studies are determined in a systematic way. Since previously presented models involve a trial-and-error process in parameter determination, the proposed model is more general in this regard. As an illustrative example, the 500-kV transmission tower mentioned above is modeled, and it is shown that the derived model closely reproduces the FDTD simulation result.

  7. Global Monthly CO2 Flux Inversion Based on Results of Terrestrial Ecosystem Modeling

    Science.gov (United States)

    Deng, F.; Chen, J.; Peters, W.; Krol, M.

    2008-12-01

    Most of our understanding of the sources and sinks of atmospheric CO2 has come from inverse studies of atmospheric CO2 concentration measurements. However, the number of currently available observation stations and our limited ability to simulate the diurnal planetary boundary layer evolution over continental regions restrict the number of regions that can be reliably inverted globally, especially over continental areas. In order to overcome these restrictions, a nested inverse modeling system was developed based on the Bayesian principle for estimating carbon fluxes of 30 regions in North America and 20 regions for the rest of the globe. Inverse modeling was conducted in monthly steps using CO2 concentration measurements from 5 years (2000-2005) with the following two models: (a) an atmospheric transport model (TM5) is used to generate the transport matrix, in which the diurnal variation of atmospheric CO2 concentration is considered in order to make use of the afternoon-hour average CO2 concentration measurements at continental sites; (b) a process-based terrestrial ecosystem model (BEPS) is used to produce hourly carbon fluxes as the background (prior) of the inversion, which mitigates our inability to solve the inverse problem at high resolution. We will present our recent results achieved through a combination of bottom-up modeling with BEPS and top-down modeling based on TM5 driven by offline meteorological fields generated by the European Centre for Medium-Range Weather Forecasts (ECMWF).
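
    The monthly Bayesian update described above can be illustrated with the standard synthesis-inversion formula. This is a generic sketch, not the authors' nested TM5/BEPS system, and the toy transport matrix and covariances are invented for illustration:

```python
import numpy as np

def bayesian_inversion(x_prior, s_prior, h, y, s_obs):
    """One Bayesian flux update for the linear model y ≈ H x + noise.
    h: transport matrix (observations x regions);
    s_prior, s_obs: prior-flux and observation error covariances."""
    gain = s_prior @ h.T @ np.linalg.inv(h @ s_prior @ h.T + s_obs)
    x_post = x_prior + gain @ (y - h @ x_prior)       # posterior fluxes
    s_post = s_prior - gain @ h @ s_prior             # posterior covariance
    return x_post, s_post

# Toy example: 3 observations constraining 2 regional fluxes
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_prior = np.zeros(2)                  # prior (background) fluxes
s_prior = np.eye(2) * 4.0              # loose prior uncertainty
y = np.array([1.0, 2.0, 3.0])          # observations consistent with x = [1, 2]
s_obs = np.eye(3) * 1e-6               # tight observational errors
x_post, s_post = bayesian_inversion(x_prior, s_prior, h, y, s_obs)
```

    With tight observation errors and a loose prior, the posterior collapses onto the least-squares flux estimate; in a real inversion the balance between the two covariances controls how far the fluxes move from the model background.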

  8. Exploring the uncertainties of early detection results: model-based interpretation of mayo lung project

    Directory of Open Access Journals (Sweden)

    Berman Barbara

    2011-03-01

    Background: The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival since the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (an initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method: Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results: Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions: Our conclusion is that the hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. The model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.

  9. Model-Based Reasoning in the Physics Laboratory: Framework and Initial Results

    Science.gov (United States)

    Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.

    2015-01-01

    We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable…

  10. Fugacity based modeling for pollutant fate and transport during floods. Preliminary results

    Science.gov (United States)

    Deda, M.; Fiorini, M.; Massabo, M.; Rudari, R.

    2010-09-01

    One of the concerns that arises during floods is whether wide-spread chemical contamination is associated with the flooding. Many potential sources of toxic releases during floods exist in cities and rural areas; hydrocarbon fuel storage systems, distribution facilities, commercial chemical storage and sewerage systems are only a few examples. When inundated, homes and vehicles can also be sources of toxic contaminants such as gasoline/diesel, detergents and sewage. Hazardous substances released into the environment are transported and dispersed in complex environmental systems that include air, plants, soil, water and sediment. Effective environmental models demand holistic modelling of the transport and transformation of the materials in the multimedia arena. Among these models, fugacity-based models are distribution-based models incorporating all environmental compartments and are based on steady-state fluxes of pollutants across compartment interfaces (Mackay, "Multimedia Environmental Models", 2001). They satisfy the primary objective of environmental chemistry, which is to forecast the concentrations of pollutants in the environment with respect to space and time. Multimedia fugacity-based models have been used to assess contaminant distribution at very different spatial and temporal scales; applications range from contaminant leaching to groundwater, runoff to surface water, partitioning in lakes and streams, and distribution at regional and even global scales. We developed a two-dimensional fugacity-based model for the fate and transport of chemicals during floods. The model has three modules: the first module estimates toxin emission rates during floods; the second module is the hydrodynamic model that simulates the flood water; and the third module simulates the dynamic distribution of chemicals in
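
    For context, the simplest member of the fugacity family (Mackay's Level I equilibrium model, not the authors' dynamic two-dimensional flood model) distributes a fixed chemical mass across compartments at a single common fugacity. A minimal sketch, with compartment volumes and Z values chosen purely for illustration:

```python
def level_one_fugacity(m_total, volumes, z_values):
    """Level I fugacity calculation (after Mackay): at equilibrium all
    compartments share one fugacity f [Pa], so
        f = M / sum(V_i * Z_i)   and   C_i = Z_i * f  [mol/m^3],
    where Z_i [mol/(m^3 Pa)] is each compartment's fugacity capacity."""
    f = m_total / sum(v * z for v, z in zip(volumes, z_values))
    return f, [z * f for z in z_values]

# Toy three-compartment system (air, water, sediment); values assumed
volumes = [1e9, 1e6, 1e4]           # compartment volumes [m^3]
z_values = [4e-4, 1e-2, 1.0]        # illustrative fugacity capacities
f, conc = level_one_fugacity(100.0, volumes, z_values)
```

    A built-in check on any such calculation is mass balance: the compartment concentrations times volumes must sum back to the total mass released.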

  11. Modelling Inter-Particle Forces and Resulting Agglomerate Sizes in Cement-Based Materials

    DEFF Research Database (Denmark)

    Kjeldsen, Ane Mette; Geiker, Mette Rica

    2005-01-01

    The theory of inter-particle forces versus external shear in cement-based materials is reviewed. On this basis, calculations of the maximum agglomerate size present after the combined action of superplasticizers and shear are carried out. Qualitative experimental results indicate that external shear affects the particle size distribution of Mg(OH)2 (used as a model material) as well as of silica, whereas the addition of superplasticizers affects only the smallest particles in cement and thus primarily acts as a water reducer rather than a disperser.

  12. Non-linear spacecraft component parameters identification based on experimental results and finite element modelling

    Science.gov (United States)

    Vismara, S. O.; Ricci, S.; Bellini, M.; Trittoni, L.

    2016-06-01

    The objective of the present paper is to describe a procedure to identify and model the non-linear behaviour of structural elements. The procedure herein applied can be divided into two main steps: the system identification and the finite element model updating. The application of the restoring force surface method as a strategy to characterize and identify localized non-linearities has been investigated. This method, which works in the time domain, has been chosen because it has `built-in' characterization capabilities, it allows a direct non-parametric identification of non-linear single-degree-of-freedom systems and it can easily deal with sine-sweep excitations. Two different application examples are reported. At first, a numerical test case has been carried out to investigate the modelling techniques in the case of non-linear behaviour based on the presence of a free-play in the model. The second example concerns the flap of the Intermediate eXperimental Vehicle that successfully completed its 100-min mission on 11 February 2015. The flap was developed under the responsibility of Thales Alenia Space Italia, the prime contractor, which provided the experimental data needed to accomplish the investigation. The procedure here presented has been applied to the results of modal testing performed on the article. Once the non-linear parameters were identified, they were used to update the finite element model in order to prove its capability of predicting the flap behaviour for different load levels.

  13. Encouraging Sustainable Transport Choices in American Households: Results from an Empirically Grounded Agent-Based Model

    Directory of Open Access Journals (Sweden)

    Davide Natalini

    2013-12-01

    The transport sector needs to go through an extended process of decarbonisation to counter the threat of climate change. Unfortunately, the International Energy Agency forecasts an enormous growth in the number of cars and greenhouse gas emissions by 2050. Two issues can thus be identified: (1) the need for a new methodology that could evaluate policy performance ex-ante and (2) the need for more effective policies. To help address these issues, we developed an Agent-Based Model called Mobility USA aimed at: (1) testing whether this could be an effective approach for ex-ante analysis of policy implementation in the transport sector; and (2) evaluating the effects of alternative policy scenarios on commuting behaviours in the USA. In particular, we tested the effects of two sets of policies, namely market-based and preference-change ones. The model results suggest that this type of agent-based approach provides a useful tool for testing policy interventions and their effectiveness.

  14. Model-Based Reasoning in the Upper-Division Physics Laboratory: Framework and Initial Results

    CERN Document Server

    Zwickl, Benjamin M; Finkelstein, Noah; Lewandowski, H J

    2014-01-01

    Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable process, within physics education, it has been preferentially applied to the iterative development of broadly applicable principles (e.g., Newton's laws of motion in introductory mechanics). We review and extend existing frameworks on modeling to develop a new framework that more naturally describes model-based reasoning in upper-division physics labs. A significant feature of the new framework is that measurement tools (in addition to the physical system being studied) are subjected to the process of modeling. Think-aloud interviews were used to document examples of model-based reasoning in the laboratory and refine the modeling framework. The interviews showed how students productively applied similar facets of modeling to the physical system and measurement tools: construction, prediction, interpretation of data, identification of model limitations, and revision.

  15. Model-based reasoning in the physics laboratory: Framework and initial results

    Science.gov (United States)

    Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable process, within physics education, it has been preferentially applied to the iterative development of broadly applicable principles (e.g., Newton's laws of motion in introductory mechanics). A significant feature of the new framework is that measurement tools (in addition to the physical system being studied) are subjected to the process of modeling. Think-aloud interviews were used to refine the framework and demonstrate its utility by documenting examples of model-based reasoning in the laboratory. When applied to the think-aloud interviews, the framework captures and differentiates students' model-based reasoning and helps identify areas of future research. The interviews showed how students productively applied similar facets of modeling to the physical system and measurement tools: construction, prediction, interpretation of data, identification of model limitations, and revision. Finally, we document students' challenges in explicitly articulating assumptions when constructing models of experimental systems and further challenges in model construction due to students' insufficient prior conceptual understanding. A modeling perspective reframes many of the seemingly arbitrary technical details of measurement tools and apparatus as an opportunity for authentic and engaging scientific sense making.

  16. Structural and vibrational study of graphene oxide via coronene based models: theoretical and experimental results

    Science.gov (United States)

    Almeida de Mendonça, João Paulo; Henrique de Lima, Alessandro; Amaral Junqueira, Georgia Maria; Gianini Quirino, Welber; Legnani, Cristiano; Oliveira Maciel, Indhira; Sato, Fernando

    2016-05-01

    We use coronene (C24H12), a simple and finite molecule, as a model to study the spectroscopic and structural alterations generated by oxygenated groups in graphene oxide (GO). Based on the Lerf-Klinowski model, we chose the hydroxyl [OH-], the carboxyl [COOH-] and the epoxy [the C2O ring inside the molecule] as our radicals of interest and study their collective and isolated effects. We perform geometry optimization, vibrational IR (via AM1 and DFT-B3LYP) and Raman (via DFT-B3LYP) calculations on a series of functionalized coronene molecules. As results, we obtain useful data for the analysis of IR and Raman spectra of GO, which facilitate the understanding and identification of the peaks found in experiments. Finally, we suggest a new model for studying GO that produces an accurate signature when compared to our experimental data and shows in more detail the structural effects caused by functionalization.

  17. Combination HIV prevention among MSM in South Africa: results from agent-based modeling.

    Directory of Open Access Journals (Sweden)

    Ron Brookmeyer

    HIV prevention trials have demonstrated the effectiveness of a number of behavioral and biomedical interventions. HIV prevention packages are combinations of interventions and offer the potential to significantly increase the effectiveness of any single intervention. Estimates of the effectiveness of prevention packages are important for guiding the development of prevention strategies and for characterizing effect sizes before embarking on large-scale trials. Unfortunately, most research to date has focused on testing single interventions rather than HIV prevention packages. Here we report the results from agent-based modeling of the effectiveness of HIV prevention packages for men who have sex with men (MSM) in South Africa. We consider packages consisting of four components: antiretroviral therapy for HIV-infected persons with CD4 count <350; PrEP for high-risk uninfected persons; behavioral interventions to reduce rates of unprotected anal intercourse (UAI); and campaigns to increase HIV testing. We considered 163 HIV prevention packages corresponding to different intensity levels of the four components. We performed 2252 simulation runs of our agent-based model to evaluate those packages. We found that a four-component package consisting of a 15% reduction in the rate of UAI, 50% PrEP coverage of high-risk uninfected persons, a 50% reduction in persons who never test for HIV, and 50% ART coverage over and above persons already receiving ART at baseline could prevent 33.9% of infections over 5 years (95% confidence interval, 31.5-36.3). The package components with the largest incremental prevention effects were UAI reduction and PrEP coverage. The impact of increased HIV testing was magnified in the presence of PrEP. We find that HIV prevention packages that include both behavioral and biomedical components can in combination prevent significant numbers of infections with levels of coverage, acceptance and adherence that are potentially achievable.

  18. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE AND FREQUENCY DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    Directory of Open Access Journals (Sweden)

    Christopher Heine

    2014-08-01

    A detailed description of the properties of rubber parts is gaining in importance in current multi-body simulation models. One application example is a multi-body simulation of washing machine movement. Inside the washing machine, there are different force transmission elements which consist completely or partly of rubber. Rubber parts or, generally, elastomers usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are used to describe these properties. A method for characterizing the amplitude and frequency dependence of such a rheological model is presented in this paper. Within this method, the rheological model can be reduced or expanded in order to capture various non-linear effects. An original result is the automated parameter identification, which is fully implemented in Matlab. The identified rheological models are intended for subsequent implementation in a multi-body model, which allows a significant enhancement of the overall model quality.

  19. Experimental and modelling results of a parallel-plate based active magnetic regenerator

    DEFF Research Database (Denmark)

    Tura, A.; Nielsen, Kaspar Kirstein; Rowe, A.

    2012-01-01

    The performance of a permanent magnet magnetic refrigerator (PMMR) using gadolinium parallel plates is described. The configuration and operating parameters are described in detail. Experimental results are compared to simulations using an established two-dimensional model of an active magnetic regenerator.

  20. Mechanistic-empirical subgrade design model based on heavy vehicle simulator test results

    CSIR Research Space (South Africa)

    Theyse, HL

    2006-06-01

    This paper presents a study on subgrade permanent deformation based on the data generated from a series of Heavy Vehicle Simulator (HVS) tests done at the Richmond Field Station in California. The total subgrade deflection was found to be a...

  1. Base cation deposition in Europe - Part I. Model description, results and uncertainties

    NARCIS (Netherlands)

    Draaijers, G.P.J.; Leeuwen, E.P. van; Jong, P.G.H. de; Erisman, J.W.

    1997-01-01

    Deposition of base cations (Na+, Mg2+, Ca2+, K+) in Europe was mapped for 1989 with a spatial resolution of 10 x 20 km using the so-called inferential modeling technique. Deposition fields resembled the geographic variability of sources, land-use and climate. Dry deposition constituted on average 45

  2. Atmospheric greenhouse gases retrieved from SCIAMACHY: comparison to ground-based FTS measurements and model results

    Directory of Open Access Journals (Sweden)

    O. Schneising

    2012-02-01

    SCIAMACHY onboard ENVISAT (launched in 2002) enables the retrieval of global long-term column-averaged dry air mole fractions of the two most important anthropogenic greenhouse gases, carbon dioxide and methane (denoted XCO2 and XCH4). In order to assess the quality of the greenhouse gas data obtained with the recently introduced v2 of the scientific retrieval algorithm WFM-DOAS, we present validations with ground-based Fourier Transform Spectrometer (FTS) measurements and comparisons with model results at eight Total Carbon Column Observing Network (TCCON) sites, providing realistic error estimates of the satellite data. Such validation is a prerequisite to assess the suitability of data sets for their use in inverse modelling.

    It is shown that there are generally no significant differences between the carbon dioxide annual increases of SCIAMACHY and the assimilation system CarbonTracker (2.00 ± 0.16 ppm yr−1 compared to 1.94 ± 0.03 ppm yr−1 on global average). The XCO2 seasonal cycle amplitudes derived from SCIAMACHY are typically larger than those from TCCON, which are in turn larger than those from CarbonTracker. The absolute values of the northern hemispheric TCCON seasonal cycle amplitudes are closer to SCIAMACHY than to CarbonTracker, and the corresponding differences are not significant when compared with SCIAMACHY, whereas they can be significant for a subset of the analysed TCCON sites when compared with CarbonTracker. At Darwin we find discrepancies of the seasonal cycle derived from SCIAMACHY compared to the other data sets, which can probably be ascribed to occurrences of undetected thin clouds. Based on the comparison with the reference data, we conclude that the carbon dioxide data set can be characterised by a regional relative precision (mean standard deviation of the differences) of about 2.2 ppm and a relative accuracy (standard deviation of the mean differences

  3. Comparative Results on 3D Navigation of Quadrotor using two Nonlinear Model based Controllers

    Science.gov (United States)

    Bouzid, Y.; Siguerdidjane, H.; Bestaoui, Y.

    2017-01-01

    Recently, quadrotors have been increasingly employed in both military and civilian areas, where a broad range of nonlinear flight control techniques have been successfully implemented. With this advancement, it has become necessary to investigate the efficiency of these flight controllers by studying their features and comparing their performance. In this paper, the control of an Unmanned Aerial Vehicle (UAV) quadrotor using two different approaches is presented. The first controller is a Nonlinear PID (NLPID) whilst the second one is a Nonlinear Internal Model Control (NLIMC); both are used for stabilization as well as for 3D trajectory tracking. The numerical simulations have shown satisfactory results using either the nominal system model or the disturbed model for both of them. The obtained results are analyzed with respect to several criteria for the sake of comparison.

  4. A new model based on experimental results for the thermal characterization of bricks

    Energy Technology Data Exchange (ETDEWEB)

    Vivancos, Jose-Luis [Instituto de Quimica Molecular Aplicada, Universidad Politecnica de Valencia, Camino de Vera S/N, 46022 Valencia (Spain); Departamento de Proyectos de Ingenieria, Universidad Politecnica de Valencia, Camino de Vera S/N, 46022 Valencia (Spain)]; Soto, Juan; Ros-Lis, Jose V.; Martinez-Manez, Ramon [Instituto de Quimica Molecular Aplicada, Universidad Politecnica de Valencia, Camino de Vera S/N, 46022 Valencia (Spain); Departamento de Quimica, Universidad Politecnica de Valencia, Camino de Vera S/N, 46022 Valencia (Spain)]; Perez, Israel [Casas Bioclimaticas. C/La Paz, 17, 46003 Valencia (Spain)]

    2009-05-15

    The development of fast and reliable protocols to determine the characteristics of building materials is important in order to develop environmentally friendly houses with an efficient energy design. In this article, the heat flux evolution in different types of clay and concrete bricks has been studied using a guarded hot plate. The studied bricks were purchased from local commercially available sources and included a solid face brick and a range of honeycombed and perforated bricks. From the data collected, a new model to study heat flux is proposed. This model is based on the shape of the typical sigmoidal curves observed for the time-dependent heat flux evolution. The model allows the calculation of the thermal resistance (R) and the heat flux in the steady state (φ∞). The model also introduces two new parameters, t_B and τ_B. t_B represents the time at which half of φ∞ is attained; this parameter has additionally been found to depend on the thermal diffusivity and the geometric characteristics of the brick. (author)
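
    The abstract names the parameters (φ∞, t_B, τ_B) but not the model's functional form. One plausible sigmoidal reading, in which t_B is the time at which half of φ∞ is reached and τ_B sets the steepness, is a logistic curve; this form is an assumption for illustration only and may differ from the paper's exact expression:

```python
import math

def heat_flux(t, phi_inf, t_b, tau_b):
    """Assumed sigmoidal heat-flux evolution (logistic form):
    phi(t) rises toward the steady-state value phi_inf, passing
    through phi_inf / 2 exactly at t = t_b; tau_b controls how
    sharply the transition occurs."""
    return phi_inf / (1.0 + math.exp(-(t - t_b) / tau_b))

# Illustrative evaluation: half the steady-state flux at t = t_b
phi_half = heat_flux(100.0, 50.0, 100.0, 20.0)
```

    Fitting such a curve to guarded-hot-plate data would yield φ∞ directly, from which the thermal resistance follows as R = ΔT / φ∞ for the imposed temperature difference.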

  5. DEM-based Modeling at the Hillslope Scale: Recent Results and Future Process Research Needs

    Science.gov (United States)

    McDonnell, J.; Coles, A.; Gabrielli, C. P.; Appels, W. M.; Ameli, A.

    2015-12-01

    Hillslope scale patterns of overland flow, infiltration, subsurface stormflow and groundwater recharge are all topographically mediated. However, the mechanisms by which macro-, meso- and micro-topographies control filling and spilling of lateral flow, and vertical infiltration, are still poorly understood. Here we present high-resolution DEMs derived from ground-based LiDAR, airborne LiDAR, and GPR (ground penetrating rebar!) with model analysis to examine the topographic controls on water flow at three distinct hillslopes. We explore surface topographic effects on rainfall- and snowmelt-infiltration and overland flow on the Canadian Prairies; the surface and subsurface topographic controls on lateral subsurface stormflow generation and groundwater recharge at a steep, wet temperate rainforest in New Zealand; and subsurface topographic controls on patterns of groundwater recharge at a forested hillslope on the Georgia Piedmont in the United States. We demonstrate how these studies reveal future research needs for improving DEM-based watershed delineation and modeling along with some surprising similarities between topographic controls on soil surface infiltration and overland flow and twin subsurface processes at the soil-bedrock interface.

  6. MAIN REGULARITIES OF FAULTING IN LITHOSPHERE AND THEIR APPLICATION (BASED ON PHYSICAL MODELLING RESULTS

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2015-09-01

    Full Text Available Results of long-term experimental studies and modelling of faulting are briefly reviewed, and research methods and the state-of-the-art issues are described. The article presents the main results of faulting modelling with the use of non-transparent elasto-viscous plastic and optically active models. An area of active dynamic influence of a fault (AADIF) is the term introduced to characterise a fault as a 3D geological body. It is shown that AADIF's width (M) is determined by the thickness of the layer wherein a fault occurs (H), its viscosity (η) and the strain rate (V). Multiple correlation equations are proposed to show relationships between AADIF's width (M), H, η and V for faults of various morphological and genetic types. The irregularity of AADIF in time and space is characterised in view of the staged formation of the internal fault structure of such areas and the geometric and dynamic parameters of AADIF, which are changeable along the fault strike. The authors pioneered the application of the open system conception to explain regularities of structure formation in AADIFs. It is shown that faulting is a synergistic process of continuous changes of structural levels of strain, which differ in the manifestation of specific self-similar fractures of various scales. Such levels are changeable due to self-organization processes of fracture systems. Fracture dissipative structures (FDS) is the term introduced to describe systems of fractures that are subject to self-organization. It is proposed to consider informational entropy and fractal dimensions in order to reveal FDS in AADIF. Also studied are relationships between structure formation in AADIF and accompanying processes, such as acoustic emission and terrain development above zones wherein faulting takes place. Optically active elastic models were designed to simulate the stress-and-strain state of AADIF of the main standard types of fault jointing zones and their analogues in nature, and modelling results are

  7. An animal model of schizophrenia based on chronic LSD administration: old idea, new results.

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D; Nichols, David E

    2011-09-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as "clearly a paranoid state." We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT(2A) receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Spreading of intolerance under economic stress: Results from a reputation-based model

    Science.gov (United States)

    Martinez-Vaquero, Luis A.; Cuesta, José A.

    2014-08-01

    When a population is engaged in successive prisoner's dilemmas, indirect reciprocity through reputation fosters cooperation through the emergence of moral and action rules. A simplified model has recently been proposed where individuals choose between helping others or not and are judged good or bad for it by the rest of the population. The reputation so acquired will condition future actions. In this model, eight strategies (referred to as "leading eight") enforce a high level of cooperation, generate high payoffs, and are therefore resistant to invasions by other strategies. Here we show that, by assigning each individual one of two labels that peers can distinguish (e.g., political ideas, religion, and skin color) and allowing moral and action rules to depend on the label, intolerant behaviors can emerge within minorities under sufficient economic stress. We analyze the sets of conditions where this can happen and also discuss the circumstances under which tolerance can be restored. Our results agree with empirical observations that correlate intolerance and economic stress and predict a correlation between the degree of tolerance of a population and its composition and ethical stance.
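The reputation dynamics can be illustrated with a minimal donation-game simulation under one leading-eight-style norm ("stern judging": a donor is judged good if it helped a good recipient or refused a bad one). The single-label population and the perception-error model below are simplifying assumptions for illustration, not the two-label model of the paper:

```python
import random

def simulate(n=50, rounds=2000, eps=0.0, seed=1):
    """Donation game with reputation-based indirect reciprocity.

    Everyone helps only recipients in good standing, and observers
    apply stern judging; with probability eps a wrong reputation is
    assigned (perception error). Returns the cooperation rate.
    """
    random.seed(seed)
    good = [True] * n            # public reputations, all start good
    helped = 0
    for _ in range(rounds):
        donor, recipient = random.sample(range(n), 2)
        action = good[recipient]             # help iff recipient is good
        helped += action
        verdict = (action == good[recipient])  # justified action -> good
        if random.random() < eps:
            verdict = not verdict              # perception error
        good[donor] = verdict
    return helped / rounds

print(simulate())          # 1.0: full cooperation without errors
print(simulate(eps=0.05))  # still high: stern judging repairs errors
```

Bad reputations created by errors are corrected the next time the mislabeled individual acts as a donor, which is why cooperation stays high for small eps.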

  9. Poiseuille, thermal creep and Couette flow: results based on the CES model of the linearized Boltzmann equation

    Energy Technology Data Exchange (ETDEWEB)

    Siewert, C.E. [North Carolina State Univ., Dept. Mathematics, Raleigh, NC (United States)

    2002-10-01

    A synthetic-kernel model (CES model) of the linearized Boltzmann equation is used along with an analytical discrete-ordinates method (ADO) to solve three fundamental problems concerning flow of a rarefied gas in a plane channel. More specifically, the problems of Couette flow, Poiseuille flow and thermal-creep flow are solved in terms of the CES model equation for an arbitrary mixture of specular and diffuse reflection at the walls confining the flow, and numerical results for the basic quantities of interest are reported. The comparisons made with results derived from solutions based on computationally intensive methods applied to the linearized Boltzmann equation are used to conclude that the CES model can be employed with confidence to improve the accuracy of results available from simpler approximations such as the BGK model or the S model. (author)

  10. Analysis of inelastic neutron scattering results on model compounds of the nitrogenous bases of the nucleotides

    Indian Academy of Sciences (India)

    J Tomkinson

    2008-10-01

    The role that model compounds can play in understanding the vibrational eigenvectors of molecules is discussed. Assigning the spectra of model compounds is of particular importance and the individual-scaling approach, that has been used with isolated molecule ab-initio calculations, is outlined. Special emphasis is given to recent work on assigning the spectra of three 5-6 heterobicyclic systems; indole, benzimidazole and isatin.

  11. Differential hardening in IF steel - Experimental results and a crystal plasticity based model

    NARCIS (Netherlands)

    Mulder, J.; Eyckens, P.; van den Boogaard, Antonius H.; Hora, P.

    2015-01-01

    Work hardening in metals is commonly described by isotropic hardening, especially for monotonically increasing proportional loading. The relation between different stress states in this case is determined by equivalent stress and strain definitions, based on equal plastic dissipation. However,

  12. Combustion synthesis of TiB2-based cermets: modeling and experimental results

    NARCIS (Netherlands)

    Martinez Pacheco, M.; Bouma, R.H.B.; Katgerman, L.

    2008-01-01

    TiB2-based cermets are prepared by combustion synthesis followed by a pressing stage in a granulate medium. Products obtained by combustion synthesis are characterized by a large remaining porosity (typically 50%). To produce dense cermets, a subsequent densification step is performed after the comb

  13. Identifying plausible genetic models based on association and linkage results: application to type 2 diabetes.

    Science.gov (United States)

    Guan, Weihua; Boehnke, Michael; Pluzhnikov, Anna; Cox, Nancy J; Scott, Laura J

    2012-12-01

    When planning resequencing studies for complex diseases, previous association and linkage studies can constrain the range of plausible genetic models for a given locus. Here, we explore the combinations of causal risk allele frequency (RAF_C) and genotype relative risk (GRR_C) consistent with no or limited evidence for affected sibling pair (ASP) linkage and strong evidence for case-control association. We find that significant evidence for case-control association combined with no or moderate evidence for ASP linkage can define a lower bound for the plausible RAF_C. Using data from large type 2 diabetes (T2D) linkage and genome-wide association study meta-analyses, we find that under reasonable model assumptions, 23 of 36 autosomal T2D risk loci are unlikely to be due to causal variants with combined RAF_C < 0.005, and four of the 23 are unlikely to be due to causal variants with combined RAF_C < 0.05.
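The interplay of risk allele frequency and effect size can be illustrated with the standard multiplicative-model approximation for the risk-allele frequency expected in cases; this is a hypothetical sketch of the general idea, not the authors' ASP-linkage computation:

```python
def case_allele_freq(p, grr):
    """Expected risk-allele frequency in cases under a multiplicative
    per-allele relative risk model: p_case = p*r / (p*r + 1 - p).
    p   : risk allele frequency in the population
    grr : per-allele genotype relative risk
    """
    return p * grr / (p * grr + (1.0 - p))

# A common, weak variant barely shifts the case frequency...
print(round(case_allele_freq(0.30, 1.10), 4))  # 0.3204
# ...while a rare, strong one shifts it severalfold.
print(round(case_allele_freq(0.005, 3.0), 4))  # 0.0149
```

Comparing such expected shifts with the observed association signal is one way to bound the (RAF, GRR) combinations compatible with the data.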

  14. Preliminary Modelling Results for an Otto Cycle/Stirling Cycle Hybrid-engine-based Power Generation System

    OpenAIRE

    Cullen, Barry; McGovern, Jim; Feidt, Michel; Petrescu, Stoian

    2009-01-01

    This paper presents preliminary data and results for a system mathematical model for a proposed Otto Cycle / Stirling Cycle hybrid-engine-based power generation system. The system is a combined cycle system with the Stirling cycle machine operating as a bottoming cycle on the Otto cycle exhaust. The application considered is that of a stationary power generation scenario wherein the Stirling cycle engine operates as a waste heat recovery device on the exhaust stream of the Otto cycle engine. ...

  15. An Investigation Of The Influence Of Leadership And Processes On Basic Performance Results Using A Decision Model Based On EFQM

    Directory of Open Access Journals (Sweden)

    Ahmet Talat İnan

    2013-06-01

    Full Text Available The EFQM Excellence Model is a quality approach from which companies benefit in achieving success, and an assessment tool that helps determine competences and missing aspects on the path to excellence. In this study, based on the EFQM Excellence Model, the influence of the leadership and processes variables on basic performance results was investigated at a large-scale firm engaged in maintenance and repair services. A survey covering the company's employees and managers was conducted. The data obtained from this survey were analysed with the SPSS 16.0 statistics software by means of factor analysis, reliability analysis, correlation and regression analysis. The relations between the variables were evaluated taking the results of these analyses into account.

  16. SAT-MAP-CLIMATE project results[SATellite base bio-geophysical parameter MAPping and aggregation modelling for CLIMATE models

    Energy Technology Data Exchange (ETDEWEB)

    Bay Hasager, C.; Woetmann Nielsen, N.; Soegaard, H.; Boegh, E.; Hesselbjerg Christensen, J.; Jensen, N.O.; Schultz Rasmussen, M.; Astrup, P.; Dellwik, E.

    2002-08-01

    Earth Observation (EO) data from imaging satellites are analysed with respect to albedo, land and sea surface temperatures, land cover types and vegetation parameters such as the Normalized Difference Vegetation Index (NDVI) and the leaf area index (LAI). The observed parameters are used in the DMI-HIRLAM-D05 weather prediction model in order to improve the forecasting. The effect of introducing actual sea surface temperatures from NOAA AVHRR, compared to climatological mean values, shows a more pronounced land-sea breeze effect which is also observable in field observations. The albedo maps from NOAA AVHRR are rather similar to the climatological mean values, so for the HIRLAM model this is insignificant, yet most likely of some importance in the HIRHAM regional climate model. Land cover type maps are assigned local roughness values determined from meteorological field observations. Only maps with a spatial resolution around 25 m can adequately map the roughness variations of the typical patch size distribution in Denmark. A roughness map covering Denmark is aggregated (i.e. area-averaged non-linearly) by a microscale aggregation model that takes into account the non-linear turbulent response to each roughness step change between patches in an arbitrary pattern. The effective roughnesses are calculated on a 15 km by 15 km grid for the HIRLAM model. The effect of hedgerows is included as an added roughness effect as a function of hedge density mapped from a digital vector map. Introducing the new effective roughness maps into the HIRLAM model appears to remedy the seasonal wind speed bias over land and sea in spring. A new parameterisation of the effective roughness for scalar surface fluxes is developed and tested on synthetic data. Furthermore, a method for estimating the evapotranspiration from albedo, surface temperatures and NDVI is successfully compared to field observations.
The HIRLAM predictions of water vapour at 12 GMT are used for atmospheric correction of

  17. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery depends on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are considerations for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation `atlas' containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combination of `atlas' solutions to best match the measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on the preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess model accuracy, the subsurface shift of targets between the preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the reported results are encouraging.

  18. Effects of Problem-Based Learning Model versus Expository Model and Motivation to Achieve for Student's Physic Learning Result of Senior High School at Class XI

    Science.gov (United States)

    Prayekti

    2016-01-01

    "Problem-based learning" (PBL) is one of an innovative learning model which can provide an active learning to student, include the motivation to achieve showed by student when the learning is in progress. This research is aimed to know: (1) differences of physic learning result for student group which taught by PBL versus expository…

  19. Quantum-Mechanical QSPR Models for Polymerization Volume Change of Epoxides and Methacrylates Based on Mercury Dilatometry Results

    OpenAIRE

    Miller, Matthew D.; Holder, Andrew J.; Kilway, Kathleen V.; Giese, Gregory J.; Finley, Jason E.; Travis, DeAnna M.; Iwai, Benjamin T.; Eick, J. David

    2006-01-01

    Polymerization volume change (PVC) was measured systematically using mercury dilatometry for 41 epoxide and methacrylate monomers with quartz filler. Quantitative structure property relationship (QSPR) models were developed based on this previously unreported data to gain insight in the data collection method for future models. Successful models included only data from those samples which polymerized to hardness. The most significant descriptors in these models related to monomer reactivity. ...

  20. Using Evidence Based Practice in LIS Education: Results of a Test of a Communities of Practice Model

    Directory of Open Access Journals (Sweden)

    Joyce Yukawa

    2010-03-01

    Full Text Available Objective ‐ This study investigated the use of a communities of practice (CoP) model for blended learning in library and information science (LIS) graduate courses. The purposes were to: (1) test the model’s efficacy in supporting student growth related to core LIS concepts, practices, professional identity, and leadership skills, and (2) develop methods for formative and summative assessment using the model.Methods ‐ Using design‐based research principles to guide the formative and summative assessments, pre‐, mid‐, and post‐course questionnaires were constructed to test the model and administered to students in three LIS courses taught by the author. Participation was voluntary and anonymous. A total of 34 students completed the three courses; response rate for the questionnaires ranged from 47% to 95%. The pre‐course questionnaire addressed attitudes toward technology and the use of technology for learning. The mid‐course questionnaire addressed strengths and weaknesses of the course and suggestions for improvement. The post‐course questionnaire addressed what students valued about their learning and any changes in attitude toward technology for learning. Data were analyzed on three levels. Micro‐level analysis addressed technological factors related to usability and participant skills and attitudes. Meso‐level analysis addressed social and pedagogical factors influencing community learning. Macro‐level analysis addressed CoP learning outcomes, namely, knowledge of core concepts and practices, and the development of professional identity and leadership skills.Results ‐ The students can be characterized as adult learners who were neither early nor late adopters of technology. At the micro‐level, responses indicate that the online tools met high standards of usability and effectively supported online communication and learning. Moreover, the increase in positive attitudes toward the use of technology for learning at

  1. Model-theoretic Optimization Approach to Triathlon Performance Under Comparative Static Conditions – Results Based on The Olympic Games 2012

    Directory of Open Access Journals (Sweden)

    Michael Fröhlich

    2013-10-01

    Full Text Available In Olympic-distance triathlon, time minimization is the goal in all three disciplines and the two transitions. Running is the key to winning, whereas swimming and cycling performance are less significantly associated with overall competition time. A comparative static simulation based on the individual times of each discipline was performed. Furthermore, analysis of each discipline's share of the total time showed that increasing the volume of running training yields additional performance gains. Looking at the current development in triathlon and taking the Olympic Games in London 2012 as an initial basis for model-theoretic simulations of performance development, it becomes apparent that running is increasingly the crucial variable for winning a triathlon. Run times below 29:00 minutes in Olympic-distance triathlon will be decisive for winning. Currently, cycle training time is definitely overrepresented. The share of swimming is considered optimal.
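The comparative-static reasoning can be sketched numerically: hold all splits but one constant and vary that one. All split values below are illustrative placeholders, not London 2012 data:

```python
# Olympic-distance triathlon total time as the sum of five splits
# (times in seconds; values are illustrative).
splits = {"swim": 17 * 60, "T1": 40, "bike": 59 * 60, "T2": 30, "run": 30 * 60}

def total(splits, **changes):
    """Total race time after overriding selected splits."""
    s = dict(splits, **changes)
    return sum(s.values())

base = total(splits)
faster_run = total(splits, run=29 * 60)  # one minute faster on the run
print(base - faster_run)  # 60: seconds gained, all else held equal
```

Repeating this for each discipline shows which split dominates the sensitivity of the total time.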

  2. Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results

    Science.gov (United States)

    Langousis, Andreas; Mamalakis, Antonios; Deidda, Roberto; Marrocu, Marino

    2016-01-01

    To improve the skill of climate models (CMs) in reproducing the statistics of daily rainfall at a basin level, two types of statistical approaches have been suggested. One is the statistical correction of CM rainfall outputs based on historical series of precipitation. The other, usually referred to as statistical rainfall downscaling, is the use of stochastic models to conditionally simulate rainfall series, based on large-scale atmospheric forcing from CMs. While promising, the latter approach has attracted less attention in recent years, since the developed downscaling schemes involved complex weather identification procedures, while demonstrating limited success in reproducing several statistical features of rainfall. In a recent effort, Langousis and Kaleris () developed a statistical framework for the simulation of daily rainfall intensities conditional on upper-air variables, which is simpler to implement and more accurately reproduces several statistical properties of actual rainfall records. Here we study the relative performance of: (a) direct statistical correction of CM rainfall outputs using nonparametric distribution mapping, and (b) the statistical downscaling scheme of Langousis and Kaleris (), in reproducing the historical rainfall statistics, including rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, the Flumendosa catchment, using rainfall and atmospheric data from four CMs of the ENSEMBLES project. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the CM used and the characteristics of the calibration period. This is particularly the case for yearly rainfall maxima.
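Approach (a), nonparametric distribution mapping, can be sketched as an empirical quantile mapping: a model value is replaced by the observed value at the same empirical quantile. The function name and toy data below are assumptions for illustration, not the study's implementation:

```python
import bisect

def quantile_map(value, model_sorted, obs_sorted):
    """Map a model value to the observation at the same empirical
    quantile (nonparametric distribution mapping)."""
    n = len(model_sorted)
    rank = bisect.bisect_right(model_sorted, value)  # values <= value
    q = rank / n                                     # empirical quantile
    idx = max(0, min(round(q * len(obs_sorted)) - 1, len(obs_sorted) - 1))
    return obs_sorted[idx]

# Toy example: the model's rainfall is biased low by a factor of two.
obs   = sorted([0.0, 1.0, 2.0, 4.0, 8.0])
model = sorted([0.0, 0.5, 1.0, 2.0, 4.0])
print(quantile_map(2.0, model, obs))  # 4.0, the bias-corrected value
```

In practice both samples come from a common calibration period, and the mapping is applied to values from the projection period.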

  3. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now investigating a less explored application of computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  4. Channels Coordination Game Model Based on Result Fairness Preference and Reciprocal Fairness Preference: A Behavior Game Forecasting and Analysis Method

    National Research Council Canada - National Science Library

    Chuan Ding; Kaihong Wang; Xiaoying Huang

    2014-01-01

    .... Using the behavior game theory model we established, we can prove that if retailers only consider the result fairness preference and they are not jealous of manufacturers' benefit, manufacturers will...

  5. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. of the Dora Baltea river floodplain to 4810 m a.s.l. of Mont Blanc). In the study area (zone B), located in the Eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), and the median ks value (10E-6 m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input maps of parameters for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software

  6. S-2 stage 1/25 scale model base region thermal environment test. Volume 1: Test results, comparison with theory and flight data

    Science.gov (United States)

    Sadunas, J. A.; French, E. P.; Sexton, H.

    1973-01-01

    A 1/25 scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effect of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented, and a tabulation of the test data is provided.

  7. Simulated crop yield in response to changes in climate and agricultural practices: results from a simple process based model

    Science.gov (United States)

    Caldararu, S.; Smith, M. J.; Purves, D.; Emmott, S.

    2013-12-01

    Global agriculture will, in the future, be faced with two main challenges: climate change and an increase in global food demand driven by an increase in population and changes in consumption habits. To be able to predict both the impacts of changes in climate on crop yields and the changes in agricultural practices necessary to respond to such impacts we currently need to improve our understanding of crop responses to climate and the predictive capability of our models. Ideally, what we would have at our disposal is a modelling tool which, given certain climatic conditions and agricultural practices, can predict the growth pattern and final yield of any of the major crops across the globe. We present a simple, process-based crop growth model based on the assumption that plants allocate above- and below-ground biomass to maintain overall carbon optimality and that, to maintain this optimality, the reproductive stage begins at peak nitrogen uptake. The model includes responses to available light, water, temperature and carbon dioxide concentration as well as nitrogen fertilisation and irrigation. The model is data constrained at two sites, the Yaqui Valley, Mexico for wheat and the Southern Great Plains flux site for maize and soybean, using a robust combination of space-based vegetation data (including data from the MODIS and Landsat TM and ETM+ instruments), as well as ground-based biomass and yield measurements. We show a number of climate response scenarios, including increases in temperature and carbon dioxide concentrations as well as responses to irrigation and fertiliser application.

  8. Determination of High-Frequency Current Distribution Using EMTP-Based Transmission Line Models with Resulting Radiated Electromagnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mork, B; Nelson, R; Kirkendall, B; Stenvig, N

    2009-11-30

    Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.

  9. Vaccination rules for a true-mass action SEIR epidemic model based on an observer synthesis. Preliminary results

    CERN Document Server

    De la Sen, M; Alonso-Quesada, S

    2011-01-01

    This paper presents a simple continuous-time linear vaccination-based control strategy for a SEIR (susceptible plus infected plus infectious plus removed populations) propagation disease model. The model takes the total population into account as a restraint on illness transmission, since its increase makes contacts among susceptible and infected individuals more difficult. The control objective is the asymptotic tracking of the removed-by-immunity population to the total population, while simultaneously driving the remaining population (i.e. susceptible plus infected plus infectious) asymptotically to zero. A state observer is used to estimate the true partial populations of susceptible, infected, infectious and immune individuals, which are assumed to be unknown. The model parameters are also assumed to be, in general, unknown; in this case, the parameters are replaced by available estimates to implement the vaccination action.
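As background, a generic SEIR model with constant-rate vaccination of susceptibles can be integrated with forward Euler as below. This is a minimal sketch, not the paper's observer-based linear control law, and all rate values are illustrative:

```python
def simulate_seir(beta=0.5, sigma=0.2, gamma=0.1, v=0.05,
                  days=400, dt=0.1, n0=1000.0):
    """Forward-Euler integration of a SEIR model with vaccination of
    susceptibles at constant rate v (illustrative parameters):
        S' = -beta*S*I/N - v*S
        E' =  beta*S*I/N - sigma*E
        I' =  sigma*E - gamma*I
        R' =  gamma*I + v*S
    """
    S, E, I, R = n0 - 1.0, 0.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        N = S + E + I + R
        dS = -beta * S * I / N - v * S
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I + v * S
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
    return S, E, I, R

S, E, I, R = simulate_seir()
print(round(R))  # 1000: vaccination drives (nearly) everyone into R
```

Note that the four derivatives sum to zero, so the total population is conserved; the control-theoretic objective in the paper corresponds to R tracking N while S, E and I vanish.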

  10. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    Energy Technology Data Exchange (ETDEWEB)

    Bordogna, Clelia Maria [Instituto de Investigaciones FisicoquImicas Teoricas y Aplicadas (INIFTA), UNLP, CONICET Casilla de Correo 16, Sucursal 4 (1900) La Plata (Argentina); Albano, Ezequiel V [Instituto de Investigaciones FisicoquImicas Teoricas y Aplicadas (INIFTA), UNLP, CONICET Casilla de Correo 16, Sucursal 4 (1900) La Plata (Argentina)

    2007-02-14

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena, focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing a rich variety of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.

  11. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature, salinity and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 to 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical countercurrent. A review of, and comparison with, other models in the literature for (i) are also given.

  12. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Science.gov (United States)

    Gutiérrez, Alvaro G; Armesto, Juan J; Díaz, M Francisca; Huth, Andreas

    2014-01-01

    Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the model proposed focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years-old) and an old-growth forest (OG, >500 years-old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than YS under current climate. Increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. Drier climate will alter forest structure, leading to decreases in above ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  13. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Directory of Open Access Journals (Sweden)

    Alvaro G Gutiérrez

    Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the model proposed, focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years-old) and an old-growth forest (OG, >500 years-old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than YS under current climate. Increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. Drier climate will alter forest structure, leading to decreases in above-ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  14. Predictions of mortality from pleural mesothelioma in Italy: a model based on asbestos consumption figures supports results from age-period-cohort models.

    Science.gov (United States)

    Marinaccio, Alessandro; Montanaro, Fabio; Mastrantonio, Marina; Uccelli, Raffaella; Altavista, Pierluigi; Nesti, Massimo; Costantini, Adele Seniori; Gorini, Giuseppe

    2005-05-20

    Italy was the second main asbestos producer in Europe, after the Soviet Union, until the end of the 1980s, and raw asbestos was imported on a large scale until 1992. The Italian pattern of asbestos consumption lags on average about 10 years behind the United States, Australia, the United Kingdom and the Nordic countries. Measures to reduce exposure were introduced in the mid-1970s in some workplaces. In 1986, limitations were imposed on the use of crocidolite and in 1992 asbestos was definitively banned. We have used primary pleural cancer mortality figures (1970-1999) to predict mortality from mesothelioma among Italian men in the next 30 years by age-cohort-period models and by a model based on asbestos consumption figures. The pleural cancer/mesothelioma ratio and mesothelioma misdiagnosis in the past were taken into account in the analysis. Estimated risks of birth cohorts born after 1945 decrease less quickly in Italy than in other Western countries. The findings predict a peak with about 800 mesothelioma annual deaths in the period 2012-2024. Results estimated using age-period-cohort models were similar to those obtained from the asbestos consumption model.

  15. Spatial organization of mesenchymal stem cells in vitro--results from a new individual cell-based model with podia.

    Directory of Open Access Journals (Sweden)

    Martin Hoffmann

    Therapeutic application of mesenchymal stem cells (MSC) requires their extensive in vitro expansion. MSC in culture typically grow to confluence within a few weeks. They show spindle-shaped fibroblastoid morphology and align to each other in characteristic spatial patterns at high cell density. We present an individual cell-based model (IBM) that is able to quantitatively describe the spatio-temporal organization of MSC in culture. Our model substantially improves on previous models by explicitly representing cell podia and their dynamics. It employs podia-generated forces for cell movement and adjusts cell behavior in response to cell density. At the same time, it is simple enough to simulate thousands of cells with reasonable computational effort. Experimental sheep MSC cultures were monitored under standard conditions. Automated image analysis was used to determine the location and orientation of individual cells. Our simulations quantitatively reproduced the observed growth dynamics and cell-cell alignment assuming cell density-dependent proliferation, migration, and morphology. In addition to cell growth on plain substrates, our model captured cell alignment on micro-structured surfaces. We propose a specific surface micro-structure that, according to our simulations, can substantially enlarge cell culture harvest. The 'tool box' of cell migratory behavior newly introduced in this study significantly enhances the bandwidth of IBM. Our approach is capable of accommodating individual cell behavior and collective cell dynamics of a variety of cell types and tissues in computational systems biology.

  16. Carbon export fluxes in the Southern Ocean: results from inverse modeling and comparison with satellite-based estimates

    Science.gov (United States)

    Schlitzer, Reiner

    The use of dissolved nutrients and carbon for photosynthesis in the euphotic zone and the subsequent downward transport of particulate and dissolved organic material strongly affect carbon concentrations in surface water and thus the air-sea exchange of CO₂. Efforts to quantify the downward carbon flux for the whole ocean or on basin-scales are hampered by the sparseness of direct productivity or flux measurements. Here, a global ocean circulation, biogeochemical model is used to determine rates of export production and vertical carbon fluxes in the Southern Ocean. The model exploits the existing large sets of hydrographic, oxygen, nutrient and carbon data that contain information on the underlying biogeochemical processes. The model is fitted to the data by systematically varying circulation, air-sea fluxes, production, and remineralization rates simultaneously. Use of the adjoint method yields model property simulations that are in very good agreement with measurements. In the model, the total integrated export flux of particulate organic matter necessary for the realistic reproduction of nutrient data is significantly larger than export estimates derived from primary productivity maps. Of the 10,000 TgC yr⁻¹ (10 GtC yr⁻¹) required globally, the Southern Ocean south of 30°S contributes about 3000 TgC yr⁻¹ (33%), most of it occurring in a zonal belt along the Antarctic Circumpolar Current and in the Peru, Chile and Namibia coastal upwelling regions. The export flux of POC for the area south of 50°S amounts to 1000±210 TgC yr⁻¹, and the particle flux in 1000 m for the same area is 115±20 TgC yr⁻¹. Unlike for the global ocean, the contribution of the downward flux of dissolved organic carbon is significant in the Southern Ocean in the top 500 m of the water column. Comparison with satellite-based productivity estimates (CZCS and SeaWiFS) shows a relatively good agreement over most of the ocean except for the Southern Ocean south of 50°S, where the model

  17. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network: switching a linear, series network element and measuring the resultant harmonic increments are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch, as for example a transmission line. Both methods require that harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.

  18. Result-Based Public Governance

    DEFF Research Database (Denmark)

    Boll, Karen

    Within the public sector, many institutions are steered either by governance by targets or by result-based governance. The former sets up quantitative internal production targets, while the latter advocates that production is planned according to outcomes, defined as institution-produced effects on individuals or businesses in society; effects which are often produced by ‘nudging’ the citizenry in a certain direction. With point of departure in these two governance systems, the paper explores a case of controversial inspection of businesses’ negative VAT accounts, and it describes the performance measure that guides the inspectors’ inspection (or nudging) of the businesses. The analysis shows that although a result-based governance system is advocated on a strategic level, performance measures which are not ‘result-based’ are developed and used in the daily coordination of work.

  19. A cellular automaton based model simulating HVAC fluid and heat transport in a building. Modeling approach and comparison with experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Saiz, A. [Department of Applied Mathematics, Polytechnic University of Valencia, ETSGE School, Camino de Vera s/n, 46022 Valencia (Spain); Urchueguia, J.F. [Department of Applied Physics, Polytechnic University of Valencia, ETSII School, Camino de Vera s/n, 46022 Valencia (Spain); Martos, J. [Superior Technical School of Engineering, Department of Electronic Engineering, University of Valencia, Vicente Andres Estelles s/n, Burjassot 46100, Valencia (Spain)

    2010-09-15

    A discrete model characterizing heat and fluid flow in connection with thermal fluxes in a building is described and tested against experiment in this contribution. The model, based on a cellular automaton approach, relies on a set of a few quite simple rules and parameters in order to simulate the dynamic evolution of temperatures and energy flows in any water- or brine-based thermal energy distribution network in a building or system. Using an easy-to-record input, such as the instantaneous electrical power demand of the heating or cooling system, our model predicts time-varying temperatures at characteristic spots and the related enthalpy flows, whose simulation usually requires heavy computational tools and detailed knowledge of the network elements. As a particular example, we have applied our model to simulate an existing fan-coil-based hydronic heating system driven by a geothermal heat pump. The model output coincides well with the experimental temperature and thermal energy records. (author)

  20. Calculation of limits for significant unidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2015-01-01

    Interpreting serial biomarker results is conventionally based on the comparison of two values; applying this concept on more than two results will increase the number of false-positive results. Therefore, a simple method is needed to interpret the significance of a difference when all available serial biomarker results are considered. METHODS: A computer simulation model using Excel was developed. Based on 10...
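For context, the conventional reference change value (RCV) for comparing exactly two serial results can be computed as below; the paper's simulation-based extension to more than two results is not reproduced here, and the CV values used are illustrative assumptions:

```python
# Hedged sketch: the conventional two-sided reference change value for
# the difference between TWO serial results, RCV = 2**0.5 * z *
# sqrt(CV_A**2 + CV_I**2), with analytical and within-subject biological
# variation expressed as percentage CVs.
import math

def reference_change_value(cv_analytical, cv_within, z=1.96):
    """RCV (%) at the two-sided 95% level (z = 1.96)."""
    return math.sqrt(2.0) * z * math.sqrt(cv_analytical ** 2 + cv_within ** 2)

# Example with assumed CVs of 3% (analytical) and 4% (within-subject).
print(round(reference_change_value(3.0, 4.0), 1))  # 13.9 (%)
```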

  1. DESIGN OF LOW CYTOTOXICITY DIARYLANILINE DERIVATIVES BASED ON QSAR RESULTS: AN APPLICATION OF ARTIFICIAL NEURAL NETWORK MODELLING

    Directory of Open Access Journals (Sweden)

    Ihsanul Arief

    2016-11-01

    A study of the cytotoxicity of diarylaniline derivatives using quantitative structure-activity relationships (QSAR) has been carried out. The structures and cytotoxicities of diarylaniline derivatives were obtained from the literature. Calculation of molecular and electronic parameters was conducted using the Austin Model 1 (AM1), Parameterized Model 3 (PM3), Hartree-Fock (HF), and density functional theory (DFT) methods. Artificial neural network (ANN) analysis was used to produce the best equation, with a configuration of input data-hidden node-output data = 5-8-1, r² = 0.913 and PRESS = 0.069. The best equation was then used to design and predict new diarylaniline derivatives. The result shows that compound N1-(4′-Cyanophenyl-5-(4″-cyanovinyl-2″,6″-dimethyl-phenoxy-4-dimethylether benzene-1,2-diamine is the best proposed compound, with a cytotoxicity value (CC50) of 93.037 μM.
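A minimal sketch of the reported 5-8-1 network topology (5 descriptor inputs, 8 hidden nodes, 1 cytotoxicity output) is shown below; the weights are random placeholders, not the fitted QSAR model, and the sigmoid activation is an assumption:

```python
# Sketch of a 5-8-1 feed-forward network as reported in the abstract.
# Weights, biases, inputs and the sigmoid activation are illustrative
# assumptions; this is not the paper's fitted QSAR equation.
import math
import random

def mlp_5_8_1(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 5-8-1 multilayer perceptron with sigmoid units."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(w * xi for w, xi in zip(w_row, x)) + b)
              for w_row, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

random.seed(0)  # reproducible placeholder weights
w_hidden = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(8)]
b_hidden = [random.uniform(-1, 1) for _ in range(8)]
w_out = [random.uniform(-1, 1) for _ in range(8)]
b_out = 0.0

y = mlp_5_8_1([0.1, 0.2, 0.3, 0.4, 0.5], w_hidden, b_hidden, w_out, b_out)
print(y)  # a single predicted value in (0, 1)
```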

  2. Blast-cooling of beef-in-sauce catering meals: numerical results based on a dynamic zero-order model

    Directory of Open Access Journals (Sweden)

    Jose A. Rabi

    2014-10-01

    Beef-in-sauce catering meals under blast-cooling have been investigated in a research project which aims at quantitative HACCP (hazard analysis critical control point). In view of its prospective coupling to a predictive microbiology model proposed in the project, zero-order spatial dependence has proved to suitably predict meal temperatures in response to temperature variations in the cooling air. This approach has modelled heat transfer rates via the a priori unknown convective coefficient hc, which is allowed to vary due to uncertainty and variability in the actual modus operandi of the chosen case-study hospital kitchen. Implemented in MS Excel®, the numerical procedure has successfully combined the 4th-order Runge-Kutta method, to solve the governing equation, with non-linear optimization, via the built-in Solver, to determine the coefficient hc. In this work, the coefficient hc was assessed for 119 distinct recently-cooked meal samples whose temperature-time profiles were recorded in situ after 17 technical visits to the hospital kitchen over a year. The average value and standard deviation results were hc = 12.0 ± 4.1 W m⁻² K⁻¹, whilst the lowest values (associated with the worst cooling scenarios) were about hc ≈ 6.0 W m⁻² K⁻¹.
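The numerical procedure this record describes can be sketched as follows: a 4th-order Runge-Kutta solver for a lumped (zero-order) cooling equation dT/dt = -(hc·A/(m·c))·(T - T_air), combined with a coarse grid search standing in for Excel's Solver. All geometry and thermal values are illustrative assumptions:

```python
# Sketch of a zero-order (lumped) blast-cooling model. The surface area
# A, mass m, specific heat c and air temperature T_air are assumed
# values; the paper fits hc with Excel's built-in Solver, emulated here
# by a coarse grid search over candidate hc values.

A, m, c, T_air = 0.05, 0.5, 3500.0, 2.0  # m^2, kg, J/(kg K), degC

def rk4_cool(T0, hc, dt=10.0, steps=360):
    """Integrate dT/dt = -(hc*A/(m*c))*(T - T_air) with 4th-order RK."""
    k = hc * A / (m * c)
    f = lambda T: -k * (T - T_air)
    T, history = T0, [T0]
    for _ in range(steps):
        k1 = f(T)
        k2 = f(T + 0.5 * dt * k1)
        k3 = f(T + 0.5 * dt * k2)
        k4 = f(T + dt * k3)
        T += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        history.append(T)
    return history

def fit_hc(measured, T0, candidates):
    """Pick the hc minimising the squared error against measured data."""
    return min(candidates,
               key=lambda hc: sum((a - b) ** 2
                                  for a, b in zip(rk4_cool(T0, hc), measured)))

# Synthetic "measurements" generated with hc = 12 W m^-2 K^-1.
measured = rk4_cool(70.0, 12.0)
best = fit_hc(measured, 70.0, [hc / 2 for hc in range(8, 41)])  # 4.0 .. 20.0
print(best)  # 12.0
```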

  3. Assessment of offshore wind power potential in the Aegean and Ionian Seas based on high-resolution hindcast model results

    Directory of Open Access Journals (Sweden)

    Takvor Soukissian

    2017-03-01

    In this study, long-term wind data obtained from high-resolution hindcast simulations are used to assess the offshore wind power potential in the Aegean and Ionian Seas and to provide wind climate and wind power potential characteristics at selected locations where offshore wind farms are at the concept/planning phase. After ensuring good model performance through detailed validation against buoy measurements, offshore wind speed and wind direction at 10 m above sea level are statistically analyzed on the annual and seasonal time scales. The spatial distributions of the mean wind speed and wind direction are provided at the appropriate time scales, along with the mean annual and the inter-annual variability; these statistical quantities are useful in the offshore wind energy sector for the preliminary identification of favorable sites for exploitation of offshore wind energy. Moreover, the offshore wind power potential and its variability are also estimated at 80 m height above sea level. The obtained results reveal that there are specific areas in the central and the eastern Aegean Sea that combine intense annual winds with low variability; the annual offshore wind power potential in these areas reaches values close to 900 W/m², suggesting that a detailed assessment of offshore wind energy would be worthwhile and could lead to attractive investments. Furthermore, as a rough estimate of the availability factor, the equiprobable contours of the event [4 m/s ≤ wind speed ≤ 25 m/s] are also estimated and presented. The selected lower and upper bounds of wind speed correspond to typical cut-in and cut-out wind speed thresholds, respectively, for commercial offshore wind turbines. Finally, for seven offshore wind farms that are at the concept/planning phase, the main wind climate and wind power density characteristics are also provided.
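The two quantities this record relies on can be illustrated with a short sketch: mean wind power density (0.5·ρ·v³, averaged over a wind-speed series) and a rough availability factor, the fraction of records between the cut-in and cut-out thresholds. The wind-speed series below is invented for illustration:

```python
# Hedged sketch of wind power density and availability factor. The
# wind-speed series is fabricated for illustration; a real assessment
# would use a long hindcast time series at hub height.

RHO = 1.225  # standard air density, kg/m^3 (assumed constant)

def wind_power_density(speeds):
    """Mean wind power density (W/m^2) from a speed series in m/s."""
    return 0.5 * RHO * sum(v ** 3 for v in speeds) / len(speeds)

def availability_factor(speeds, cut_in=4.0, cut_out=25.0):
    """Fraction of records inside the turbine operating range."""
    ok = sum(1 for v in speeds if cut_in <= v <= cut_out)
    return ok / len(speeds)

speeds = [3.0, 5.5, 8.0, 11.0, 9.5, 2.0, 14.0, 7.0]
print(wind_power_density(speeds))
print(availability_factor(speeds))  # 6 of 8 records in range -> 0.75
```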

  4. Results and Lessons Learned from a Coupled Social and Physical Hydrology Model: Testing Alternative Water Management Policies and Institutional Structures Using Agent-Based Modeling and Regional Hydrology

    Science.gov (United States)

    Murphy, J.; Lammers, R. B.; Prousevitch, A.; Ozik, J.; Altaweel, M.; Collier, N. T.; Kliskey, A. D.; Alessa, L.

    2015-12-01

    Water Management in the U.S. Southwest is under increasing scrutiny as many areas endure persistent drought. The impact of these prolonged dry conditions is a product of regional climate and hydrological conditions, but also of a highly engineered water management infrastructure and a complex web of social arrangements whereby water is allocated, shared, exchanged, used, re-used, and finally consumed. We coupled an agent-based model with a regional hydrological model to understand the dynamics in one richly studied and highly populous area: southern Arizona, U.S.A., including metropolitan Phoenix and Tucson. There, multiple management entities representing an array of municipalities and other water providers and customers, including private companies and Native American tribes, are enmeshed in a complex legal and economic context in which water is bought, leased, banked, and exchanged in a variety of ways and on multiple temporal and physical scales. A recurrent question in the literature of adaptive management is the impact of management structure on overall system performance. To explore this, we constructed an agent-based model to capture this social complexity, and coupled this with a physical hydrological model that we used to drive the system under a variety of water stress scenarios and to assess the regional impact of the social system's performance. We report the outcomes of ensembles of runs in which varieties of alternative policy constraints and management strategies are considered. We hope to contribute to policy discussions in this area and in connected, legislatively similar areas (such as California) as current conditions change and existing legal and policy structures are revised. Additionally, we comment on the challenges of integrating models that ostensibly are in different domains (physical and social) but that independently represent a system in which physical processes and human actions are closely intertwined and difficult to disentangle.

  5. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver;

    2007-01-01

    The first step of handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far, the possibility of and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been limited. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic documentation of hospital-based health promotion activities.

  6. An evaluation of the influence of measurement geometry on the uncertainties of photometric model results based on the laboratory measurements of particulate surfaces

    Science.gov (United States)

    Lv, Yunfeng; Sun, Zhongqiu

    2017-01-01

    Sunlight reflected by particulate surfaces carries important information about their physical properties. Modeling the reflectance of different types of particulate samples is an attractive field of study, so estimating the favorable measurement geometry for accurate inversion of photometric model parameters is necessary. This research examines the distribution of the bidirectional reflectance factor (BRF) with different particle sizes using multi-angular reflectance measurements. Two types of particulate samples (one with low reflectance and the other with moderate reflectance) with particle sizes of 0.3, 0.45 and 0.9 mm were measured over a wide viewing range under the assumption of left-to-right symmetry of the BRF. Based on these measurements, we computed the reflectance of particulate surfaces with a photometric model and analyzed the influence of measurement geometry (different combinations of incident zenith angle, viewing zenith angle and azimuth angle) on the inverted parameters and on the results modeled by the best-fit parameters. The results show that when measurements in a single azimuth (including the principal plane) are used to invert the model parameters, the difference between the modeled results and measured results will exceed the reflectance change caused by the samples' particle size; this difference is also found when we used the combined measurements at two different incident zenith angles. Provided the measurements in the principal plane are included, an increase in the number of azimuth angles will improve the match between the modeled results and measurements. Our results also confirm that the single-scattering albedo is the only model parameter that could be empirically used to determine the particle sizes of our samples over a wide range of measurement directions. This study proposes several favorable combinations of measurement geometry and also appears to provide a promising empirical reference for particulate surfaces similar to ours in future laboratory studies.

  7. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamson, S. [Wisconsin Univ., Madison, WI (United States); Bender, M.A. [Brookhaven National Lab., Upton, NY (United States); Boecker, B.B.; Scott, B.R. [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States). Inhalation Toxicology Research Inst.; Gilbert, E.S. [Pacific Northwest Lab., Richland, WA (United States)

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985 and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation as well as acute and chronic exposure to low-LET beta and gamma radiations is a reasonable extension of the health effects model.

  8. First Results of Using the Process-based Model PROMAB-GIS for Runoff and Bedload Transport Estimation in the Lainbach Torrent Catchment Area (Benediktbeuern, Germany)

    Science.gov (United States)

    Rinderer, M.; Jenewein, S.; Ploner, A.; Sönser, T.

    2003-04-01

    As growing damage potential makes society more and more vulnerable to natural hazards, the pressure on the official authorities responsible for guaranteeing public safety is increasing rapidly. Modern computer technology, e.g. Geographical Information Systems (GIS), can provide remarkable help in assessing the risks resulting from natural hazards. The modelling in PROMAB-GIS, a user-friendly, ESRI ArcView-based software tool for assessing runoff and bedload transport in torrent catchments, is strongly based on interdisciplinary, process-orientated field investigations. This paper presents results of the application of PROMAB-GIS to estimate the runoff and bedload transport potential of the Lainbach catchment area, which has repeatedly been affected by heavy rain storms triggering remarkable events. The operational steps needed to gain process-orientated, reproducible results for assessing design events in watersheds are highlighted. A key issue in this context is the need for detailed field investigation of the geo-, bio- and hydro-inventory of a catchment area. The second part of the paper presents the model results for design events. The data of the event which caused severe damage in June 1990 provide a perfect basis for the evaluation of the model. The results show the potential of PROMAB-GIS for assessing runoff and bedload transport in alpine torrent systems.

  9. Data bases for LDEF results

    Science.gov (United States)

    Bohnhoff-Hlavacek, Gail

    1993-01-01

    The Long Duration Exposure Facility (LDEF) carried 57 experiments and 10,000 specimens for some 200 LDEF experiment investigators. The external surface of LDEF had a large variety of materials exposed to the space environment which were tested preflight, during flight, and post flight. Thermal blankets, optical materials, thermal control paints, aluminum, and composites are among the materials flown. The investigations have produced an abundance of analysis results. One of the responsibilities of the Boeing Support Contract, Materials and Systems Special Investigation Group, is to collate and compile that information in an organized fashion. The databases developed at Boeing to accomplish this task are described.

  10. A new methodology for PBL height estimations based on lidar depolarization measurements: analysis and comparison against MWR and WRF model-based results

    Science.gov (United States)

    Bravo-Aranda, Juan Antonio; de Arruda Moreira, Gregori; Navas-Guzmán, Francisco; José Granados-Muñoz, María; Guerrero-Rascado, Juan Luis; Pozo-Vázquez, David; Arbizu-Barrena, Clara; José Olmo Reyes, Francisco; Mallet, Marc; Alados Arboledas, Lucas

    2017-06-01

    The automatic and non-supervised detection of the planetary boundary layer height (zPBL) by means of lidar measurements has been widely investigated in recent years. Despite considerable advances, the experimental detection still presents difficulties, such as advected aerosol layers coupled to the planetary boundary layer (PBL), which usually produce an overestimation of the zPBL. To improve the detection of the zPBL in these complex atmospheric situations, we present a new algorithm, called POLARIS (PBL height estimation based on lidar depolarisation). POLARIS applies the wavelet covariance transform (WCT) to the range-corrected signal (RCS) and to the perpendicular-to-parallel signal ratio (δ) profiles. Different candidates for the zPBL are chosen, and the selection is made based on the WCT applied to the RCS and δ. We use two ChArMEx (Chemistry-Aerosol Mediterranean Experiment) campaigns with lidar and microwave radiometer (MWR) measurements, conducted in 2012 and 2013, for the adjustment and validation of POLARIS. POLARIS improves the zPBL detection compared to previous methods based on lidar measurements, especially when an aerosol layer is coupled to the PBL. We also compare the zPBL provided by the Weather Research and Forecasting (WRF) numerical weather prediction (NWP) model with the zPBL determined with POLARIS and the MWR under Saharan dust events. WRF underestimates the zPBL during daytime but agrees with the MWR during night-time. The zPBL provided by WRF shows a better temporal evolution compared to the MWR during daytime than during night-time.
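The detection step described above — a wavelet covariance transform applied to a range-corrected lidar profile, with the PBL top taken where the transform peaks — can be sketched in a few lines. This is a minimal illustration on a synthetic profile: the Haar wavelet form is the standard one, but the dilation value and the signal shape are invented here, not POLARIS settings.

```python
import numpy as np

def haar_wct(profile, z, dilation):
    """Wavelet covariance transform of a lidar profile with a Haar wavelet.
    W(b) is large where the signal drops sharply with height; for an
    aerosol-laden boundary layer, the peak of W marks the PBL top."""
    dz = z[1] - z[0]
    half = dilation / 2.0
    w = np.full(len(profile), np.nan)
    for i, b in enumerate(z):
        lower = (z >= b - half) & (z < b)    # wavelet value +1 below b
        upper = (z >= b) & (z <= b + half)   # wavelet value -1 above b
        if lower.any() and upper.any():
            w[i] = (profile[lower].sum() - profile[upper].sum()) * dz / dilation
    return w

# synthetic range-corrected signal: aerosol load decaying sharply near 1500 m
z = np.arange(0.0, 3000.0, 15.0)
rcs = 1.0 / (1.0 + np.exp((z - 1500.0) / 60.0))
w = haar_wct(rcs, z, dilation=300.0)
z_pbl = z[np.nanargmax(w)]
```

In POLARIS the same transform is evaluated on both the RCS and the depolarisation-ratio profile, and the candidate heights from each are reconciled; the sketch shows only the single-profile WCT core.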

  11. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Science.gov (United States)

    Connor, Hyunju Kim; Zesta, Eftyhia; Fedrizzi, Mariangel; Shi, Yong; Raeder, Joachim; Codrescu, Mihail V.; Fuller-Rowell, Tim J.

    2016-06-01

    The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much more advanced model than CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset, which in turn

  12. Performance results of HESP physical model

    Science.gov (United States)

    Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert

    2017-02-01

    As a continuation of the published work on the model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the accuracy of the glass data, and the handling of discrepancies. Simulations show that discrepancies in the glass data can be identified but not quantified, so having accurate glass data, which can be obtained from the glass manufacturers, is important. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. The accuracy of the model predictions, a comparison of its wavelength calibration against conventional empirical fitting, the behaviour of the open parameters in optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres are discussed. The optimized model is able to predict with high accuracy the drifts in the spectrum caused by environmental fluctuations. It is also observed that the pattern of spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We also discuss the possible science cases where the model can contribute.
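The "conventional empirical fitting" that the model-based calibration is compared against amounts to a low-order polynomial fit from line positions to known wavelengths, which also lets one track a uniform pixel drift. A minimal sketch with synthetic data: the pixel positions and the cubic dispersion coefficients below are invented for illustration, not HESP values.

```python
import numpy as np

# synthetic ThAr line list for one echelle order: measured pixel positions and
# the wavelengths (nm) they would have under an assumed cubic dispersion relation
pixels = np.array([120.0, 480.0, 910.0, 1450.0, 2010.0, 2600.0, 3150.0, 3900.0])
true_disp = [500.0, 3.4e-3, 1.2e-7, -5.0e-12]   # a0 + a1*p + a2*p^2 + a3*p^3
waves = sum(c * pixels**i for i, c in enumerate(true_disp))

# conventional empirical calibration: least-squares polynomial fit per order
coeffs = np.polyfit(pixels, waves, deg=3)
rms_nm = np.sqrt(np.mean((np.polyval(coeffs, pixels) - waves) ** 2))

# a uniform instrumental drift of 0.5 px shifts each line's wavelength by
# roughly 0.5 times the local dispersion, which the fitted relation tracks
drift_nm = np.polyval(coeffs, pixels + 0.5) - np.polyval(coeffs, pixels)
```

A physical model, by contrast, constrains the dispersion relation through optical parameters (hence the sensitivity to glass data accuracy noted above), rather than fitting free polynomial coefficients per order.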

  13. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    Full Text Available In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to provide detailed information to a numerical fog model. As part of additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operated at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently of the model-related aims of the campaign, the MWRP observations were used to study their suitability for operational meteorological networks. Over the past decade a growing number of MWRPs have been introduced, and a user community (MWRnet) was established to encourage activities directed at the set-up of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any application in climatology and numerical weather forecasting. In practice, however, systematic temperature and humidity differences (biases) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by the application of a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at any site, even if no radiosondes are available. 
The motivation of this work is directed to a verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD for the calculation

  14. Complementing data-driven and physically-based approaches for predictive morphologic modeling: Results and implication from the Red River Basin, Vietnam

    Science.gov (United States)

    Schmitt, R. J.; Bernardi, D.; Bizzi, S.; Castelletti, A.; Soncini-Sessa, R.

    2013-12-01

    During the last 30 years, the delta of the Red River (Song Hong) in northern Vietnam has experienced severe morphologic degradation, which impacts economic activities and endangers region-wide livelihoods. Rapidly progressing river bed incision, for example, threatens the irrigation of the delta's paddy rice crops, which constitute 20% of Vietnam's annual rice production. The morphologic alteration is related to a drastically changed sediment balance caused by major upstream impoundments, sediment mining and land use changes, further aggravated by changing hydro-meteorological conditions. Despite the severe impacts, river morphology has so far not been included in current efforts to optimize basin-wide water resource planning, for lack of suitable, not overly resource-demanding modeling strategies. This paper assesses the suitability of data-driven models to provide insights into complex hydromorphologic processes and to complement and enrich physically-based modeling strategies: to identify key drivers of morphological change while evaluating the impacts of future socio-economic, management and climate scenarios on river morphology and the resulting effects on key social needs (e.g. water supply, energy production and flood mitigation). The most relevant drivers and time scales for the considered processes (e.g. incision) - from days to decades - were identified from hydrologic and sedimentologic time series using a feature ranking algorithm based on random trees. The feature ranking pointed out bimodal response characteristics, with important contributions from long-to-medium (5 - 15 yrs.) and rather short (10 d - 6 months) timescales. An artificial neural network (ANN), built from the identified variables, subsequently quantified in detail how these temporal components control long-term trends, inter-seasonal fluctuations and day-to-day variations in morphologic processes. Whereas the general trajectory of incision relates, for example, to the overall regional
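The feature-ranking idea above — screening candidate drivers aggregated at different time scales against a morphologic response — can be illustrated with a much simpler stand-in. The record's method uses random trees and an ANN; the sketch below ranks moving-average features by plain correlation on synthetic series, so every series, window and coefficient here is invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3000
# synthetic daily discharge anomaly: slow random-walk component plus white noise
q = rng.normal(0.0, 1.0, n).cumsum() * 0.01 + rng.normal(0.0, 1.0, n)

def moving_avg(x, w):
    """Trailing moving average over a window of w days."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

windows = {"10d": 10, "60d": 60, "1yr": 365, "5yr": 1825}
L = n - max(windows.values()) + 1                      # common aligned length
feats = {k: moving_avg(q, w)[-L:] for k, w in windows.items()}

# synthetic incision rate: driven by the 1-yr discharge average plus small noise
incision = 0.8 * feats["1yr"] + 0.01 * rng.normal(0.0, 1.0, L)

# rank candidate time scales by absolute correlation with the response
scores = {k: abs(np.corrcoef(f, incision)[0, 1]) for k, f in feats.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

A tree-based ranker additionally captures nonlinear and interacting effects, which is why the study prefers it; the point of the sketch is only the workflow of building multi-scale features and scoring them against the morphologic signal.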

  15. Ocean EcoSystem Modelling Based on Observations from Satellite and In-Situ Data: First Results from the OSMOSIS Project

    Science.gov (United States)

    Rio, M.-H.; Buongiorno-Nardelli, B.; Calmettes, B.; Conchon, A.; Droghei, R.; Guinehut, S.; Larnicol, G.; Lehodey, P.; Matthieu, P. P.; Mulet, S.; Santoleri, R.; Senina, I.; Stum, J.; Verbrugge, N.

    2015-12-01

    Micronekton organisms are both the prey of large ocean predators and themselves predators of the eggs and larvae of many species, including most fishes. The micronekton biomass concentration is therefore a key explanatory variable, usually missing in fish population and ecosystem models, for understanding the individual behaviour and population dynamics of large oceanic predators. In that context, the OSMOSIS (Ocean ecoSystem Modelling based on Observations from Satellite and In-Situ data) ESA project aims at demonstrating the feasibility of, and prototyping, an integrated system going from the synergistic use of many different variables measured from space to the modelling of the distribution of micronektonic organisms. In this paper, we present how data from CRYOSAT, GOCE, SMOS and ENVISAT, together with other non-ESA satellite and in-situ data, can be merged to provide the key variables needed as input to the micronekton model. First results from the optimization of the micronekton model are also presented and discussed.

  16. Agent-Based Modelling of Agricultural Water Abstraction in Response to Climate, Policy, and Demand Changes: Results from East Anglia, UK

    Science.gov (United States)

    Swinscoe, T. H. A.; Knoeri, C.; Fleskens, L.; Barrett, J.

    2014-12-01

    Freshwater is a vital natural resource for multiple needs, such as drinking water for the public, industrial processes, hydropower for energy companies, and irrigation for agriculture. In the UK, crop production is largest in East Anglia, while at the same time the region is also the driest, with average annual rainfall between 560 and 720 mm (1971 to 2000). Many water catchments of East Anglia are reported as over-licensed or over-abstracted. Freshwater available for agricultural irrigation abstraction in this region is therefore becoming both increasingly scarce due to competing demands, and increasingly variable and uncertain due to climate and policy changes. It is vital for water users and policy makers to understand how these factors will affect individual abstractors and water resource management at the system level. We present the first results of an agent-based model that captures the complexity of this system as individual abstractors interact, learn and adapt to these internal and external changes. The purpose of this model is to simulate which patterns of water resource management emerge at the system level from local interactions, adaptations and behaviours, and which policies lead to a sustainable water resource management system. The model is based on an irrigation abstractor typology derived from a survey in the study area, capturing individual behavioural intentions under a range of water availability scenarios, in addition to farm attributes and demographics. Regional climate change scenarios, current and new abstraction licence reforms by the UK regulator, such as water trading and water shares, and estimated demand increases from other sectors were used as additional input data. Findings from the integrated model provide new understanding of the patterns of water resource management likely to emerge at the system level.

  17. Search Result Diversification Based on Query Facets

    Institute of Scientific and Technical Information of China (English)

    胡莎; 窦志成; 王晓捷; 继荣

    2015-01-01

    In search engines, different users may search for different information by issuing the same query. To satisfy more users with limited search results, search result diversification re-ranks the results to cover as many user intents as possible. Most existing intent-aware diversification algorithms recognize user intents as subtopics, each of which is usually a word, a phrase, or a piece of description. In this paper, we leverage query facets to understand user intents in diversification, where each facet contains a group of words or phrases that explain an underlying intent of a query. We generate subtopics based on query facets and propose faceted diversification approaches. Experimental results on the public TREC 2009 dataset show that our faceted approaches outperform state-of-the-art diversification models.
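The facet-based diversification described above can be caricatured as a greedy re-ranking that trades off a result's relevance against the number of not-yet-covered facets it contributes. This is a hedged sketch of the general idea, not the paper's algorithm: the document ids, facet names, relevance scores and the 0.5 trade-off weight are all invented.

```python
# each candidate result: (doc_id, relevance score, set of query facets it covers)
results = [
    ("d1", 0.9, {"price"}),
    ("d2", 0.8, {"price", "reviews"}),
    ("d3", 0.7, {"specs"}),
    ("d4", 0.6, {"price"}),
]

def diversify(results, k, lam=0.5):
    """Greedy re-ranking: at each step pick the result maximizing a mix of
    its relevance and the number of new (uncovered) facets it adds."""
    selected, covered = [], set()
    pool = list(results)
    while pool and len(selected) < k:
        def gain(r):
            _, rel, facets = r
            return lam * rel + (1 - lam) * len(facets - covered)
        best = max(pool, key=gain)
        pool.remove(best)
        selected.append(best[0])
        covered |= best[2]
    return selected

ranking = diversify(results, k=3)  # d2 first (two facets), then d3 (new facet)
```

Note how d3, though less relevant than d1, is promoted because it covers the otherwise-missed "specs" intent — exactly the behaviour intent-aware diversification is after.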

  18. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    Full Text Available The magnetosphere is a major source of energy for the Earth’s ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much more advanced model than CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  19. Revisiting Runoff Model Calibration: Airborne Snow Observatory Results Allow Improved Modeling Results

    Science.gov (United States)

    McGurk, B. J.; Painter, T. H.

    2014-12-01

    Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpack have been available to compare with the modeled snowpack, but the comparability of results from a snow sensor or course with model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness of a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration, based on 30 years of inflow to Hetch Hetchy, produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements are described, and comparisons of observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin are shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.

  20. The Cartridge Theory: a description of the functioning of horizontal subsurface flow constructed wetlands for wastewater treatment, based on modelling results.

    Science.gov (United States)

    Samsó, Roger; García, Joan

    2014-03-01

    Despite the fact that horizontal subsurface flow constructed wetlands have been in operation for several decades, there is still no clear understanding of some of their most basic internal functioning patterns. To fill this knowledge gap, in this paper we present what we call "The Cartridge Theory". This theory was derived from simulation results obtained with the BIO_PORE model and explains the functioning of urban wastewater treatment wetlands based on the interaction between bacterial communities and the accumulated solids leading to clogging. We start by discussing some changes applied to the biokinetic model implemented in BIO_PORE (CWM1) so that the growth of bacterial communities is consistent with a well-known population dynamics model. This discussion, combined with simulation results for a pilot wetland system, led to the introduction of "The Cartridge Theory", which states that the granular media of horizontal subsurface flow wetlands can be likened to a generic cartridge which is progressively consumed (clogged) with inert solids from inlet to outlet. Simulations also revealed that bacterial communities are unevenly distributed within the system and that their location is not static but changes over time, moving towards the outlet as a consequence of the progressive clogging of the granular media. According to these findings, the life-span of a constructed wetland corresponds to the time when bacterial communities have been pushed so far towards the outlet that their biomass is no longer sufficient to remove the desired proportion of the influent pollutants. Copyright © 2013 Elsevier B.V. All rights reserved.
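The qualitative mechanism behind the Cartridge Theory — logistic bacterial growth under a carrying capacity that shrinks as inlet-first clogging consumes the granular media, so the biomass maximum migrates towards the outlet — can be caricatured in a toy 1-D cell cascade. All parameter values below are invented for illustration; this is not the BIO_PORE model or CWM1.

```python
import numpy as np

n_cells, days = 10, 300
K0, r, load = 1.0, 0.2, 0.02   # pore capacity per cell, growth rate, daily solids load
solids = np.zeros(n_cells)     # accumulated inert solids per cell (inlet = cell 0)
biomass = np.full(n_cells, 0.01)

for _ in range(days):
    remaining = load
    for i in range(n_cells):
        free = max(K0 - solids[i], 0.0)
        trapped = remaining * (free / K0)   # cells trap solids while pore space remains
        solids[i] += trapped
        remaining -= trapped                # the rest moves on towards the outlet
        K = max(K0 - solids[i], 1e-9)       # clogging shrinks the carrying capacity
        biomass[i] += r * biomass[i] * (1.0 - biomass[i] / K)  # logistic growth
        biomass[i] = min(biomass[i], K)

peak_cell = int(np.argmax(biomass))  # the biomass maximum sits towards the outlet
```

Running the loop longer pushes the biomass peak (and eventually the treatment capacity) further downstream, which is the life-span argument the theory makes.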

  1. The dust environment of comet 67P/Churyumov-Gerasimenko: results from Monte Carlo dust tail modelling applied to a large ground-based observation data set

    Science.gov (United States)

    Moreno, Fernando; Muñoz, Olga; Gutiérrez, Pedro J.; Lara, Luisa M.; Snodgrass, Colin; Lin, Zhong Y.; Della Corte, Vincenzo; Rotundi, Alessandra; Yagi, Masafumi

    2017-07-01

    We present an extensive data set of ground-based observations and models of the dust environment of comet 67P/Churyumov-Gerasimenko covering a large portion of the orbital arc from about 4.5 au pre-perihelion through 3.0 au post-perihelion, acquired during the current orbit. In addition, we have also applied the model to a dust trail image acquired during this orbit, as well as to dust trail observations obtained during previous orbits, in both the visible and the infrared. The results of the Monte Carlo modelling of the dust tail and trail data are generally consistent with the in situ results reported so far by the Rosetta instruments Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) and Grain Impact Analyser and Dust Accumulator (GIADA). We found the comet nucleus already active at 4.5 au pre-perihelion, with a dust production rate increasing up to ˜3000 kg s-1 some 20 d after perihelion passage. The dust size distribution at sizes smaller than r = 1 mm is linked to the nucleus seasons, being described by a power law of index -3.0 during the comet nucleus southern hemisphere winter but becoming considerably steeper, with values between -3.6 and -4.3, during the nucleus southern hemisphere summer, which includes perihelion passage (from about 1.7 au inbound to 2.4 au outbound). This agrees with the increase in the steepness of the dust size distribution found from GIADA measurements at perihelion, showing a power-law index of -3.7. The size distribution at sizes larger than 1 mm for the current orbit is set to a power law of index -3.6, which is near the average value of in situ measurements by OSIRIS on large particles. However, in order to fit the trail data acquired during past orbits previous to the 2009 perihelion passage, a steeper power-law index of -4.1 has been set at those dates, in agreement with previous trail modelling. 
The particle sizes are set at a minimum of r = 10 μm, and a maximum size, which increases with decreasing heliocentric
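The effect of the steepening size-distribution index quoted above can be made concrete: for a differential distribution n(r) ∝ r^α, the mass carried below some radius follows from integrating r³·n(r). The limits and split radius below follow the record's quoted sizes; the function itself is a generic power-law calculation for illustration, not the authors' Monte Carlo code.

```python
import numpy as np

def mass_fraction_below(r_split, r_min, r_max, alpha):
    """Fraction of total dust mass carried by grains with r < r_split, for a
    differential size distribution n(r) ~ r**alpha (grain mass ~ r**3)."""
    p = alpha + 4.0  # exponent of the integrated mass term r**3 * r**alpha
    def integral(a, b):
        return np.log(b / a) if abs(p) < 1e-12 else (b**p - a**p) / p
    return integral(r_min, r_split) / integral(r_min, r_max)

r_min, r_max, r_split = 10e-6, 1e-2, 1e-3   # 10 microns to 1 cm, split at 1 mm
frac_winter = mass_fraction_below(r_split, r_min, r_max, alpha=-3.0)      # winter index
frac_perihelion = mass_fraction_below(r_split, r_min, r_max, alpha=-3.7)  # summer index
```

With the steeper summer index, several times more of the emitted mass sits in sub-millimetre grains than with the winter index, which is why the index change matters so much for the modelled tail brightness.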

  2. Population Physiologically-Based Pharmacokinetic Modeling for the Human Lactational Transfer of PCB 153 with Consideration of Worldwide Human Biomonitoring Results

    Energy Technology Data Exchange (ETDEWEB)

    Redding, Laurel E.; Sohn, Michael D.; McKone, Thomas E.; Wang, Shu-Li; Hsieh, Dennis P. H.; Yang, Raymond S. H.

    2008-03-01

    We developed a physiologically based pharmacokinetic model of PCB 153 in women and predict its transfer via lactation to infants. The model is the first human, population-scale lactational model for PCB 153. Data in the literature provided estimates for model development and for performance assessment. Physiological parameters were taken from a cohort in Taiwan and from reference values in the literature. We estimated partition coefficients based on chemical structure and the lipid content of various body tissues. Using exposure data from Japan, we predicted the acquired body burden of PCB 153 at an average childbearing age of 25 years and compared the predictions to measurements from studies in multiple countries. Forward-model predictions agree well with human biomonitoring measurements, as represented by summary statistics and uncertainty estimates. The model successfully describes the range of possible PCB 153 dispositions in maternal milk, suggesting a promising option for back-estimating doses for various populations. One example of reverse dosimetry modeling was attempted using our PBPK model for possible exposure scenarios in Canadian Inuits, who had the highest level of PCB 153 in their milk in the world.
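A full PBPK model is beyond a short sketch, but the core of the forward-dosimetry step above — chronic intake accumulating against first-order elimination up to childbearing age, then expressed per gram of body lipid — reduces to a one-compartment balance. Every number below is an invented placeholder for illustration, not a parameter or result of the published model.

```python
import numpy as np

# hypothetical parameters (placeholders, not the published model's values)
intake_ng_day = 50.0        # chronic dietary intake of PCB 153
half_life_years = 10.0      # apparent first-order elimination half-life
lipid_kg = 20.0             # total body lipid mass at childbearing age

k_elim = np.log(2) / (half_life_years * 365.0)   # elimination rate, per day
days = 25 * 365                                   # exposure up to age 25

# daily mass balance: dB/dt = intake - k * B  (forward Euler, 1-day steps)
burden_ng = 0.0
for _ in range(days):
    burden_ng += intake_ng_day - k_elim * burden_ng

# lipid-normalised concentration, the quantity compared with milk monitoring data
conc_ng_per_g_lipid = burden_ng / (lipid_kg * 1000.0)
```

Reverse dosimetry, as attempted for the Inuit scenario, runs this logic backwards: given a measured lipid-normalised milk concentration, solve for the chronic intake consistent with it.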

  3. Climate change impacts on hydrological processes in Norway based on two methods for transferring regional climate model results to meteorological station sites

    OpenAIRE

    Beldring, Stein; Engen-Skaugen, Torill; Førland, Eirik J.; Roald, Lars A.

    2008-01-01

    Climate change impacts on hydrological processes in Norway have been estimated through combination of results from the IPCC SRES A2 and B2 emission scenarios, global climate models from the Hadley Centre and the Max-Planck Institute, and dynamical downscaling using the RegClim HIRHAM regional climate model. Temperature and precipitation simulations from the regional climate model were transferred to meteorological station sites using two different approaches, the delta change or perturbation ...

  4. Burden and outcomes of pressure ulcers in cancer patients receiving the Kerala model of home based palliative care in India: Results from a prospective observational study

    Directory of Open Access Journals (Sweden)

    Biji M Sankaran

    2015-01-01

    Full Text Available Aim: To report the prevalence and outcomes of pressure ulcers (PU) seen in a cohort of cancer patients requiring home-based palliative care. Materials and Methods: All patients referred for home care were eligible for this prospective observational study, provided they were living within a distance of 35 km from the institute and gave informed consent. During each visit, caregivers were trained and educated in providing nursing care for the patient. Dressing material for PU care was provided to all patients free of cost and care methods were demonstrated. Factors influencing the occurrence and healing of PUs were analyzed using logistic regression. The duration of healing of PUs was calculated using the Kaplan-Meier method. P < 0.05 is taken as significant. Results: Twenty-one of 108 (19.4%) enrolled patients had PU at the start of homecare services. None of the patients developed a new PU during the course of home care. Complete healing of PU was seen in 9 (42.9%) patients. The median duration for healing of PU was found to be 56 days. The median expenditure incurred by patients with PU was Rs. 2323.40, with a median daily expenditure of Rs. 77.56. Conclusions: The present model of homecare service delivery was found to be effective in the prevention and management of PUs. The high prevalence of PU in this cohort indicates a need for greater awareness of this complication. Clinical Trial Registry Number: CTRI/2014/03/004477
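The Kaplan-Meier estimate used above for the median healing time matters because follow-up is censored (patients who died or were lost before their ulcer healed still contribute to the at-risk count). A self-contained sketch of the estimator, on invented data rather than the study's cohort:

```python
def km_median(times, events):
    """Kaplan-Meier estimate of the median time-to-event.
    events[i] is 1 if the ulcer healed at times[i], 0 if follow-up was censored.
    Returns the first time at which the survival curve drops to 0.5 or below."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv = 1.0
    idx = 0
    while idx < len(pairs):
        t = pairs[idx][0]
        same = [e for tt, e in pairs if tt == t]   # all subjects at this time
        healed = sum(same)
        if healed and n_at_risk:
            surv *= 1.0 - healed / n_at_risk       # KM product-limit step
            if surv <= 0.5:
                return t
        n_at_risk -= len(same)                     # healed and censored both leave
        idx += len(same)
    return None  # median not reached within follow-up

# invented example: days to healing; 0 marks censored follow-up
healing_days = [30, 40, 56, 60, 70, 80, 90]
healed_flag  = [1,  1,  1,  0,  1,  0,  1]
median_days = km_median(healing_days, healed_flag)
```

Note that simply taking the median of the healed patients' times would ignore the censored patients and bias the estimate downwards; the product-limit form avoids that.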

  5. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there see...
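One practical step such guidelines revolve around is turning MLM coefficients into predicted outcome probabilities rather than reading raw coefficients, since each coefficient is only a log-odds contrast against the base outcome. A minimal sketch with invented coefficients and data:

```python
import numpy as np

def mlm_probs(X, B):
    """Predicted choice probabilities from a multinomial logit.
    B holds one coefficient column per non-base outcome; the base
    outcome's utility is normalised to zero."""
    util = np.column_stack([np.zeros(len(X)), X @ B])   # base category first
    e = np.exp(util - util.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

# hypothetical example: one predictor, three strategic choices
B = np.array([[0.8, -0.5]])      # coefficients for outcomes 2 and 3 vs the base
X = np.array([[0.0], [1.0]])     # a low and a high value of the predictor
P = mlm_probs(X, B)
```

Comparing the rows of `P` (probabilities at different predictor values) is the kind of substantively interpretable quantity the article recommends reporting, rather than the coefficients in `B` themselves.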

  6. Climate change impacts on hydrological processes in Norway based on two methods for transferring regional climate model results to meteorological station sites

    Science.gov (United States)

    Beldring, Stein; Engen-Skaugen, Torill; Førland, Eirik J.; Roald, Lars A.

    2008-05-01

    Climate change impacts on hydrological processes in Norway have been estimated through combination of results from the IPCC SRES A2 and B2 emission scenarios, global climate models from the Hadley Centre and the Max-Planck Institute, and dynamical downscaling using the RegClim HIRHAM regional climate model. Temperature and precipitation simulations from the regional climate model were transferred to meteorological station sites using two different approaches, the delta change or perturbation method and an empirical adjustment procedure that reproduces observed monthly means and standard deviations for the control period. These climate scenarios were used for driving a spatially distributed version of the HBV hydrological model, yielding a set of simulations for the baseline period 1961-1990 and projections of climate change impacts on hydrological processes for the period 2071-2100. A comparison between the two methods used for transferring regional climate model results to meteorological station sites is provided by comparing the results from the hydrological model for basins located in different parts of Norway. Projected changes in runoff are linked to changes in the snow regime. Snow cover will be more unstable and the snowmelt flood will occur earlier in the year. Increased rainfall leads to higher runoff in the autumn and winter.
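The two transfer approaches named above can be sketched side by side for temperature (additive form). The station and model series below are invented; the functions only illustrate the logic, not the study's implementation.

```python
import numpy as np

def delta_change(obs_control, sim_control, sim_future):
    """Delta-change (perturbation) method: add the climate model's mean
    change signal to the observed station series (additive form for
    temperature; precipitation typically uses a multiplicative factor)."""
    return obs_control + (np.mean(sim_future) - np.mean(sim_control))

def empirical_adjust(sim_series, sim_control, obs_mean, obs_std):
    """Empirical adjustment: rescale model output so that the control
    period reproduces the observed mean and standard deviation."""
    z = (sim_series - np.mean(sim_control)) / np.std(sim_control)
    return obs_mean + obs_std * z

obs = np.array([-4.0, -2.0, 0.0, 1.0, -1.0])       # station temperatures, control period
sim_ctrl = np.array([-3.0, -1.5, 0.5, 2.0, -0.5])  # RCM control run, nearest grid cell
sim_fut = sim_ctrl + 3.0                           # RCM scenario run (+3 degC here)

scen_delta = delta_change(obs, sim_ctrl, sim_fut)          # shifted observations
adj_ctrl = empirical_adjust(sim_ctrl, sim_ctrl, obs.mean(), obs.std())
```

The delta-change series keeps the observed day-to-day variability and shifts only the mean, while the empirical adjustment also corrects the model's variance; the record's basin-by-basin comparison probes how much that difference propagates into simulated runoff.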

  7. Climate change impacts on hydrological processes in Norway based on two methods for transferring regional climate model results to meteorological station sites

    Energy Technology Data Exchange (ETDEWEB)

    Beldring, Stein; Roald, Lars A. (Norwegian Water Resources and Energy Directorate, PO Box 5091 Majorstua, 0301 Oslo (Norway)). e-mail: stein.beldring@nve.no; Engen-Skaugen, Torill; Foerland, Eirik J. (Norwegian Meteorological Inst., PO Box 43 Blindern, 0313 Oslo (Norway))

    2008-07-01

    Climate change impacts on hydrological processes in Norway have been estimated through combination of results from the IPCC SRES A2 and B2 emission scenarios, global climate models from the Hadley Centre and the Max-Planck Institute, and dynamical downscaling using the RegClim HIRHAM regional climate model. Temperature and precipitation simulations from the regional climate model were transferred to meteorological station sites using two different approaches, the delta change or perturbation method and an empirical adjustment procedure that reproduces observed monthly means and standard deviations for the control period. These climate scenarios were used for driving a spatially distributed version of the HBV hydrological model, yielding a set of simulations for the baseline period 1961-1990 and projections of climate change impacts on hydrological processes for the period 2071-2100. A comparison between the two methods used for transferring regional climate model results to meteorological station sites is provided by comparing the results from the hydrological model for basins located in different parts of Norway. Projected changes in runoff are linked to changes in the snow regime. Snow cover will be more unstable and the snowmelt flood will occur earlier in the year. Increased rainfall leads to higher runoff in the autumn and winter.

  8. The influence of vegetation, fire spread and fire behaviour on biomass burning and trace gas emissions: results from a process-based model

    Directory of Open Access Journals (Sweden)

    K. Thonicke

    2010-06-01

    Full Text Available A process-based fire regime model (SPITFIRE) has been developed, coupled with ecosystem dynamics in the LPJ Dynamic Global Vegetation Model, and used to explore fire regimes and the current impact of fire on the terrestrial carbon cycle and associated emissions of trace atmospheric constituents. The model estimates an average release of 2.24 Pg C yr−1 as CO2 from biomass burning during the 1980s and 1990s. Comparison with observed active fire counts shows that the model reproduces where fire occurs and can mimic broad geographic patterns in the peak fire season, although the predicted peak is 1–2 months late in some regions. Modelled fire season length is generally overestimated by about one month, but shows a realistic pattern of differences among biomes. Comparisons with remotely sensed burnt-area products indicate that the model reproduces broad geographic patterns of annual fractional burnt area over most regions, including the boreal forest, although interannual variability in the boreal zone is underestimated.

  9. The influence of vegetation, fire spread and fire behaviour on biomass burning and trace gas emissions: results from a process-based model

    Directory of Open Access Journals (Sweden)

    K. Thonicke

    2010-01-01

    Full Text Available A process-based fire regime model (SPITFIRE) has been developed, coupled with ecosystem dynamics in the LPJ Dynamic Global Vegetation Model, and used to explore spatial and temporal patterns of fire regimes and the current impact of fire on the terrestrial carbon cycle and associated emissions of trace atmospheric constituents. The model estimates an average release of 2.24 Pg C yr−1 as CO2 from biomass burning during the 1980s and 1990s. Comparison with observed active fire counts shows that the model reproduces where fire occurs and can mimic broad geographic patterns in the peak fire season, although the predicted peak is 1–2 months late in some regions. Modelled fire season length is generally overestimated by about one month, but shows a realistic pattern of differences among biomes. Comparisons with remotely sensed burnt-area products indicate that the model reproduces broad geographic patterns of annual fractional burnt area over most regions, including the boreal forest, although interannual variability in the boreal zone is underestimated. Overall SPITFIRE produces realistic simulations of spatial and temporal patterns of fire under modern conditions and of the current impact of fire on the terrestrial carbon cycle and associated emissions of trace greenhouse gases and aerosols.

  10. Hydraulic fracture model comparison study: Complete results

    Energy Technology Data Exchange (ETDEWEB)

    Warpinski, N.R. [Sandia National Labs., Albuquerque, NM (United States)]; Abou-Sayed, I.S. [Mobil Exploration and Production Services (United States)]; Moschovidis, Z. [Amoco Production Co. (US)]; Parker, C. [CONOCO (US)]

    1993-02-01

    Large quantities of natural gas exist in low permeability reservoirs throughout the US. Characteristics of these reservoirs, however, make production difficult and often uneconomic, and stimulation is required. Because of the diversity of applications, hydraulic fracture design models must be able to account for widely varying rock properties, reservoir properties, in situ stresses, fracturing fluids, and proppant loads. As a result, fracture simulation has emerged as a highly complex endeavor that must be able to describe many different physical processes. The objective of this study was to conduct a comparative study of hydraulic-fracture simulators in order to provide stimulation engineers with the necessary information to make rational decisions on the type of models most suited for their needs. This report compares the fracture modeling results of twelve different simulators, some of them run in different modes, for eight separate design cases. Comparisons of length, width, height, net pressure, maximum width at the wellbore, average width at the wellbore, and average width in the fracture have been made, both for the final geometry and as a function of time. For the models in this study, differences in fracture length, height and width are often greater than a factor of two. In addition, several comparisons of the same model with different options show a large variability in model output depending upon the options chosen. Two comparisons were made of the same model run by different companies; in both cases the agreement was good. 41 refs., 54 figs., 83 tabs.

  11. Model-based geostatistics

    CERN Document Server

    Diggle, Peter J

    2007-01-01

    Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics with an emphasis on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.

  12. Some results regarding the comparison of the Earth's atmospheric models

    Directory of Open Access Journals (Sweden)

    Šegan S.

    2005-01-01

    Full Text Available In this paper we examine air densities derived from our realization of aeronomic atmosphere models based on accelerometer measurements from satellites in low Earth orbit (LEO). Using the adapted algorithms we derive comparison parameters. The first results concerning the adjustment of the aeronomic models to the total-density model are given.

  13. Modeling Malaysia's Energy System: Some Preliminary Results

    Directory of Open Access Journals (Sweden)

    Ahmad M. Yusof

    2011-01-01

    Full Text Available Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that caters solely to analyzing Malaysia’s energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration with the economic sectors is done exogenously by specifying the annual sectoral energy demand levels. The model in turn optimizes the energy variables for a specified objective function to meet those demands. Results: By minimizing intertemporal petroleum product imports for the crude oil system, the annual extraction level of Tapis blend is projected at 579600 barrels per day. The aggregate demand for petroleum products is projected to grow at 2.1% per year, while motor gasoline and diesel constitute 42 and 38% of the petroleum product demand mix respectively over the 5-year planning period. Petroleum product imports are expected to grow at 6.0% per year. Conclusion: The preliminary results indicate that the model performs as expected. Thus other types of energy carriers, such as natural gas, coal and biomass, will be added to the energy system for the overall development of the Malaysian energy model.

  14. Application of A Global-To-Beam Irradiance Model to the Satellite-Based NASA GEWEX SRB Data and Validation of the Results against the Ground-Based BSRN Data

    Science.gov (United States)

    Zhang, T.; Stackhouse, P. W., Jr.; Chandler, W.; Hoell, J. M.; Westberg, D. J.

    2012-12-01

    The NASA/GEWEX SRB (Surface Radiation Budget) project has produced a 24.5-year continuous global record of shortwave and longwave radiation flux dataset at TOA and the Earth's surface from satellite measurements. The time span of the data is from July 1983 to December 2007, and the spatial resolution is 1 degree latitude by 1 degree longitude. SRB products are available on 3-hourly, 3-hourly-monthly, daily and monthly time scales. The inputs to the models include: 1.) Cloud parameters derived from pixel-level DX product of the International Satellite Cloud Climatology Project (ISCCP); 2.) Temperature and moisture profiles of the atmosphere generated with the Goddard Earth Observing System model Version 4.0.3 (GEOS-4.0.3) from a 4-D data assimilation product of the Data Assimilation Office at NASA Goddard Space Flight Center; 3.) Atmospheric column ozone record constructed from the Total Ozone Mapping Spectrometer (TOMS) aboard Nimbus-7 (July 1983 - November 1994), from the Operational Vertical Sounder aboard the Television Infrared Observation Satellite (TIROS, TOVS) (December 1994 - October 1995), from Ozone Monitoring Instrument (OMI), and from Stratospheric Monitoring Ozone Blended Analysis (SMOBA) products; 4.) Surface albedos based on monthly climatological clear-sky albedos at the top of the atmosphere (TOA) which in turn were derived from the NASA Clouds and the Earth's Radiant Energy System (CERES) data during 2000-2005; 5.) Surface emissivities from a map developed at NASA Langley Research Center. The SRB global irradiances have been extensively validated against the ground-based BSRN (Baseline Surface Radiation Network), GEBA (Global Energy Balance Archive), and WRDC (World Radiation Data Centre) data, and generally good agreement is achieved. In this paper, we apply the DirIndex model, a modified version of the DirInt model, to the SRB 3-hourly global irradiances and derive the 3-hourly beam, or direct normal, irradiances. Daily and monthly mean direct
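As background for the global-to-beam conversion discussed above, the basic closure relation between global horizontal (GHI), diffuse horizontal (DHI) and direct normal irradiance (DNI) can be sketched. This is the generic geometric relation only, not the DirInt/DirIndex algorithm itself, which relies on empirical global-to-beam mappings; the numbers below are illustrative.

```python
import math

def direct_normal_from_closure(ghi, dhi, solar_zenith_deg):
    """Closure relation GHI = DHI + DNI * cos(zenith), solved for DNI.

    Generic illustration only; DirInt/DirIndex-style models instead use
    empirically fitted global-to-beam conversions tuned to sky conditions.
    """
    cz = math.cos(math.radians(solar_zenith_deg))
    if cz <= 0.0:  # sun at or below the horizon
        return 0.0
    return (ghi - dhi) / cz

# Illustrative values: 600 W/m^2 global, 100 W/m^2 diffuse, 60 deg zenith
dni = direct_normal_from_closure(ghi=600.0, dhi=100.0, solar_zenith_deg=60.0)
```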

  15. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  17. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
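The models named above share a multiplicative structure. As a minimal sketch with made-up factor values, the USLE estimates average annual soil loss as the product A = R·K·LS·C·P, and the USLE-M variant replaces the event erosivity EI30 with QR·EI30 so that predicted loss depends directly on event runoff:

```python
def usle_annual_soil_loss(R, K, LS, C, P):
    """USLE average annual soil loss A = R * K * LS * C * P.

    R: rainfall-runoff erosivity; K: soil erodibility; LS: combined
    slope length and steepness; C: cover-management; P: support practice.
    """
    return R * K * LS * C * P

def usle_m_event_erosivity(runoff_ratio, EI30):
    """USLE-M event erosivity Q_R * EI30: weighting the storm erosivity
    EI30 by the event runoff ratio Q_R ties predicted soil loss to the
    runoff produced by each event."""
    return runoff_ratio * EI30

# Illustrative (made-up) factor values for a single plot
A = usle_annual_soil_loss(R=1000.0, K=0.03, LS=1.2, C=0.2, P=1.0)
```

Under climate change, the runoff-dependent term is what changes most directly, which is why event-based forms like the USLE-M are emphasized above.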

  18. Simulation Modeling of Radio Direction Finding Results

    Directory of Open Access Journals (Sweden)

    K. Pelikan

    1994-12-01

    Full Text Available It is sometimes difficult to determine analytically the error probabilities of direction finding results when evaluating algorithms of practical interest. Probabilistic simulation models are described in this paper that can be used to study the error performance of new direction finding systems or of geographical modifications to existing configurations.

  19. Demystifying Results-Based Performance Measurement.

    Science.gov (United States)

    Jorjani, Hamid

    Many evaluators are convinced that Results-based Performance Measurement (RBPM) is an effective tool to improve service delivery and cost effectiveness in both public and private sectors. Successful RBPM requires self-directed and cross-functional work teams and the supporting infrastructure to make it work. There are many misconceptions and…

  20. Hepatic and Extrahepatic Insulin Clearance Are Differentially Regulated: Results From a Novel Model-Based Analysis of Intravenous Glucose Tolerance Data.

    Science.gov (United States)

    Polidori, David C; Bergman, Richard N; Chung, Stephanie T; Sumner, Anne E

    2016-06-01

    Insulin clearance is a highly variable and important factor that affects circulating insulin concentrations. We developed a novel model-based method to estimate both hepatic and extrahepatic insulin clearance using plasma insulin and C-peptide profiles obtained from the insulin-modified frequently sampled intravenous glucose tolerance test. Data from 100 African immigrants without diabetes (mean age 38 years, body weight 81.7 kg, fasting plasma glucose concentration 83 mg/dL, and fasting insulin concentration 37 pmol/L) were used. Endogenous insulin secretion (calculated by C-peptide deconvolution) and insulin infusion rates were used as inputs to a new two-compartment model of insulin kinetics and hepatic and extrahepatic clearance parameters were estimated. Good agreement between modeled and measured plasma insulin profiles was observed (mean normalized root mean square error 6.8%), and considerable intersubject variability in parameters of insulin clearance among individuals was identified (the mean [interquartile range] for hepatic extraction was 25.8% [32.7%], and for extrahepatic insulin clearance was 20.7 mL/kg/min [11.7 mL/kg/min]). Parameters of insulin clearance were correlated with measures of insulin sensitivity and acute insulin response to glucose. The method described appears promising for future research aimed at characterizing variability in insulin clearance and the mechanisms involved in the regulation of insulin clearance.
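The separation of hepatic and extrahepatic clearance described above can be illustrated with a deliberately simplified single-compartment sketch (the paper's actual two-compartment equations are not given in the abstract): endogenous secretion is reduced by fractional hepatic extraction before reaching plasma, exogenous infusion bypasses the liver, and extrahepatic clearance is first order.

```python
def simulate_plasma_insulin(secretion, infusion, hepatic_extraction,
                            extrahepatic_clearance, volume, dt=0.1):
    """Euler integration of a simplified single-compartment sketch.

    Endogenous secretion (pmol/min) is reduced by fractional hepatic
    extraction before reaching plasma; intravenous infusion (pmol/min)
    bypasses the liver; extrahepatic clearance (L/min) removes insulin
    in proportion to its plasma concentration (pmol/L).
    """
    levels = [0.0]
    for s, u in zip(secretion, infusion):
        delivery = (1.0 - hepatic_extraction) * s + u
        d_level = (delivery - extrahepatic_clearance * levels[-1]) / volume
        levels.append(levels[-1] + dt * d_level)
    return levels

# Constant secretion of 100 pmol/min with 25% hepatic extraction and
# 5 L/min extrahepatic clearance approaches (0.75 * 100) / 5 = 15 pmol/L.
n_steps = 2000
levels = simulate_plasma_insulin([100.0] * n_steps, [0.0] * n_steps,
                                 hepatic_extraction=0.25,
                                 extrahepatic_clearance=5.0, volume=10.0)
```

Fitting such a model jointly to insulin and C-peptide data is what allows the two clearance routes to be estimated separately, since C-peptide is co-secreted but not extracted by the liver.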

  1. Comet dust as a mixture of aggregates and solid particles: model consistent with ground-based and space-mission results

    CERN Document Server

    Kolokolova, L

    2009-01-01

    The most successful model of comet dust presents comet particles as aggregates of submicron grains. It qualitatively explains the spectral and angular change in the comet brightness and polarization and is consistent with the thermal infrared data and composition of the comet dust obtained in situ for comet 1P/Halley. However, it experiences some difficulties in providing a quantitative fit to the observational data. Here we present a model that considers comet dust as a mixture of aggregates and compact particles. The model is based on the Giotto and Stardust mission findings that both aggregates (made mainly of organics, silicates, and carbon) and solid silicate particles are present in the comet dust. We simulate aggregates as Ballistic Cluster-Cluster Aggregates (BCCA) and compact particles as polydisperse spheroids with some distribution of the aspect ratio. The particles follow a power-law size distribution with the power -3 that is close to the one obtained for comet dust in situ, at ...

  2. Air-Sea Exchange of Legacy POPs in the North Sea Based on Results of Fate and Transport, and Shelf-Sea Hydrodynamic Ocean Models

    Directory of Open Access Journals (Sweden)

    Kieran O'Driscoll

    2014-04-01

    Full Text Available The air-sea exchange of two legacy persistent organic pollutants (POPs), γ-HCH and PCB 153, in the North Sea is presented and discussed using results of regional fate and transport and shelf-sea hydrodynamic ocean models for the period 1996–2005. Air-sea exchange occurs through gas exchange (deposition and volatilization), wet deposition and dry deposition. Atmospheric concentrations are interpolated into the model domain from results of the EMEP MSC-East multi-compartmental model (Gusev et al., 2009). The North Sea is net depositional for γ-HCH, and is dominated by gas deposition with notable seasonal variability and a downward trend over the 10 year period. Volatilization rates of γ-HCH are generally a factor of 2–3 less than gas deposition in winter, spring and summer but greater in autumn when the North Sea is net volatilizational. A downward trend in fugacity ratios is found, since gas deposition is decreasing faster than volatilization. The North Sea is net volatilizational for PCB 153, with the highest rates of volatilization to deposition found in the areas surrounding polluted British and continental river sources. Large quantities of PCB 153 entering through rivers lead to very high local rates of volatilization.

  3. Unfavourable results in skull base surgery

    Directory of Open Access Journals (Sweden)

    Hemen Jaju

    2013-01-01

    Full Text Available Treatment of skull base tumors involves multiple specialities. The lesions are usually advanced and the treatment is often associated with unfavorable results, which may be functional and/or aesthetic. Here we present an analysis of the complications and unfavorable results of 546 cases treated surgically by a single craniofacial surgeon over a period of 14 years. The major morbidities range from death to permanent impairment of vital organ functions (brain, eye, nose), and include infections, tissue losses, flap failures, treatment-associated complications, psychosocial issues, and aesthesis, besides others. This article is aimed at bringing forth these unfavorable results and how to avoid them.

  4. Ranking and mapping of universities and research-focused institutions worldwide based on highly-cited papers: A visualization of results from multi-level models

    CERN Document Server

    Bornmann, Lutz; Anegón, Felix de Moya; Mutz, Rüdiger

    2012-01-01

    The web application presented in this paper allows for an analysis to reveal centres of excellence in different fields worldwide using publication and citation data. Only specific aspects of institutional performance are taken into account and other aspects such as teaching performance or societal impact of research are not considered. Based on data gathered from Scopus, field-specific excellence can be identified in institutions where highly-cited papers have been frequently published. The web application combines both a list of institutions ordered by different indicator values and a map with circles visualizing indicator values for geocoded institutions. Compared to the mapping and ranking approaches introduced hitherto, our underlying statistics (multi-level models) are analytically oriented by allowing (1) the estimation of values for the number of excellent papers for an institution which are statistically more appropriate than the observed values; (2) the calculation of confidence intervals as measures...

  5. Results-based management - Developing one's key results areas (KRAs).

    Science.gov (United States)

    Kansal, Om Prakash; Goel, Sonu

    2015-01-01

    In spite of aspiring to be good managers, we public health experts fail to evaluate ourselves against our personal and professional goals. The Key Result Areas (KRAs) or key performance indicators (KPIs) help us in setting our operational (day-to-day) and/or strategic (long-term) goals, followed by grading ourselves at different times in our careers. These help in assessing our strengths and weaknesses. The weakest KRA sets the maximum extent to which one can use one's skills and abilities to have the greatest impact on one's career.

  6. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  7. Element-Based Computational Model

    Directory of Open Access Journals (Sweden)

    Conrad Mueller

    2012-02-01

    Full Text Available A variation on the data-flow model is proposed for use in developing parallel architectures. While the model is a data driven model, it has significant differences from the data-flow model. The proposed model has an evaluation cycle of processing elements (encapsulated data) that is similar to the instruction cycle of the von Neumann model. The elements contain the information required to process them. The model is inherently parallel. An emulation of the model has been implemented. The objective of this paper is to motivate support for taking the research further. Using matrix multiplication as a case study, the element/data-flow based model is compared with the instruction-based model. This is done using complexity analysis followed by empirical testing to verify this analysis. The positive results are given as motivation for the research to be taken to the next stage - that is, implementing the model using FPGAs.

  8. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics is applied to an impeller test bed, and the initial results serve as a proof of concept.

  9. Engineering model development and test results

    Science.gov (United States)

    Wellman, John A.

    1993-08-01

    The correctability of the primary mirror spherical error in the Wide Field/Planetary Camera (WF/PC) is sensitive to the precise alignment of the incoming aberrated beam onto the corrective elements. Articulating fold mirrors that provide +/- 1 milliradian of tilt in 2 axes are required to allow for alignment corrections in orbit as part of the fix for the Hubble Space Telescope. An engineering study was made by Itek Optical Systems and the Jet Propulsion Laboratory (JPL) to investigate replacement of fixed fold mirrors within the existing WF/PC optical bench with articulating mirrors. The study contract developed the baseline requirements, established the suitability of lead magnesium niobate (PMN) actuators and evaluated several tilt mechanism concepts. Two engineering model articulating mirrors were produced to demonstrate the function of the tilt mechanism to provide +/- 1 milliradian of tilt, packaging within the space constraints, and manufacturing techniques including the machining of the invar tilt mechanism and lightweight glass mirrors. The success of the engineering models led to the follow-on design and fabrication of 3 flight mirrors that have been incorporated into the WF/PC to be placed into the Hubble Space Telescope as part of the servicing mission scheduled for late 1993.

  10. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  11. A Proficiency-Based Progression Training Curriculum Coupled With a Model Simulator Results in the Acquisition of a Superior Arthroscopic Bankart Skill Set.

    Science.gov (United States)

    Angelo, Richard L; Ryu, Richard K N; Pedowitz, Robert A; Beach, William; Burns, Joseph; Dodds, Julie; Field, Larry; Getelman, Mark; Hobgood, Rhett; McIntyre, Louis; Gallagher, Anthony G

    2015-10-01

    To determine the effectiveness of proficiency-based progression (PBP) training using simulation both compared with the same training without proficiency requirements and compared with a traditional resident course for learning to perform an arthroscopic Bankart repair (ABR). In a prospective, randomized, blinded study, 44 postgraduate year 4 or 5 orthopaedic residents from 21 Accreditation Council for Graduate Medical Education-approved US orthopaedic residency programs were randomly assigned to 1 of 3 skills training protocols for learning to perform an ABR: group A, traditional (routine Arthroscopy Association of North America Resident Course) (control, n = 14); group B, simulator (modified curriculum adding a shoulder model simulator) (n = 14); or group C, PBP (PBP plus the simulator) (n = 16). At the completion of training, all subjects performed a 3 suture anchor ABR on a cadaveric shoulder, which was videotaped and scored in blinded fashion with the use of previously validated metrics. The PBP-trained group (group C) made 56% fewer objectively assessed errors than the traditionally trained group (group A) (P = .011) and 41% fewer than group B (P = .049) (both comparisons were statistically significant). The proficiency benchmark was achieved on the final repair by 68.7% of participants in group C compared with 36.7% in group B and 28.6% in group A. When compared with group A, group B participants were 1.4 times, group C participants were 5.5 times, and group C(PBP) participants (who met all intermediate proficiency benchmarks) were 7.5 times as likely to achieve the final proficiency benchmark. A PBP training curriculum and protocol coupled with the use of a shoulder model simulator and previously validated metrics produces a superior arthroscopic Bankart skill set when compared with traditional and simulator-enhanced training methods. Surgical training combining PBP and a simulator is efficient and effective. Patient safety could be improved if

  12. Employment Effects of Renewable Energy Expansion on a Regional Level—First Results of a Model-Based Approach for Germany

    Directory of Open Access Journals (Sweden)

    Ulrike Lehr

    2012-02-01

    Full Text Available National studies have shown that both gross and net effects of the expansion of energy from renewable sources on employment are positive for Germany. These modeling approaches also revealed that this holds true for both present and future perspectives under certain assumptions on the development of exports, fossil fuel prices and national politics. Yet how are employment effects distributed within Germany? What components contribute to growth impacts on a regional level? To answer these questions, new methods of regionalization were explored and developed for the example “wind energy onshore” for Germany’s federal states. The main goal was to develop a methodology which is applicable to all renewable energy technologies in future research. For the quantification and projection, it was necessary to distinguish between jobs generated by domestic investments and exports on the one hand, and jobs for the operation and maintenance of existing plants on the other hand. Further, direct and indirect employment is analyzed. The results show that gross employment is particularly high in the northwestern regions of Germany. However, especially the indirect effects are spread out over the whole country. Regions in the south not only profit from the delivery of specific components, but also from other industry and service inputs.

  14. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  15. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  16. Slice-based supine to standing postured deformation for Chinese anatomical models and the dosimetric results by wide band frequency electromagnetic field exposure: morphing.

    Science.gov (United States)

    Wu, Tongning; Tan, Liwen; Shao, Qing; Li, Ying; Yang, Lei; Zhao, Chen; Xie, Yi; Zhang, Shaoxiang

    2013-04-01

    Digital human models are frequently obtained from supine-postured medical images or cadaver slices, but many applications require standing models. This paper presents the work of reconstructing standing Chinese adult anatomical models from supine-postured slices. Unlike previous studies, the deformation is performed on the 2-D segmented slices. The surface profile of the standing posture is adjusted using population measurement data. A non-uniform texture amplification approach is applied to the 2-D slices to recover the skin contour and to redistribute the internal tissues. Internal organ shift due to posture is taken into account. The feet are modified by matrix rotation. Then, the supine and standing models are utilised for the evaluation of electromagnetic field exposure over a wide frequency band and for different incident directions.

  17. Calculation of limits for significant bidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G;

    2015-01-01

    .01). RESULTS: For an individual, factors used to multiply the first result were calculated to create limits for constant cumulated significant changes. The factors were shown to be a function of the number of results included and the total coefficient of variation. CONCLUSIONS: The first result should...
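The paper derives its multiplication factors from a computer simulation that is not reproduced here; for orientation only, the classical reference change value (RCV) that such limits generalize can be computed as below (the formula and the z = 1.96 default are standard textbook assumptions, not taken from the paper):

```python
import math

def reference_change_value(cv_total, z=1.96, n=1):
    # Classical two-sided RCV, expressed as a fraction of the first result,
    # for the mean of n serial results with total coefficient of variation cv_total.
    return math.sqrt(2.0 / n) * z * cv_total

def change_limits(first_result, cv_total, z=1.96, n=1):
    # Multiplicative limits around the first result; a later result outside
    # [lower, upper] indicates a statistically significant bidirectional change.
    rcv = reference_change_value(cv_total, z, n)
    return first_result * (1.0 - rcv), first_result * (1.0 + rcv)

# Example: first result 100 units, total CV 5% -> limits of roughly +/-13.9%
lower, upper = change_limits(100.0, 0.05)
```

The factor-based limits studied in the paper play the same role as `1 - rcv` and `1 + rcv` here, but additionally depend on the number of accumulated results.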

  18. Development and test results of the Realtime Severe Accident Model 5 (RSAM5) based on the MAAP5 For the Kori 1 simulator

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jin Hyuk; Lee, Myeong Soo [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2012-10-15

    The Real Time Severe Accident Model (RSAM) in the Kori simulator employs the standard MAAP 5.01.1101 code (which is defined as MAAP 5.01) plus several statically linked libraries that interface with the simulator environment. The physical phenomena that can be envisioned inside the reactor vessel, the reactor coolant system (RCS), and the containment during severe accidents are comprehensively modeled by the MAAP5 code. The MAAP5 code has been known to be a reliable tool for understanding the sequence of events that occur during severe LWR accidents, evaluating the consequences of the failure of emergency systems, assessing the effects of operator interventions, and investigating the influence of design features of the RCS, containment, and safety systems on the accident consequences. The purpose of this paper is to describe the modeling of the Kori Unit 1 nuclear plant with the MAAP5 code and major outputs in the event of the SBO, SBO + SGTR, SBO + LBLOCA.

  19. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event......-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...... of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches....

  20. Semantic Map Based Web Search Result Visualization

    OpenAIRE

    2007-01-01

    The problem of information overload has become more pressing with the emergence of the increasingly more popular Internet services. The main information retrieval mechanisms provided by the prevailing Internet Web software are based on either keyword search (e.g., Google and Yahoo) or hypertext browsing (e.g., Internet Explorer and Netscape). The research presented in this paper is aimed at providing an alternative concept-based categorization and search capability based on a combination of m...

  1. Modeling Malaysia's Energy System: Some Preliminary Results

    OpenAIRE

    Ahmad M. Yusof

    2011-01-01

    Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that solely caters to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration to the economic sectors is done exogene...

  2. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management, not only in the United States but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Long-term tests and extrapolations from ancient analogues have shown that well-designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence for this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. Drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on the AFCI glass system in a carbonate-, sulfate-, and phosphate-rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solution vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion-concentration dependence of mineral formation.
We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  3. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  4. Implementation of Problem Based Learning Model in Concept Learning Mushroom as a Result of Student Learning Improvement Efforts Guidelines for Teachers

    Science.gov (United States)

    Rubiah, Musriadi

    2016-01-01

    Problem based learning is a training strategy, students work together in groups, and take responsibility for solving problems in a professional manner. Instructional materials such as textbooks become the main reference of students in study of mushrooms, especially the material is considered less effective in responding to the information needs of…

  5. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models, which generally perform poorly at coarse resolution. Fixed grid models with nested grid representations of the grounding line are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  6. Quantitative magnetospheric models: results and perspectives.

    Science.gov (United States)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  7. Experimental Results and Model Calculations of a Hybrid Adsorption-Compression Heat Pump Based on a Roots Compressor and Silica Gel-Water Sorption

    Energy Technology Data Exchange (ETDEWEB)

    Van der Pal, M.; De Boer, R.; Wemmers, A.K.; Smeding, S.F.; Veldhuis, J.B.J.; Lycklama a Nijeholt, J.A.

    2013-10-15

    Thermally driven sorption systems can provide significant energy savings, especially in industrial applications. The driving temperature required for operation of such systems limits the operating window and can be a barrier to market introduction. By adding a compressor, the sorption cycle can be run using lower waste-heat temperatures. ECN has recently started the development of such a hybrid heat pump. The final goal is to develop a hybrid heat pump for upgrading lower-temperature (<100°C) industrial waste heat to above pinch temperatures. The paper presents the first measurements and model calculations of a hybrid heat pump system using a water-silica gel system combined with a Roots-type compressor. From the measurements it can be seen that the effect of the compressor depends on where in the cycle it is placed. When placed between the evaporator and the sorption reactor, it has a considerably larger effect than a compressor placed between the sorption reactor and the condenser. The latter hardly improves the performance compared to purely heat-driven operation. This shows the importance of studying the interaction between all components of the system. The model, which shows reasonable correlation with the measurements, could prove to be a valuable tool to determine the optimal hybrid heat pump configuration.

  8. Izbor optimalnog puta za kretanje organizovanog kolonskog saobraćajnog toka na osnovu rezultata modeliranja / Choosing an optimal route for organized vehicle movement based on modeling results

    Directory of Open Access Journals (Sweden)

    Radomir S. Gordić

    2006-04-01

    Full Text Available U toku planiranja i praktične realizacije zadataka jedinica Vojske SCG često se javlja problem izbora optimalnog puta između dva mesta (čvora) na putnoj mreži. Kriterijumi optimizacije mogu biti različiti. Ovaj projekat treba da omogući brzo i lako određivanje optimalnog puta, primenom dinamičkog programiranja (DP), uz korišćenje Belmanovog (Bellman) algoritma u zavisnosti od izabranog kriterijuma - parametra. Kriterijum optimizacije je minimalno vreme kretanja (putovanja), koje je dobijeno imitacionim modeliranjem kolonskog saobraćajnog toka. Razrađeni algoritam omogućuje izbor optimalnog puta, za bilo koja dva čvora na mreži. / During the planning and practical realization of tasks of Serbia and Montenegro Army units, a problem which often occurs is choosing an optimal route between two places (nodes) on the road network. The optimization criteria can vary. This project should enable quick and easy determination of the optimal route by applying dynamic programming (DP) with Bellman's algorithm, depending on the chosen criterion (parameter). The optimization criterion here is the minimum movement (travel) time, obtained by imitational modeling of a column traffic flow. The developed algorithm enables the choice of an optimal route between any two nodes on the network.
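The record gives no implementation, but the described approach, dynamic programming with Bellman's algorithm minimizing travel time between two network nodes, can be sketched as follows (the node numbering and travel times are invented for illustration):

```python
def bellman_ford(edges, n_nodes, source):
    # Single-source shortest travel times by Bellman's relaxation.
    # edges: list of (u, v, travel_time); nodes are numbered 0..n_nodes-1.
    INF = float("inf")
    time = [INF] * n_nodes
    pred = [None] * n_nodes
    time[source] = 0.0
    for _ in range(n_nodes - 1):          # at most n-1 relaxation passes suffice
        updated = False
        for u, v, w in edges:
            if time[u] + w < time[v]:
                time[v], pred[v] = time[u] + w, u
                updated = True
        if not updated:
            break
    return time, pred

def optimal_route(pred, target):
    # Walk the predecessor chain back from the target node.
    path = []
    while target is not None:
        path.append(target)
        target = pred[target]
    return path[::-1]

# Hypothetical road network: (from, to, travel time in minutes)
edges = [(0, 1, 10), (0, 2, 25), (1, 2, 10), (1, 3, 30), (2, 3, 10)]
times, pred = bellman_ford(edges, 4, source=0)
best = optimal_route(pred, 3)   # fastest route from node 0 to node 3
```

In the paper's setting the edge weights would be the travel times produced by the imitational traffic-flow model rather than fixed constants.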

  9. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationships between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  10. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more

  12. Atlas-based functional radiosurgery: Early results

    Energy Technology Data Exchange (ETDEWEB)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy)

    2009-02-15

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment steroids for radiation-induced edema and medications for dystonia and neuropathic pain were suppressed. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment edema had disappeared. Thus, this work shows promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.

  13. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of thermally induced dislocations. (orig.).

  14. Standard Model physics results from ATLAS and CMS

    CERN Document Server

    Dordevic, Milos

    2015-01-01

    The most recent results of Standard Model physics studies in proton-proton collisions at 7 TeV and 8 TeV center-of-mass energy based on data recorded by ATLAS and CMS detectors during the LHC Run I are reviewed. This overview includes studies of vector boson production cross section and properties, results on V+jets production with light and heavy flavours, latest VBS and VBF results, measurement of diboson production with an emphasis on ATGC and QTGC searches, as well as results on inclusive jet cross sections with strong coupling constant measurement and PDF constraints. The outlined results are compared to the prediction of the Standard Model.

  15. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  16. Recent MEG Results and Predictive SO(10) Models

    CERN Document Server

    Fukuyama, Takeshi

    2011-01-01

    Recent MEG results of a search for the lepton flavor violating (LFV) muon decay, $\mu \to e \gamma$, show 3 events as the best value for the number of signals in the maximum likelihood fit. Although this result is still far from evidence or discovery from a statistical point of view, it might be a sign of new physics beyond the Standard Model. As is well known, supersymmetric (SUSY) models can generate a $\mu \to e \gamma$ decay rate within the search reach of the MEG experiment. A certain class of SUSY grand unified theory (GUT) models, such as the minimal SUSY SO(10) model (we call this class of models "predictive SO(10) models"), can unambiguously determine the fermion Yukawa coupling matrices, in particular the neutrino Dirac Yukawa matrix. Based on universal boundary conditions for the soft SUSY breaking parameters at the GUT scale, we calculate the rate of the $\mu \to e \gamma$ process using the completely determined Dirac Yukawa matrix in two examples of predictive SO(10) models. If we ...

  17. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
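For context, the Markovian benchmark that this non-Markovian result generalizes is the extended Ramsey rule. In standard textbook notation (these symbols are the conventional ones, not necessarily the paper's), with rate of pure time preference $\delta$, coefficient of relative risk aversion $\eta$ of the isoelastic utility, and mean $g$ and variance $\sigma^2$ of log-consumption growth, the risk-free discount rate is

```latex
r = \delta + \eta g - \tfrac{1}{2}\,\eta^{2}\sigma^{2}
```

Roughly speaking, persistent correlations in the growth process make the effective variance term grow with horizon, which is what drives the declining long-term tail of the discount curve.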

  18. Modeling Results for the ITER Cryogenic Fore Pump

    Science.gov (United States)

    Zhang, Dongsheng

    The work presented here is the analysis and modeling of the ITER Cryogenic Fore Pump (CFP), also called the Cryogenic Viscous Compressor (CVC). Unlike common cryopumps, which are usually used to create and maintain vacuum, the cryogenic fore pump is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Because the ITER-CFP works in the viscous flow regime, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with the standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum, and energy and the equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition of the gas mixture. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. This document presents three numerical models: a transient model, a steady state model, and a hemisphere (or molecular flow) model. The first two models are developed based on analysis of the raw experimental data, while the third model is developed as a preliminary study. The modeling results are compared with available experimental data for verification. The models can be used for cryopump design, and can also be applied to problems such as loss of vacuum in a cryomodule or cryogenic desublimation. The scientific and engineering investigation being done here builds connections between Mechanical Engineering and other disciplines, such as Chemical Engineering, Physics, and Chemistry.

  19. Proceedings Tenth Workshop on Model Based Testing

    OpenAIRE

    Pakulin, Nikolay; Petrenko, Alexander K.; Schlingloff, Bernd-Holger

    2015-01-01

    The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but is capable also to find errors in the models themselves. The intent of this workshop is to bring together researchers and users of model...

  20. Why Does a Kronecker Model Result in Misleading Capacity Estimates?

    CERN Document Server

    Raghavan, Vasanthan; Sayeed, Akbar M

    2008-01-01

    Many recent works that study the performance of multi-input multi-output (MIMO) systems in practice assume a Kronecker model where the variances of the channel entries, upon decomposition on to the transmit and the receive eigen-bases, admit a separable form. Measurement campaigns, however, show that the Kronecker model results in poor estimates for capacity. Motivated by these observations, a channel model that does not impose a separable structure has been recently proposed and shown to fit the capacity of measured channels better. In this work, we show that this recently proposed modeling framework can be viewed as a natural consequence of channel decomposition on to its canonical coordinates, the transmit and/or the receive eigen-bases. Using tools from random matrix theory, we then establish the theoretical basis behind the Kronecker mismatch at the low- and the high-SNR extremes: 1) Sparsity of the dominant statistical degrees of freedom (DoF) in the true channel at the low-SNR extreme, and 2) Non-regul...

  1. Challenges in validating model results for first year ice

    Science.gov (United States)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates as it is advected inside the model domain. Ice that is younger than 365 days is classified as first year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which include "ice type", a representation of the separation of regions between those infested by first year ice, and those infested by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and from the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data, and the product's confidence level, which have a strong seasonal dependency.

  2. RESULTS OF INTERBANK EXCHANGE RATES FORECASTING USING STATE SPACE MODEL

    Directory of Open Access Journals (Sweden)

    Muhammad Kashif

    2008-07-01

    Full Text Available This study evaluates the performance of three alternative models for forecasting the daily interbank exchange rate of the U.S. dollar measured in Pak rupees. Simple ARIMA models and more complex models, such as GARCH-type models and a state space model, are discussed and compared. Four different measures are used to evaluate forecasting accuracy. The main result is that the state space model provides the best performance among all the models.
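The record does not specify which state space form the authors used; as a hedged sketch, one standard choice for exchange rates is the local level (random walk plus noise) model filtered with a scalar Kalman filter (the PKR/USD quotes and noise variances below are invented for illustration):

```python
def local_level_forecast(y, sigma_eps=1.0, sigma_eta=0.1):
    # Local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
    # A scalar Kalman filter tracks the level mu_t; because the state is a
    # random walk, every h-step-ahead forecast equals the last filtered level.
    a, p = 0.0, 1e7                   # diffuse initial state and variance
    for obs in y:
        p = p + sigma_eta ** 2        # prediction step (random-walk state)
        f = p + sigma_eps ** 2        # innovation variance
        k = p / f                     # Kalman gain
        a = a + k * (obs - a)         # filtered level
        p = (1.0 - k) * p
    return a

rates = [85.2, 85.4, 85.3, 85.6, 85.8]   # hypothetical daily PKR/USD quotes
forecast = local_level_forecast(rates)
```

In practice the noise variances would be estimated by maximum likelihood rather than fixed, and forecast accuracy compared against ARIMA and GARCH alternatives with measures such as RMSE or MAE.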

  3. Kernel model-based diagnosis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The methods for computing the kernel consistency-based diagnoses and the kernel abductive diagnoses are only suited to situations where part of the fault behavioral modes of the components are known. A characterization of kernel model-based diagnosis based on general causal theory is proposed, which can break through the limitation of the above methods when all behavioral modes of each component are known. Using this method, when the observation subsets deduced logically are respectively assigned to the empty set or the whole observation set, the kernel consistency-based diagnoses and the kernel abductive diagnoses can deal with all situations. The direct relationship between this diagnostic procedure and the prime implicants/implicates is proved, thus linking the theoretical result with its implementation.

  4. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade......A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within...

  5. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases th...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....

  6. Model-based tomographic reconstruction

    Science.gov (United States)

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  7. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm...... is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element....

  8. Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation

    Science.gov (United States)

    Weeden, C.; Hejduk, M.

    At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J. M. Okada and M. D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and the angular rate of the object. This determines whether the sensor can technically detect an object, based on an explicit calculation of the object's probability of detection. In addition, recent upgrades at Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis. This in turn has provided enough data to study the modeling process in more detail in order to obtain the most accurate brightness prediction of satellites. An analysis of two years of brightness data gathered from optical sensors and modeled via KOI solutions is outlined in this paper. By comparison, geostationary objects (GEO) were tracked less than non-GEO objects but had higher-density tracking in phase angle due to artifices of scheduling. A statistically significant fit to a deterministic model was possible less than half the time in both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results, but such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed in both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of regionalized variable theory for this particular problem. Finally, the average nugget value, or
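
    The exponential variogram model mentioned above can be sketched directly. The empirical semivariances, the fixed nugget, and the parameter grids below are invented for illustration; real Kriging software would also fit the nugget and use a proper optimizer:

```python
import math

# The exponential variogram model:
#   gamma(h) = nugget + partial_sill * (1 - exp(-h / range_)),
# evaluated and fitted to a synthetic empirical variogram by grid search.

def exp_variogram(h, nugget, partial_sill, range_):
    return nugget + partial_sill * (1.0 - math.exp(-h / range_))

# Synthetic "empirical" semivariances at a few lag distances:
lags = [5, 10, 20, 40, 80]
emp = [exp_variogram(h, 0.2, 1.0, 25.0) for h in lags]

best = None
for ps in [0.5 + 0.1 * i for i in range(10)]:       # trial partial sills
    for rg in [10.0 + 5.0 * j for j in range(10)]:  # trial ranges
        sse = sum((exp_variogram(h, 0.2, ps, rg) - e) ** 2
                  for h, e in zip(lags, emp))
        if best is None or sse < best[0]:
            best = (sse, ps, rg)

print(f"fitted partial sill {best[1]:.1f}, range {best[2]:.1f}")
```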

  9. Titan Chemistry: Results From A Global Climate Model

    Science.gov (United States)

    Wilson, Eric; West, R. A.; Friedson, A. J.; Oyafuso, F.

    2008-09-01

    We present results from a 3-dimensional global climate model of Titan's atmosphere and surface. This model, a modified version of NCAR's CAM-3 (Community Atmosphere Model), has been optimized for analysis of Titan's lower atmosphere and surface. With the inclusion of forcing from Saturn's gravitational tides, interaction with the surface, transfer of longwave and shortwave radiation, and parameterization of haze properties, constrained by Cassini observations, a dynamical field is generated, which serves to advect 14 long-lived species. The concentrations of these chemical tracers are also affected by 82 chemical reactions and the photolysis of 21 species, based on the Wilson and Atreya (2004) model, which provide sources and sinks for the advected species along with 23 additional non-advected radicals. In addition, the chemical contribution to haze conversion is parameterized, along with the microphysical processes that serve to distribute haze opacity throughout the atmosphere. References: Wilson, E.H. and S.K. Atreya, J. Geophys. Res., 109, E06002, 2004.

  10. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.

  11. U.S. electric power sector transitions required to achieve 80% reductions in economy-wide greenhouse gas emissions: Results based on a state-level model of the U.S. energy system

    Energy Technology Data Exchange (ETDEWEB)

    Iyer, Gokul C.; Clarke, Leon E.; Edmonds, James A.; Kyle, Gordon P.; Ledna, Catherine M.; McJeon, Haewon C.; Wise, M. A.

    2017-05-01

    The United States has articulated a deep decarbonization strategy for achieving a reduction in economy-wide greenhouse gas (GHG) emissions of 80% below 2005 levels by 2050. Achieving such deep emissions reductions will entail a major transformation of the energy system, and of the electric power sector in particular. This study uses a detailed state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA) to demonstrate pathways for the evolution of the U.S. electric power sector that achieve 80% economy-wide reductions in GHG emissions by 2050. The pathways presented in this report are based on feedback received during a workshop of experts organized by the U.S. Department of Energy's Office of Energy Policy and Systems Analysis. Our analysis demonstrates that achieving deep decarbonization by 2050 will require substantial decarbonization of the electric power sector, resulting in increased deployment of zero-carbon and low-carbon technologies such as renewables and carbon capture, utilization and storage. The present results also show that the degree to which the electric power sector will need to decarbonize, and low-carbon technologies will need to deploy, depends on the nature of technological advances in the energy sector, the ability of end-use sectors to electrify, and the level of electricity demand.

  12. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed-behaviour assumption which is considered...... the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of estimation precision....
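
    The HMM machinery behind such a trust model can be sketched with the standard forward algorithm on a two-state chain whose hidden states are a principal's behaviour modes. All probabilities below are invented for illustration and are not the paper's estimated parameters:

```python
# Forward algorithm for a two-state HMM: hidden states are behaviour modes
# ("reliable"/"unreliable"); observations are interaction outcomes.

states = ("reliable", "unreliable")
start = {"reliable": 0.5, "unreliable": 0.5}
trans = {"reliable":   {"reliable": 0.9, "unreliable": 0.1},
         "unreliable": {"reliable": 0.2, "unreliable": 0.8}}
emit  = {"reliable":   {"ok": 0.9, "fail": 0.1},
         "unreliable": {"ok": 0.3, "fail": 0.7}}

def filter_state(observations):
    """Return P(state | observations), updated one outcome at a time."""
    belief = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        belief = {s: emit[s][obs] *
                     sum(belief[t] * trans[t][s] for t in states)
                  for s in states}
    z = sum(belief.values())
    return {s: v / z for s, v in belief.items()}

print(filter_state(["ok", "ok", "fail", "ok", "ok"]))
```

    Unlike a Beta model with a fixed success probability, the transition matrix lets the estimated trust recover after a failure, which is the dynamic-behaviour point the abstract makes.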

  13. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  14. Model-based Software Engineering

    DEFF Research Database (Denmark)

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  16. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  17. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the excavation and operational phases, the initial period of temperate climate after closure, and the remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in an SR-Site-specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  18. Geochemical controls on shale groundwaters: Results of reaction path modeling

    Energy Technology Data Exchange (ETDEWEB)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions, under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and at temperatures as high as 250°C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic database is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while in the smectitic cases they remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.

  19. Some Results on Ethnic Conflicts Based on Evolutionary Game Simulation

    CERN Document Server

    Qin, Jun; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin

    2014-01-01

    The force of ethnic separatism, essentially originating from the negative effects of ethnic identity, damages the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and establish harmonious ethnic relationships, some scholars have proposed the viewpoint that ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict based on evolutionary game theory. The simulation results indicate that: 1) the ratio of individuals with civic identity has a positive association with the frequency of ethnic conflicts; 2) ethnic conflict will not die out by killing all ethni...

  20. Compressible Turbulent Channel Flows: DNS Results and Modeling

    Science.gov (United States)

    Huang, P. G.; Coleman, G. N.; Bradshaw, P.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    The present paper addresses some topical issues in modeling compressible turbulent shear flows. The work is based on direct numerical simulation of two supersonic fully developed channel flows between very cold isothermal walls. Detailed decomposition and analysis of terms appearing in the momentum and energy equations are presented. The simulation results are used to provide insights into differences between conventional time- and Favre-averaging of the mean-flow and turbulent quantities. Study of the turbulence energy budget for the two cases shows that the compressibility effects due to turbulent density and pressure fluctuations are insignificant. In particular, the dilatational dissipation and the mean product of the pressure and dilatation fluctuations are very small, contrary to the results of simulations for sheared homogeneous compressible turbulence and to recent proposals for models for general compressible turbulent flows. This provides a possible explanation of why the Van Driest density-weighted transformation is so successful in correlating compressible boundary layer data. Finally, it is found that the DNS data do not support the strong Reynolds analogy. A more general representation of the analogy is analysed and shown to match the DNS data very well.
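
    The distinction between conventional time (Reynolds) averaging and density-weighted Favre averaging mentioned above reduces to a one-line formula, u_F = mean(rho*u)/mean(rho). A toy numerical illustration with invented samples:

```python
# Reynolds (time) average vs Favre (density-weighted) average of a velocity
# signal; the two coincide only when density fluctuations are uncorrelated
# with the velocity. Samples below are invented for illustration.

rho = [1.0, 0.8, 1.2, 0.9, 1.1]   # density samples
u   = [2.0, 2.5, 1.8, 2.4, 1.9]   # velocity samples

u_bar   = sum(u) / len(u)                                 # Reynolds average
u_tilde = sum(r * v for r, v in zip(rho, u)) / sum(rho)   # Favre average

print(f"Reynolds: {u_bar:.4f}  Favre: {u_tilde:.4f}")
```

    Here density and velocity are anticorrelated, so the Favre average comes out lower than the Reynolds average.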

  1. Urban traffic noise assessment by combining measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Graafland, F.; Wessels, P.W.; Basten, T.G.H.

    2013-01-01

    A model based monitoring system is applied on a local scale in an urban area to obtain a better understanding of the traffic noise situation. The system consists of a scalable sensor network and an engineering model. A better understanding is needed to take appropriate and cost efficient measures,

  2. Probabilistic Model-Based Safety Analysis

    CERN Document Server

    Güdemann, Matthias; 10.4204/EPTCS.28.8

    2010-01-01

    Model-based safety analysis approaches aim at finding critical failure combinations by analysis of models of the whole system (i.e. software, hardware, failure modes and environment). The advantage of these methods compared to traditional approaches is that the analysis of the whole system gives more precise results. Only a few model-based approaches have been applied to answer quantitative questions in safety analysis, often limited to analysis of specific failure propagation models, limited types of failure modes or without system dynamics and behavior, as direct quantitative analysis uses large amounts of computing resources. New achievements in the domain of (probabilistic) model-checking now allow for overcoming this problem. This paper shows how functional models based on synchronous parallel semantics, which can be used for system design, implementation and qualitative safety analysis, can be directly re-used for (model-based) quantitative safety analysis. Accurate modeling of different types of proba...

  3. Model-based consensus

    NARCIS (Netherlands)

    Boumans, Marcel

    2014-01-01

    The aim of the rational-consensus method is to produce “rational consensus”, that is, “mathematical aggregation”, by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  5. Modelling nitrogen and phosphorus cycles and dissolved oxygen in the Zhujiang Estuary Ⅱ. Model results

    Institute of Scientific and Technical Information of China (English)

    Guan Weibing; Wong Lai-Ah; Xu Dongfeng

    2001-01-01

    In the present study, the ecosystem-based water quality model was applied to the Pearl River (Zhujiang) Estuary. The model results successfully represent the distribution trends of nutrients and dissolved oxygen in both the horizontal and vertical planes during the flood season, showing that the model takes into consideration the key dynamical, chemical and biological processes in the Zhujiang Estuary. Further studies illustrate that nitrogen is plentiful, while phosphorus and light limit the phytoplankton biomass in the Zhujiang Estuary during the flood season.

  6. Efficient Model-Based Exploration

    NARCIS (Netherlands)

    Wiering, M.A.; Schmidhuber, J.

    1998-01-01

    Model-Based Reinforcement Learning (MBRL) can greatly profit from using world models for estimating the consequences of selecting particular actions: an animat can construct such a model from its experiences and use it for computing rewarding behavior. We study the problem of collecting useful exper

  7. The Effect of Bathymetric Filtering on Nearshore Process Model Results

    Science.gov (United States)

    2009-01-01

    Plant, Nathaniel G.; Edwards, Kacey L.; Kaihatu, James M.; Veeramony, Jayaram; Hsu, Yuan-Huang L.; Holland, K. Todd. ...assimilation efforts that require this information. Published by Elsevier B.V. Nearshore process models are capable of predicting

  8. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding their origin, evolution, and internal structures. Technological improvements of remote sensors and associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues for planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and in petrologic studies of Martian SNC meteorites. Thermal emission data were used to suggest two lithologies occurring on the Martian surface: basalt with abundant plagioclase and clinopyroxene, and andesite, dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations, future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely-acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, to chiefly particulate samples, and as a result cannot be considered objective techniques for predicting compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria in the modeling are the next required steps. This is the focus of the present investigation.
We present results of

  9. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimoto, T.; Hessheimer, M.F.; Luk, V.K.

    1999-03-01

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  10. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which realize exact modeling of the global system. Compared to other methods, the simulation results show good performance: the estimation is simple, effective and reliable.
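
    The model-on-demand idea can be sketched as a local first-order polynomial fit to the historical samples nearest the current working point. The data, neighbour count and unweighted least-squares fit below are simplifying assumptions for illustration, not the paper's exact LPF algorithm:

```python
# Model-on-demand sketch: to predict at a query point, take the k nearest
# historical samples and fit a local first-order polynomial (a line) to
# them by ordinary least squares.

def local_linear_predict(data, x_q, k=10):
    """Fit y = a + b*x to the k samples nearest x_q; return prediction."""
    near = sorted(data, key=lambda p: abs(p[0] - x_q))[:k]
    n = len(near)
    sx = sum(x for x, _ in near); sy = sum(y for _, y in near)
    sxx = sum(x * x for x, _ in near); sxy = sum(x * y for x, y in near)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # local slope
    a = (sy - b * sx) / n                           # local intercept
    return a + b * x_q

# Historical input-output records of a nonlinear system, here y = x**2:
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-50, 51)]
print(round(local_linear_predict(data, 2.0), 3))  # close to 4.0
```

    As the working point moves, a fresh local model is fitted from the matching data, which is how multiple local models cover the global nonlinear system.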

  11. Results from a new Cocks-Ashby style porosity model

    Science.gov (United States)

    Barton, Nathan

    2017-01-01

    A new porosity evolution model is described, along with preliminary results. The formulation makes use of a Cocks-Ashby style treatment of porosity kinetics that includes rate dependent flow in the mechanics of porosity growth. The porosity model is implemented in a framework that allows for a variety of strength models to be used for the matrix material, including ones with significant changes in rate sensitivity as a function of strain rate. Results of the effect of changing strain rate sensitivity on porosity evolution are shown. The overall constitutive model update involves the coupled solution of a system of nonlinear equations.

  12. Didactic Strategy Discussion Based on Artificial Neural Networks Results.

    Science.gov (United States)

    Andina, D.; Bermúdez-Valbuena, R.

    2009-04-01

    Artificial Neural Networks (ANNs) are a mathematical model of the main known characteristics of biological brain dynamics. ANNs inspired by biological reality have been useful in designing machines that show some human-like behaviours. Based on them, many experiments have been successfully developed emulating several characteristics of biological neurons, such as learning how to solve a given problem. Sometimes, experiments on ANNs feed back into biology and allow advances in understanding biological brain behaviour, permitting the proposal of new therapies for medical problems involving neuronal performance. Following this line, the authors present results on artificial learning in ANNs and interpret them with the aim of reinforcing one of the following two didactic strategies for learning how to solve a given difficult task: a) to train with clear, simple, representative examples and trust the brain's generalization capabilities to achieve success in more complicated cases; b) to teach with a set of difficult cases of the problem, trusting that the brain will efficiently solve the remaining cases if it is able to solve the difficult ones. The results may contribute to the discussion of how to design innovative, successful teaching strategies in the education field.

  13. Numerical Results of 3-D Modeling of Moon Accumulation

    Science.gov (United States)

    Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr

    2014-05-01

    Until recently, the most widely used model of the Moon's origin has been the mega-impact model, in which the formation of the Earth and its satellite is the consequence of the Earth's collision with a body of Mercury's mass. But all dynamical models of the Earth's accumulation, and estimates from the Pb-Pb system, lead to the conclusion that the duration of planetary accumulation was about one billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that the energy dissipated by the decay of short-lived radioactive elements, first of all Al-26, is sufficient for heating even small bodies with dimensions of about 50-100 km up to the melting temperature of iron, so that a principally new differentiation mechanism can be realized. The inner parts of the melted pre-planets, which are mainly of iron content, can merge, while the cold silicate fragments return to the supply zone and additionally shift the composition of the Moon-forming material toward silicates. Only after the increase of the Earth's gravitational radius can the growing region of the future Earth's core also retain the silicate envelope fragments [3]. For understanding the further evolution of the Earth-Moon system it is significant to trace the origin and evolution of heterogeneities which arise during the accumulation stage. In this paper we model the evolution of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with a growing radius.
The boundary problem is solved by the finite-difference method for the system of equations which describe the process of accumulation: the Safronov equation, the impulse balance equation, the Navier-Stokes equation, and equations for the above-lithostatic pressure and heat conductivity in velocity-pressure variables using the Boussinesq approach. The numerical algorithm of the problem solution in velocity

  14. Empirically Based, Agent-based models

    Directory of Open Access Journals (Sweden)

    Elinor Ostrom

    2006-12-01

    Full Text Available There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.

  15. Updated Results for the Wake Vortex Inverse Model

    Science.gov (United States)

    Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
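
    The iterate-until-criterion structure of such an inverse model can be sketched generically. The exponential circulation-decay forward model, synthetic "observations" and grid search below are invented stand-ins for SHRAPA and its actual iteration scheme:

```python
import math

# Generic inverse-modeling loop: repeatedly run a forward model with trial
# parameters and keep the set that best matches the observations, stopping
# when the misfit criterion (here: exhausting a parameter grid) is met.

def forward(gamma0, decay, times):
    """Invented forward model: exponential circulation decay with wake age."""
    return [gamma0 * math.exp(-decay * t) for t in times]

times = [0, 5, 10, 15, 20, 25]           # wake age samples (invented)
observed = forward(400.0, 0.05, times)   # pretend lidar-derived circulation

best, best_err = None, float("inf")
for decay in [i / 1000.0 for i in range(10, 100)]:   # trial decay rates
    trial = forward(400.0, decay, times)
    err = sum((m - o) ** 2 for m, o in zip(trial, observed))
    if err < best_err:
        best, best_err = decay, err

print(f"estimated decay rate: {best}")   # recovers 0.05
```

    In the real system the unknowns also include the crosswind profile and several vortex parameters, and the forward runs are full SHRAPA simulations rather than a closed-form decay law.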

  16. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  17. Vaccination strategies for SEIR models using feedback linearization. Preliminary results

    CERN Document Server

    De la Sen, M; Alonso-Quesada, S

    2011-01-01

    A linearization-based feedback-control strategy for a SEIR epidemic model is discussed. The vaccination objective is the asymptotic tracking of the removed-by-immunity population to the total population, while simultaneously driving the remaining population (i.e. susceptible plus infected plus infectious) asymptotically to zero. The disease control policy is designed using a feedback linearization technique, which provides a general method for generating families of vaccination policies with a sound technical background.

  18. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that the most recent results of the E710 and UA4/2 collaborations for the total cross section and {rho}, together with earlier measurements, give good agreement with measurements of the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  19. Phylogenetic invariants for group-based models

    CERN Document Server

    Donten-Bury, Maria

    2010-01-01

    In this paper we investigate properties of algebraic varieties representing group-based phylogenetic models. We give the (first) example of a nonnormal general group-based model for an abelian group. Following Kaie Kubjas, we also determine some invariants of group-based models, showing that the associated varieties do not have to be deformation equivalent. We propose a method of generating many phylogenetic invariants, and in particular we show that our approach gives the whole ideal of the claw tree for the 3-Kimura model under the assumption of the conjecture of Sturmfels and Sullivant. This, combined with the results of Sturmfels and Sullivant, would make it possible to determine all phylogenetic invariants for any tree for the 3-Kimura model, and possibly for other group-based models.

  20. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related activit...

  1. Testing of Subgrid-Scale Stress Models by Using Results from Direct Numerical Simulations

    Institute of Scientific and Technical Information of China (English)

    Hongrui Gong

    1998-01-01

    The most commonly used dynamic subgrid models, Germano's model and the dynamic kinetic energy model, and their base models, the Smagorinsky model and the kinetic energy model, were tested using results from direct numerical simulations of various turbulent flows. In Germano's dynamic model, the model coefficient was treated as a constant within the test filter. This treatment is conceptually inconsistent. An iteration procedure was proposed to calculate the model coefficient, and an improved correlation coefficient was found.

  2. Intelligent model-based OPC

    Science.gov (United States)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model is needed to predict the edge position (contour) of patterns on the wafer after lithographic processing. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations; consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed using a genetic algorithm.
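
At its core, a radial basis function network is a linear model over Gaussian activations centered on training points. The sketch below is illustrative only: the 2-D "segment features", the target relation, the center count, and the width are all invented (and the paper's genetic-algorithm tuning is not reproduced).

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial-basis activations of inputs X against the centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical training data: 2-D segment features -> edge shift
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2      # invented stand-in relation

centers = X[rng.choice(len(X), size=50, replace=False)]
Phi = rbf_design(X, centers, width=0.5)
# Output weights by regularized linear least squares
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ y)

def predict(Xq):
    """Predicted edge shift for query feature vectors Xq."""
    return rbf_design(Xq, centers, width=0.5) @ w
```

In the OPC setting, `predict` would supply the initial edge displacement for each segment, so the correction loop starts closer to convergence.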

  3. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost effective, gesture based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model like the original working equipment. For capturing and tracking the user interactions with the model image and sound processing techniques are used. VIP is a flexible and interactive prototyping method that has much application in ubiquitous computing environments. Different commercial as well as socio-economic applications and extension to interactive advertising of VIP are also discussed.

  4. Result Diversification Based on Query-Specific Cluster Ranking

    NARCIS (Netherlands)

    J. He (Jiyin); E. Meij; M. de Rijke

    2011-01-01

    Result diversification is a retrieval strategy for dealing with ambiguous or multi-faceted queries by providing documents that cover as many facets of the query as possible. We propose a result diversification framework based on query-specific clustering and cluster ranking,

  5. Result diversification based on query-specific cluster ranking

    NARCIS (Netherlands)

    He, J.; Meij, E.; de Rijke, M.

    2011-01-01

    Result diversification is a retrieval strategy for dealing with ambiguous or multi-faceted queries by providing documents that cover as many facets of the query as possible. We propose a result diversification framework based on query-specific clustering and cluster ranking, in which diversification

  7. Dynamic contrast-enhanced CT of head and neck tumors: perfusion measurements using a distributed-parameter tracer kinetic model. Initial results and comparison with deconvolution-based analysis

    Science.gov (United States)

    Bisdas, Sotirios; Konstantinou, George N.; Sherng Lee, Puor; Thng, Choon Hua; Wagenblast, Jens; Baghi, Mehran; San Koh, Tong

    2007-10-01

    The objective of this work was to evaluate the feasibility of a two-compartment distributed-parameter (DP) tracer kinetic model to generate functional images of several physiologic parameters from dynamic contrast-enhanced CT data obtained from patients with extracranial head and neck tumors, and to compare the DP functional images to those obtained by deconvolution-based DCE-CT data analysis. We performed post-processing of DCE-CT studies obtained from 15 patients with benign and malignant head and neck cancer. We introduced a DP model of the impulse residue function for a capillary-tissue exchange unit, which accounts for the processes of convective transport and capillary-tissue exchange. The calculated parametric maps represented blood flow (F), intravascular blood volume (v1), extravascular extracellular blood volume (v2), vascular transit time (t1), permeability-surface area product (PS), transfer ratios k12 and k21, and the fraction of extracted tracer (E). Based on the same regions of interest (ROI) analysis, we calculated the tumor blood flow (BF), blood volume (BV) and mean transit time (MTT) by using a modified deconvolution-based analysis taking into account the extravasation of the contrast agent for PS imaging. We compared the corresponding values by using Bland-Altman plot analysis. We outlined 73 ROIs including tumor sites, lymph nodes and normal tissue. The Bland-Altman plot analysis revealed that the two methods showed an acceptable degree of agreement for blood flow and can thus be used interchangeably for measuring this parameter. Slightly worse agreement was observed between v1 in the DP model and BV, but even here the two tracer kinetic analyses can be used interchangeably. Whether both techniques may be used interchangeably remained questionable for t1 and MTT, as well as for measurements of the PS values.
The application of the proposed DP model is feasible in the clinical routine and it can be used interchangeably for measuring

  8. Dynamic contrast-enhanced CT of head and neck tumors: perfusion measurements using a distributed-parameter tracer kinetic model. Initial results and comparison with deconvolution-based analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bisdas, Sotirios [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, 60590 Frankfurt (Germany); Konstantinou, George N [401 General Military Hospital, Athens (Greece); Lee, Puor Sherng [Department of Oncologic Imaging, National Cancer Centre, 169610 Singapore (Singapore); Thng, Choon Hua [Department of Oncologic Imaging, National Cancer Centre, 169610 Singapore (Singapore); Wagenblast, Jens [Department of Otorhinolaryngology, Johann Wolfgang Goethe University Hospital, 60590 Frankfurt (Germany); Baghi, Mehran [Department of Otorhinolaryngology, Johann Wolfgang Goethe University Hospital, 60590 Frankfurt (Germany); Koh, Tong San [Center for Modeling and Control of Complex Systems, Nanyang Technological University, 639798 Singapore (Singapore)]

    2007-10-21

    The objective of this work was to evaluate the feasibility of a two-compartment distributed-parameter (DP) tracer kinetic model to generate functional images of several physiologic parameters from dynamic contrast-enhanced CT data obtained from patients with extracranial head and neck tumors, and to compare the DP functional images to those obtained by deconvolution-based DCE-CT data analysis. We performed post-processing of DCE-CT studies obtained from 15 patients with benign and malignant head and neck cancer. We introduced a DP model of the impulse residue function for a capillary-tissue exchange unit, which accounts for the processes of convective transport and capillary-tissue exchange. The calculated parametric maps represented blood flow (F), intravascular blood volume (v{sub 1}), extravascular extracellular blood volume (v{sub 2}), vascular transit time (t{sub 1}), permeability-surface area product (PS), transfer ratios k{sub 12} and k{sub 21}, and the fraction of extracted tracer (E). Based on the same regions of interest (ROI) analysis, we calculated the tumor blood flow (BF), blood volume (BV) and mean transit time (MTT) by using a modified deconvolution-based analysis taking into account the extravasation of the contrast agent for PS imaging. We compared the corresponding values by using Bland-Altman plot analysis. We outlined 73 ROIs including tumor sites, lymph nodes and normal tissue. The Bland-Altman plot analysis revealed that the two methods showed an acceptable degree of agreement for blood flow and can thus be used interchangeably for measuring this parameter. Slightly worse agreement was observed between v{sub 1} in the DP model and BV, but even here the two tracer kinetic analyses can be used interchangeably. Whether both techniques may be used interchangeably remained questionable for t{sub 1} and MTT, as well as for measurements of the PS values.
The application of the proposed DP model is feasible in the clinical routine and it

  9. Probabilistic Model-Based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Andersen, Jakob; Prehn, Thomas

    2005-01-01

    A key issue in background subtraction is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data; the data is stored for fast access in a hierarchical manner. Bayesian propagation over time is used for proper model selection and tracking during model-based background subtraction. Bayes propagation is attractive in our application as it allows us to deal with uncertainties during tracking. We have tested our approach on suitable outdoor video data.

  10. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced.

  11. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced.

  12. Mathematical Existence Results for the Doi-Edwards Polymer Model

    Science.gov (United States)

    Chupin, Laurent

    2017-01-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.

  13. Comparison of NASCAP modelling results with lumped circuit analysis

    Science.gov (United States)

    Stang, D. B.; Purvis, C. K.

    1980-01-01

    Engineering design tools that can be used to predict the development of absolute and differential potentials by realistic spacecraft under geomagnetic substorm conditions are described. Two types of analyses are in use: (1) the NASCAP code, which computes quasistatic charging of geometrically complex objects with multiple surface materials in three dimensions; (2) lumped element equivalent circuit models that are used for analyses of particular spacecraft. The equivalent circuit models require very little computation time; however, they cannot account for effects, such as the formation of potential barriers, that are inherently multidimensional. Steady state potentials of structure and insulation are compared with those resulting from the equivalent circuit model.

  14. The East model: recent results and new progresses

    CERN Document Server

    Faggionato, Alessandra; Roberto, Cyril; Toninelli, Cristina

    2012-01-01

    The East model is a particular one dimensional interacting particle system in which certain transitions are forbidden according to some constraints depending on the configuration of the system. As such it has received particular attention in the physics literature as a special case of a more general class of systems referred to as kinetically constrained models, which play a key role in explaining some features of the dynamics of glasses. In this paper we give an extensive overview of recent rigorous results concerning the equilibrium and non-equilibrium dynamics of the East model together with some new improvements.

  15. Constraining hybrid inflation models with WMAP three-year results

    CERN Document Server

    Cardoso, A

    2006-01-01

    We reconsider the original model of quadratic hybrid inflation in light of the WMAP three-year results and study the possibility of obtaining a spectral index of primordial density perturbations, $n_s$, smaller than one from this model. The original hybrid inflation model naturally predicts $n_s\\geq1$ in the false vacuum dominated regime but it is also possible to have $n_s<1$ when the quadratic term dominates. We therefore investigate whether there is also an intermediate regime compatible with the latest constraints, where the scalar field value during the last 50 e-folds of inflation is less than the Planck scale.

  16. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo;

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants using computer vision. Most of these studies are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants, and we show that the extension improves the tracking results. A 3D model is constructed that resembles the body surface of an infant, where the model is based on simple geometric shapes and a hierarchical skeleton model.

  17. Some vaccination strategies for the SEIR epidemic model. Preliminary results

    CERN Document Server

    De la Sen, M; Alonso-Quesada, S

    2011-01-01

    This paper presents a vaccination-based control strategy for a SEIR (susceptible plus infected plus infectious plus removed populations) propagation disease model. The model takes the total population into account as a restraint on illness transmission, since its increase makes contacts between susceptible and infected individuals more difficult. The control objective is the asymptotic tracking of the removed-by-immunity population to the total population, while simultaneously driving the remaining population (i.e. susceptible plus infected plus infectious) asymptotically to zero.
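
For context, the uncontrolled SEIR dynamics underlying such vaccination strategies can be sketched with a minimal forward-Euler integration. The parameter values below are illustrative only, not taken from the paper, and no vaccination control term is included.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of the standard SEIR equations,
    normalized so that s + e + i + r = 1."""
    new_inf = beta * s * i                      # new infections per unit time
    return (s - dt * new_inf,
            e + dt * (new_inf - sigma * e),     # exposed become infectious at rate sigma
            i + dt * (sigma * e - gamma * i),   # infectious recover at rate gamma
            r + dt * gamma * i)

# Illustrative initial state and rates (contact, incubation, recovery)
s, e, i, r = 0.99, 0.0, 0.01, 0.0
beta, sigma, gamma, dt = 0.5, 0.2, 0.1, 0.1
for _ in range(int(200 / dt)):                  # simulate 200 time units
    s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt)
```

A vaccination control would add a transfer from the susceptible compartment directly into the removed one; the control design in the paper shapes that transfer so that r tracks the total population.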

  18. Results from Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, K.

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in execution of energy efficiency retrofits measures across the multiple regions a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  19. Results From Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, Kevin [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in execution of energy efficiency retrofits measures across the multiple regions a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  20. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere in the post-operating phase.

  1. Sketch-based Interfaces and Modeling

    CERN Document Server

    Jorge, Joaquim

    2011-01-01

    The field of sketch-based interfaces and modeling (SBIM) is concerned with developing methods and techniques to enable users to interact with a computer through sketching - a simple, yet highly expressive medium. SBIM blends concepts from computer graphics, human-computer interaction, artificial intelligence, and machine learning. Recent improvements in hardware, coupled with new machine learning techniques for more accurate recognition, and more robust depth inferencing techniques for sketch-based modeling, have resulted in an explosion of both sketch-based interfaces and pen-based computing

  2. CONSIDERATION OF RECOMMENDATIONS AT INNOVATION-BASED PROJECTS RESULTS FORECASTING

    OpenAIRE

    Argov Nikita Vladimirovich

    2012-01-01

    The purpose of this paper is to highlight the importance of considering the factor of word-of-mouth communication between clients when analyzing innovation-based projects. The paper offers a methodology for evaluating the importance of such analyses for different innovative projects. The result of the research is a refinement of demand forecasting when predicting the outcomes of innovation-based projects. The practical implications lie in the evaluation of such projects, especially by small and ...

  4. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions.

    Science.gov (United States)

    Collins, Maxwell D; Xu, Jia; Grady, Leo; Singh, Vikas

    2012-01-01

    We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages.
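
The Random Walker core referred to above reduces segmentation to solving a sparse linear system in the graph Laplacian, which is exactly the kind of sparse linear algebra that maps well to GPUs. The sketch below is a minimal single-image illustration on an invented 5-node chain graph; the paper's cosegmentation constraints and CUDA implementation are not reproduced.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# Weighted chain graph of 5 "pixels"; a weak edge (0.1) separates two regions.
n = 5
edge_weights = [1.0, 1.0, 0.1, 1.0]
L = lil_matrix((n, n))                      # combinatorial graph Laplacian
for k, w in enumerate(edge_weights):
    L[k, k] += w; L[k + 1, k + 1] += w
    L[k, k + 1] -= w; L[k + 1, k] -= w
L = L.tocsr()

seeds = np.array([0, 4])                    # foreground seed, background seed
seed_probs = np.array([1.0, 0.0])           # P(foreground) fixed at the seeds
free = np.array([1, 2, 3])                  # unseeded nodes

# Random Walker core: solve the sparse system  L_U x = -B x_M
L_U = L[free][:, free]
B = L[free][:, seeds]
x = spsolve(L_U.tocsc(), -(B @ seed_probs))
segmentation = x > 0.5                      # threshold the probabilities
```

The weak edge acts like a faint object boundary: nodes 1 and 2 receive high foreground probability, node 3 low, so the cut falls across the 0.1-weight edge.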

  5. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research in viewpoint-oriented requirements engineering and intelligent agents, we present the concept of a viewpoint agent and its abstract model, based on a meta-language for multiview requirements engineering. It provides a basis for consistency checking and integration of requirements from different viewpoints; at the same time, this checking and integration can be carried out automatically by virtue of an intelligent agent's autonomy, proactiveness and social ability. Finally, we illustrate the practical application of the model with a case study of a data flow diagram.

  6. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model comparison based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described by the same model description language (MDL). The requirements are transformed into a specification model and the programs into an implementation model. The elements and structures of the two models are then compared, and the differences between them are obtained. Based on these differences, a test suite is generated. Different MDLs can be chosen for the software under test. The use of two classical MDLs in MCST, the equivalence classes model and the extended finite state machine (EFSM) model, is described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.

  7. Reply: New results justify open discussion of alternative models

    Science.gov (United States)

    Newman, Andrew; Stein, Seth; Weber, John; Engeln, Joseph; Mao, Aitlin; Dixon, Timothy

    A millennium ago, Jewish sages wrote that “the rivalry of scholars increases wisdom.” In contrast, Schweig et al. (Eos, this issue) demand that “great caution” be exercised in discussing alternatives to their model of high seismic hazard in the New Madrid seismic zone (NMSZ). We find this view surprising; we have no objection to their and their coworkers' extensive efforts promoting their model in a wide variety of public media, but see no reason not to explore a lower-hazard alternative based on both new data and reanalysis of data previously used to justify their model. In our view, the very purpose of collecting new data and reassessing existing data is to promote spirited testing and improvement of existing hypotheses. For New Madrid, such open reexamination seems scientifically appropriate, given the challenge of understanding intraplate earthquakes, and socially desirable because of the public policy implications.

  8. Design of an impact evaluation using a mixed methods model--an explanatory assessment of the effects of results-based financing mechanisms on maternal healthcare services in Malawi.

    Science.gov (United States)

    Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela

    2014-04-22

In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and their catchment-area populations in four districts in Malawi. We describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative component consists of a controlled pre- and post-test design with multiple post-test measurements, which allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-differences analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain the heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives for clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders. In this explanatory design, a comprehensive understanding of the expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements, and across quantitative and qualitative elements.
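The difference-in-differences estimator at the core of the quantitative component reduces, in its simplest two-period form, to a subtraction of changes; a minimal sketch with purely hypothetical facility-level means (the indicator values below are illustrative, not study data):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the change in the intervention group
    minus the change in the control group over the same period."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean facility-delivery rates (fractions of births), pre/post.
effect = diff_in_diff(treat_pre=0.55, treat_post=0.70,
                      ctrl_pre=0.54, ctrl_post=0.60)
print(effect)  # ~0.09: a 9-percentage-point gain attributed to the intervention
```

Secular trends common to both groups cancel in the subtraction, which is what permits a causal reading under the parallel-trends assumption.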

  9. Bayesian Network Based XP Process Modelling

    Directory of Open Access Journals (Sweden)

    Mohamed Abouelela

    2010-07-01

    Full Text Available A Bayesian Network based mathematical model has been used for modelling the Extreme Programming software development process. The model is capable of predicting the expected finish time and the expected defect rate for each XP release. Therefore, it can be used to determine the success/failure of any XP project. The model takes into account the effect of three XP practices, namely: Pair Programming, Test Driven Development and Onsite Customer practices. The model's predictions were validated against two case studies. Results show the precision of our model, especially in predicting the project finish time.
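As a toy illustration of the Bayesian Network idea (the structure and all probabilities below are hypothetical placeholders, not the paper's calibrated model), the marginal probability of an on-time release is obtained by summing over a practice node:

```python
# Toy two-node Bayesian network: PairProgramming -> OnTimeRelease.
# All probabilities are hypothetical placeholders.
p_pair = 0.6                                 # P(pair programming adopted)
p_ontime_given = {True: 0.8, False: 0.5}     # P(on time | pair programming)

# Marginalize out the practice node: P(on time) = sum_x P(on time | x) P(x)
p_ontime = (p_ontime_given[True] * p_pair
            + p_ontime_given[False] * (1 - p_pair))
print(round(p_ontime, 2))  # 0.68
```

A full model would chain such conditional tables across all three practices and the defect-rate node in the same way.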

  10. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the Gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and the limitations of the analysis. (author)

  11. Wave-current interactions: model development and preliminary results

    Science.gov (United States)

    Mayet, Clement; Lyard, Florent; Ardhuin, Fabrice

    2013-04-01

    The coastal area concentrates many uses that require integrated management based on diagnostic and predictive tools to understand and anticipate the future of pollution from land or sea, and to learn more about natural hazards at sea or activity on the coast. Realistic modelling of coastal hydrodynamics needs to take into account various interacting processes, including tides, surges, and sea state (Wolf [2008]). These processes act at different spatial scales. Unstructured-grid models have shown the ability to satisfy these needs, given that a good mesh resolution criterion is used. We worked on adding a sea-state forcing to a hydrodynamic circulation model. The sea-state model is the unstructured version of WAVEWATCH III (Tolman [2008]) (the version developed at IFREMER, Brest (Ardhuin et al. [2010])), and the hydrodynamic model is the 2D barotropic module of the unstructured-grid finite element model T-UGOm (Le Bars et al. [2010]). We chose the radiation stress approach (Longuet-Higgins and Stewart [1964]) to represent the effect of surface waves (wind waves and swell) in the barotropic model, as previously done by Mastenbroek et al. [1993] and others. We present here some validation of the model against academic cases: a 2D plane beach (Haas and Warner [2009]) and a simple bathymetric step with an analytic solution for waves (Ardhuin et al. [2008]). In a second part we present a realistic application in the Ushant Sea during an extreme event. References: Ardhuin, F., N. Rascle, and K. Belibassakis, Explicit wave-averaged primitive equations using a generalized Lagrangian mean, Ocean Modelling, 20 (1), 35-60, doi:10.1016/j.ocemod.2007.07.001, 2008. Ardhuin, F., et al., Semiempirical Dissipation Source Functions for Ocean Waves. Part I: Definition, Calibration, and Validation, J. Phys. Oceanogr., 40 (9), 1917-1941, doi:10.1175/2010JPO4324.1, 2010. Haas, K. A., and J. C. Warner, Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and

  12. Exact results for car accidents in a traffic model

    Science.gov (United States)

    Huang, Ding-wei

    1998-07-01

    Within the framework of a recent model for car accidents on single-lane highway traffic, we study analytically the probability of the occurrence of car accidents. Exact results are obtained. Various scaling behaviours are observed. The linear dependence of the occurrence of car accidents on density is understood as the dominance of a single velocity in the distribution.

  13. The Result Integration Algorithm Based on Matching Strategy

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The following paper presents a new algorithm: a result integration algorithm based on a matching strategy. The algorithm extracts the title and the abstract of Web pages, calculates the relevance between the query string and the Web pages, decides which Web pages are accepted or rejected, and sorts them in the user interface. The experimental results clearly indicate that the new algorithm improves the precision of the meta-search engine. This technique is very useful for meta-search engines.
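The relevance step the abstract describes can be sketched as a weighted term-overlap score between the query and a page's title and abstract; the weighting scheme below is an assumption for illustration, not the paper's exact formula:

```python
def relevance(query, title, abstract, title_weight=2.0):
    """Score a page by term overlap with the query; title matches count more."""
    q = set(query.lower().split())
    if not q:
        return 0.0
    t = set(title.lower().split())
    a = set(abstract.lower().split())
    return (title_weight * len(q & t) + len(q & a)) / ((title_weight + 1) * len(q))

# Hypothetical result pages returned by member search engines: (title, abstract).
pages = [
    ("Meta-search engine design", "a result integration algorithm for web search"),
    ("Cooking pasta", "how to boil water and cook pasta quickly"),
]
query = "meta-search result integration"
ranked = sorted(pages, key=lambda p: relevance(query, p[0], p[1]), reverse=True)
print(ranked[0][0])  # the relevant page sorts first
```

A real integrator would also threshold the score to reject low-relevance pages before sorting, as the abstract notes.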

  14. Recent results in mirror based high power laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove; Nielsen, Jakob Skov; Elvang, Mads

    2004-01-01

    In this paper, recent results in high power laser cutting, obtained in research and development projects, are presented. Two types of mirror based focussing systems for laser cutting have been developed and applied in laser cutting studies on CO2-lasers up to 12 kW. In a shipyard environment, a cutting speed increase of over 100% relative to state-of-the-art cutting has been achieved.

  15. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  16. Assessment of Galileo modal test results for mathematical model verification

    Science.gov (United States)

    Trubert, M.

    1984-01-01

    The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity discovered in the course of the test necessitated running additional tests at the unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.

  17. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    content features. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other base line models. Experimental results...

  18. New global ICT-based business models

    DEFF Research Database (Denmark)

    The New Global Business Model (NEWGIBM) book describes the background, theory references, case studies, results and learning imparted by the NEWGIBM project, which is supported by ICT, to a research group during the period from 2005-2011. The book is a result of the efforts and the collaborative ... Contents: The Theoretical History and Background of Business Models; The Theoretical Background of Business Model Innovation; ICT - a Key Enabler in Innovating New Global Business Models; The NEWGIBM Research Methodology; The Analytical Model for NEWGIBM; Industry Service - Technology Centre; The KMD Case; Smart House Case; The Nano Solar Case; The Master Cat Case; The Pitfalls Of The Blue Ocean Strategy - Implications Of "The Six Paths Framework"; Network-Based Innovation - Combining Exploration and Exploitation?; Innovating New Business Models in Inter-firm Collaboration; NEW Global Business Models - What Did ...

  19. Model-Based Security Testing

    CERN Document Server

    Schieferdecker, Ina; Schneider, Martin; 10.4204/EPTCS.80.1

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing,...

  20. Modeling vertical loads in pools resulting from fluid injection. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-06-15

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  1. Distributed Prognostics based on Structural Model Decomposition

    Science.gov (United States)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition
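The decompose-then-compose idea can be caricatured in a few lines: each local submodel predicts its own end of life (EOL) from its local state, independently (and hence in parallel if desired), and the global prediction is composed from the local ones. The linear damage model and all numbers below are hypothetical, far simpler than the pump model in the paper:

```python
def local_eol(damage, wear_rate, threshold=1.0):
    """Time until this submodel's damage state crosses its failure threshold,
    assuming (hypothetically) linear damage growth."""
    return (threshold - damage) / wear_rate

# Hypothetical submodels from structural decomposition: (current damage, wear rate).
submodels = {"impeller": (0.4, 0.01), "bearing": (0.2, 0.05), "seal": (0.1, 0.02)}

# Solve each local prediction problem independently (parallelizable) ...
local_predictions = {name: local_eol(d, r) for name, (d, r) in submodels.items()}
# ... then compose: the system reaches EOL when its first subsystem does.
system_eol = min(local_predictions.values())
print(local_predictions, system_eol)  # the bearing fails first, at t = 16
```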

  2. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  3. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of these tests is to assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, i.e., whether each predicted change, be it an increase or decrease in a model variable, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  4. Some Results On The Modelling Of TSS Manufacturing Lines

    Directory of Open Access Journals (Sweden)

    Viorel MÎNZU

    2000-12-01

    Full Text Available This paper deals with the modelling of a particular class of manufacturing lines, governed by a decentralised control strategy so that they balance themselves. Such lines are known as “bucket brigades” and also as “TSS lines”, after their first implementation at Toyota in the 1970s. A first study of their behaviour was based upon modelling them as stochastic dynamic systems, which emphasised, in the frame of the so-called “Normative Model”, a sufficient condition for self-balancing, that is, for autonomous functioning at a steady production rate (stationary behaviour). Under some particular conditions, a simulation analysis of TSS lines can be made on non-linear block diagrams, showing that the state trajectories are piecewise continuous between occurrences of certain discrete events, which determine their discontinuity. TSS lines may therefore be modelled as hybrid dynamic systems, more specifically, systems with autonomous switching and autonomous impulses (jumps). A stability analysis of such manufacturing lines is enabled by modelling them as hybrid dynamic systems with discontinuous motions.

  5. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... for factors with differences in number of levels. For mixed models, where in general the relevant error terms for the fixed effects are not the pure residual error, it is suggested to base the d-prime-like interpretation on the residual error. The methods are illustrated on a multifactorial sensory profile...... inherently challenging effect size measure estimates in ANOVA settings....

  6. Simulating lightning into the RAMS model: implementation and preliminary results

    Directory of Open Access Journals (Sweden)

    S. Federico

    2014-05-01

    Full Text Available This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in the timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. The scores show the ability of the methodology to simulate daily lightning activity at different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  7. Modeling air quality over China: Results from the Panda project

    Science.gov (United States)

    Katinka Petersen, Anna; Bouarar, Idir; Brasseur, Guy; Granier, Claire; Xie, Ying; Wang, Lili; Wang, Xuemei

    2015-04-01

    China faces severe air pollution problems related to the rapid economic development of the past decade and increasing demand for energy. Air quality monitoring stations often report high levels of particulate matter and ozone all over the country. Given its long-term health impacts, air pollution has become a pressing problem not only in China but also in other Asian countries. The PANDA project is the result of cooperation between scientists from Europe and China, who joined efforts to better understand the processes controlling air pollution in China, improve methods for monitoring air quality, and elaborate indicators in support of European and Chinese policies. A modeling system for air pollution is being set up within the PANDA project; it includes advanced global (MACC, EMEP) and regional (WRF-Chem, EMEP) meteorological and chemical models to analyze and monitor air quality in China. The poster describes the accomplishments of the first year of the project. Model simulations for January and July 2010 are evaluated with satellite measurements (SCIAMACHY NO2 and MOPITT CO) and in-situ data (O3, CO, NOx, PM10 and PM2.5) observed at several surface stations in China. Using the WRF-Chem model, we investigate the sensitivity of the model performance to emissions (MACCity, HTAPv2), horizontal resolution (60 km, 20 km) and the choice of initial and boundary conditions.

  8. PDF-based heterogeneous multiscale filtration model.

    Science.gov (United States)

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
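The key structural idea, filtration efficiency as a PDF-weighted sum of per-collector contributions rather than a single mean-collector value, can be sketched with a discretized pore-size distribution; the efficiency function and the weights below are illustrative stand-ins, not the calibrated HMF model:

```python
def collector_efficiency(pore_um):
    """Illustrative stand-in: smaller pores capture particles more efficiently."""
    return 1.0 / (1.0 + (pore_um / 10.0) ** 2)

# Discretized pore-size PDF (pore diameter in microns -> probability weight),
# e.g. fitted from porosimetry data; weights sum to 1.
pores = [5.0, 10.0, 15.0, 20.0]
weights = [0.2, 0.4, 0.3, 0.1]

# Overall efficiency: sum of individual collector contributions, PDF-weighted.
efficiency = sum(w * collector_efficiency(d) for d, w in zip(pores, weights))
print(round(efficiency, 3))

# A classic mean model would instead evaluate at a single tuned pore size:
mean_model = collector_efficiency(sum(w * d for d, w in zip(pores, weights)))
```

The two values coincide only when the pore-size variance shrinks to zero, which mirrors the sensitivity result quoted in the abstract.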

  9. Exact results for the one dimensional asymmetric exclusion model

    Science.gov (United States)

    Derrida, B.; Evans, M. R.; Hakim, V.; Pasquier, V.

    1993-11-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices.
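For small systems the matrix-product steady state can be checked directly by brute force; the sketch below builds the master equation of the open-boundary totally asymmetric model with injection rate alpha = 1 and extraction rate beta = 1, iterates it to stationarity, and compares the steady-state current with the Catalan-number ratio J_N = C_N / C_{N+1} that follows from the matrix ansatz at these rates:

```python
from itertools import product
from math import comb

def tasep_current(n, alpha=1.0, beta=1.0):
    """Steady-state current of the open-boundary TASEP with n sites,
    found by brute-force power iteration on the master equation."""
    states = list(product((0, 1), repeat=n))
    idx = {s: i for i, s in enumerate(states)}
    rates = [[0.0] * len(states) for _ in states]
    for s in states:
        if s[0] == 0:                          # injection at the left boundary
            rates[idx[s]][idx[(1,) + s[1:]]] += alpha
        if s[-1] == 1:                         # extraction at the right boundary
            rates[idx[s]][idx[s[:-1] + (0,)]] += beta
        for i in range(n - 1):                 # bulk hops to the right
            if s[i] == 1 and s[i + 1] == 0:
                rates[idx[s]][idx[s[:i] + (0, 1) + s[i + 2:]]] += 1.0
    # Uniformized discrete-time chain, iterated to stationarity.
    dt = 0.1
    p = [1.0 / len(states)] * len(states)
    for _ in range(5000):
        q = [0.0] * len(states)
        for i in range(len(states)):
            q[i] += p[i] * (1.0 - dt * sum(rates[i]))
            for j, r in enumerate(rates[i]):
                q[j] += p[i] * dt * r
        p = q
    # Current through the right boundary = beta * P(last site occupied).
    return beta * sum(pi for pi, s in zip(p, states) if s[-1] == 1)

catalan = lambda k: comb(2 * k, k) // (k + 1)
for n in (1, 2, 3):
    exact = catalan(n) / catalan(n + 1)        # matrix-product result, alpha=beta=1
    print(n, round(tasep_current(n), 4), round(exact, 4))
```

The Catalan ratio tends to 1/4 as N grows, the familiar maximal-current value for this model.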

  10. Exact results for the one dimensional asymmetric exclusion model

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Evans, M.R.; Pasquier, V. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Service de Physique Theorique; Hakim, V. [Ecole Normale Superieure, 75 - Paris (France)

    1993-12-31

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author).

  11. APPLYING LOGISTIC REGRESSION MODEL TO THE EXAMINATION RESULTS DATA

    Directory of Open Access Journals (Sweden)

    Goutam Saha

    2011-01-01

    Full Text Available The binary logistic regression model is used to analyze the school examination results (scores) of 1002 students. The analysis is performed on the basis of the independent variables, viz. gender, medium of instruction, type of school, category of school, board of examination and location of school, where scores or marks are taken as the dependent variable. The odds ratio analysis compares the scores obtained in two examinations, viz. matriculation and higher secondary.
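The odds-ratio reading of a fitted binary logistic model can be illustrated in a few lines of pure Python on toy data (the data and the urban/rural covariate are invented for illustration; they are not the 1002-student dataset):

```python
import math

def fit_logistic(xs, ys, lr=1.0, steps=3000):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p                  # gradient w.r.t. intercept
            g1 += (y - p) * x            # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Toy data: x = 1 for an urban school, 0 for rural (hypothetical); y = 1 if passed.
xs = [0, 0, 0, 0, 1, 1, 1, 1]
ys = [0, 0, 1, 0, 1, 1, 0, 1]
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)
print(round(odds_ratio, 2))  # 9.0: urban pass-odds are 9x rural odds in this toy data
```

With a single binary predictor the fitted odds ratio reproduces the raw group odds exactly (3 vs 1/3 here), which is a useful sanity check on the fitting code.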

  12. Analytical results for a three-phase traffic model.

    Science.gov (United States)

    Huang, Ding-wei

    2003-10-01

    We study analytically a cellular automaton model which is able to present three different traffic phases on a homogeneous highway. The characteristics displayed in the fundamental diagram can be well discerned by analyzing the evolution of density configurations. Analytical expressions for the traffic flow and shock speed are obtained. The synchronized flow in the intermediate-density region is the result of an aggressive driving scheme and is determined mainly by the stochastic noise.
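The flow-versus-density analysis can be reproduced numerically for the simplest deterministic relative of such models, the Rule-184 cellular automaton (used here as a generic stand-in, not the specific model of the paper), whose exact fundamental diagram is flow = min(density, 1 - density):

```python
def step(cells):
    """One synchronous update of the Rule-184 traffic CA on a ring:
    a car advances iff the cell ahead is empty."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        if cells[i] == 1:
            if cells[(i + 1) % n] == 0:
                nxt[(i + 1) % n] = 1      # car moves forward
            else:
                nxt[i] = 1                # car is blocked and stays
    return nxt

def flow(cells, steps=200):
    """Average number of moves per site per step, measured after a transient."""
    n, moved = len(cells), 0
    for t in range(steps):
        if t >= steps // 2:
            moved += sum(cells[i] == 1 and cells[(i + 1) % n] == 0
                         for i in range(n))
        cells = step(cells)
    return moved / (steps // 2) / n

# Below the critical density 1/2 the flow equals the density; above it, 1 - density.
print(flow([1, 0] * 10 + [0] * 20))   # density 0.25 -> flow 0.25 (free flow)
print(flow([1] * 30 + [0] * 10))      # density 0.75 -> flow 0.25 (jammed)
```

The kink at density 1/2 is the deterministic analogue of the free-flow/jam transition discussed in the abstract; stochastic noise and slow-to-start rules are what open up the third, synchronized phase.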

  14. The Integrated Fuzzy AHP and Goal Programing Model Based on LCA Results for Industrial Waste Management by Using the Nearest Weighted Approximation of FN: Aluminum Industry in Arak, Iran

    Directory of Open Access Journals (Sweden)

    Ramin Zare

    2016-01-01

    Full Text Available The worldwide generation of recycled aluminum is increasing quickly owing to environmental considerations and continuously growing demand. Aluminum dross recycling, as the secondary aluminum process, has always been considered a problematic issue worldwide. The aim of this work is to propose a methodical and easy procedure for system selection, posed as an MCDM problem. Here, an evaluation method based on an integrated fuzzy AHP is presented to evaluate aluminum waste management systems, in which the weights of each pairwise comparison matrix are derived with a goal programming (GP) model. The functional unit, defined as 1000 kilograms, includes aluminum dross and aluminum scrap. The model is validated on the case of aluminum waste management in Arak. For the proposed integrated fuzzy AHP model, five alternatives are investigated. The results show that, according to the selected attributes, the best waste management alternative is the one involving 200 kg of primary aluminum ingot (99.5%) and 800 kg of secondary aluminum (98%, scrap), in which beneficiation activities are implemented, aluminum dross is recycled in the plant, and the remainder is finally landfilled.
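The AHP weighting step at the heart of the method can be illustrated with the crisp row geometric-mean approximation (the 3x3 comparison matrix below is invented for illustration; the paper derives its weights from fuzzy judgments via goal programming and the nearest weighted approximation):

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights by the row geometric-mean method."""
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical pairwise comparison of three criteria on the Saaty 1-9 scale;
# entry [i][j] says how much more important criterion i is than criterion j.
M = [[1,     3,     5],
     [1 / 3, 1,     2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # roughly [0.648, 0.23, 0.122]
```

The fuzzy variant replaces each crisp entry with a fuzzy number and recovers crisp weights at the end, but the normalization structure is the same.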

  15. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved is compared based on their degree of complexity. These results will inform us of ways in which we can improve the modelling of air quality within, and the climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  16. Multiscale agent-based consumer market modeling.

    Energy Technology Data Exchange (ETDEWEB)

    North, M. J.; Macal, C. M.; St. Aubin, J.; Thimmapuram, P.; Bragen, M.; Hahn, J.; Karr, J.; Brigham, N.; Lacy, M. E.; Hampton, D.; Decision and Information Sciences; Procter & Gamble Co.

    2010-05-01

    Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This need is particularly critical when the model must be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the detail this type of modeling requires. However, a complementary method - agent-based modeling - shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine whether brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model, and then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems, where it directly influenced managerial decision making and produced substantial cost savings.
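A minimal flavor of the approach, individual rules aggregating into a market-share outcome, can be sketched as follows (the price-versus-loyalty rule and every number are invented for illustration and bear no relation to the P&G model):

```python
import random

random.seed(42)                       # reproducible toy run
PRICES = {"X": 2.0, "Y": 3.0}         # brand X is cheaper (hypothetical)

def choose(consumer):
    """Individual rule: switch to the cheapest brand unless loyalty holds."""
    cheapest = min(PRICES, key=PRICES.get)
    if consumer["brand"] != cheapest and random.random() > consumer["loyalty"]:
        consumer["brand"] = cheapest
    return consumer["brand"]

# 1000 consumers, initially split evenly between brands, loyalty drawn in [0, 1).
consumers = [{"brand": "X" if i % 2 == 0 else "Y", "loyalty": random.random()}
             for i in range(1000)]

for week in range(10):                # ten purchase cycles
    picks = [choose(c) for c in consumers]

# The system-level outcome (market share) emerges from the individual rules.
share_x = picks.count("X") / len(picks)
print(share_x)  # most of brand Y's customers have drifted to the cheaper brand
```

A production-scale model layers retailer stocking and manufacturer pricing rules on top of consumer rules in the same bottom-up fashion.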

  17. The impacts of mantle phase transitions and the iron spin crossover in ferropericlase on convective mixing—is the evidence for compositional convection definitive? New results from a Yin-Yang overset grid-based control volume model

    Science.gov (United States)

    Shahnas, M. H.; Peltier, W. R.

    2015-08-01

    High-resolution seismic tomographic images from several subduction zones provide evidence for the inhibition of the downwelling of subducting slabs at the level of the 660 km depth seismic discontinuity. Furthermore, the inference of old (~140 Myr) sinking slabs below fossil subduction zones in the lower mantle has yet to be explained. We employ a control volume methodology to develop a new anelastically compressible model of three-dimensional thermal convection in the "mantle" of a terrestrial planet that fully incorporates the influence of large variations in material properties. The model also incorporates the influence of (1) multiple solid-solid pressure-induced phase transitions, (2) transformational superplasticity at 660 km depth, and (3) the high spin-low spin iron spin transition in ferropericlase at midmantle pressures. The message passing interface-parallelized code is successfully tested against previously published benchmark results. The high-resolution control volume models exhibit the same degree of radial layering as previously shown to be characteristic of otherwise identical 2-D axisymmetric spherical models. The layering is enhanced by the presence of moderate transformational superplasticity, and in the presence of the spin crossover in ferropericlase, stagnation of cold downwellings occurs in the range of spin crossover depths (~1700 km). Although this electronic spin transition has been suggested to be invisible seismically, recent high-pressure ab initio calculations suggest it to have a clear signature in body wave velocities which could provide an isochemical explanation of a seismological signature involving the onset of decorrelation between Vp and Vs that has come to be interpreted as requiring compositional layering.

  18. New DNS and modeling results for turbulent pipe flow

    Science.gov (United States)

    Johansson, Arne; El Khoury, George; Grundestam, Olof; Schlatter, Philipp; Brethouwer, Geert; Linne Flow Centre Team

    2013-11-01

    The near-wall region of turbulent pipe and channel flows (as well as zero-pressure-gradient boundary layers) has been shown to exhibit a very high degree of similarity in terms of all statistical moments and many other features, while the mean velocity profile in the two cases exhibits significant differences in the outer region. The wake part of the profile, i.e. the deviation from the log-law, in the outer region is of substantially larger amplitude in pipe flow as compared to channel flow (although weaker than in boundary layer flow). This intriguing feature has been well known but has no simple explanation. Model predictions typically give identical results for the two flows. We have analyzed a new set of DNS for pipe and channel flows (el Khoury et al. 2013, Flow, Turbulence and Combustion) for friction Reynolds numbers up to 1000 and made comparison calculations with differential Reynolds stress models (DRSM). We have strong indications that the key factor behind the difference in mean velocity in the outer region can be coupled to differences in the turbulent diffusion in this region. This is also supported by DRSM results, where interesting differences are seen depending on the sophistication of modeling the turbulent diffusion coefficient.

  19. Some Results on Optimal Dividend Problem in Two Risk Models

    Directory of Open Access Journals (Sweden)

    Shuaiqi Zhang

    2010-12-01

    Full Text Available The compound Poisson risk model and the compound Poisson risk model perturbed by diffusion are considered in the presence of a dividend barrier with solvency constraints. Moreover, this work extends the known result due to [1], which finds that the optimal dividend policy is of barrier type for a jump-diffusion model with exponentially distributed jumps. In this paper, it turns out that there can be two different solutions depending on the model’s parameters. Furthermore, an interesting result is given: the proportional transaction cost has no effect on the dividend barrier. The objective of the corporation is to maximize the cumulative expected discounted dividend payout with solvency constraints before the time of ruin. It is well known that under some reasonable assumptions, the optimal dividend strategy is a barrier strategy, i.e., there is a level b_{1}(b_{2}) so that whenever the surplus goes above the level b_{1}(b_{2}), the excess is paid out as dividends. However, the optimal level b_{1}(b_{2}) may be unacceptably low from a solvency point of view. Therefore, a constraint should be imposed on an insurance company not to pay out dividends unless the surplus has reached a level b^{1}_{c}>b_{1}(b^2_{c}>b_{2}). We show that in this case a barrier strategy at b^{1}_{c}(b^2_{c}) is optimal.
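
    The barrier strategy is easy to explore numerically. The sketch below Monte-Carlo-simulates a compound Poisson surplus with a dividend barrier b: premiums accrue continuously at rate c, claims arrive as a Poisson process with exponential sizes, any surplus above b is paid out as dividends, and the discounted payout is accumulated until ruin. All parameter values are illustrative, and the finite-horizon setup is a simplification of the paper's model, not its solution method.

```python
import math
import random

def discounted_dividends(b, c=2.0, lam=1.0, claim_mean=1.0, x0=5.0,
                         delta=0.05, horizon=200.0, rng=random):
    """One path of a compound Poisson surplus with dividend barrier b.
    Returns the present value of dividends paid before ruin (rate delta)."""
    t, x, pv = 0.0, x0, 0.0
    while t < horizon:
        w = rng.expovariate(lam)          # waiting time to the next claim
        reach = x + c * w                  # surplus if no barrier applied
        if reach > b:
            t_hit = t + max(0.0, (b - x) / c)   # moment the barrier is hit
            # While at the barrier, premiums are paid out continuously at
            # rate c; integrate c * exp(-delta * s) over [t_hit, t + w].
            pv += c * (math.exp(-delta * t_hit)
                       - math.exp(-delta * (t + w))) / delta
            x = b
        else:
            x = reach
        t += w
        x -= rng.expovariate(1.0 / claim_mean)  # claim size, mean claim_mean
        if x < 0:
            return pv                      # ruin: no further dividends
    return pv

random.seed(1)
n = 2000
est = sum(discounted_dividends(b=6.0) for _ in range(n)) / n
```

    Sweeping `b` over a grid with such a simulator gives a crude numerical picture of why an unconstrained optimum can sit at an "unacceptably low" barrier, which is the motivation for the solvency constraint above.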

  20. Modeling results for the ITER cryogenic fore pump

    Science.gov (United States)

    Zhang, D. S.; Miller, F. K.; Pfotenhauer, J. M.

    2014-01-01

    The cryogenic fore pump (CFP) is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Unlike common cryopumps, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed that include these highly coupled non-linear conservation equations. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. A 1D numerical model is presented here. The results include a comparison with experimental data for pure hydrogen flow and a prediction for hydrogen flow with trace helium. An advanced 2D model and a detailed explanation of the Group-Member technique are to be presented in subsequent papers.

  1. Mineral resources estimation based on block modeling

    Science.gov (United States)

    Bargawa, Waterman Sulistyana; Amri, Nur Ali

    2016-02-01

    The estimation in this paper uses three kinds of block models: nearest neighbor polygon, inverse distance squared and ordinary kriging. The techniques are weighting schemes based on the principle that the block content is a linear combination of the grade data of the samples around the block being estimated. The case study is in the Pongkor area, where the gold-silver resource is thought to occur as quartz veins formed by a hydrothermal process of epithermal type. Resource modeling includes data entry, statistical and variography analysis, topographic and geological modeling, block model construction, estimation parameters, model presentation and tabulation of mineral resources. The skewed distribution is handled here with a robust semivariogram. The mineral resource classification generated in this model is based on an analysis of the kriging standard deviation and the number of samples used in the estimation of each block. The research results are used to evaluate the performance of the OK and IDS estimators. Based on visual and statistical analysis, it is concluded that the OK model gives estimates closer to the data used for modeling.
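
    Of the three estimators, inverse distance squared is the simplest to illustrate: the block grade is estimated as the 1/d²-weighted mean of the surrounding sample grades, so nearby samples dominate. A minimal sketch, with invented coordinates and grades:

```python
def idw2(samples, x, y, eps=1e-12):
    """Inverse-distance-squared estimate of a grade at (x, y) from
    (xi, yi, grade) samples. A sample essentially at the target point
    is returned directly to avoid division by zero."""
    num = den = 0.0
    for xi, yi, grade in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 < eps:
            return grade
        w = 1.0 / d2          # the "inverse distance squared" weight
        num += w * grade
        den += w
    return num / den

# Three hypothetical drill-hole samples: (easting, northing, grade g/t).
samples = [(0.0, 0.0, 2.0), (10.0, 0.0, 4.0), (0.0, 10.0, 6.0)]
estimate = idw2(samples, 1.0, 1.0)   # dominated by the nearby 2.0 g/t sample
```

    Nearest neighbor polygon is the limiting case where all the weight goes to the single closest sample, while ordinary kriging replaces the 1/d² weights with weights derived from the semivariogram, which is why the skewness treatment above matters for OK but not for IDS.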

  2. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  3. Graphical model construction based on evolutionary algorithms

    Institute of Scientific and Technical Information of China (English)

    Youlong YANG; Yan WU; Sanyang LIU

    2006-01-01

    Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem, and it also consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by studying the local metric relationship of networks matching the dataset. This paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.
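
    The paper's tree construction is driven by a Bayesian Dirichlet metric; as a hedged stand-in, the sketch below builds a tree model with the closely related Chow-Liu recipe: score each variable pair by empirical mutual information and keep a maximum-weight spanning tree (Kruskal with union-find). The toy data are invented.

```python
import math
from collections import Counter

def mutual_info(data, i, j):
    """Empirical mutual information between discrete columns i and j."""
    n = len(data)
    pij = Counter((row[i], row[j]) for row in data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    mi = 0.0
    for (a, b), c in pij.items():
        mi += (c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
    return mi

def chow_liu_tree(data, n_vars):
    """Maximum spanning tree over pairwise MI weights (Kruskal)."""
    edges = sorted(((mutual_info(data, i, j), i, j)
                    for i in range(n_vars) for j in range(i + 1, n_vars)),
                   reverse=True)
    parent = list(range(n_vars))      # union-find forest
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # keep the edge only if it joins components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy population: variable 1 copies variable 0, variable 2 is independent.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 5
tree = chow_liu_tree(data, 3)
```

    A score-based builder in the paper's style would replace `mutual_info` with a Bayesian Dirichlet family score; the spanning-tree skeleton stays the same.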

  4. An Efficient Annotation of Search Results Based on Feature

    Directory of Open Access Journals (Sweden)

    A. Jebha

    2015-10-01

    Full Text Available With the increasing number of web databases, a major part of the deep web consists of database content. In several search engines, the encoded data in the result pages returned from the web often comes from structured databases, which are referred to as web databases (WDB). A result page returned from a WDB has multiple search records (SRR). Data units obtained from these databases are encoded into the dynamic result pages for manual processing. In order to make these units machine-processable, the relevant information is extracted and meaningful labels are assigned to the data. In this paper, feature ranking is proposed to extract the relevant information from the features extracted from a WDB. Feature ranking is practical for improving understanding of the data and identifying relevant features. This research explores the performance of the feature ranking process by using linear support vector machines with various features of the WDB for annotation of relevant results. Experimental results show that the proposed system performs better than earlier methods.
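
    A minimal sketch of ranking features by the weights of a linear SVM, in the spirit of the approach above but not the paper's implementation: a subgradient step on the regularized hinge loss trains the classifier, and features are then ranked by absolute weight. The feature names and data are invented.

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Primal linear SVM via subgradient descent on the hinge loss.
    y in {-1, +1}; returns one weight per feature (bias omitted)."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(len(w)):
                # hinge-loss subgradient plus L2 regularization
                grad = lam * w[j] - (yi * xi[j] if margin < 1 else 0.0)
                w[j] -= lr * grad
    return w

def rank_features(names, w):
    """Features with larger |weight| are deemed more relevant."""
    return sorted(names, key=lambda nm: abs(w[names.index(nm)]), reverse=True)

# Toy data: feature 0 ("price") separates the classes; feature 1 does not.
X = [[1.0, 0.3], [0.9, -0.2], [-1.0, 0.1], [-1.1, -0.3]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
ranking = rank_features(["price", "noise"], w)
```

    Ranking by |weight| is only meaningful when features are on comparable scales, so in practice the extracted WDB features would be normalized first.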

  5. Test results judgment method based on BIT faults

    Institute of Scientific and Technical Information of China (English)

    Wang Gang; Qiu Jing; Liu Guanjun; Lyu Kehong

    2015-01-01

    Built-in-test (BIT) is responsible for equipment fault detection, so the test data correctness directly influences diagnosis results. Equipment suffers all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses and interferences/faults are caused, so that the test course is influenced, resulting in non-credible results. Therefore it is necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would contribute to BIT reliability, but the existing anti-jamming research mainly addresses safeguard design and signal processing. This paper focuses on test result monitoring and BIT equipment (BITE) failure judgment, and a series of improved approaches is proposed. Firstly the stress influences on components are illustrated and the effects on the diagnosis results are summarized. Secondly a composite BIT program is proposed with information integration, and a stress monitor program is given. Thirdly, based on the detailed analysis of system faults and forms of BIT results, the test sequence control method is proposed. It assists BITE failure judgment and reduces error probability. Finally the validation cases prove that these approaches enhance credibility.

  6. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
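
    The two simplest combiners named above can be sketched in a few lines. This is a hedged illustration, not the DMIP implementation: the Simple Multi-model Average (SMA) is an equal-weight mean, and the weighted-average weights here are inverse mean-squared errors against observations, which is one common choice and may differ from the paper's WAM. All numbers are invented.

```python
def sma(member_preds):
    """Simple Multi-model Average: equal-weight mean at each time step."""
    return [sum(vals) / len(vals) for vals in zip(*member_preds)]

def wam(member_preds, observed):
    """Weighted average with weights proportional to 1/MSE of each member
    (assumes no member fits the observations exactly)."""
    mse = [sum((p - o) ** 2 for p, o in zip(m, observed)) / len(observed)
           for m in member_preds]
    w = [1.0 / e for e in mse]
    s = sum(w)
    w = [wi / s for wi in w]              # normalize weights to sum to 1
    return [sum(wi * m[t] for wi, m in zip(w, member_preds))
            for t in range(len(observed))]

obs = [10.0, 20.0, 30.0]                  # "observed" streamflow
members = [[11.0, 21.0, 31.0],            # good member: small error
           [16.0, 26.0, 36.0]]            # biased member: large error
simple = sma(members)
weighted = wam(members, obs)
```

    The example shows the paper's qualitative finding in miniature: the skill-weighted combination leans toward the better member, while the plain average is dragged toward the biased one; the bias-correcting schemes (MMSE, M3SE) go one step further by removing each member's systematic error before combining.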

  7. ITER CS Model Coil and CS Insert Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Martovetsky, N; Michael, P; Minervina, J; Radovinsky, A; Takayasu, M; Thome, R; Ando, T; Isono, T; Kato, T; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Ricci, M

    2000-09-07

    The Inner and Outer modules of the Central Solenoid Model Coil (CSMC) were built by US and Japanese home teams in collaboration with European and Russian teams to demonstrate the feasibility of a superconducting Central Solenoid for ITER and other large tokamak reactors. The CSMC mass is about 120 t, OD is about 3.6 m and the stored energy is 640 MJ at 46 kA and peak field of 13 T. Testing of the CSMC and the CS Insert took place at Japan Atomic Energy Research Institute (JAERI) from mid March until mid August 2000. This paper presents the main results of the tests performed.

  8. Model independent analysis of dark energy I: Supernova fitting result

    CERN Document Server

    Gong, Y

    2004-01-01

    The nature of dark energy is a mystery to us. This paper uses supernova data to explore the properties of dark energy with some model-independent methods. We first Taylor expand the scale factor $a(t)$ to find that the deceleration parameter $q_0<0$. This result invokes only the Robertson-Walker metric. Then we discuss several different parameterizations used in the literature. We find that $w_{\rm DE0}$ is less than -1 at the $1\sigma$ level. We also find that the transition redshift from the deceleration phase to the acceleration phase is $z_{\rm T}\sim 0.3$.

  9. Preliminary results of steel containment vessel model test

    Energy Technology Data Exchange (ETDEWEB)

    Luk, V.K.; Hessheimer, M.F. [Sandia National Labs., Albuquerque, NM (United States); Matsumoto, T.; Komine, K.; Arai, S. [Nuclear Power Engineering Corp., Tokyo (Japan); Costello, J.F. [Nuclear Regulatory Commission, Washington, DC (United States)

    1998-04-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11--12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.

  10. The physical model of a terraced plot: first results

    Science.gov (United States)

    Perlotto, Chiara; D'Agostino, Vincenzo; Buzzanca, Giacomo

    2017-04-01

    Terrace building expanded in the 19th century because of increased demographic pressure and the need to crop additional areas on steeper slopes. Terraces are also important in regulating the hydrological behavior of the hillslope. Few studies are available in the literature on rainfall-runoff processes and flood risk mitigation in terraced areas. Bench terraces, by reducing the terrain slope and the length of the overland flow, quantitatively control the runoff flow velocity, facilitating drainage and thus leading to a reduction of soil erosion. The study of the hydrologic-hydraulic function of terraced slopes is essential in order to evaluate their possible contribution to flood-risk mitigation while also preserving the landscape value. This research aims to better characterize the times of the hydrological response of a hillslope plot bounded by a dry-stone wall, considering both the overland flow and the groundwater. A physical model at quasi-real scale has been built to reproduce the behavior of a 3% outward-sloped terrace under bare soil conditions. The model consists of a steel box (1 m wide, 3.3 m long, 2 m high) containing the hillslope terrain. The terrain is equipped with two piezometers, 9 TDR sensors measuring the volumetric water content, a surface spillway at the head releasing the steady discharge under test, and a scale at the wall base to measure the outflowing discharge. The experiments deal with different initial moisture conditions (non-saturated and saturated) and discharges of 19.5, 12.0 and 5.0 l/min. Each experiment has been replicated, for a total number of 12 tests. The volumetric water content analysis produced by the 9 TDR sensors provided a quite satisfactory representation of the soil moisture during the runs. Then, different lag times at the outlet since the inflow initiation were measured both for runoff and groundwater. Moreover, the time of depletion and the piezometer

  11. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  12. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  13. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  14. An Agent Based Classification Model

    CERN Document Server

    Gu, Feng; Greensmith, Julie

    2009-01-01

    The major function of this model is to access the UCI Wisconsin Breast Cancer data-set[1] and classify the data items into two categories, normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving. The Dendritic Cell Algorithm (DCA)[2] is an AIS algorithm that is developed specifically for anomaly detection. It has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents could be the perfect representations of immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment ...

  15. Impact Flash Physics: Modeling and Comparisons With Experimental Results

    Science.gov (United States)

    Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.

    2015-12-01

    horizontal. High-speed radiometer measurements were made of the time-dependent impact flash at wavelengths of 350-1100 nm. We will present comparisons between these measurements and the output of APL's model. The results of this validation allow us to determine basic relationships between observed optical signatures and impact conditions.

  16. Subsea Permafrost Climate Modeling - Challenges and First Results

    Science.gov (United States)

    Rodehacke, C. B.; Stendel, M.; Marchenko, S. S.; Christensen, J. H.; Romanovsky, V. E.; Nicolsky, D.

    2015-12-01

    Recent observations indicate that the East Siberian Arctic Shelf (ESAS) releases methane, which stems from shallow hydrate seabed reservoirs. The total amount of carbon within the ESAS is so large that release of only a small fraction, for example via taliks, which are columns of unfrozen sediment within the permafrost, could distinctly impact the global climate. Therefore it is crucial to simulate the future fate of the ESAS' subsea permafrost with regard to changing atmospheric and oceanic conditions. However, only a few attempts to address the vulnerability of subsea permafrost have been made; instead, most studies have focused on the evolution of permafrost since the Late Pleistocene ocean transgression, approximately 14,000 years ago. In contrast to land permafrost modeling, any attempt to model the future fate of subsea permafrost needs to consider several additional factors, in particular the dependence of the freezing temperature on water depth and salt content and the differences in ground heat flux depending on the seabed properties. The amount of unfrozen water in the sediment also needs to be taken into account. Using a system of coupled ocean, atmosphere and permafrost models will allow us to capture the complexity of the different parts of the system and evaluate the relative importance of different processes. Here we present the first results of a novel approach by means of dedicated permafrost model simulations. These have been driven by conditions of the Laptev Sea region in East Siberia. By exploiting the ensemble approach, we will show how uncertainties in boundary conditions and applied forcing scenarios control the future fate of the subsea permafrost.

  17. DARK STARS: IMPROVED MODELS AND FIRST PULSATION RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Rindler-Daller, T.; Freese, K. [Department of Physics and Michigan Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Montgomery, M. H.; Winget, D. E. [Department of Astronomy, McDonald Observatory and Texas Cosmology Center, University of Texas, Austin, TX 78712 (United States); Paxton, B. [Kavli Insitute for Theoretical Physics, University of California, Santa Barbara, CA 93106 (United States)

    2015-02-01

    We use the stellar evolution code MESA to study dark stars (DSs). DSs, which are powered by dark matter (DM) self-annihilation rather than by nuclear fusion, may be the first stars to form in the universe. We compute stellar models for accreting DSs with masses up to 10^6 M_☉. The heating due to DM annihilation is self-consistently included, assuming extended adiabatic contraction of DM within the minihalos in which DSs form. We find remarkably good overall agreement with previous models, which assumed polytropic interiors. There are some differences in the details, with positive implications for observability. We found that, in the mass range of 10^4-10^5 M_☉, our DSs are hotter by a factor of 1.5 than those in Freese et al., are smaller in radius by a factor of 0.6, denser by a factor of three to four, and more luminous by a factor of two. Our models also confirm previous results, according to which supermassive DSs are very well approximated by (n = 3)-polytropes. We also perform a first study of DS pulsations. Our DS models have pulsation modes with timescales ranging from less than a day to more than two years in their rest frames, at z ∼ 15, depending on DM particle mass and overtone number. Such pulsations may someday be used to identify bright, cool objects uniquely as DSs; if properly calibrated, they might, in principle, also supply novel standard candles for cosmological studies.

  18. Convergence results for a coarsening model using global linearization

    CERN Document Server

    Gallay, T; Gallay, Th.

    2002-01-01

    We study a coarsening model describing the dynamics of interfaces in the one-dimensional Allen-Cahn equation. Given a partition of the real line into intervals of length greater than one, the model consists in constantly eliminating the shortest interval of the partition by merging it with its two neighbors. We show that the mean-field equation for the time-dependent distribution of interval lengths can be explicitly solved using a global linearization transformation. This allows us to derive rigorous results on the long-time asymptotics of the solutions. If the average length of the intervals is finite, we prove that all distributions approach a uniquely determined self-similar solution. We also obtain global stability results for the family of self-similar profiles which correspond to distributions with infinite expectation.
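
    The merge rule described above is straightforward to simulate directly. The sketch below is a minimal illustration on a finite list of interval lengths (the paper's partition of the real line is infinite): the shortest interval is repeatedly absorbed into its neighbours, conserving total length. Merging a boundary interval with its single available neighbour is a finite-size convention of this sketch, not part of the model.

```python
def coarsen(lengths, min_len=1.0):
    """Repeatedly eliminate the shortest interval by merging it with its
    neighbours, until every remaining interval is at least min_len long
    (or fewer than three intervals remain)."""
    lengths = list(lengths)
    while len(lengths) >= 3:
        i = min(range(len(lengths)), key=lengths.__getitem__)
        if lengths[i] >= min_len:
            break
        # Interior intervals merge with both neighbours; a boundary
        # interval can only merge with its single neighbour.
        lo, hi = max(i - 1, 0), min(i + 1, len(lengths) - 1)
        merged = sum(lengths[lo:hi + 1])
        lengths[lo:hi + 1] = [merged]   # splice the merged interval in place
    return lengths

out = coarsen([3.0, 0.5, 2.0, 4.0, 0.8, 5.0])
```

    Tracking the histogram of `lengths` over many steps on a large random partition is the empirical counterpart of the mean-field distribution whose self-similar asymptotics the paper establishes.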

  19. A multivalued knowledge-base model

    CERN Document Server

    Achs, Agnes

    2010-01-01

    The basic aim of our study is to give a possible model for handling uncertain information. This model is worked out in the framework of DATALOG. First the concept of fuzzy Datalog is summarized, then its extensions to intuitionistic- and interval-valued fuzzy logic are given and the concept of bipolar fuzzy Datalog is introduced. Based on these ideas, the concept of a multivalued knowledge-base is defined as a quadruple of background knowledge; a deduction mechanism; a connecting algorithm; and a function set of the program, which helps us determine the uncertainty levels of the results. Finally, a possible evaluation strategy is given.

  20. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best-fitting distributions were diverse. Given the best hazard-based models of each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
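
    As an illustration of the quantities such a model delivers, the sketch below evaluates a Weibull accelerated-failure-time (AFT) duration model: covariates rescale the Weibull scale parameter multiplicatively, and the survival function gives the probability that an incident outlasts a given time. The shape, scale, coefficients and covariates are invented for illustration, not fitted values from the Beijing dataset.

```python
import math

def weibull_survival(t, shape, scale):
    """P(duration > t) for a Weibull(shape, scale) duration model."""
    return math.exp(-((t / scale) ** shape))

def weibull_median(shape, scale):
    """Median duration: S(median) = 0.5."""
    return scale * math.log(2) ** (1.0 / shape)

def aft_scale(base_scale, coefs, covariates):
    """AFT link: covariates rescale time multiplicatively,
    scale = base_scale * exp(sum(beta_j * x_j))."""
    return base_scale * math.exp(sum(b * x for b, x in zip(coefs, covariates)))

# Illustrative incident: rush hour (x1 = 1) and two vehicles (x2 = 1),
# with invented coefficients beta = (0.4, -0.2) and base scale 30 min.
scale = aft_scale(30.0, coefs=[0.4, -0.2], covariates=[1.0, 1.0])
median_minutes = weibull_median(shape=1.3, scale=scale)
p_over_hour = weibull_survival(60.0, shape=1.3, scale=scale)
```

    The flexible parametric models the study favors replace the Weibull's fixed hazard shape with a spline of one to ten degrees of freedom, but they are queried the same way: a covariate-adjusted survival curve per incident phase.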

  1. Incident duration modeling using flexible parametric hazard-based models.

    Science.gov (United States)

    Li, Ruimin; Shang, Pan

    2014-01-01

    Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best distributions were diverse. Given the best hazard-based models of each incident time phase, the prediction result can be reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.

  2. Spatial interactions in agent-based modeling

    CERN Document Server

    Ausloos, Marcel; Merlone, Ugo

    2014-01-01

    Agent Based Modeling (ABM) has become a widespread approach to model complex interactions. In this chapter after briefly summarizing some features of ABM the different approaches in modeling spatial interactions are discussed. It is stressed that agents can interact either indirectly through a shared environment and/or directly with each other. In such an approach, higher-order variables such as commodity prices, population dynamics or even institutions, are not exogenously specified but instead are seen as the results of interactions. It is highlighted in the chapter that the understanding of patterns emerging from such spatial interaction between agents is a key problem as much as their description through analytical or simulation means. The chapter reviews different approaches for modeling agents' behavior, taking into account either explicit spatial (lattice based) structures or networks. Some emphasis is placed on recent ABM as applied to the description of the dynamics of the geographical distribution o...
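
A minimal sketch of direct spatial interaction on a lattice, in the spirit described above: two-opinion agents on a periodic grid who adopt the local majority view, so that macroscopic domains emerge from purely local rules. Grid size, update rule, and step count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
grid = rng.choice([-1, 1], size=(N, N))   # two opinions on a periodic lattice

def step(grid):
    # Each agent adopts the majority opinion of its four lattice neighbours;
    # ties leave the agent's opinion unchanged.
    neigh = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
             np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return np.where(neigh > 0, 1, np.where(neigh < 0, -1, grid))

for _ in range(20):
    grid = step(grid)

# Fraction of vertically adjacent agent pairs that agree (0.5 = random).
agreement = float(np.mean(grid == np.roll(grid, 1, 0)))
print(f"aligned neighbour pairs: {agreement:.2f}")
```

Starting from an uncorrelated configuration (agreement near 0.5), the local interaction rapidly produces aligned domains, the kind of emergent spatial pattern the chapter discusses.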

  3. Cardiometabolic results from an armband-based weight loss trial

    Directory of Open Access Journals (Sweden)

    Sieverdes JC

    2011-05-01

Full Text Available John C Sieverdes, Xuemei Sui, Gregory A Hand, Vaughn W Barry, Sara Wilcox, Rebecca A Meriwether, James W Hardin, Amanda C McClain, Steven N Blair. Department of Exercise Science, University of South Carolina, Columbia, SC, USA. Purpose: This report examines the blood chemistry and blood pressure (BP) results from the Lifestyle Education for Activity and Nutrition (LEAN) study, a randomized weight loss trial. A primary purpose of the study was to evaluate the effects of real-time self-monitoring of energy balance (using the SenseWear™ Armband, BodyMedia, Inc., Pittsburgh, PA) on these health factors. Methods: 164 sedentary overweight or obese adults (46.8 ± 10.8 years; BMI 33.3 ± 5.2 kg/m2; 80% women) took part in the 9-month study. Participants were randomized into 4 conditions: a standard care condition with an evidence-based weight loss manual (n = 40), a group-based behavioral weight loss program (n = 44), an armband alone condition (n = 41), and a group plus armband (n = 39) condition. BP, fasting blood lipids and glucose were measured at baseline and 9 months. Results: 99 participants (60%) completed both baseline and follow-up measurements for BP and blood chemistry analysis. Missing data were handled by baseline carried forward. None of the intervention groups had significant changes in blood lipids or BP when compared to standard care after adjustment for covariates, though within-group lowering was found for systolic BP in the group and group + armband conditions, a rise in total cholesterol and LDL was found in the standard care and group conditions, and a lowering of triglycerides was found in the two armband conditions. Compared with the standard care condition, fasting glucose decreased significantly for participants in the group, armband, and group + armband conditions (all P < 0.05). Conclusion: Our results suggest that using an armband program is an effective strategy to decrease fasting blood glucose. This indicates that devices, such as

  4. Comparison of blade-strike modeling results with empirical data

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-03-01

    This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.

  5. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.
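
The readout principle described above (energy from the sum of the two TES signals, position from their comparison) can be sketched with simple linear estimators; these estimators are an illustrative assumption, not the authors' calibration, which must account for the non-linear saturation effects mentioned in the abstract:

```python
# Pulse amplitudes a1, a2 read out by the TESs at the two ends of the
# absorber array; hits nearer one end deposit more signal in that TES.
def post_readout(a1, a2):
    energy = a1 + a2                  # summed signal tracks photon energy
    position = (a1 - a2) / (a1 + a2)  # in [-1, 1] along the absorber array
    return energy, position

e, x = post_readout(3.0, 1.0)
print(e, x)   # prints: 4.0 0.5
```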

  6. Dark Stars: Improved Models and First Pulsation Results

    CERN Document Server

    Rindler-Daller, Tanja; Freese, Katherine; Winget, Donald E; Paxton, Bill

    2014-01-01

    (Abridged) We use the stellar evolution code MESA to study dark stars. Dark stars (DSs), which are powered by dark matter (DM) self-annihilation rather than by nuclear fusion, may be the first stars to form in the Universe. We compute stellar models for accreting DSs with masses up to 10^6 M_sun. While previous calculations were limited to polytropic interiors, our current calculations use MESA, a modern stellar evolution code to solve the equations of stellar structure. The heating due to DM annihilation is self-consistently included, assuming extended adiabatic contraction of DM within the minihalos in which DSs form. We find remarkably good overall agreement with the basic results of previous models. There are some differences, however, in the details, with positive implications for observability of DSs. We found that, in the mass range of 10^4 - 10^5 M_sun, using MESA, our DSs are hotter by a factor of 1.5 than those in Freese et al.(2010), are smaller in radius by a factor of 0.6, denser by a factor of 3...

  7. MODELING RESULTS FROM CESIUM ION EXCHANGE PROCESSING WITH SPHERICAL RESINS

    Energy Technology Data Exchange (ETDEWEB)

    Nash, C.; Hang, T.; Aleman, S.

    2011-01-03

Ion exchange modeling was conducted at the Savannah River National Laboratory to compare the performance of two organic resins in support of Small Column Ion Exchange (SCIX). In-tank ion exchange (IX) columns are being considered for cesium removal at Hanford and the Savannah River Site (SRS). The spherical forms of resorcinol formaldehyde ion exchange resin (sRF) as well as a hypothetical spherical SuperLig® 644 (SL644) are evaluated for decontamination of dissolved saltcake wastes (supernates). Both SuperLig® and resorcinol formaldehyde resin beds can exhibit hydraulic problems in their granular (nonspherical) forms. SRS waste is generally lower in potassium and organic components than Hanford waste. Using VERSE-LC Version 7.8 along with the cesium Freundlich/Langmuir isotherms to simulate the waste decontamination in ion exchange columns, spherical SL644 was found to reduce column cycling by 50% for high-potassium supernates, but sRF performed equally well for the lowest-potassium feeds. Reduced cycling results in reduction of nitric acid (resin elution) and sodium addition (resin regeneration), therefore significantly reducing life-cycle operational costs. These findings motivate the development of a spherical form of SL644. This work demonstrates the versatility of the ion exchange modeling to study the effects of resin characteristics on processing cycles, rates, and cold chemical consumption. The value of a resin with increased selectivity for cesium over potassium can be assessed for further development.
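
The Freundlich/Langmuir isotherms mentioned above map the liquid-phase cesium concentration to the equilibrium resin loading that drives the column simulation. A hedged sketch with invented parameter values (not the SRNL-fitted ones):

```python
def langmuir(c, q_max, k):
    """Equilibrium resin loading at liquid concentration c (Langmuir form)."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """Equilibrium resin loading at liquid concentration c (Freundlich form)."""
    return k_f * c ** (1.0 / n)

# Hypothetical parameters and feed concentration, for illustration only.
q_lang = langmuir(0.5, q_max=0.6, k=10.0)
q_freu = freundlich(0.5, k_f=0.8, n=2.0)
print(q_lang, q_freu)
```

In a column model such as VERSE-LC, an isotherm like this supplies the local solid-liquid equilibrium at each axial node; resin selectivity shows up as the parameter values, which is why a more cesium-selective resin lengthens the processing cycle.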

  8. Results-Based Organization Design for Technology Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Chris McPhee

    2012-05-01

    Full Text Available Faced with considerable uncertainty, entrepreneurs would benefit from clearly defined objectives, a plan to achieve these objectives (including a reasonable expectation that this plan will work, as well as a means to measure progress and make requisite course corrections. In this article, the author combines the benefits of results-based management with the benefits of organization design to describe a practical approach that technology entrepreneurs can use to design their organizations so that they deliver desired outcomes. This approach links insights from theory and practice, builds logical connections between entrepreneurial activities and desired outcomes, and measures progress toward those outcomes. This approach also provides a mechanism for entrepreneurs to make continual adjustments and improvements to their design and direction in response to data, customer and stakeholder feedback, and changes in their business environment.

  9. Modeling Leaves Based on Real Image

    Institute of Scientific and Technical Information of China (English)

    CAO Yu-kun; LI Yun-feng; ZHU Qing-sheng; LIU Yin-bin

    2004-01-01

Plants have complex structures. The shape of a plant component is vital for capturing the characteristics of a species. One of the challenges in computer graphics is to create the geometry of objects in an intuitive and direct way while allowing interactive manipulation of the resulting shapes. In this paper, an interactive method for modeling leaves based on real images is proposed, using biological data for individual plants. The modeling process begins with a one-dimensional analogue of implicit surfaces, from which a 2D silhouette of a leaf is generated based on image segmentation. The silhouette skeleton is thus obtained. Feature parameters of the leaf are extracted based on biologically experimental data, and the obtained leaf structure is then modified by comparing the synthetic result with the real leaf so as to make the leaf structure more realistic. Finally, the leaf mesh is constructed by sweeps.

  10. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed “. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The 4 report parts show that, although the magnet construction will be only completed by end 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants and given the significant investments done by the participants, there is a full commitment to finish the project.

  11. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed “. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The 4 report parts show that, although the magnet construction will be only completed by end 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants and given the significant investments done by the participants, there is a full commitment to finish the project.

  12. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate different aging properties and stochastic comparisons of the model.

  13. Framework of Pattern Recognition Model Based on the Cognitive Psychology

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

According to the fundamental theory of the visual cognition mechanism and cognitive psychology, the visual pattern recognition model is introduced briefly. Three pattern recognition models, i.e. the template-based matching model, the prototype-based matching model and the feature-based matching model, are built and discussed separately. In addition, the influence of object background information and visual focus point on the result of pattern recognition is also discussed, with the example of recognition of fuzzy letters and figures.
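
The template-based matching model mentioned above can be sketched as normalized cross-correlation between a stored template and a candidate patch; the tiny binary "letter" below is an invented example, not the fuzzy letters used in the paper:

```python
import numpy as np

def match_score(patch, template):
    """Normalised cross-correlation in [-1, 1] between a patch and a template."""
    a = patch - patch.mean()
    b = template - template.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A crude 3x3 "U" template and two candidate patches.
T = np.array([[1, 0, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
same = match_score(T, T)                   # identical patch: perfect match
other = match_score(np.fliplr(1 - T), T)   # inverted patch: anti-correlated
print(same, other)
```

Prototype- and feature-based models differ mainly in what is stored and compared: an averaged exemplar, or a vector of extracted features, instead of a literal pixel template.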

  14. Parallel Path Magnet Motor: Development of the Theoretical Model and Analysis of Experimental Results

    Science.gov (United States)

    Dirba, I.; Kleperis, J.

    2011-01-01

    Analytical and numerical modelling is performed for the linear actuator of a parallel path magnet motor. In the model based on finite-element analysis, the 3D problem is reduced to a 2D problem, which is sufficiently precise in a design aspect and allows modelling the principle of a parallel path motor. The paper also describes a relevant numerical model and gives comparison with experimental results. The numerical model includes all geometrical and physical characteristics of the motor components. The magnetic flux density and magnetic force are simulated using FEMM 4.2 software. An experimental model has also been developed and verified for the core of switchable magnetic flux linear actuator and motor. The results of experiments are compared with those of theoretical/analytical and numerical modelling.

  15. Research on BOM based composable modeling method

    NARCIS (Netherlands)

    Zhang, M.; He, Q.; Gong, J.

    2013-01-01

Composable modeling method has been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM based models, this paper puts forward a composable modeling method based on BOM and studies the basic theory of composable modeling m

  16. Control and Analysis of Costs Based On Results Account of the ABC method

    Directory of Open Access Journals (Sweden)

    Sorinel Capusneanu

    2011-10-01

Full Text Available This paper presents a method for the control and analysis of costs based on the results account of the Activity-Based Costing (ABC) method. The results account model and the situations for determining deviations are presented according to their purpose, composition and classification in the existing literature. The statements for determining cost deviations resulting from their control and analysis in terms of pilot indicators are presented, all highlighted by a case study application. The article ends with the authors' conclusions about the advantages of these synthetic accounting statements specific to the ABC method and its use as a main source of rapid and accurate decision-making for management at the inter-entity level.
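
The deviation statements described above boil down to splitting each activity's total cost deviation into a volume effect and a rate effect. A numerical sketch with invented activities and figures (not the paper's case study):

```python
# For each activity: budget and actual (driver volume, cost per driver).
activities = {
    "machining": {"budget": (1000, 12.0), "actual": (1100, 12.5)},
    "assembly":  {"budget": (400, 30.0),  "actual": (380, 31.0)},
}

deviations = {}
for name, a in activities.items():
    (bq, bc), (aq, ac) = a["budget"], a["actual"]
    volume_dev = (aq - bq) * bc   # deviation caused by activity volume
    rate_dev = (ac - bc) * aq     # deviation caused by cost per driver
    # volume_dev + rate_dev == aq*ac - bq*bc, the total deviation.
    deviations[name] = (volume_dev, rate_dev, volume_dev + rate_dev)

print(deviations)
```

The decomposition is exact: (aq − bq)·bc + (ac − bc)·aq = aq·ac − bq·bc, so the two components always sum to the total deviation reported in the results account.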

  17. Reaction-contingency based bipartite Boolean modelling

    Science.gov (United States)

    2013-01-01

    Background Intracellular signalling systems are highly complex, rendering mathematical modelling of large signalling networks infeasible or impractical. Boolean modelling provides one feasible approach to whole-network modelling, but at the cost of dequantification and decontextualisation of activation. That is, these models cannot distinguish between different downstream roles played by the same component activated in different contexts. Results Here, we address this with a bipartite Boolean modelling approach. Briefly, we use a state oriented approach with separate update rules based on reactions and contingencies. This approach retains contextual activation information and distinguishes distinct signals passing through a single component. Furthermore, we integrate this approach in the rxncon framework to support automatic model generation and iterative model definition and validation. We benchmark this method with the previously mapped MAP kinase network in yeast, showing that minor adjustments suffice to produce a functional network description. Conclusions Taken together, we (i) present a bipartite Boolean modelling approach that retains contextual activation information, (ii) provide software support for automatic model generation, visualisation and simulation, and (iii) demonstrate its use for iterative model generation and validation. PMID:23835289
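
A toy version of the bipartite reaction/contingency update described above: contingencies are evaluated against the old state, and component states change only through reactions, so contextual activation information is kept separate from state. The three-component cascade is invented for illustration, not the yeast MAP kinase network:

```python
# State nodes: Boolean facts about components.
state = {"R_active": False, "K_active": False, "T_phos": False}

# Reaction nodes: (name, contingency on the old state, effect on the new state).
reactions = [
    ("ligand_binds_R",     lambda s: True,           {"R_active": True}),
    ("R_activates_K",      lambda s: s["R_active"],  {"K_active": True}),
    ("K_phosphorylates_T", lambda s: s["K_active"],  {"T_phos": True}),
]

def step(state):
    new = dict(state)
    for _, contingency, effect in reactions:
        if contingency(state):   # contingencies read the OLD state...
            new.update(effect)   # ...reactions write the NEW state
    return new

for _ in range(3):
    state = step(state)
print(state)   # signal has propagated through the whole cascade
```

Frameworks like rxncon generate such update rules automatically from reaction and contingency lists; the point of the bipartite split is that the same component can feed different downstream reactions under different contingencies.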

  18. Mouse Model of Neurological Complications Resulting from Encephalitic Alphavirus Infection

    Science.gov (United States)

    Ronca, Shannon E.; Smith, Jeanon; Koma, Takaaki; Miller, Magda M.; Yun, Nadezhda; Dineley, Kelly T.; Paessler, Slobodan

    2017-01-01

Long-term neurological complications, termed sequelae, can result from viral encephalitis and are not well understood. In human survivors, alphavirus encephalitis can cause severe neurobehavioral changes, in the most extreme cases a schizophrenic-like syndrome. In the present study, we aimed to adapt an animal model of alphavirus infection survival to study the development of these long-term neurological complications. Upon low-dose infection of wild-type C57BL/6 mice, asymptomatic and symptomatic groups were established and compared to mock-infected mice to measure general health and baseline neurological function, including the acoustic startle response and prepulse inhibition paradigm. Prepulse inhibition is a robust operational measure of sensorimotor gating, a fundamental form of information processing. Deficits in prepulse inhibition manifest as the inability to filter out extraneous sensory stimuli. Sensory gating is disrupted in schizophrenia and other mental disorders, as well as neurodegenerative diseases. Symptomatic mice developed deficits in prepulse inhibition that lasted through 6 months post infection; these deficits were absent in asymptomatic or mock-infected groups. Accompanying prepulse inhibition deficits, symptomatic animals exhibited thalamus damage as visualized with H&E staining, as well as increased GFAP expression in the posterior complex of the thalamus and dentate gyrus of the hippocampus. These histological changes and increased GFAP expression were absent in the asymptomatic and mock-infected animals, indicating that glial scarring could have contributed to the prepulse inhibition phenotype observed in the symptomatic animals. This model provides a tool to test mechanisms of and treatments for the neurological sequelae of viral encephalitis and begins to delineate potential explanations for the development of such sequelae post infection.
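
Prepulse inhibition, as described above, is commonly quantified as the percentage reduction of the startle response when a weak prepulse precedes the startling pulse. A sketch with invented startle magnitudes (not data from the study):

```python
def prepulse_inhibition(startle_pulse_alone, startle_with_prepulse):
    """Percent PPI: how much the prepulse attenuates the startle response."""
    return 100.0 * (1.0 - startle_with_prepulse / startle_pulse_alone)

# Illustrative startle magnitudes in arbitrary units.
control = prepulse_inhibition(800.0, 320.0)   # intact gating: large inhibition
deficit = prepulse_inhibition(800.0, 680.0)   # impaired gating: small inhibition
print(control, deficit)
```

Lower %PPI in the symptomatic group relative to controls is the operational signature of the sensorimotor gating deficit reported in the abstract.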

  19. Updating Finite Element Model of a Wind Turbine Blade Section Using Experimental Modal Analysis Results

    DEFF Research Database (Denmark)

    Luczak, Marcin; Manzato, Simone; Peeters, Bart;

    2014-01-01

The purpose of this work is to validate the finite element model of the modified wind turbine blade section mounted in the flexible support structure according to the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by natural frequency difference and modal assurance criterion. Based on sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiment and response surface methods were implemented to find values of model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurements.

  20. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.
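
The generalized Erlang risk model considered above replaces Poisson claim arrivals with Erlang-distributed inter-arrival times. A hedged Monte Carlo sketch of the finite-horizon ruin probability for such a surplus process; exponential claim sizes and all numeric values are illustrative assumptions, not results from the article:

```python
import random

def ruin_probability(u, c, shape, rate, claim_mean, horizon, n_paths, seed=0):
    """Monte Carlo ruin probability: surplus u + c*t minus claims arriving
    with Erlang(shape, rate) inter-arrival times and exponential sizes."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while t < horizon:
            # Erlang(shape, rate) waiting time = sum of `shape` exponentials.
            wait = sum(rng.expovariate(rate) for _ in range(shape))
            t += wait
            if t >= horizon:
                break
            surplus += c * wait - rng.expovariate(1.0 / claim_mean)
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# Positive safety loading: premium income 1.2 per unit time vs. mean claim
# outflow 1.0 (Erlang(2, 2) arrivals have mean inter-arrival time 1).
p = ruin_probability(u=10.0, c=1.2, shape=2, rate=2.0, claim_mean=1.0,
                     horizon=100.0, n_paths=2000)
print(f"estimated ruin probability: {p:.3f}")
```

The duality identities derived in the article relate conditional probabilities of such a model to those of its dual; a simulator like this is only a brute-force check, not a substitute for the analytical correspondence.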

  1. On the evaluation of box model results: the case of BOXURB model.

    Science.gov (United States)

    Paschalidou, A K; Kassomenos, P A

    2009-08-01

    In the present paper, the BOXURB model results, as they occurred in the Greater Area of Athens after model application on an hourly basis for the 10-year period 1995-2004, are evaluated both in time and space in the light of observed pollutant concentrations time series from 17 monitoring stations. The evaluation is performed at a total, monthly, daily and hourly scale. The analysis also includes evaluation of the model performance with regard to the meteorological parameters. Finally, the model is evaluated as an air quality forecasting and urban planning tool. Given the simplicity of the model and the complexity of the area topography, the model results are found to be in good agreement with the measured pollutant concentrations, especially in the heavy traffic stations. Therefore, the model can be used for regulatory purposes by authorities for time-efficient, simple and reliable estimation of air pollution levels within city boundaries.
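
Evaluation against monitoring-station time series of the kind performed above typically reports bias, RMSE, and an index of agreement. A sketch with invented observed/modelled concentration values (units and figures are assumptions, not BOXURB output):

```python
import numpy as np

def evaluation_stats(obs, mod):
    """Bias, RMSE and Willmott's index of agreement for paired series."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias = float(np.mean(mod - obs))
    rmse = float(np.sqrt(np.mean((mod - obs) ** 2)))
    # Index of agreement in [0, 1]; 1 means perfect agreement.
    denom = np.sum((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ioa = float(1.0 - np.sum((mod - obs) ** 2) / denom)
    return bias, rmse, ioa

obs = [40.0, 55.0, 70.0, 65.0, 50.0]   # hypothetical hourly concentrations
mod = [42.0, 50.0, 75.0, 60.0, 52.0]
bias, rmse, ioa = evaluation_stats(obs, mod)
print(bias, rmse, ioa)
```

Computed per station and per averaging period (total, monthly, daily, hourly), statistics like these support exactly the kind of space-and-time evaluation the paper describes.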

  2. Comparison of Statistical Multifragmentation Model simulations with Canonical Thermodynamical Model results: a few representative cases

    CERN Document Server

    Botvina, A; Gupta, S Das; Mishustin, I

    2008-01-01

    The statistical multifragmentation model (SMM) has been widely used to explain experimental data of intermediate energy heavy ion collisions. A later entrant in the field is the canonical thermodynamic model (CTM) which is also being used to fit experimental data. The basic physics of both the models is the same, namely that fragments are produced according to their statistical weights in the available phase space. However, they are based on different statistical ensembles, and the methods of calculation are different: while the SMM uses Monte-Carlo simulations, the CTM solves recursion relations. In this paper we compare the predictions of the two models for a few representative cases.

  3. Semantic snippet construction for search engine results based on segment evaluation

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

The result listing from search engines includes a link and a snippet from the web page for each result item. The snippet in the result listing plays a vital role in assisting the user to click on it. This paper proposes a novel approach to construct the snippets based on a semantic evaluation of the segments in the page. The target segment(s) is/are identified by applying a model that evaluates the segments present in the page and selects the segments with top scores. The proposed model makes it easier for the user to judge whether to click on a result item, since the snippet is constructed semantically after a critical evaluation based on multiple factors. A prototype implementation of the proposed model confirms its empirical validity.

  4. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma 'Tor Vergata', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma 'Tor Vergata', Rome (Italy); Belli, P. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma 'La Sapienza', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma 'Tor Vergata', Rome (Italy); Universita di Roma 'Tor Vergata', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)

    2013-12-15

The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature by using highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is: (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
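
The annual-modulation signature exploited above is conventionally written S(t) = S0 + Sm cos(2π(t − t0)/T). The sketch below evaluates it with the modulation amplitude, phase, and period quoted in the abstract; the unmodulated rate S0 is an invented placeholder, not a DAMA result:

```python
import math

S0, Sm = 1.0, 0.0112        # cpd/kg/keV; S0 is illustrative, Sm from abstract
t0 = 144.0                  # days (measured phase)
T = 0.998 * 365.25          # days (measured period in years, converted)

def rate(t):
    """Expected single-hit rate at time t (days) in the (2-6) keV interval."""
    return S0 + Sm * math.cos(2.0 * math.pi * (t - t0) / T)

peak, trough = rate(t0), rate(t0 + T / 2.0)
print(peak, trough)
```

The signature is "model independent" in the sense that only the cosine time dependence, phase, and period are predicted by the Earth's motion through the halo, irrespective of the DM particle model.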

  5. Infrared thermography for CFRP inspection: computational model and experimental results

    Science.gov (United States)

    Fernandes, Henrique C.; Zhang, Hai; Morioka, Karen; Ibarra-Castanedo, Clemente; López, Fernando; Maldague, Xavier P. V.; Tarpani, José R.

    2016-05-01

Infrared Thermography (IRT) is a well-known Non-destructive Testing (NDT) technique. In the last decades, it has been widely applied in several fields, including the inspection of composite materials (CM), especially fiber-reinforced polymer matrix ones. Consequently, it is important to develop and improve efficient NDT techniques to inspect and assess the quality of CM parts in order to warrant airworthiness and, at the same time, reduce the costs of airline companies. In this paper, active IRT is used to inspect a carbon fiber-reinforced polymer (CFRP) laminate with artificial inserts (built-in sample) placed on different layers prior to manufacture. Two optical active IRT techniques are used. The first is pulsed thermography (PT), which is the most widely utilized IRT technique. The second is line-scan thermography (LST): a dynamic technique which can be employed for the inspection of materials by heating a component, line-by-line, while acquiring a series of thermograms with an infrared camera. It is especially suitable for the inspection of large parts as well as complex-shaped parts. A computational model developed using COMSOL Multiphysics® was used to simulate the inspections. Sequences obtained from PT and LST were processed using principal component thermography (PCT) for comparison. Results showed that it is possible to detect insertions of different sizes at different depths using both the PT and LST IRT techniques.
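
Principal component thermography (PCT), used above to process the PT and LST sequences, is essentially a singular value decomposition of the standardized thermogram stack, yielding empirical orthogonal images in which defects stand out from the uniform cooling background. A self-contained sketch on a synthetic cooling sequence with one slower-cooling "defect" patch (all values invented):

```python
import numpy as np

def pct(frames, n_components=3):
    """PCT: SVD of the flattened, per-pixel standardized thermogram sequence
    (time x pixels); returns the leading empirical orthogonal images."""
    n_t, h, w = frames.shape
    A = frames.reshape(n_t, h * w).astype(float)
    A -= A.mean(axis=0)                  # remove each pixel's mean decay
    std = A.std(axis=0)
    A /= np.where(std > 0, std, 1.0)     # standardize each pixel series
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:n_components].reshape(n_components, h, w)

# Synthetic 30-frame cooling sequence; the central patch cools more slowly,
# mimicking reduced heat diffusion above a subsurface insert.
t = np.arange(1, 31, dtype=float)
frames = np.exp(-t / 5.0)[:, None, None] * np.ones((30, 16, 16))
frames[:, 4:8, 4:8] = np.exp(-t / 9.0)[:, None, None]
eofs = pct(frames)
print(eofs.shape)
```

In practice PCT is applied to the measured PT or LST sequence after the flash or line-scan pass; the second and third components often reveal deeper or subtler inserts than any single raw frame.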

  6. Spin-1 Ising model on tetrahedron recursive lattices: Exact results

    Science.gov (United States)

    Jurčišinová, E.; Jurčišin, M.

    2016-11-01

    We investigate the ferromagnetic spin-1 Ising model on the tetrahedron recursive lattices. An exact solution of the model is found in the framework of which it is shown that the critical temperatures of the second order phase transitions of the model are driven by a single equation simultaneously on all such lattices. It is also shown that this general equation for the critical temperatures is equivalent to the corresponding polynomial equation for the model on the tetrahedron recursive lattice with arbitrary given value of the coordination number. The explicit form of these polynomial equations is shown for the lattices with the coordination numbers z = 6, 9, and 12. In addition, it is shown that the thermodynamic properties of all possible physical phases of the model are also completely driven by the corresponding single equations simultaneously on all tetrahedron recursive lattices. In this respect, the spontaneous magnetization, the free energy, the entropy, and the specific heat of the model are studied in detail.

  7. Optimal pricing decision model based on activity-based costing

    Institute of Scientific and Technical Information of China (English)

    王福胜; 常庆芳

    2003-01-01

In order to find out the applicability of the optimal pricing decision model based on the conventional cost behavior model after activity-based costing has given a strong shock to the conventional cost behavior model and its assumptions, detailed analyses have been made using the activity-based cost behavior and cost-volume-profit analysis model. It is concluded from these analyses that the theory behind the construction of the optimal pricing decision model is still tenable under activity-based costing, but the conventional optimal pricing decision model must be modified as appropriate to the activity-based costing based cost behavior model and cost-volume-profit analysis model, and an optimal pricing decision model is really a product pricing decision model constructed by following the economic principle of maximizing profit.
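
The profit-maximizing principle referred to above can be made concrete for the simplest case of a linear demand curve and an ABC-derived constant unit cost; all figures are invented for illustration, not from the paper:

```python
# Linear demand q = a - b*p; unit cost c from activity-based costing.
a, b = 1000.0, 20.0      # demand intercept and slope (illustrative)
unit_cost = 18.0         # ABC-based variable cost per unit (illustrative)

# profit(p) = (p - c) * (a - b*p); setting dprofit/dp = 0 gives
# p* = (a/b + c) / 2, the standard monopoly price for linear demand.
p_star = (a / b + unit_cost) / 2.0
q_star = a - b * p_star
profit = (p_star - unit_cost) * q_star
print(p_star, q_star, profit)
```

Under activity-based costing the unit cost itself becomes volume- and activity-dependent, which is exactly why the abstract says the conventional pricing model must be modified: c would be replaced by an ABC cost function of q before maximizing.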

  8. Droplet Reaction and Evaporation of Agents Model (DREAM). Glass model results; Sand model plans

    NARCIS (Netherlands)

    Hin, A.R.T.

    2006-01-01

    The Agent Fate Program is generating an extensive set of quality agent fate data which is being used to develop highly accurate secondary evaporation predictive models. Models are being developed that cover a wide range of traditional chemical warfare agents deposited onto surfaces routinely found o

  9. Differential geometry based multiscale models.

    Science.gov (United States)

    Wei, Guo-Wei

    2010-08-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are

  10. Frequency response function-based model updating using Kriging model

    Science.gov (United States)

    Wang, J. T.; Wang, C. J.; Zhao, J. P.

    2017-03-01

An acceleration frequency response function (FRF) based model updating method is presented in this paper, which introduces a Kriging model as a metamodel in the optimization process instead of iterating the finite element analysis directly. The Kriging model serves as a fast-running surrogate that reduces solving time and facilitates the application of intelligent algorithms in model updating. The training samples for the Kriging model are generated by design of experiment (DOE), whose responses correspond to the difference between the experimental acceleration FRFs and their counterparts from the finite element model (FEM) at selected frequency points. The boundary condition is taken into account, and a two-step DOE method is proposed for reducing the number of training samples: the first step selects the design variables from the boundary condition, and the selected variables are passed to the second step for generating the training samples. The optimized values of the design variables are used to calibrate the FEM, so that the analytical FRFs tend to coincide with the experimental FRFs. The proposed method is performed successfully on a composite honeycomb sandwich beam; after model updating, the analytical acceleration FRFs match the experimental data significantly better, especially when the damping ratios are adjusted.
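The surrogate idea can be sketched with a minimal RBF-kernel (simple Kriging) interpolator; the one-dimensional toy objective and kernel settings below are assumptions, not the paper's:

```python
import numpy as np

# Minimal Kriging-style surrogate (RBF-kernel interpolation), standing in
# for the paper's metamodel of the FEM/experiment FRF discrepancy. The toy
# objective, length-scale and sample count are assumptions, not the authors'.

def rbf(xa, xb, length=0.15):
    d = xa[:, None] - xb[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def fit_predict(x_train, y_train, x_new, jitter=1e-8):
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)   # kernel weights
    return rbf(x_new, x_train) @ weights

if __name__ == "__main__":
    objective = lambda x: np.sin(2 * np.pi * x)   # stand-in discrepancy
    x_doe = np.linspace(0.0, 1.0, 9)              # DOE sample sites
    pred = fit_predict(x_doe, objective(x_doe), np.array([0.37]))[0]
    print(abs(pred - objective(np.array([0.37]))[0]) < 0.05)
```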

  11. Introducing Waqf Based Takaful Model in India

    Directory of Open Access Journals (Sweden)

    Syed Ahmed Salman

    2014-03-01

Objective – Waqf is a unique feature of the socioeconomic system of Islam in a multi-religious and developing country like India. India is a country rich in waqf assets, and the history of waqf in India can be traced back 800 years. Most researchers suggest how waqf can be used as a tool to mitigate the poverty of Muslims. India has the third highest Muslim population after Indonesia and Pakistan, yet the majority of Muslims belong to the low-income group and are in need of help. It is believed that waqf can be utilized for the betterment of the Indian Muslim community. Among the available uses of waqf assets, the main objective of this paper is to introduce a waqf-based takaful model in India, and to highlight how this proposed model can be adopted there. Methods – Library research is applied, since this paper relies on secondary data, by thoroughly reviewing the most relevant literature. Result – India, as a country rich in waqf assets, should fully utilize these resources to help Muslims through takaful. Conclusion – In this study, we have proposed a waqf-based takaful model for India combining the concepts of mudarabah and wakalah. We recommend this model based on the background and circumstances of the country. Since we have not tested the viability of this model in India, future research should test it. Keywords: Waqf, Takaful, Poverty and India

  12. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM), with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
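The core of MCA, one of the two techniques named, is an SVD of the cross-covariance matrix between two anomaly fields; the leading singular vectors are the coupled patterns. The synthetic "SST" and "rainfall" data below are illustrative, not S4CAST inputs:

```python
import numpy as np

# Sketch of Maximum Covariance Analysis (MCA): SVD of the cross-covariance
# between two anomaly fields sharing one mode of variability. All data here
# are synthetic stand-ins for SST and rainfall anomalies.

rng = np.random.default_rng(0)
nt, nx, ny = 200, 12, 8                  # time steps, SST points, rain points
signal = rng.standard_normal(nt)         # shared mode of variability
sst = np.outer(signal, rng.standard_normal(nx)) + 0.1 * rng.standard_normal((nt, nx))
rain = np.outer(signal, rng.standard_normal(ny)) + 0.1 * rng.standard_normal((nt, ny))

# Remove time means, form the cross-covariance, and decompose it
sst -= sst.mean(axis=0)
rain -= rain.mean(axis=0)
C = sst.T @ rain / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared covariance fraction of the leading coupled mode
scf = s[0] ** 2 / np.sum(s ** 2)
print(scf > 0.9)  # one dominant coupled mode
```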

  13. Effect of geometry of rice kernels on drying modeling results

    Science.gov (United States)

    Geometry of rice grain is commonly represented by sphere, spheroid or ellipsoid shapes in the drying models. Models using simpler shapes are easy to solve mathematically, however, deviation from the true grain shape might lead to large errors in predictions of drying characteristics such as, moistur...

  14. Periodic Integration: Further Results on Model Selection and Forecasting

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1996-01-01

    textabstractThis paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other

  15. An immune based dynamic intrusion detection model

    Institute of Scientific and Technical Information of China (English)

    LI Tao

    2005-01-01

With the dynamic description method for self and antigen, and the concept of dynamic immune tolerance for lymphocytes in the network-security domain presented in this paper, a new immune-based dynamic intrusion detection model (Idid) is proposed. In Idid, the dynamic models and the corresponding recursive equations of the lifecycle of mature lymphocytes and of immune memory are built. Therefore, the problem of the dynamic description of self and nonself in computer immune systems is solved, and the defect of the low efficiency of mature-lymphocyte generation in traditional computer immune systems is overcome. Simulations of this model are performed, and comparison experiments show that the proposed dynamic intrusion detection model has better adaptability than the traditional methods.
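A classic immune-computing ingredient in this line of work is negative selection: random detectors that match "self" patterns are discarded, and the survivors flag anomalous strings. The r-contiguous matcher and toy bit-strings below are illustrative, not the Idid equations:

```python
import random

# Negative-selection sketch in the spirit of immune-based intrusion
# detection. All patterns and parameters are toy assumptions.

def matches(a, b, r=4):
    """r-contiguous-bits match at aligned positions."""
    return any(a[i:i + r] == b[i:i + r] for i in range(len(a) - r + 1))

def train(self_set, n_detectors, length=8, seed=3):
    """Keep random detectors that match no self string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = "".join(rng.choice("01") for _ in range(length))
        if not any(matches(d, s) for s in self_set):
            detectors.append(d)
    return detectors

if __name__ == "__main__":
    self_set = ["00000000", "00001111"]
    detectors = train(self_set, 20)
    # By construction, self is never flagged:
    print(any(matches(s, d) for s in self_set for d in detectors))  # → False
```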

  16. Electrochemistry-based Battery Modeling for Prognostics

    Science.gov (United States)

    Daigle, Matthew J.; Kulkarni, Chetan Shrikant

    2013-01-01

Batteries are used in a wide variety of applications. In recent years, they have become popular as a source of power for electric vehicles such as cars, unmanned aerial vehicles, and commercial passenger aircraft. In such application domains, it becomes crucial to both monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. To implement such technologies, it is crucial to understand how batteries work and to capture that knowledge in the form of models that can be used by monitoring, diagnosis, and prognosis algorithms. In this work, we develop electrochemistry-based models of lithium-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable accuracy for reliable EOD prediction in a variety of usage profiles. This paper reports on the progress of such a model, with results demonstrating the model validity and accurate EOD predictions.
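The EOD-prediction idea can be sketched with a much simpler stand-in than the paper's electrochemistry model: coulomb counting plus a linear OCV(SOC) curve and an ohmic drop. Every parameter below is an assumption:

```python
# Toy end-of-discharge (EOD) predictor: coulomb counting with a linear
# open-circuit-voltage curve. This deliberately simplified model is NOT the
# paper's electrochemistry-based model; all parameters are assumptions.

def predict_eod(capacity_ah, current_a, r_internal, v_cutoff,
                v_full=4.2, v_empty=3.0, dt_s=1.0):
    """Return seconds until terminal voltage first reaches the cutoff."""
    soc, t = 1.0, 0.0
    while soc > 0.0:
        ocv = v_empty + (v_full - v_empty) * soc       # linear OCV(SOC)
        if ocv - current_a * r_internal <= v_cutoff:   # ohmic voltage drop
            break
        soc -= current_a * dt_s / (capacity_ah * 3600.0)
        t += dt_s
    return t

if __name__ == "__main__":
    # Doubling the load current should roughly halve the time to EOD.
    t1 = predict_eod(2.2, 1.0, 0.05, 3.2)
    t2 = predict_eod(2.2, 2.0, 0.05, 3.2)
    print(t1 > t2)  # → True
```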

  17. Model based monitoring of stormwater runoff quality

    DEFF Research Database (Denmark)

    Birch, Heidi; Vezzaro, Luca; Mikkelsen, Peter Steen

    2012-01-01

Monitoring of micropollutants (MP) in stormwater is essential to evaluate the impacts of stormwater on the receiving aquatic environment. The aim of this study was to investigate how different strategies for monitoring of stormwater quality (combination of model with field sampling) affect the information obtained about MPs discharged from the monitored system. A dynamic stormwater quality model was calibrated using MP data collected by volume-proportional and passive sampling in a storm drainage system in the outskirts of Copenhagen (Denmark), and a 10-year rain series was used to find annual … for calibration of the model resulted in the same predicted level but narrower model prediction bounds than calibrations based on volume-proportional samples, allowing a better exploitation of the resources allocated for stormwater quality management.

  18. Results from modeling and simulation of chemical downstream etch systems

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, E.; Vosen, S.R.; Shon, J.W.; Larson, R.S.; Fox, C.A.; Buchenauer

    1996-05-01

    This report summarizes modeling work performed at Sandia in support of Chemical Downstream Etch (CDE) benchmark and tool development programs under a Cooperative Research and Development Agreement (CRADA) with SEMATECH. The Chemical Downstream Etch (CDE) Modeling Project supports SEMATECH Joint Development Projects (JDPs) with Matrix Integrated Systems, Applied Materials, and Astex Corporation in the development of new CDE reactors for wafer cleaning and stripping processes. These dry-etch reactors replace wet-etch steps in microelectronics fabrication, enabling compatibility with other process steps and reducing the use of hazardous chemicals. Models were developed at Sandia to simulate the gas flow, chemistry and transport in CDE reactors. These models address the essential components of the CDE system: a microwave source, a transport tube, a showerhead/gas inlet, and a downstream etch chamber. The models have been used in tandem to determine the evolution of reactive species throughout the system, and to make recommendations for process and tool optimization. A significant part of this task has been in the assembly of a reasonable set of chemical rate constants and species data necessary for successful use of the models. Often the kinetic parameters were uncertain or unknown. For this reason, a significant effort was placed on model validation to obtain industry confidence in the model predictions. Data for model validation were obtained from the Sandia Molecular Beam Mass Spectrometry (MBMS) experiments, from the literature, from the CDE Benchmark Project (also part of the Sandia/SEMATECH CRADA), and from the JDP partners. The validated models were used to evaluate process behavior as a function of microwave-source operating parameters, transport-tube geometry, system pressure, and downstream chamber geometry. In addition, quantitative correlations were developed between CDE tool performance and operation set points.

  19. Developing Empirically Based Models of Practice.

    Science.gov (United States)

    Blythe, Betty J.; Briar, Scott

    1985-01-01

    Over the last decade emphasis has shifted from theoretically based models of practice to empirically based models whose elements are derived from clinical research. These models are defined and a developing model of practice through the use of single-case methodology is examined. Potential impediments to this new role are identified. (Author/BL)

  20. Guide to APA-Based Models

    Science.gov (United States)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.

  1. Study of chaos based on a hierarchical model

    Energy Technology Data Exchange (ETDEWEB)

    Yagi, Masatoshi; Itoh, Sanae-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    2001-12-01

    Study of chaos based on a hierarchical model is briefly reviewed. Here we categorize hierarchical model equations, i.e., (1) a model with a few degrees of freedom, e.g., the Lorenz model, (2) a model with intermediate degrees of freedom like a shell model, and (3) a model with many degrees of freedom such as a Navier-Stokes equation. We discuss the nature of chaos and turbulence described by these models via Lyapunov exponents. The interpretation of results observed in fundamental plasma experiments is also shown based on a shell model. (author)
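As a concrete instance of category (1) and of the Lyapunov-exponent diagnostic mentioned above, the largest exponent of the Lorenz model can be estimated from the divergence of two nearby trajectories with periodic renormalization (the standard two-trajectory Benettin method, not the authors' code; the known value is roughly 0.9 for the classical parameters):

```python
import numpy as np

# Largest Lyapunov exponent of the Lorenz model, estimated by tracking the
# separation of two nearby trajectories and renormalizing it each step.
# Classical parameters sigma=10, rho=28, beta=8/3 are used.

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(v, dt):
    # classical fourth-order Runge-Kutta
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(n_steps=20000, dt=0.005, d0=1e-8):
    a = np.array([1.0, 1.0, 20.0])
    b = a + np.array([d0, 0.0, 0.0])
    total = 0.0
    for _ in range(n_steps):
        a, b = step(a, dt), step(b, dt)
        d = np.linalg.norm(b - a)
        total += np.log(d / d0)
        b = a + (b - a) * (d0 / d)       # renormalize the separation
    return total / (n_steps * dt)

if __name__ == "__main__":
    lam = largest_lyapunov()
    print(0.5 < lam < 1.3)  # literature value is about 0.9
```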

  2. Model-Based Power Plant Master Control

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Katarina; Thomas, Jean; Funkquist, Jonas

    2010-08-15

    The main goal of the project has been to evaluate the potential of a coordinated master control for a solid fuel power plant in terms of tracking capability, stability and robustness. The control strategy has been model-based predictive control (MPC) and the plant used in the case study has been the Vattenfall power plant Idbaecken in Nykoeping. A dynamic plant model based on nonlinear physical models was used to imitate the true plant in MATLAB/SIMULINK simulations. The basis for this model was already developed in previous Vattenfall internal projects, along with a simulation model of the existing control implementation with traditional PID controllers. The existing PID control is used as a reference performance, and it has been thoroughly studied and tuned in these previous Vattenfall internal projects. A turbine model was developed with characteristics based on the results of steady-state simulations of the plant using the software EBSILON. Using the derived model as a representative for the actual process, an MPC control strategy was developed using linearization and gain-scheduling. The control signal constraints (rate of change) and constraints on outputs were implemented to comply with plant constraints. After tuning the MPC control parameters, a number of simulation scenarios were performed to compare the MPC strategy with the existing PID control structure. The simulation scenarios also included cases highlighting the robustness properties of the MPC strategy. From the study, the main conclusions are: - The proposed Master MPC controller shows excellent set-point tracking performance even though the plant has strong interactions and non-linearity, and the controls and their rate of change are bounded. - The proposed Master MPC controller is robust, stable in the presence of disturbances and parameter variations. Even though the current study only considered a very small number of the possible disturbances and modelling errors, the considered cases are
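The receding-horizon idea behind the study's Master MPC can be sketched on a scalar toy plant: predict over a finite horizon, minimize a quadratic tracking cost, apply only the first move. The plant coefficients, horizon and weights below are illustrative assumptions, not the Idbaecken model:

```python
import numpy as np

# Toy receding-horizon (MPC) sketch on a scalar plant x+ = a*x + b*u,
# tracking a set-point via ridge least squares over the horizon.
# All numbers are illustrative assumptions.

def mpc_step(x, target, a=0.9, b=0.5, horizon=10, r_weight=0.01):
    # Stack predictions: x_pred = F*x0 + G*u (lower-triangular G)
    F = np.array([a ** k for k in range(1, horizon + 1)])
    G = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Minimize ||F*x + G*u - target||^2 + r*||u||^2  (ridge least squares)
    A = np.vstack([G, np.sqrt(r_weight) * np.eye(horizon)])
    rhs = np.concatenate([target - F * x, np.zeros(horizon)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]                           # apply only the first move

if __name__ == "__main__":
    x, target = 0.0, 1.0
    for _ in range(30):                   # closed-loop simulation
        x = 0.9 * x + 0.5 * mpc_step(x, target)
    print(abs(x - target) < 0.05)  # → True
```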

  3. Product Modelling for Model-Based Maintenance

    NARCIS (Netherlands)

    Houten, van F.J.A.M.; Tomiyama, T.; Salomons, O.W.

    1998-01-01

    The paper describes the fundamental concepts of maintenance and the role that information technology can play in the support of maintenance activities. Function-Behaviour-State modelling is used to describe faults and deterioration of mechanisms in terms of user perception and measurable quantities.

  4. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through suffi

  6. PARTICIPATION BASED MODEL OF SHIP CREW MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Toni Bielić

    2014-10-01

This paper analyses the participation-based model on board ship as a possibly optimal leadership model in the shipping industry, with an accent on the decision-making process. The authors define the master's behaviour model and management style, identifying drawbacks and disadvantages of the vertical, pyramidal organization with the master at the top. The paper describes the efficiency of decision-making within a team organization and the optimization of a ship's organization by introducing teamwork on board. Three examples of ship accidents are studied and evaluated through the "Leader-participation" model. The model of participation-based management, as a model of teamwork, has been applied in studying the cause and effect of accidents, with a critical review of communication and human resource management on a ship. The results show that the cause of all three accidents was the autocratic behaviour of the leaders and a lack of communication within teams.

  7. Initial experimental results of a machine learning-based temperature control system for an RF gun

    CERN Document Server

    Edelen, A L; Milton, S V; Chase, B E; Crawford, D J; Eddy, N; Edstrom, D; Harms, E R; Ruan, J; Santucci, J K; Stabile, P

    2015-01-01

    Colorado State University (CSU) and Fermi National Accelerator Laboratory (Fermilab) have been developing a control system to regulate the resonant frequency of an RF electron gun. As part of this effort, we present initial test results for a benchmark temperature controller that combines a machine learning-based model and a predictive control algorithm. This is part of an on-going effort to develop adaptive, machine learning-based tools specifically to address control challenges found in particle accelerator systems.

  8. Box photosynthesis modeling results for WRF/CMAQ LSM

    Data.gov (United States)

    U.S. Environmental Protection Agency — Box Photosynthesis model simulations for latent heat and ozone at 6 different FLUXNET sites. This dataset is associated with the following publication: Ran, L., J....

  9. Review of the dWind Model Conceptual Results

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, Ian; Gleason, Michael; Preus, Robert; Sigrin, Ben

    2015-09-16

    This presentation provides an overview of the dWind model, including its purpose, background, and current status. Baring-Gould presented this material as part of the September 2015 WINDExchange webinar.

  10. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)θy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t) = 1), independent of e(t), i.i.d. with mean zero and finite variance. We take θ > 1 so...
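A quick simulation makes the bubble mechanism concrete: the process grows explosively while the binary variable equals 1 and collapses to a pure shock when it equals 0, so collapses occur with frequency 1 − p. The growth factor, p and the shock scale below are illustrative choices:

```python
import random

# Simulation sketch of the Blanchard-Watson bubble
# y(t) = s(t) * theta * y(t-1) + e(t), with s(t) ~ Bernoulli(p) and
# theta > 1. All parameter values are illustrative assumptions.

def simulate(n, theta=1.5, p=0.85, seed=1):
    """Run n steps and return the observed collapse frequency."""
    rng = random.Random(seed)
    y, collapses = 0.0, 0
    for _ in range(n):
        s = 1 if rng.random() < p else 0
        if s == 0:
            collapses += 1           # bubble bursts this period
        y = s * theta * y + rng.gauss(0.0, 1.0)
    return collapses / n

if __name__ == "__main__":
    freq = simulate(20000)
    print(abs(freq - 0.15) < 0.02)   # collapses happen with frequency 1 - p
```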

  11. Measurement-based load modeling: Theory and application

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling is still an open and very difficult problem. Summarizing our work on measurement-based load modeling in China over more than twenty years, this paper systematically introduces the mathematical theory and applications of load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on trajectory sensitivity theory indicate the importance of the load model parameters for the identification. Case studies show the accuracy of the presented measurement-based load model, and the load model thus built has been validated by field measurements all over China. Future working directions on measurement-based load modeling are also discussed.
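One common building block in measurement-based load modeling is the static ZIP (polynomial) load model, which mixes constant-impedance, constant-current and constant-power behaviour. The sketch below is illustrative and is not the paper's 13-parameter composite model:

```python
# ZIP (polynomial) static load model, a standard ingredient of
# measurement-based load modeling. Coefficients here are illustrative
# assumptions, not identified values from the paper.

def zip_load(p0, v, v0, a_z, a_i, a_p):
    """Active power as an impedance/current/power mix; a_z + a_i + a_p = 1."""
    r = v / v0
    return p0 * (a_z * r ** 2 + a_i * r + a_p)

if __name__ == "__main__":
    # 5% undervoltage on a 100 MW load with a 40/30/30 ZIP split
    print(zip_load(100.0, 0.95, 1.0, 0.4, 0.3, 0.3))
```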

  12. Return of feature-based cost modeling

    Science.gov (United States)

    Creese, Robert C.; Patrawala, Taher B.

    1998-10-01

Feature-based cost modeling is thought of as a relatively new approach to cost modeling, but it underwent considerable development in the 1950s. Considerable work was published in the 1950s by Boeing on costs for various casting processes (sand casting, die casting, investment casting and permanent mold casting) as a function of a single casting feature, casting volume. Additional approaches to feature-based cost modeling have since been made; this work reviews previous efforts and proposes an integrated feature-based cost model.
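A single-feature cost model of the kind described (cost as a function of casting volume) is often a power law, cost = k·volume^n, fitted on log-log axes. The data below are invented for the sketch, not Boeing's 1950s figures:

```python
import numpy as np

# Illustrative single-feature cost model: fit cost = k * volume^n by
# least squares in log-log space. The data points are synthetic.

volume = np.array([10.0, 50.0, 200.0, 1000.0])   # casting volume, assumed units
cost = 5.0 * volume ** 0.7                        # synthetic "observed" cost

n, log_k = np.polyfit(np.log(volume), np.log(cost), 1)
k = np.exp(log_k)
print(round(k, 3), round(n, 3))   # recovers the generating law k=5, n=0.7
```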

  13. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    Science.gov (United States)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source Apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contribution to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and therefore are considered as a reference for primary sources urban background levels. Chemical transport model have better estimation of the secondary pollutants (inorganic) and are capable to provide gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for the reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor oriented models (or receptor models) and chemical transport models (or source oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models present good performances when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities to estimate the yearly source contributions while the estimation of the source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contribution of some single sources when compared to receptor models. For receptor models the most critical source category is industry. This is probably due to the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for Chemical Transport Models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the

  14. GIS-based hydrological model upstream ...

    African Journals Online (AJOL)

    eobe

Meteorological Agency (NIMET) and Jebba Hydroelectric ... cycle by SWAT is based on the water balance equation: SW_t = SW_0 + Σ_{i=1}^{t} (R_day − Q_surf − E_a − w_seep − Q_gw). The estimation of the base flow is done using Equation 5.
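SWAT's daily soil-water balance referenced in the abstract accumulates rainfall minus surface runoff, evapotranspiration, seepage and groundwater return flow. A direct sketch with invented daily values:

```python
# Direct sketch of SWAT's soil-water balance
# SW_t = SW_0 + sum(R_day - Q_surf - E_a - w_seep - Q_gw).
# The daily values below are invented for illustration.

def soil_water(sw0, days):
    """days: iterable of (rain, runoff, evap, seepage, return_flow) in mm."""
    sw = sw0
    for r_day, q_surf, e_a, w_seep, q_gw in days:
        sw += r_day - q_surf - e_a - w_seep - q_gw
    return sw

if __name__ == "__main__":
    days = [(12.0, 3.0, 4.0, 1.0, 0.5),   # a rainy day
            (0.0, 0.0, 3.5, 0.8, 0.4)]    # a dry day
    print(soil_water(100.0, days))
```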

  15. The Animal Model Determines the Results of Aeromonas Virulence Factors

    Science.gov (United States)

    Romero, Alejandro; Saraceni, Paolo R.; Merino, Susana; Figueras, Antonio; Tomás, Juan M.; Novoa, Beatriz

    2016-01-01

    The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, a bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of Aeromonas hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The implication of T3SS in the virulence of A. hydrophila was analyzed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics only observed using invertebrate models. Infections using the mutant AH-1ΔvapA lacking the gene coding for the surface S-layer suggested that this protein was not totally necessary to the bacteria once it was inside the host, but it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild-type. Variations between models were evidenced using the AH-1ΔrmlB, which lacks the O-antigen lipopolysaccharide (LPS), and the AH-1ΔwahD, which lacks the O-antigen LPS and part of the LPS outer-core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using the AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and the AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study demonstrates

  16. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  17. On Process Modelling Using Physical Oriented And Phenomena Based Principles

    Directory of Open Access Journals (Sweden)

    Mihai Culea

    2000-12-01

This work presents a modelling framework based on a phenomena description of the process. The approach is taken to ease the understanding and construction of process models in heterogeneous, possibly distributed, modelling and simulation environments. A simplified case study of a heat exchanger is considered, using the Modelica modelling language to check the proposed concept. The partial results are promising, and the research effort will be extended to a computer-aided modelling environment based on phenomena.

  18. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    Energy Technology Data Exchange (ETDEWEB)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain [CEA, LIST, 91191Gif-sur-Yvette (France); Lonne, Sébastien [EXTENDE, Le Bergson, 15 Avenue Emile Baudot, 91300 MASSY (France)

    2014-02-18

The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH) and corner echoes from backwall-breaking artificial notches inspected with a matrix phased-array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBH and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparison between simulated and experimental results is presented and discussed.

  19. Satellite-based terrestrial production efficiency modeling

    Directory of Open Access Journals (Sweden)

    Obersteiner Michael

    2009-09-01

Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE), which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain, however, in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: (1) to describe the general functioning of six PEMs (CASA, GLO-PEM, TURC, C-Fix, MOD17, and BEAMS) identified in the literature; (2) to review each model to determine potential improvements to the general PEM methodology; (3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and (4) based on this review, to propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture, ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP, and a combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest that the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus to terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra; there is an urgent need for
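The LUE relationship underlying the PEMs reviewed is commonly written GPP = ε_max · f(T) · f(VPD) · fAPAR · PAR (MOD17-style). The stress scalars and numbers below are illustrative assumptions, not any one model's parameterization:

```python
# Light-use-efficiency sketch in the spirit of the reviewed PEMs:
# GPP = eps_max * f(T) * f(VPD) * fAPAR * PAR.
# All values are illustrative assumptions.

def gpp(eps_max, t_scalar, vpd_scalar, fapar, par):
    """Gross primary productivity in gC m-2 d-1 (eps_max in gC per MJ PAR)."""
    return eps_max * t_scalar * vpd_scalar * fapar * par

if __name__ == "__main__":
    # Mildly stressed canopy absorbing 60% of 10 MJ m-2 d-1 of PAR
    print(gpp(1.0, 0.9, 0.8, 0.6, 10.0))
```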

  20. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for interpretation

  1. Preliminary results of a three-dimensional radiative transfer model

    Energy Technology Data Exchange (ETDEWEB)

    O'Hirok, W. [Univ. of California, Santa Barbara, CA (United States)]

    1995-09-01

    Clouds act as the primary modulator of the Earth's radiation at the top of the atmosphere, within the atmospheric column, and at the Earth's surface. They interact with both shortwave and longwave radiation, but it is primarily in the case of shortwave where most of the uncertainty lies because of the difficulties in treating scattered solar radiation. To understand cloud-radiative interactions, radiative transfer models portray clouds as plane-parallel homogeneous entities to ease the computational physics. Unfortunately, clouds are far from being homogeneous, and large differences between measurement and theory point to a stronger need to understand and model cloud macrophysical properties. In an attempt to better comprehend the role of cloud morphology in the 3-dimensional radiation field, a Monte Carlo model has been developed. This model can simulate broadband shortwave radiation fluxes while incorporating all of the major atmospheric constituents. The model is used to investigate the cloud absorption anomaly, where cloud absorption measurements exceed theoretical estimates, and to examine the efficacy of ERBE measurements and cloud field experiments. 3 figs.
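A heavily simplified sketch of the Monte Carlo technique this record describes can be written in a few lines: photons random-walk through a plane-parallel layer until they escape or are absorbed. The optical depth, single-scattering albedo, isotropic phase function, and 1-D geometry below are illustrative assumptions, far simpler than the broadband 3-D model itself.

```python
import math
import random

random.seed(3)

# Illustrative cloud-layer parameters (not from the record).
tau_total = 4.0      # total optical depth of the layer
omega = 0.99         # single-scattering albedo
n_photons = 20_000

transmitted = reflected = absorbed = 0
for _ in range(n_photons):
    tau, mu = 0.0, 1.0                 # optical-depth position, direction cosine
    while True:
        # Sample an exponentially distributed free path along the direction.
        tau += mu * -math.log(1.0 - random.random())
        if tau >= tau_total:
            transmitted += 1           # escaped through the bottom
            break
        if tau <= 0.0:
            reflected += 1             # escaped back through the top
            break
        if random.random() > omega:
            absorbed += 1              # absorption event inside the cloud
            break
        mu = random.uniform(-1.0, 1.0)  # isotropic scattering in 1-D

albedo = reflected / n_photons
```

Comparing the absorbed fraction from such simulations against measurements is, in essence, how the cloud absorption anomaly is investigated.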

  2. A new procedure to build a model covariance matrix: first results

    Science.gov (United States)

    Barzaghi, R.; Marotta, A. M.; Splendore, R.; Borghi, A.

    2012-04-01

    In order to validate the results of geophysical models, a common procedure is to compare model predictions with observations by means of statistical tests. A limit of this approach is the lack of a covariance matrix associated with model results, which may frustrate the achievement of a confident statistical significance of the results. Trying to overcome this limit, we have implemented a new procedure to build a model covariance matrix that could allow a more reliable statistical analysis. This procedure has been developed in the frame of the thermo-mechanical model described in Splendore et al. (2010), which predicts the present-day crustal velocity field in the Tyrrhenian due to Africa-Eurasia convergence and to lateral rheological heterogeneities of the lithosphere. The modelled tectonic velocity field has been compared to the available surface velocity field based on GPS observations, determining the best-fit model and the degree of fitting through the use of a χ2 test. Once we had identified the key model parameters and defined their appropriate ranges of variability, we ran 100 different models for 100 sets of parameter values randomly extracted within the corresponding intervals, obtaining a stack of 100 velocity fields. We then calculated the variance and empirical covariance for the stack of results, taking into account also cross-correlation, obtaining a positive definite, non-diagonal matrix that represents the covariance matrix of the model. This empirical approach allows us to define a more robust statistical analysis with respect to the classic approach. Reference: Splendore, Marotta, Barzaghi, Borghi and Cannizzaro, 2010. Block model versus thermomechanical model: new insights on the present-day regional deformation in the surroundings of the Calabrian Arc. In: Spalla, Marotta and Gosso (Eds) Advances in Interpretation of Geological Processes: Refinement of Multi scale Data and Integration in Numerical Modelling. Geological Society, London, Special
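The stack-of-runs procedure described in this record (run the model for many randomly drawn parameter sets, then form the empirical covariance of the stacked outputs) can be sketched in a few lines of NumPy. The `velocity_model` function and the parameter ranges below are hypothetical stand-ins, not the thermo-mechanical model or its actual parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def velocity_model(params, n_sites=6):
    # Hypothetical stand-in for the geophysical model: maps a
    # parameter vector to a predicted velocity at each site.
    a, b = params
    x = np.linspace(0.0, 1.0, n_sites)
    return a * x + b * np.sin(np.pi * x)

# Illustrative ranges of variability for each key parameter.
param_ranges = [(0.5, 1.5), (-0.2, 0.2)]

# Run the model for 100 randomly drawn parameter sets,
# obtaining a stack of 100 predicted fields.
runs = np.array([
    velocity_model([rng.uniform(lo, hi) for lo, hi in param_ranges])
    for _ in range(100)
])

# Empirical covariance of the stacked outputs; off-diagonal terms
# carry the cross-correlation between sites.
cov = np.cov(runs, rowvar=False)
```

The resulting matrix can then weight a chi-squared comparison between model predictions and observations, which is the more robust statistical test the record advocates.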

  3. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory; Pierrard, Vivane [BELGIUM; Goldstein, Jerry [SWRI; Andr' e, Nicolas [ESTEC/ESA; Kotova, Galina A [SRI, RUSSIA; Lemaire, Joseph F [BELGIUM; Liemohn, Mike W [U OF MICHIGAN; Matsui, H [UNIV OF NEW HAMPSHIRE

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves for transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  4. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

    The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)θy(t−1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t) = 1), independent of e(t), i.i.d. with mean zero and finite variance. We take θ > 1 so… is whether a bubble model with infinite variance can create the long swings, or persistence, which are observed in many macro variables. We say that a variable is persistent if its autoregressive coefficient ρ(n), of y(t) on y(t−1), is close to one. We show that the estimator of ρ(n) converges to θp…
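A quick simulation makes the persistence claim concrete: for the recursion y(t) = s(t)·θ·y(t−1) + e(t) (writing θ for the bubble coefficient and p for the collapse probability), the least-squares coefficient of y(t) on y(t−1) settles near θp in the finite-variance regime θ²p < 1. The parameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

theta, p, n = 1.05, 0.5, 200_000  # illustrative; theta > 1, theta**2 * p < 1

# Simulate y(t) = s(t) * theta * y(t-1) + e(t)
s = rng.random(n) < p                 # i.i.d. binary survival indicator
e = rng.standard_normal(n)            # i.i.d. mean-zero shocks
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = s[t] * theta * y[t - 1] + e[t]

# Least-squares autoregressive coefficient of y(t) on y(t-1).
rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
# In this regime rho_hat should settle near theta * p (= 0.525 here).
```

Raising θ²p toward 1 pushes the process toward the infinite-variance regime the paper studies, where the estimator's behaviour is the interesting question.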

  5. Transmission resonance Raman spectroscopy: experimental results versus theoretical model calculations.

    Science.gov (United States)

    Gonzálvez, Alicia G; González Ureña, Ángel

    2012-10-01

    A laser spectroscopic technique is described that combines transmission and resonance-enhanced Raman inelastic scattering together with low laser power (view, a model for the Raman signal dependence on the sample thickness is also presented. Essentially, the model considers the sample to be homogeneous and describes the underlying physics using only three parameters: the Raman cross-section, the laser-radiation attenuation cross-section, and the Raman signal attenuation cross-section. The model was applied successfully to describe the sample-size dependence of the Raman signal in both β-carotene standards and carrot roots. The present technique could be useful for direct, fast, and nondestructive investigations in food quality control and analytical or physiological studies of animal and human tissues.

  6. Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network

    Directory of Open Access Journals (Sweden)

    Viswanathan Arunachalam

    2013-01-01

    Full Text Available The classical models of a single neuron, like the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume the influence of postsynaptic potentials to last till the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, like the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
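The defining mechanism (input traces remembered only for a finite window, firing when enough traces coincide) is easy to simulate. The sketch below is a hypothetical minimal version with Poisson inputs, a memory window `tau`, and a coincidence threshold; the rate and parameter values are illustrative, and the firing time distribution is estimated empirically rather than derived as in the paper.

```python
import random

random.seed(1)

def binding_neuron_isi(rate=1.0, tau=2.0, threshold=2):
    """One interspike interval of a toy binding neuron: each Poisson
    input impulse is remembered for `tau` time units and forgotten
    afterwards; the neuron fires (clearing its memory) as soon as
    `threshold` impulses are stored simultaneously."""
    t, stored = 0.0, []
    while True:
        t += random.expovariate(rate)                 # next input impulse
        stored = [u for u in stored if t - u < tau]   # forget stale traces
        stored.append(t)
        if len(stored) >= threshold:
            return t                                  # firing time

# Empirical firing-time statistics over many trials.
isis = [binding_neuron_isi() for _ in range(10_000)]
mean_isi = sum(isis) / len(isis)
```

Histogramming `isis` gives an empirical firing time distribution that analytical results of the kind the paper derives can be checked against.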

  7. Results-based management - Developing one's key results areas (KRAs)

    Directory of Open Access Journals (Sweden)

    Om Prakash Kansal

    2015-01-01

    Full Text Available Despite aspiring to be good managers, we public health experts often fail to evaluate ourselves against our personal and professional goals. Key result areas (KRAs), or key performance indicators (KPIs), help us set our operational (day-to-day) and/or strategic (long-term) goals and then grade ourselves at different points in our careers. These help in assessing our strengths and weaknesses. The weakest KRA sets the limit on the extent to which one can use one's skills and abilities to have the greatest impact on one's career.

  8. Extensions in model-based system analysis

    OpenAIRE

    Graham, Matthew R.

    2007-01-01

    Model-based system analysis techniques provide a means for determining desired system performance prior to actual implementation. In addition to specifying desired performance, model-based analysis techniques require mathematical descriptions that characterize relevant behavior of the system. The developments of this dissertation give extended formulations for control-relevant model estimation as well as model-based analysis conditions for performance requirements specified as frequency do…

  9. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  10. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the

  11. In Silico Model for Developmental Toxicity: How to Use QSAR Models and Interpret Their Results.

    Science.gov (United States)

    Marzo, Marco; Roncaglioni, Alessandra; Kulkarni, Sunil; Barton-Maclaren, Tara S; Benfenati, Emilio

    2016-01-01

    Modeling developmental toxicity has been a challenge for (Q)SAR model developers due to the complexity of the endpoint. Recently, some new in silico methods have been developed, introducing the possibility of evaluating the integration of existing methods by taking advantage of various modeling perspectives. It is important that the model user is aware of the underlying basis of the different models in general, as well as the considerations and assumptions relative to the specific predictions that are obtained from these different models for the same chemical. The evaluation of the predictions needs to be done on a case-by-case basis, checking the analogs (possibly using structural, physicochemical, and toxicological information); for this purpose, the assessment of the applicability domain of the models provides further confidence in the model prediction. In this chapter, we present some examples illustrating an approach to combine human-based rules and statistical methods to support the prediction of developmental toxicity; we also discuss assumptions and uncertainties of the methodology.

  12. A Nuclear Interaction Model for Understanding Results of Single Event Testing with High Energy Protons

    Science.gov (United States)

    Culpepper, William X.; ONeill, Pat; Nicholson, Leonard L.

    2000-01-01

    An intranuclear cascade and evaporation model has been adapted to estimate the LET spectrum generated during testing with 200 MeV protons. The model-generated heavy ion LET spectrum is compared to the heavy ion LET spectrum seen on orbit. This comparison is the basis for predicting single event failure rates from heavy ions using results from a single proton test. Of equal importance, this comparison of spectra also establishes an estimate of the risk of encountering a failure mode on orbit that was not detected during proton testing. Verification of the general results of the model is presented based on experiments, individual part test results, and flight data. Acceptance of this model and its estimate of remaining risk opens the hardware verification philosophy to the consideration of radiation testing with high energy protons at the board and box level instead of the more standard method of individual part testing with low energy heavy ions.

  13. Image-based modelling of organogenesis.

    Science.gov (United States)

    Iber, Dagmar; Karimaddini, Zahra; Ünal, Erkan

    2016-07-01

    One of the major challenges in biology concerns the integration of data across length and time scales into a consistent framework: how do macroscopic properties and functionalities arise from the molecular regulatory networks, and how can they change as a result of mutations? Morphogenesis provides an excellent model system to study how simple molecular networks robustly control complex processes on the macroscopic scale despite molecular noise, and how important functional variants can emerge from small genetic changes. Recent advancements in three-dimensional imaging technologies, computer algorithms and computer power now allow us to develop and analyse increasingly realistic models of biological control. Here, we present our pipeline for image-based modelling that includes the segmentation of images, the determination of displacement fields and the solution of systems of partial differential equations on the growing, embryonic domains. The development of suitable mathematical models, the data-based inference of parameter sets and the evaluation of competing models are still challenging, and current approaches are discussed.

  14. A computer model to forecast wetland vegetation changes resulting from restoration and protection in coastal Louisiana

    Science.gov (United States)

    Visser, Jenneke M.; Duke-Sylvester, Scott M.; Carter, Jacoby; Broussard, Whitney P.

    2013-01-01

    The coastal wetlands of Louisiana are a unique ecosystem that supports a diversity of wildlife as well as a diverse community of commercial interests of both local and national importance. The state of Louisiana has established a 5-year cycle of scientific investigation to provide up-to-date information to guide future legislation and regulation aimed at preserving this critical ecosystem. Here we report on a model that projects changes in plant community distribution and composition in response to environmental conditions. This model is linked to a suite of other models and requires input from those that simulate the hydrology and morphology of coastal Louisiana. Collectively, these models are used to assess how alternative management plans may affect the wetland ecosystem through explicit spatial modeling of the physical and biological processes affected by proposed modifications to the ecosystem. We have also taken the opportunity to advance the state-of-the-art in wetland plant community modeling by using a model that is more species-based in its description of plant communities instead of one based on aggregated community types such as brackish marsh and saline marsh. The resulting model provides an increased level of ecological detail about how wetland communities are expected to respond. In addition, the output from this model provides critical inputs for estimating the effects of management on higher trophic level species through a more complete description of the shifts in habitat.

  15. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany); Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2000-07-01

    A dynamical downscaling for the North Sea is presented. The numerical model used for the study is the coupled ice-ocean model OPYC. In a hindcast of the years 1979 to 1993 it was forced with atmospheric forcing from the ECMWF reanalysis. The model's capability in simulating the observed mean state and variability in the North Sea is demonstrated by the hindcast. Two time-scale ranges, from weekly to seasonal and the longer-than-seasonal time scales, are investigated. Shorter time scales, as for storm surges, are not captured by the model formulation. The main modes of variability of sea level, sea-surface circulation, sea-surface temperature, and sea-surface salinity are described, and connections to atmospheric phenomena, like the NAO, are discussed. T106 "time-slice" simulations with a "2 x CO2" horizon are used to estimate the effects of a changing climate on the shelf sea "North Sea". The "2 x CO2" changes in the surface forcing are accompanied by changes in the lateral oceanic boundary conditions taken from a global coupled climate model. For "2 x CO2" the time-mean sea level increases by up to 25 cm in the German Bight in the winter, where 15 cm are due to the surface forcing and 10 cm due to thermal expansion. This change is compared to the "natural" variability as simulated in the ECMWF integration and found not to be outside the range spanned by it. The variability of sea level on the weekly-to-seasonal time scales is significantly reduced in the scenario integration. The variability on the longer-than-seasonal time scales in the control and scenario runs is much smaller than in the ECMWF integration. This is traced back to the use of "time-slice" experiments. Discriminating between locally forced changes and changes induced at the lateral oceanic boundaries of the model in the circulation and

  16. Modeling of Na airglow emission and first results on the nocturnal variation at midlatitude

    Science.gov (United States)

    Bag, T.; Sunil Krishna, M. V.; Singh, Vir

    2015-12-01

    A model for sodium airglow emission is developed by incorporating all the known reaction mechanisms. The neutral, ionic, and photochemical mechanisms are successfully implemented into this model. The values of reaction rate coefficients are based upon the theoretical calculations as well as from experimental observations. The densities of major species are calculated using the continuity equations, whereas for the minor, intermediating, and short-lived species steady state approximation method is used. The modeled results are validated with the rocket, lidar, and photometer observations for a branching ratio of 0.04 for the production of Na(2P) in the reaction NaO + O → Na(2P, 2S). The inputs have been obtained from other physics-based models and ground- and satellite-based observations to give the combined volume emission rate (VER) of Na airglow between 80 and 110 km altitude. In the present study, the model is used to understand the nocturnal variation of Na VER during the solstice conditions. The model results suggest a variation of peak emission layer between 85 and 90 km during summer solstice condition, indicating a lower value of peak emission rate during summer solstice. The emission rates bear a strong correlation with the O3 density during summer solstice, whereas the magnitude of VER follows the Na density during winter solstice. The altitude of peak VER shows an upward shift of 5 km during winter solstice.

  17. MCP-based detector some results and perspectives

    CERN Document Server

    Patarakin, O O; Strepetov, A N; Turbin, E V; Sinitsin, V I; Kartamushev, A A

    1997-01-01

    The timing resolution of photomultiplier tubes (PMT) based on chevron-type microchannel plates (MCP) has been studied in magnetic fields. The same timing resolution with and without a longitudinal magnetic field up to 2.0 kG was obtained, 85 ± 2 ps. It is shown that any increase of the timing resolution in this magnetic field does not exceed 25 ps (upper limit). A timing resolution of 31 ± 2 ps was obtained for the narrow (10% resolution) amplitude spectrum from corona discharge. The counting rate of the MCP-based detector was studied as a function of the direction of the magnetic field. The spatial and timing resolution of the MCP-based PMT were obtained using laser pulses as well. With laser pulses of 0.3 ns, a timing resolution of ≅ 450 ps was obtained; taking into account the amplitude correction narrows it to 140 ps. Using a 100 fs laser with a standard constant-fraction discriminator gives a timing resolution from 20 to 40 ps, depending on the read-out MCP region. The perspectives of using...

  18. Model On DROID Response With Imperfect Trapping Tested On Experimental Results

    Science.gov (United States)

    Hijmering, R. A.; Kozorezov, A. G.; Verhoeve, P.; Martin, D. D. E.; Wigmore, J. K.; Venn, R.

    2009-12-01

    The DROID (Distributed Read-Out Imaging Detector) is being developed to overcome the limitation in sensitive area of single STJs (Superconducting Tunnel Junctions). The DROID configuration allows reconstruction of the position of photon absorption, and therefore it can replace a number of single STJs in a detector array. We present a 2D model which describes the response of DROIDs with partial trapping in the STJs. The model describes diffusion of quasiparticles (qps) and imperfect confinement via exchange of qps between the absorber and STJ. It incorporates possible diffusion mismatch between absorber and STJ, possible asymmetry between the STJs as well as between the base and top electrodes of the STJs, and photon absorption in the absorber or in the base or top film of the STJ. Dedicated experiments have been conducted to test the different aspects of the model. We find good agreement between the model and experimental results.

  19. Realistic face modeling based on multiple deformations

    Institute of Scientific and Technical Information of China (English)

    GONG Xun; WANG Guo-yin

    2007-01-01

    On the basis of the assumption that the human face belongs to a linear class, a multiple-deformation model is proposed to recover face shape from a few points on a single 2D image. Compared to the conventional methods, this study has the following advantages. First, the proposed modified 3D sparse deforming model is a noniterative approach that can compute global translation efficiently and accurately. Subsequently, the overfitting problem can be alleviated based on the proposed multiple deformation model. Finally, by keeping the main features, the texture generated is realistic. The comparison results show that this novel method outperforms the existing methods by using ground truth data and that realistic 3D faces can be recovered efficiently from a single photograph.

  20. A Dissipative Model for Hydrogen Storage: Existence and Regularity Results

    CERN Document Server

    Chiodaroli, Elisabetta

    2010-01-01

    We prove global existence of a solution to an initial and boundary value problem for a highly nonlinear PDE system. The problem arises from a thermomechanical dissipative model describing hydrogen storage by use of metal hydrides. In order to treat the model from an analytical point of view, we formulate it as a phase transition phenomenon thanks to the introduction of a suitable phase variable. Continuum mechanics laws lead to an evolutionary problem involving three state variables: the temperature, the phase parameter and the pressure. The problem thus consists of three coupled partial differential equations combined with initial and boundary conditions. Existence and regularity of the solutions are here investigated by means of a time-discretization and a-priori-estimate procedure, passing to the limit with compactness and monotonicity arguments.

  1. Recent results in the NJL model with heavy quarks

    CERN Document Server

    Feldmann, T

    1996-01-01

    We investigate the interplay of chiral and heavy quark symmetries by using the NJL quark model. Heavy quarks with finite masses m(Q) as well as the limit m(Q) → ∞ are studied. We find large corrections to the heavy mass scaling law for the pseudoscalar decay constant. The influence of external momenta on the shape parameters of the Isgur-Wise form factor is discussed.

  2. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    Directory of Open Access Journals (Sweden)

    Joachimiak Damian

    2015-06-01

    Full Text Available Paper presents the results of experimental and numerical research of a model segment of a labyrinth seal for a different wear level. The analysis covers the extent of leakage and distribution of static pressure in the seal chambers and the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the size of the mesh defined by parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and distribution of static pressure in the seal chambers obtained during the measurement and calculated numerically in a model segment of the seal of different level of wear.

  3. Electrical Compact Modeling of Graphene Base Transistors

    Directory of Open Access Journals (Sweden)

    Sébastien Frégonèse

    2015-11-01

    Full Text Available Following the recent development of the Graphene Base Transistor (GBT), a new electrical compact model for GBT devices is proposed. The transistor model includes the quantum capacitance model to obtain a self-consistent base potential. It also uses a versatile transfer current equation to be compatible with the different possible GBT configurations, and it accounts for high-injection conditions thanks to a transit-time-based charge model. Finally, the developed large-signal model has been implemented in Verilog-A code and can be used for simulation in a standard circuit design environment such as Cadence or ADS. This model has been verified using advanced numerical simulation.

  4. Blade element momentum modeling of inflow with shear in comparison with advanced model results

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Riziotis, V.; Zahle, Frederik

    2012-01-01

    There seems to be a significant uncertainty in aerodynamic and aeroelastic simulations of megawatt turbines operating in inflow with considerable shear, in particular with the engineering blade element momentum (BEM) model commonly implemented in the aeroelastic design codes used by industry… Computations with advanced vortex and computational fluid dynamics models are used to provide improved insight into the complex flow phenomena and rotor aerodynamics caused by the sheared inflow. One consistent result from the advanced models is the variation of induced velocity as a function of azimuth when… a higher power than in uniform flow. On the basis of the consistent azimuthal induction variations seen in the advanced model results, three different BEM implementation methods are discussed and tested in the same aeroelastic code. A full local BEM implementation on an elemental stream tube in both…

  5. Hierarchical Geometric Constraint Model for Parametric Feature Based Modeling

    Institute of Scientific and Technical Information of China (English)

    高曙明; 彭群生

    1997-01-01

    A new geometric constraint model is described, which is hierarchical and suitable for parametric feature-based modeling. In this model, different levels of geometric information are represented to support various stages of a design process. An efficient approach to parametric feature-based modeling is also presented, adopting the high-level geometric constraint model. The low-level geometric model, such as B-reps, can be derived automatically from the high-level geometric constraint model, enabling designers to perform their task of detailed design.

  6. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope up with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.
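The candidate-segment selection step can be sketched as a ranking problem: score each fetched segment against user-profile keywords and assemble the top-scoring segments into one resultant page. Everything below (the profile keywords, the sample segments, and the overlap-count scoring rule) is a hypothetical illustration, not the paper's actual scoring function.

```python
# Illustrative user profile: keywords the user is interested in.
profile = {"graphics", "gpu", "shader"}

# Illustrative segments extracted from fetched result pages.
segments = [
    "Introduction to cooking pasta at home",
    "Modern gpu shader pipelines for real-time graphics",
    "gpu memory hierarchies explained",
]

def score(segment: str, keywords: set) -> int:
    # Relevance = number of profile keywords appearing in the segment.
    return len(keywords & set(segment.lower().split()))

# Assemble the resultant page from the highest-scoring segments,
# enabling one-shot navigation to the relevant portions.
ranked = sorted(segments, key=lambda s: score(s, profile), reverse=True)
resultant_page = "\n".join(ranked[:2])
```

A real implementation would segment pages by visual and DOM structure and use a richer relevance measure, but the assembly logic follows this shape.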

  7. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    Directory of Open Access Journals (Sweden)

    K.S. Kuppusamy,

    2011-03-01

    Full Text Available The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope up with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through user profile during the candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.

  8. Maximizing esthetic results on zirconia-based restorations.

    Science.gov (United States)

    Chang, Yi-Yuan

    2011-01-01

    With a flexural strength of approximately 900-1,100 MPa, zirconium oxide is one of the toughest all-ceramic materials available in dentistry.1 It can be used to fabricate both single-unit and long-span bridge frameworks. A moderate level of translucency makes it suitable for esthetically demanding clinical cases, such as restoring maxillary anterior teeth. A variety of well-designed porcelain veneering systems allow technicians to apply their artistic skills to create natural, lifelike restorations. A good balance of strength, precision, and translucency allows zirconia-based restorations to accommodate a variety of clinical situations.

  9. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

    Full Text Available The calibration and execution of large hydrological models, such as SWAT (soil and water assessment tool), developed for large areas, high resolution, and huge input data, require not only long execution times but also substantial computation resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid based high computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  10. The Culture Based Model: Constructing a Model of Culture

    Science.gov (United States)

    Young, Patricia A.

    2008-01-01

    Recent trends reveal that models of culture aid in mapping the design and analysis of information and communication technologies. Therefore, models of culture are powerful tools to guide the building of instructional products and services. This research examines the construction of the culture based model (CBM), a model of culture that evolved…

  11. Agent based modeling in tactical wargaming

    Science.gov (United States)

    James, Alex; Hanratty, Timothy P.; Tuttle, Daniel C.; Coles, John B.

    2016-05-01

    Army staffs at division, brigade, and battalion levels often plan for contingency operations. As such, analysts consider the impact and potential consequences of actions taken. The Army Military Decision-Making Process (MDMP) dictates identification and evaluation of possible enemy courses of action; however, non-state actors often do not exhibit the same level and consistency of planned actions that the MDMP was originally designed to anticipate. The fourth MDMP step is a particular challenge: wargaming courses of action within the context of complex social-cultural behaviors. Agent-based Modeling (ABM), with its resulting emergent behavior, is a potential solution for modeling terrain in terms of the human domain and improving the results and rigor of the traditional wargaming process.

  12. Trip Generation Model Based on Destination Attractiveness

    Institute of Scientific and Technical Information of China (English)

    YAO Liya; GUAN Hongzhi; YAN Hai

    2008-01-01

    Traditional trip generation forecasting methods use unified average trip generation rates to determine trip generation volumes in various traffic zones without considering the individual characteristics of each traffic zone. Therefore, the results can have significant errors. To reduce the forecasting error produced by uniform trip generation rates for different traffic zones, the behavior of each traveler was studied instead of the characteristics of the traffic zone. This paper gives a method for calculating the trip efficiency and the effect of traffic zones, combined with a destination selection model based on disaggregate theory, for trip generation. Beijing data is used with the trip generation method to predict trip volumes. The results show that the disaggregate model in this paper is more accurate than the traditional method. An analysis of the factors influencing traveler behavior and destination selection shows that the attractiveness of the traffic zone strongly affects the trip generation volume.
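    A minimal form of the disaggregate destination-choice idea is a multinomial logit over zones, in which a traveler's utility rises with zone attractiveness and falls with distance. The coefficients and zone data below are invented for illustration and are not estimated from the Beijing data.

```python
import math

# Illustrative multinomial logit destination-choice sketch.
# Utility(zone) = b_attr * attractiveness - b_dist * distance; the
# coefficients are assumptions, not estimated parameters.

def choice_probabilities(zones, b_attr=1.0, b_dist=0.5):
    """P(zone) is proportional to exp(utility of the zone)."""
    utils = {z: b_attr * a - b_dist * d for z, (a, d) in zones.items()}
    denom = sum(math.exp(u) for u in utils.values())
    return {z: math.exp(u) / denom for z, u in utils.items()}

# zone: (attractiveness, distance from the traveler's origin)
zones = {"CBD": (3.0, 2.0), "suburb": (1.0, 1.0)}
probs = choice_probabilities(zones)
```

    Summing the chosen destinations over all travelers then yields zone-level trip generation volumes that reflect attractiveness, which is the abstract's central claim.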

  13. Exact results in modeling planetary atmospheres-III

    Energy Technology Data Exchange (ETDEWEB)

    Pelkowski, J. [Institut fuer Atmosphaere und Umwelt, J.W. Goethe Universitaet Frankfurt, Campus Riedberg, Altenhoferallee 1, D-60438 Frankfurt a.M. (Germany)], E-mail: Pelkowski@meteor.uni-frankfurt.de; Chevallier, L. [Observatoire de Paris-Meudon, Laboratoire LUTH, 5 Place Jules Janssen, 92195 Meudon cedex (France); Rutily, B. [Universite de Lyon, F-69003 Lyon (France); Universite Lyon 1, Observatoire de Lyon, 9 avenue Charles Andre, F-69230 Saint-Genis-Laval (France); CNRS, UMR 5574, Centre de Recherche Astrophysique de Lyon (France); Ecole Normale Superieure de Lyon, F-69007 Lyon (France); Titaud, O. [Centro de Modelamiento Matematico, UMI 2807 CNRS-UChile, Blanco Encalada 2120 - 7 Piso, Casilla 170 - Correo 3, Santiago (Chile)

    2008-01-15

    We apply the semi-gray model of our previous paper to the particular case of the Earth's atmosphere, in order to illustrate quantitatively the inverse problem associated with the direct problem we dealt with before. From given climatological values of the atmosphere's spherical albedo and transmittance for visible radiation, the single-scattering albedo and the optical thickness in the visible are inferred, while the infrared optical thickness is deduced for given global average surface temperature. Eventually, temperature distributions in terms of the infrared optical depth will be shown for a terrestrial atmosphere assumed to be semi-gray and, locally, in radiative and thermodynamic equilibrium.

  14. Exact results in modeling planetary atmospheres-I. Gray atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Chevallier, L. [Observatoire de Paris-Meudon, Laboratoire LUTH, 5 Place Jules Janssen, 92195 Meudon cedex (France)]. E-mail: loic.chevallier@obspm.fr; Pelkowski, J. [Institut fuer Meteorologie und Geophysik, J.W. Goethe Universitaet Frankfurt, Robert Mayer Strasse 1, D-60325 Frankfurt (Germany); Rutily, B. [Universite de Lyon, Lyon, F-69000 (France) and Universite Lyon 1, Villeurbanne, F-69622 (France) and Centre de Recherche Astronomique de Lyon, Observatoire de Lyon, 9 avenue Charles Andre, Saint-Genis Laval cedex, F-69561 (France) and CNRS, UMR 5574; Ecole Normale Superieure de Lyon, Lyon (France)

    2007-04-15

    An exact model is proposed for a gray, isotropically scattering planetary atmosphere in radiative equilibrium. The slab is illuminated on one side by a collimated beam and is bounded on the other side by an emitting and partially reflecting ground. We provide expressions for the incident and reflected fluxes on both boundary surfaces, as well as the temperature of the ground and the temperature distribution in the atmosphere, assuming the latter to be in local thermodynamic equilibrium. Tables and curves of the temperature distribution are included for various values of the optical thickness. Finally, semi-infinite atmospheres illuminated from the outside or by sources at infinity are dealt with.
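    The paper derives exact temperature distributions; as a point of comparison, the classical Eddington approximation for a gray atmosphere in radiative equilibrium is easy to sketch. This is the standard textbook result, not the authors' exact solution.

```python
# Eddington-approximation temperature profile for a gray atmosphere in
# radiative equilibrium: T^4(tau) = (3/4) * T_eff^4 * (tau + 2/3).

def eddington_temperature(tau, t_eff):
    """Local temperature at optical depth tau for effective temperature t_eff."""
    return (0.75 * t_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

# At tau = 2/3 the local temperature equals the effective temperature,
# and T increases monotonically with optical depth.
t = eddington_temperature(2.0 / 3.0, 255.0)
```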

  15. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.
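    The core ABM loop the abstract describes, heterogeneous agents acting on a shared environment and producing an emergent aggregate outcome, can be sketched with a toy commons-harvesting model. All agent counts, rates and the regeneration factor below are illustrative assumptions.

```python
import random

# Toy agent-based commons sketch: agents with heterogeneous harvest rates
# draw from a shared, regenerating resource stock. The aggregate stock
# trajectory is the emergent macro-level outcome.

def run_commons(n_agents=10, steps=20, regen=1.05, stock=100.0, seed=1):
    rng = random.Random(seed)
    harvest_rates = [rng.uniform(0.01, 0.05) for _ in range(n_agents)]
    history = [stock]
    for _ in range(steps):
        for rate in harvest_rates:   # each agent takes its share in turn
            stock -= rate * stock
        stock *= regen               # the resource regrows each period
        history.append(stock)
    return history

history = run_commons()
```

    With these parameters the combined harvest outpaces regeneration, so the stock declines: a commons-dilemma outcome that no single agent's rule encodes directly.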

  16. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    OpenAIRE

    K.S. Kuppusamy,; Aghila, G.

    2012-01-01

    The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope up with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through user profile during the can...

  17. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in the infrastructure and built environments. This paper will discuss the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, and introducing Geographic Information Systems (GIS) and open source models.

  18. Integrated Semantic Similarity Model Based on Ontology

    Institute of Scientific and Technical Information of China (English)

    LIU Ya-Jun; ZHAO Yun

    2004-01-01

    To solve the problem of the inadequacy of semantic processing in the intelligent question answering system, an integrated semantic similarity model which calculates the semantic similarity using the geometric distance and information content is presented in this paper. With the help of the interrelationships between concepts, the information content of concepts, and the strength of the edges in the ontology network, we can calculate the semantic similarity between two concepts and provide information for the further calculation of the semantic similarity between the user's question and answers in the knowledge base. The results of the experiments on the prototype have shown that the semantic problem in natural language processing can also be solved with the help of the knowledge and the abundant semantic information in the ontology. More than 90% accuracy with less than 50 ms average searching time has been reached in the intelligent question answering prototype system based on ontology. The results are very satisfactory.
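    A toy version of combining geometric (path) distance with information content over an ontology might look like the following. The taxonomy, frequency counts and weighting are assumptions for illustration only, not the paper's model.

```python
import math

# Hybrid similarity sketch: a path-distance term plus an information-content
# (IC) term over a toy taxonomy. IC of a concept = -log(freq / total), taken
# at the lowest common subsumer (LCS) of the two concepts.

parents = {"dog": "mammal", "cat": "mammal", "mammal": "animal",
           "fish": "animal", "animal": None}
freq = {"dog": 10, "cat": 10, "fish": 5, "mammal": 20, "animal": 25}

def ancestors(c):
    out = []
    while c is not None:
        out.append(c)
        c = parents[c]
    return out

def similarity(a, b, alpha=0.5):
    anc_a, anc_b = ancestors(a), ancestors(b)
    lcs = next(x for x in anc_a if x in anc_b)       # lowest common subsumer
    path_len = anc_a.index(lcs) + anc_b.index(lcs)   # edge count via the LCS
    ic = -math.log(freq[lcs] / freq["animal"])       # information content of LCS
    max_ic = -math.log(min(freq.values()) / freq["animal"])
    return alpha / (1 + path_len) + (1 - alpha) * ic / max_ic
```

    Concepts sharing a deep, informative subsumer (dog/cat via mammal) score higher than those meeting only at the root (dog/fish via animal), which is the behavior such hybrid measures are designed to capture.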

  19. Testing Strategies for Model-Based Development

    Science.gov (United States)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
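    A simple requirements-coverage metric in the spirit described, checking whether the test suite exercises each atomic condition in both outcomes, can be sketched as follows. The conditions and test inputs are hypothetical; the report's actual metrics are defined over formal models.

```python
# Hypothetical sketch of a requirements condition-coverage metric: the
# fraction of atomic conditions that the test suite drives to both True
# and False at least once.

def condition_coverage(conditions, tests):
    """conditions: {name: predicate over a test input}; tests: input dicts."""
    covered = 0
    for pred in conditions.values():
        outcomes = {bool(pred(t)) for t in tests}
        if outcomes == {True, False}:    # both branches observed
            covered += 1
    return covered / len(conditions)

conditions = {
    "overspeed": lambda t: t["speed"] > 100,
    "door_open": lambda t: t["door"] == "open",
}
tests = [{"speed": 120, "door": "closed"}, {"speed": 80, "door": "closed"}]
cov = condition_coverage(conditions, tests)
```

    Here the suite toggles `overspeed` but never opens the door, so coverage is 50% and the metric points at the missing requirements-level test.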

  20. Large Deviation Results for Generalized Compound Negative Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    Fan-chao Kong; Chen Shen

    2009-01-01

    In this paper we extend and improve some large deviation results for random sums of random variables. Let {Xn; n≥1} be a sequence of non-negative, independent and identically distributed random variables with common heavy-tailed distribution function F and finite mean μ∈R+, let {N(n); n≥0} be a sequence of negative binomial distributed random variables with a parameter p∈(0,1), and let {M(n); n≥0} be a Poisson process with intensity λ>0. Suppose {N(n); n≥0}, {Xn; n≥1} and {M(n); n≥0} are mutually independent. We obtain large deviation results for the resulting generalized compound negative binomial risk models. These results can be applied to certain problems in insurance and finance.
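    The object studied here, a compound sum S = X₁ + … + X_N with negative binomial claim count N and heavy-tailed claim sizes X, is easy to explore by Monte Carlo. The parameters below (negative binomial with r = 5, p = 0.5; Pareto claims with index 3) are illustrative; the paper's results are analytic asymptotics, not simulations.

```python
import random

# Monte Carlo sketch of a compound negative-binomial sum with Pareto claims.
# By Wald's identity, E[S] = E[N] * E[X] = (r(1-p)/p) * (alpha/(alpha-1)).

def negbin(rng, r=5, p=0.5):
    """Number of failures before the r-th success (mean r*(1-p)/p)."""
    failures = 0
    for _ in range(r):
        while rng.random() >= p:
            failures += 1
    return failures

def pareto(rng, alpha=3.0):
    """Pareto(alpha) with scale x_m = 1 via inverse transform sampling."""
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def simulate_mean(n_sims=20000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        total += sum(pareto(rng) for _ in range(negbin(rng)))
    return total / n_sims

est = simulate_mean()   # should be close to E[S] = 5 * 1.5 = 7.5
```

    Large deviation results of the kind proved in the paper describe how the probability that S greatly exceeds this mean decays, which a simulation like this can only probe crudely in the far tail.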

  1. Néron models and base change

    CERN Document Server

    Halle, Lars Halvard

    2016-01-01

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions of abelian varieties. The final chapter contains a list of challenging open questions. This book is a...

  2. Entropy-based consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław Jerzy

    2016-09-01

    A description of software architecture is a plan of the IT system construction, therefore any architecture gaps affect the overall success of an entire project. The definitions mostly describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent. Software architecture completeness is also often described in an ambiguous way. As a result most methods of IT systems building comprise many gaps and ambiguities, thus presenting obstacles for software building automation. In this article the consistency and completeness of software architecture are mathematically defined based on calculation of entropy of the architecture description. Following this approach, in this paper we also propose our method of automatic verification of consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects could assess the readiness of the ongoing modelling work for the start of IT system building. It even allows them to assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of such an approach is that it facilitates the preparation of complete and consistent software architecture more effectively, and it enables assessment and monitoring of the ongoing modelling status. We demonstrate this with a few industry examples of IT system designs.
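    The underlying idea of measuring an architecture description with Shannon entropy can be illustrated on the distribution of model elements across the Functionality, Behaviour and Structure views. This is a simplified sketch of the idea only, not the paper's actual FBS metric.

```python
import math

# Simplified entropy-style balance measure over an architecture description:
# Shannon entropy (in bits) of how model elements are distributed across the
# Functionality, Behaviour and Structure views.

def view_entropy(counts):
    """Entropy of the element distribution; maximal when views are balanced."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

balanced = view_entropy({"functionality": 10, "behaviour": 10, "structure": 10})
skewed = view_entropy({"functionality": 28, "behaviour": 1, "structure": 1})
```

    A skewed distribution (nearly all elements in one view) yields much lower entropy than a balanced one, giving a single number an architect could track as the description grows.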

  3. Model-based target and background characterization

    Science.gov (United States)

    Mueller, Markus; Krueger, Wolfgang; Heinze, Norbert

    2000-07-01

    Up to now most approaches to target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms the main goal is the optimization of certain performance parameters. These parameters are measured during test runs while applying one algorithm with one parameter set to images that consist of image domains with very different domain characteristics (targets and various types of background clutter). Model based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROE (Regions of Expectations) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models. The detection algorithms of ROIs utilize gradient direction models that have to be matched with transformed image domain data. In most cases simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS (Geographical Information System) database. Apart from geocoding, the algorithms can also be used for image-to-image registration (multi sensor and data fusion) and may be used for creation and validation of geographical maps.

  4. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  5. Model-Based Enterprise Summit Report

    Science.gov (United States)

    2014-02-01

    Slide-extraction fragments only; the recoverable theme is that models become much more efficient and effective when coupled with knowledge, illustrated by design advisors, CAD fit, machine motion, KanBan triggers, tolerance models, geometry, kinematics, control, physics, and planning system models in a model-based enterprise.

  6. Theoretical Modeling of ISO Results on Planetary Nebula NGC 7027

    Science.gov (United States)

    Yan, M.; Federman, S. R.; Dalgarno, A.; Bjorkman, J. E.

    1999-04-01

    We present a thermal and chemical model of the neutral envelope of planetary nebula NGC 7027. In our model, the neutral envelope is composed of a thin dense shell of constant density and an outer stellar wind region with the usual inverse-square law density profile. The thermal and chemical structure is calculated with the assumption that the incident radiation field on the inner surface equals 0.5×10^5 times Draine's fit to the average interstellar far-ultraviolet field. The rate coefficient for H2 formation on grains is assumed to be 1/5 the usual value to take into account the lower dust-gas mass ratio in the neutral envelope of NGC 7027. The calculated temperature in the dense shell decreases from 3000 to under 200 K. Once the temperature drops to 200 K, we assume that it remains at 200 K until the outer edge of the dense shell is reached, so that the observed intensities of CO J=16-15, 15-14, and 14-13 lines can be reproduced. The 200 K temperature can be interpreted as the average temperature of the shocked gas just behind the forward shock front in the framework of the interacting stellar wind theory. We calculate the intensities of the molecular far-infrared rotational lines by using a revised version of the escape probability formalism. The theoretical intensities for rotational lines of CO (from J=29-28 to J=14-13), CH+, OH, and CH are shown to be in good agreement with ISO observations. The H2 rovibrational line intensities are also calculated and are in agreement with available observations.

  7. Construction of an extended library of adult male 3D models: rationale and results

    Science.gov (United States)

    Broggio, D.; Beurrier, J.; Bremaud, M.; Desbrée, A.; Farah, J.; Huet, C.; Franck, D.

    2011-12-01

    In order to best cover the possible extent of heights and weights of male adults, the construction of 25 whole body 3D models has been undertaken. Such a library is thought to be useful to specify the uncertainties and relevance of dosimetry calculations carried out with models representing individuals of average body heights and weights. Representative 3D models of Caucasian body types are selected in a commercial database according to their height and weight, and 3D models of the skeleton and internal organs are designed using another commercial dataset. A review of the literature enabled us to fix volume or mass target values for the skeleton, soft organs, skin and fat content of the selected individuals. The composition of the remainder tissue is fixed so that the weight of the voxel models equals the weight of the selected individuals. After mesh and NURBS modelling, volume adjustment of the selected body shapes and additional voxel-based work, 25 voxel models with 109 identified organs or tissues are obtained. Radiation transport calculations are carried out with some of the developed models to illustrate potential uses. The following points are discussed throughout this paper: justification of the fixed or obtained models' features regarding available and relevant literature data; workflow and strategy for major modelling steps; advantages and drawbacks of the obtained library as compared with other works. The construction hypotheses are explained and justified in detail since future calculation results obtained with this library will depend on them.

  8. Model based risk assessment - the CORAS framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune; Thunem, Atoosa P-J.

    2004-04-15

    Traditional risk analysis and assessment is based on failure-oriented models of the system. In contrast to this, model-based risk assessment (MBRA) utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The target models are then used as input sources for complementary risk analysis and assessment techniques, as well as a basis for the documentation of the assessment results. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tested with successful outcome through a series of seven trials within the telemedicine and e-commerce areas. The CORAS project in general and the CORAS application of MBRA in particular have contributed positively to the visibility of model-based risk assessment and thus to the disclosure of several potentials for further exploitation of various aspects within this important research field. In that connection, the CORAS methodology's possibilities for further improvement towards utilization in more complex architectures and also in other application domains such as the nuclear field can be addressed. The latter calls for adapting the framework to address nuclear standards such as IEC 60880 and IEC 61513. For this development we recommend applying a trial driven approach within the nuclear field. The tool supported approach for combining risk analysis and system development also fits well with the HRP proposal for developing an Integrated Design Environment (IDE) providing efficient methods and tools to support control room systems design. (Author)

  9. Combining forming results via weld models to powerful numerical assemblies

    NARCIS (Netherlands)

    Kose, K.; Rietman, Bert

    2004-01-01

    Forming simulations generally give satisfying results with respect to thinning, stresses, changed material properties and, with a proper springback calculation, the geometric form. The joining of parts by means of welding yields an extra change of the material properties and the residual stresses.

  10. Combining forming results via weld models to powerful numerical assemblies

    NARCIS (Netherlands)

    Kose, K.; Rietman, B.

    2004-01-01

    Forming simulations generally give satisfying results with respect to thinning, stresses, changed material properties and, with a proper springback calculation, the geometric form. The joining of parts by means of welding yields an extra change of the material properties and the residual stresses.

  11. Student Entrepreneurship in Hungary: Selected Results Based on GUESSS Survey

    Directory of Open Access Journals (Sweden)

    Andrea S. Gubik

    2016-12-01

    Full Text Available Objective: This study investigates students' entrepreneurial activities and aims to answer questions regarding the extent to which students utilize the knowledge gained during their studies and the personal connections acquired at universities, as well as what role a family business background plays in the development of students' business start-ups. Research Design & Methods: This paper is based on the database of the GUESSS project and investigates 658 student entrepreneurs (so-called 'active entrepreneurs') who have already established businesses of their own. Findings: The rate of self-employment among Hungarian students who study in tertiary education and consider themselves to be entrepreneurs is high. Their motivations and entrepreneurial efforts differ from those of owners of larger companies; they do not necessarily intend to make an entrepreneurial path a long-run career option. A family business background and family support play a determining role in entrepreneurship and business start-ups, while entrepreneurial training and courses offered at higher-education institutions are not reflected in students' entrepreneurial activities. Implications & Recommendations: Universities should offer not only conventional business courses (for example, business planning) but also new forms of education so that students meet various entrepreneurial tasks and problems, make decisions in different situations, and explore and acquaint themselves with entrepreneurship. Contribution & Value Added: The study provides a literature overview of youth entrepreneurship, describes the main characteristics of students' enterprises and contributes to understanding the factors of youth entrepreneurship.

  12. Dynamic modelling and analysis of biochemical networks: mechanism-based models and model-based experiments.

    Science.gov (United States)

    van Riel, Natal A W

    2006-12-01

    Systems biology applies quantitative, mechanistic modelling to study genetic networks, signal transduction pathways and metabolic networks. Mathematical models of biochemical networks can look very different. An important reason is that the purpose and application of a model are essential for the selection of the best mathematical framework. Fundamental aspects of selecting an appropriate modelling framework and a strategy for model building are discussed. Concepts and methods from system and control theory provide a sound basis for the further development of improved and dedicated computational tools for systems biology. Identification of the network components and rate constants that are most critical to the output behaviour of the system is one of the major problems raised in systems biology. Current approaches and methods of parameter sensitivity analysis and parameter estimation are reviewed. It is shown how these methods can be applied in the design of model-based experiments which iteratively yield models that are decreasingly wrong and increasingly gain predictive power.
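    Local parameter sensitivity analysis of the kind reviewed here can be sketched with finite differences on a one-parameter model. The model (simple exponential decay) and the step sizes below are illustrative assumptions, not taken from the paper.

```python
# Minimal local sensitivity-analysis sketch: perturb a rate constant and
# measure the change in the model output via a central difference.

def simulate(k, x0=1.0, dt=0.01, steps=100):
    """Euler integration of dx/dt = -k*x; returns the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)
    return x

def sensitivity(k, h=1e-6):
    """Central-difference estimate of d(output)/dk at parameter value k."""
    return (simulate(k + h) - simulate(k - h)) / (2 * h)

s = sensitivity(1.0)   # negative: a faster decay rate lowers the final state
```

    Ranking the magnitudes of such sensitivities across all rate constants is one way to identify the network components most critical to the output behaviour, which in turn guides model-based experiment design.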

  13. Scaling Relationships Based on Scaled Tank Mixing and Transfer Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Holmes, Aimee E.; Heredia-Langner, Alejandro

    2013-09-18

    This report documents the statistical analyses performed (by Pacific Northwest National Laboratory for Washington River Protection Solutions) on data from 26 tests conducted using two scaled tanks (43 and 120 inches) in the Small Scale Mixing Demonstration platform. The 26 tests varied several test parameters, including mixer-jet nozzle velocity, base simulant, supernatant viscosity, and capture velocity. For each test, samples were taken pre-transfer and during five batch transfers. The samples were analyzed for the concentrations (lbs/gal slurry) of four primary components in the base simulants (gibbsite, stainless steel, sand, and ZrO2). The statistical analyses included modeling the component concentrations as functions of test parameters using stepwise regression with two different model forms. The resulting models were used in an equivalent performance approach to calculate values of scaling exponents (for a simple geometric scaling relationship) as functions of the parameters in the component concentration models. The resulting models and scaling exponents are displayed in tables and graphically. The sensitivities of component concentrations and scaling exponents to the test parameters are presented graphically. These results will serve as inputs to subsequent work by other researchers to develop scaling relationships that are applicable to full-scale tanks.

  14. Scaling Relationships Based on Scaled Tank Mixing and Transfer Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Holmes, Aimee E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lee, Kearn P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kelly, Steven E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-01-01

    This report documents the statistical analyses performed (by Pacific Northwest National Laboratory for Washington River Protection Solutions) on data from 26 tests conducted using two scaled tanks (43 and 120 inches) in the Small Scale Mixing Demonstration platform. The 26 tests varied several test parameters, including mixer-jet nozzle velocity, base simulant, supernatant viscosity, and capture velocity. For each test, samples were taken pre-transfer and during five batch transfers. The samples were analyzed for the concentrations (lbs/gal slurry) of four primary components in the base simulants (gibbsite, stainless steel, sand, and ZrO2). The statistical analyses included modeling the component concentrations as functions of test parameters using stepwise regression with two different model forms. The resulting models were used in an equivalent performance approach to calculate values of scaling exponents (for a simple geometric scaling relationship) as functions of the parameters in the component concentration models. The resulting models and scaling exponents are displayed in tables and graphically. The sensitivities of component concentrations and scaling exponents to the test parameters are presented graphically. These results will serve as inputs to subsequent work by other researchers to develop scaling relationships that are applicable to full-scale tanks.
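    The stepwise-regression step described above can be sketched as forward selection: at each round, add the predictor that most reduces the residual sum of squares, and stop when no candidate helps. The data and predictors below are synthetic illustrations, not the report's simulant concentration models.

```python
# Forward stepwise regression sketch with ordinary least squares fitted via
# the normal equations (Gaussian elimination, no pivoting; fine for this
# small, well-conditioned example).

def ols_fit(X, y):
    """Solve (X'X) b = X'y for the coefficient vector b."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    rhs = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                       # forward elimination
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0] * p
    for r in reversed(range(p)):               # back substitution
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c]
                                for c in range(r + 1, p))) / A[r][r]
    return coef

def sse(X, y, coef):
    return sum((yi - sum(c * xi for c, xi in zip(coef, row))) ** 2
               for row, yi in zip(X, y))

def forward_stepwise(columns, y, tol=1e-6):
    chosen, X = [], [[1.0] for _ in y]         # start from intercept only
    best = sse(X, y, ols_fit(X, y))
    while True:
        gain, pick, pick_X = tol, None, None
        for name, col in columns.items():      # try each unused predictor
            if name in chosen:
                continue
            Xc = [row + [v] for row, v in zip(X, col)]
            g = best - sse(Xc, y, ols_fit(Xc, y))
            if g > gain:
                gain, pick, pick_X = g, name, Xc
        if pick is None:
            return chosen
        chosen, X, best = chosen + [pick], pick_X, best - gain

x1 = [0.0, 1.0, 2.0, 3.0, 4.0]
noise = [1.0, -1.0, 1.0, -1.0, 1.0]
y = [2.0 * v + 1.0 for v in x1]                # y depends on x1 only
selected = forward_stepwise({"x1": x1, "noise": noise}, y)
```

    Here the informative predictor is selected and the noise column is rejected; fitted models of this shape can then feed downstream calculations such as the report's scaling exponents.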

  15. Firm Based Trade Models and Turkish Economy

    Directory of Open Access Journals (Sweden)

    Nilüfer ARGIN

    2015-12-01

    Full Text Available Among all international trade models, only Firm Based Trade Models explain firms' actions and behavior in world trade. Firm Based Trade Models focus on the trade behavior of the individual firms that actually engage in intra-industry trade, and they can genuinely explain the globalization process. These approaches also encompass multinational corporations, supply chains, and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of Firm Based Trade Models. We use UNCTAD data on exports by SITC Rev. 3 categorization to explain total exports and 255 products, and calculate the intensive and extensive margins of Turkish firms.

  16. Ionospheric Poynting Flux and Joule Heating Modeling Challenge: Latest Results and New Models.

    Science.gov (United States)

    Shim, J. S.; Rastaetter, L.; Kuznetsova, M. M.; Knipp, D. J.; Zheng, Y.; Cosgrove, R. B.; Newell, P. T.; Weimer, D. R.; Fuller-Rowell, T. J.; Wang, W.

    2014-12-01

    This presentation covers Poynting flux and Joule heating in the ionosphere: the latest results from the modeling challenge and updates at the Community Coordinated Modeling Center (CCMC). With the addition of satellite tracking and display features in the online analysis tool at the CCMC, we are now able to obtain Poynting flux and Joule heating values from a wide variety of ionospheric models. In addition to Poynting fluxes derived from electric and magnetic field measurements from the Defense Meteorological Satellite Program (DMSP) satellites for a recent modeling challenge, we can now use a Poynting flux model derived from FAST satellite observations for comparison. Poynting fluxes are also correlated with Ovation Prime maps of precipitation patterns during the same time periods to assess how "typical" the events in the challenge are.

  17. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  18. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.

  19. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  20. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  1. Environmental Model Interoperability Enabled by Open Geospatial Standards - Results of a Feasibility Study (Invited)

    Science.gov (United States)

    Benedict, K. K.; Yang, C.; Huang, Q.

    2010-12-01

    The availability of high-speed research networks such as the US National Lambda Rail and the GÉANT network, scalable on-demand commodity computing resources provided by public and private "cloud" computing systems, and increasing demand for rapid access to the products of environmental models for both research and public policy development contribute to a growing need for the evaluation and development of environmental modeling systems that distribute processing, storage, and data delivery capabilities between network connected systems. In an effort to address the feasibility of developing a standards-based distributed modeling system in which model execution systems are physically separate from data storage and delivery systems, the research project presented in this paper developed a distributed dust forecasting system in which two nested atmospheric dust models are executed at George Mason University (GMU, in Fairfax, VA) while data and model output processing services are hosted at the University of New Mexico (UNM, in Albuquerque, NM). Exchange of model initialization and boundary condition parameters between the servers at UNM and the model execution systems at GMU is accomplished through Open Geospatial Consortium (OGC) Web Coverage Services (WCS) and Web Feature Services (WFS) while model outputs are pushed from GMU systems back to UNM using a REST web service interface. In addition to OGC and non-OGC web services for exchange between UNM and GMU, the servers at UNM also provide access to the input meteorological model products, intermediate and final dust model outputs, and other products derived from model outputs through OGC WCS, WFS, and OGC Web Map Services (WMS). The performance of the nested versus non-nested models is assessed in this research, with the results of the performance analysis providing the core content of the produced feasibility study. System integration diagram illustrating the storage and service platforms hosted at the Earth Data

  2. Modeling Framework and Results to Inform Charging Infrastructure Investments

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, Marc W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    The plug-in electric vehicle (PEV) market is experiencing rapid growth, with dozens of battery electric (BEV) and plug-in hybrid electric (PHEV) models already available and billions of dollars being invested by automotive manufacturers in the PEV space. Electric range is increasing thanks to larger and more advanced batteries, and significant infrastructure investments are being made to enable higher-power fast charging. Costs are falling and PEVs are becoming more competitive with conventional vehicles. Moreover, new technologies such as connectivity and automation hold the promise of enhancing the value proposition of PEVs. This presentation outlines a suite of projects funded by the U.S. Department of Energy's Vehicle Technologies Office to conduct assessments of the economic value and charging infrastructure requirements of the evolving PEV market. Individual assessments include national evaluations of PEV economic value (assuming 73M PEVs on the road in 2035), national analysis of charging infrastructure requirements (with community- and corridor-level resolution), and case studies of PEV ownership in Columbus, OH and Massachusetts.

  3. Model-based explanation of plant knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Huuskonen, P.J. [VTT Electronics, Oulu (Finland). Embedded Software

    1997-12-31

    This thesis deals with computer explanation of knowledge related to design and operation of industrial plants. The needs for explanation are motivated through case studies and literature reviews. A general framework for analysing plant explanations is presented. Prototypes demonstrate key mechanisms for implementing parts of the framework. Power plants, steel mills, paper factories, and high energy physics control systems are studied to set requirements for explanation. The main problems are seen to be either lack or abundance of information. Design knowledge in particular is found missing at plants. Support systems and automation should be enhanced with ways to explain plant knowledge to the plant staff. A framework is formulated for analysing explanations of plant knowledge. It consists of three parts: 1. a typology of explanation, organised by the class of knowledge (factual, functional, or strategic) and by the target of explanation (processes, automation, or support systems), 2. an identification of explanation tasks generic for the plant domain, and 3. an identification of essential model types for explanation (structural, behavioural, functional, and teleological). The tasks use the models to create the explanations of the given classes. Key mechanisms are discussed to implement the generic explanation tasks. Knowledge representations based on objects and their relations form a vocabulary to model and present plant knowledge. A particular class of models, means-end models, are used to explain plant knowledge. Explanations are generated through searches in the models. Hypertext is adopted to communicate explanations over dialogue based on context. The results are demonstrated in prototypes. The VICE prototype explains the reasoning of an expert system for diagnosis of rotating machines at power plants. The Justifier prototype explains design knowledge obtained from an object-oriented plant design tool. 
Enhanced access mechanisms into on-line documentation are

  4. Numerical simulation of base flow with hot base bleed for two jet models

    OpenAIRE

    Wen-jie Yu; Yong-gang Yu; Bin Ni

    2014-01-01

    In order to improve the benefits of base bleed in base flow field, the base flow with hot base bleed for two jet models is studied. Two-dimensional axisymmetric Navier–Stokes equations are computed by using a finite volume scheme. The base flow of a cylinder afterbody with base bleed is simulated. The simulation results are validated with the experimental data, and the experimental results are well reproduced. On this basis, the base flow fields with base bleed for a circular jet model and an...

  5. CSPBuilder - CSP based Scientific Workflow Modelling

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Vinter, Brian

    2008-01-01

    This paper introduces a framework for building CSP-based applications, targeted at clusters and next-generation CPU designs. CPUs are produced with several cores today, and every future CPU generation will feature increasingly more cores, resulting in a requirement for concurrency that has not previously been called for. The framework is CSP presented as a scientific workflow model, specialized for scientific computing applications. The purpose of the framework is to enable scientists to exploit large parallel computation resources, which has previously been hard due to the difficulty of concurrent programming using threads and locks.

  6. Standard Model Higgs results from ATLAS and CMS experiments

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221190; The ATLAS collaboration

    2016-01-01

    The properties of the Higgs boson were measured with the ATLAS and CMS experiments at the LHC at centre-of-mass energies of 7 TeV and 8 TeV. The combined data samples of the ATLAS and CMS experiments were used for the measurements of the Higgs boson mass and couplings. Furthermore, the CP and spin analyses done separately with the CMS and ATLAS experiments are described. Moreover, first results on the Higgs boson cross section at a centre-of-mass energy of 13 TeV in the channels H->ZZ->4leptons and H->gamma+gamma with the ATLAS detector are presented.

  7. Functional Results-Oriented Healthcare Leadership: A Novel Leadership Model

    Directory of Open Access Journals (Sweden)

    Salem Said Al-Touby

    2012-03-01

    Full Text Available This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership, based on two findings. First, the article argues that ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and second, that leadership must strive for effectiveness of care provision rather than merely targeting the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model places the results element above the other three elements, so that every healthcare leadership effort is directed towards attaining excellent patient outcomes.

  8. Microtechnology-Based Multi-Organ Models

    Directory of Open Access Journals (Sweden)

    Seung Hwan Lee

    2017-05-01

    Full Text Available Drugs affect the human body through absorption, distribution, metabolism, and elimination (ADME processes. Due to their importance, the ADME processes need to be studied to determine the efficacy and side effects of drugs. Various in vitro model systems have been developed and used to realize the ADME processes. However, conventional model systems have failed to simulate the ADME processes because they are different from in vivo, which has resulted in a high attrition rate of drugs and a decrease in the productivity of new drug development. Recently, a microtechnology-based in vitro system called “organ-on-a-chip” has been gaining attention, with more realistic cell behavior and physiological reactions, capable of better simulating the in vivo environment. Furthermore, multi-organ-on-a-chip models that can provide information on the interaction between the organs have been developed. The ultimate goal is the development of a “body-on-a-chip”, which can act as a whole body model. In this review, we introduce and summarize the current progress in the development of multi-organ models as a foundation for the development of body-on-a-chip.

  9. Microtechnology-Based Multi-Organ Models.

    Science.gov (United States)

    Lee, Seung Hwan; Sung, Jong Hwan

    2017-05-21

    Drugs affect the human body through absorption, distribution, metabolism, and elimination (ADME) processes. Due to their importance, the ADME processes need to be studied to determine the efficacy and side effects of drugs. Various in vitro model systems have been developed and used to realize the ADME processes. However, conventional model systems have failed to simulate the ADME processes because they are different from in vivo, which has resulted in a high attrition rate of drugs and a decrease in the productivity of new drug development. Recently, a microtechnology-based in vitro system called "organ-on-a-chip" has been gaining attention, with more realistic cell behavior and physiological reactions, capable of better simulating the in vivo environment. Furthermore, multi-organ-on-a-chip models that can provide information on the interaction between the organs have been developed. The ultimate goal is the development of a "body-on-a-chip", which can act as a whole body model. In this review, we introduce and summarize the current progress in the development of multi-organ models as a foundation for the development of body-on-a-chip.

  10. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that. The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  11. Recent neutron scattering results from Gd-based pyrochlore oxides

    Science.gov (United States)

    Gardner, Jason

    2009-03-01

    In my presentation I will describe recent results that have determined the spin-spin correlations in the geometrically frustrated magnets Gd2Sn2O7 and Gd2Ti2O7. This will include polarised neutron diffraction, inelastic neutron scattering and neutron spin echo data. One sample of particular interest is Gd2Sn2O7, which is believed to be a good approximation to a Heisenberg antiferromagnet on a pyrochlore lattice with exchange and dipole-dipole interactions. Theoretically such a system is expected to enter a long-range ordered ground state known as the ``Palmer-Chalker'' state [1]. We show conclusively, through neutron scattering data, that the system indeed enters an ordered state with the Palmer-Chalker spin configuration below Tc = 1 K [2-3]. Within this state we have also observed long-range collective spin dynamics: spin waves. This work has been performed in collaboration with many research groups including G. Ehlers (SNS) and R. Stewart (ISIS). [1] S. E. Palmer and J. T. Chalker, Phys. Rev. B 62, 488 (2000). [2] J. R. Stewart, G. Ehlers, A. S. Wills, S. T. Bramwell, and J. S. Gardner, J. Phys.: Condens. Matter 16, L321 (2004). [3] J. R. Stewart, J. S. Gardner, Y. Qiu and G. Ehlers, Phys. Rev. B 78, 132410 (2008).

  12. Distributed hydrological models: comparison between TOPKAPI, a physically based model and TETIS, a conceptually based model

    Science.gov (United States)

    Ortiz, E.; Guna, V.

    2009-04-01

    The present work carries out a comparison between two distributed hydrological models, the TOPKAPI (Ciarapica and Todini, 1998; Todini and Ciarapica, 2001) and TETIS (Vélez, J. J.; Vélez, J. I. and Francés, F., 2002) models, computing the hydrological solution for the same storm events. The first model is physically based and the second one is conceptually based. The analysis was performed on the 21.4 km2 Goodwin Creek watershed, located in Panola County, Mississippi. This watershed, extensively monitored by the Agricultural Research Service (ARS) National Sediment Laboratory (NSL), was chosen because it offers a complete database compiling precipitation (16 rain gauges), runoff (6 discharge stations) and GIS data. Three storm events were chosen to evaluate the performance of the two models: the first one was used to calibrate the models, and the other two to validate them. Both models produced a satisfactory hydrological response in both calibration and validation events. While the TOPKAPI model did not require a real calibration, owing to its very good performance with parameter modal values derived from watershed characteristics, for the TETIS model it was necessary to perform a prior automatic calibration. This calibration was carried out using the observed hydrograph in order to adjust the model's 9 correction factors. Keywords: TETIS, TOPKAPI, distributed models, hydrological response, ungauged basins.
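
    The abstract reports a "satisfactory hydrological response" but does not name the goodness-of-fit measure used. As a hedged illustration only, a common metric for comparing simulated and observed hydrographs is the Nash-Sutcliffe efficiency; the function and the sample values below are ours, not from the study.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 for a model
    that performs no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
```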

  13. High-energy radiation damage in zirconia: modeling results

    Energy Technology Data Exchange (ETDEWEB)

    Zarkadoula, Eva; Devanathan, Ram; Weber, William J.; Seaton, Michael; Todorov, Ilian; Nordlund, Kai; Dove, Martin T.; Trachenko, Kostya

    2014-02-28

    Zirconia has been viewed as a material of exceptional resistance to amorphization by radiation damage, and was consequently proposed as a candidate to immobilize nuclear waste and serve as a nuclear fuel matrix. Here, we perform molecular dynamics simulations of radiation damage in zirconia in the range of 0.1-0.5 MeV energies with the account of electronic energy losses. We find that the lack of amorphizability co-exists with a large number of point defects and their clusters. These, importantly, are largely disjoint from each other and therefore represent a dilute damage that does not result in the loss of long-range structural coherence and amorphization. We document the nature of these defects in detail, including their sizes, distribution and morphology, and discuss practical implications of using zirconia in intense radiation environments.

  14. High-energy radiation damage in zirconia: modeling results

    Energy Technology Data Exchange (ETDEWEB)

    Zarkadoula, Evangelia [Queen Mary, University of London; Devanathan, Ram [Pacific Northwest National Laboratory (PNNL); Weber, William J [ORNL; Seaton, M [Daresbury Laboratory, UK; Todorov, I T [Daresbury Laboratory, UK; Nordlund, Kai [University of Helsinki; Dove, Martin T [Queen Mary, University of London; Trachenko, Kostya [Queen Mary, University of London

    2014-01-01

    Zirconia is viewed as a material of exceptional resistance to amorphization by radiation damage, and consequently proposed as a candidate to immobilize nuclear waste and serve as an inert nuclear fuel matrix. Here, we perform molecular dynamics simulations of radiation damage in zirconia in the range of 0.1-0.5 MeV energies with account of electronic energy losses. We find that the lack of amorphizability co-exists with a large number of point defects and their clusters. These, importantly, are largely isolated from each other and therefore represent a dilute damage that does not result in the loss of long-range structural coherence and amorphization. We document the nature of these defects in detail, including their sizes, distribution and morphology, and discuss practical implications of using zirconia in intense radiation environments.

  15. High-energy radiation damage in zirconia: Modeling results

    Energy Technology Data Exchange (ETDEWEB)

    Zarkadoula, E., E-mail: zarkadoulae@ornl.gov [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); SEPnet, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Devanathan, R. [Nuclear Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Weber, W. J. [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Materials Science and Engineering, University of Tennessee, Knoxville, Tennessee 37996 (United States); Seaton, M. A.; Todorov, I. T. [STFC Daresbury Laboratory, Scientific Computing Department, Keckwick Lane, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom); Nordlund, K. [University of Helsinki, P.O. Box 43, FIN-00014 Helsinki (Finland); Dove, M. T. [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); Trachenko, K. [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); SEPnet, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom)

    2014-02-28

    Zirconia is viewed as a material of exceptional resistance to amorphization by radiation damage, and consequently proposed as a candidate to immobilize nuclear waste and serve as an inert nuclear fuel matrix. Here, we perform molecular dynamics simulations of radiation damage in zirconia in the range of 0.1–0.5 MeV energies with account of electronic energy losses. We find that the lack of amorphizability co-exists with a large number of point defects and their clusters. These, importantly, are largely isolated from each other and therefore represent a dilute damage that does not result in the loss of long-range structural coherence and amorphization. We document the nature of these defects in detail, including their sizes, distribution, and morphology, and discuss practical implications of using zirconia in intense radiation environments.

  16. Results on Three predictions on July 2012 Federal Elections in Mexico based on past regularities

    CERN Document Server

    Hernández-Saldaña, H

    2013-01-01

    The July 2012 Presidential Election in Mexico was the third occasion on which the PREP, the Previous Electoral Results Program, operated. PREP results give the voter turnout based on the electoral certificates of each polling station that arrive at the capture centres. In the previous elections some statistical regularities had been observed; three of them were selected to make predictions, published in arXiv:1207.0078 [physics.soc-ph]. Two of the predictions were completely fulfilled, and the third one could not be measured since the electoral authorities changed the information in the database for the 2012 process. The two predictions confirmed by actual measurements are: (ii) the Partido Revolucionario Institucional is a sprinter and has a better performance in polling stations that arrive late in the process; (iii) the distribution of the vote for this party is well described by a smooth function named a Daisy model. A Gamma distribution, compatible with a Daisy model, fits the distribution as well.
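
    The Gamma fit mentioned in the abstract can be illustrated with a method-of-moments estimate. The sketch below uses synthetic stand-in data (the actual PREP vote distributions are not reproduced here), so the fitted shape and scale values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for per-polling-station vote counts (NOT real PREP data):
votes = rng.gamma(shape=3.0, scale=50.0, size=10_000)

# Method-of-moments Gamma fit: shape k = mean^2 / var, scale theta = var / mean.
mean, var = votes.mean(), votes.var()
k_hat = mean ** 2 / var
theta_hat = var / mean
```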

  17. Evaluation of observation-fused regional air quality model results for population air pollution exposure estimation.

    Science.gov (United States)

    Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline

    2014-07-01

    In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations from 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse distance weighting based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusion technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRRs are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated by the inverse-distance-interpolated observations, raw CMAQ and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses.
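
    The observation-fusion step described above, correcting gridded model predictions with inverse-distance-weighted observation-minus-model differences at monitor locations, can be sketched as follows. This is a minimal illustration of the idea with a made-up function name and toy geometry, not the study's implementation.

```python
import numpy as np

def idw_fused(grid_xy, model_on_grid, obs_xy, obs_values, model_at_obs, power=2):
    """Correct gridded model predictions by inverse-distance-weighted
    interpolation of the observation-minus-model residuals at monitor sites."""
    residuals = obs_values - model_at_obs                     # bias at monitors
    # Pairwise distances between every grid cell and every monitor:
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                                   # avoid divide-by-zero
    w = 1.0 / d ** power
    correction = (w * residuals).sum(axis=1) / w.sum(axis=1)
    return model_on_grid + correction
```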

  18. Evaluation of Observation-Fused Regional Air Quality Model Results for Population Air Pollution Exposure Estimation

    Science.gov (United States)

    Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline

    2014-01-01

    In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations from 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse distance weighting based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusion technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRRs are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated by the inverse-distance-interpolated observations, raw CMAQ and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248

  19. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  20. Updating Finite Element Model of a Wind Turbine Blade Section Using Experimental Modal Analysis Results

    Directory of Open Access Journals (Sweden)

    Marcin Luczak

    2014-01-01

    Full Text Available This paper presents selected results and aspects of multidisciplinary and interdisciplinary research oriented towards the experimental and numerical study of the structural dynamics of a bend-twist coupled full-scale section of a wind turbine blade structure. The main goal of the research is to validate the finite element model of the modified wind turbine blade section, mounted in the flexible support structure, against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements, and the discrepancies are assessed by natural frequency differences and the modal assurance criterion. Based on a sensitivity analysis, a set of model parameters was selected for the model updating process. Design-of-experiments and response surface methods were implemented to find the values of model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
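
    The modal assurance criterion mentioned in the abstract has a standard closed form comparing two mode-shape vectors; the sketch below is a generic illustration (the function name and vectors are ours, not from the study).

```python
import numpy as np

def mac(phi_test, phi_fem):
    """Modal Assurance Criterion between an experimental mode shape and a
    finite element mode shape: 1 means perfectly correlated, 0 orthogonal."""
    num = abs(np.vdot(phi_test, phi_fem)) ** 2
    den = np.vdot(phi_test, phi_test).real * np.vdot(phi_fem, phi_fem).real
    return num / den
```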

  1. Assessing the agricultural costs of climate change: Combining results from crop and economic models

    Science.gov (United States)

    Howitt, R. E.

    2016-12-01

    Any perturbation to a resource system used by humans elicits both technical and behavioral changes. For agricultural production, economic criteria and their associated models are usually good predictors of human behavior. Estimation of the agricultural costs of climate change requires careful downscaling of global climate models to the level of agricultural regions. Plant growth models for the dominant crops are required to accurately show the full range of trade-offs and adaptation mechanisms needed to minimize the cost of climate change. Faced with shifts in the fundamental resource base of agriculture, human behavior can either exacerbate or offset the impact of climate change on agriculture. In addition, agriculture can be an important source of increased carbon sequestration. However, the effectiveness and timing of this sequestration depend on agricultural practices and farmer behavior. Plant growth models and economic models have been shown to interact in two broad fashions. The first is the direct embedding of a parametric representation of plant growth simulations in the economic model's production function. A second and more general approach is to have plant growth and crop process models interact with economic models as they are simulated. The development of more general wrapper programs that transfer information between models rapidly and efficiently will encourage this approach. However, this method does introduce complications in matching disparate scales, both in time and space, between models. Another characteristic behavioral response of agricultural production is the distinction between the intensive margin, which considers the quantity of a resource (for example, fertilizer) used for a given crop, and the extensive margin of adjustment, which measures how farmers will adjust their crop proportions in response to climate change. Ideally economic models will measure the response to both these margins of adjustment

  2. PCA-based lung motion model

    CERN Document Server

    Li, Ruijiang; Jia, Xun; Zhao, Tianyu; Lamb, James; Yang, Deshan; Low, Daniel A; Jiang, Steve B

    2010-01-01

    Organ motion induced by respiration may cause clinically significant targeting errors and greatly degrade the effectiveness of conformal radiotherapy. It is therefore crucial to be able to model respiratory motion accurately. A recently proposed lung motion model based on principal component analysis (PCA) has been shown to be promising on a few patients. However, there is still a need to understand the underlying reason why it works. In this paper, we present a deeper and more detailed analysis of the PCA-based lung motion model. We provide the theoretical justification of the effectiveness of PCA in modeling lung motion. We also prove that under certain conditions, the PCA motion model is equivalent to the 5D motion model, which is based on the physiology and anatomy of the lung. The modeling power of the PCA model was tested on clinical data and the average 3D error was found to be below 1 mm.
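
    The core mechanics of a PCA-based motion model can be sketched in a few lines: stack displacement fields sampled over the breathing cycle, subtract the mean, and keep the leading principal components. The data below are synthetic with an artificial low-rank structure; the real model is built from deformable-registration displacement fields, not random numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: displacement fields (n_phases x 3N degrees of freedom)
# sampled over a breathing cycle. A rank-2 structure plus small noise mimics
# the finding that a few principal components capture most lung motion.
n_phases, n_dof = 10, 300
basis = rng.standard_normal((2, n_dof))          # two dominant motion patterns
weights = rng.standard_normal((n_phases, 2))
fields = weights @ basis + 0.01 * rng.standard_normal((n_phases, n_dof))

mean = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)

# Fraction of motion variance explained by the first two components.
explained = (s[:2] ** 2).sum() / (s ** 2).sum()

# Rank-2 motion model: mean field plus two PCA modes.
recon = mean + (U[:, :2] * s[:2]) @ Vt[:2]
```

With real data, each voxel's trajectory is then a linear combination of the retained eigenvectors, driven by a low-dimensional surrogate signal.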

  3. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. 
Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in

  4. PICASSO VISION instrument design, engineering model test results, and flight model development status

    Science.gov (United States)

    Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Akujärvi, Altti; Saari, Heikki; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe

    2016-10-01

    PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT Technical Research Centre of Finland Ltd, Clyde Space Ltd. (UK) and Centre Spatial de Liège (BE). The test campaign for the engineering model of the PICASSO VISION instrument, a miniaturized nanosatellite spectral imager, has been successfully completed. The test results look very promising. The proto-flight model of VISION has also been successfully integrated and it is waiting for the final integration to the satellite platform.

  5. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

    Because the real input acceleration cannot be obtained during the error model identification of an inertial navigation platform, both the input and output data contain noise. In this case, the conventional regression model and the least squares (LS) method will result in bias. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved and the result is better than that of the LS method based on the conventional regression model. The simulation results illustrate the effectiveness of the proposed method.
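
    A minimal illustration of why the EV model calls for TLS rather than LS: when the regressor itself is noisy, ordinary LS is biased toward zero, while TLS (computed from the SVD of the stacked data matrix) remains consistent when the input and output noise variances are equal. The data here are simulated for illustration, not platform measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Errors-in-variables setup: true relation y = 2x, noise on BOTH the
# "measured" input and the output (equal standard deviations).
n = 2000
x_true = rng.uniform(-1.0, 1.0, n)
a_true = 2.0
x = x_true + 0.2 * rng.standard_normal(n)        # noisy measured input
y = a_true * x_true + 0.2 * rng.standard_normal(n)

# Ordinary least squares: attenuated (biased toward zero) by input noise.
a_ls = (x @ y) / (x @ x)

# Total least squares via the SVD of the stacked data matrix [x | y]:
# the right-singular vector of the smallest singular value defines the
# best-fitting line in the orthogonal-distance sense.
_, _, Vt = np.linalg.svd(np.column_stack([x, y]), full_matrices=False)
v = Vt[-1]
a_tls = -v[0] / v[1]
```

Here `a_ls` lands noticeably below 2 while `a_tls` stays close to the true slope, mirroring the bias argument made in the abstract.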

  6. Modelling combustion reactions for gas flaring and its resulting emissions

    Directory of Open Access Journals (Sweden)

    O. Saheed Ismail

    2016-07-01

    Full Text Available Flaring of associated petroleum gas is an age-old environmental concern which remains unabated. Gas flaring may be a very efficient combustion process, especially with steam/air-assisted flares, and more economical than utilization in some oil fields. However, it has serious implications for the environment. This study considered different reaction types and operating conditions for gas flaring. Six combustion equations were generated using the mass balance concept with varying air supply and combustion efficiency. These equations were coded in a computer program using 12 natural gas samples of different chemical composition and origin to predict the pattern of emission species from gas flaring. The effect of key parameters on the emission output is also shown. CO2, CO, NO, NO2 and SO2 are the anticipated non-hydrocarbon emissions of environmental concern. Results show that the quantity and pattern of these chemical species depend on the percentage excess/deficiency of stoichiometric air, natural gas type, reaction type, carbon mass content, impurities, combustion efficiency of the flare system, etc. These emissions degrade the environment and human life, so knowing the emission types, patterns and flaring conditions that this study predicts is of paramount importance to governments, environmental agencies and the oil and gas industry.
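
    The mass-balance idea behind such combustion equations can be sketched for the simplest case: pure methane with a single combustion-efficiency parameter splitting carbon between CO2 and CO. The function and its parameter values are illustrative assumptions, not one of the six equations from the study:

```python
# Hedged sketch: mole balance for flaring 1 kmol of gas (pure methane
# for simplicity) with a given combustion efficiency and excess air.
def flare_emissions(eta=0.98, excess_air=0.10):
    """Return kmol of selected species per kmol of CH4 flared.

    eta        -- fraction of carbon fully oxidised to CO2
    excess_air -- fractional excess over stoichiometric air
    """
    o2_stoich = 2.0                      # CH4 + 2 O2 -> CO2 + 2 H2O
    o2_supplied = o2_stoich * (1.0 + excess_air)
    co2 = eta * 1.0                      # fully oxidised carbon
    co = (1.0 - eta) * 1.0               # partially oxidised carbon
    n2 = o2_supplied * 79.0 / 21.0       # inert nitrogen carried by air
    return {"CO2": co2, "CO": co, "N2": n2}

em = flare_emissions()
```

Extending this to real flare gas means summing such balances over every hydrocarbon and impurity in the sample composition, which is what drives the species patterns reported above.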

  7. Modelling of a water plasma flow: I. Basic results

    Energy Technology Data Exchange (ETDEWEB)

    Kotalík, Pavel [INP Greifswald, Friedrich-Ludwig-Jahn-Strasse 19, 17489 Greifswald (Germany)

    2006-06-21

    One-fluid MHD equations are numerically solved for an axisymmetric flow of thermal water plasma inside and outside a discharge chamber of a plasma torch with water vortex stabilization of electric arc. Comparisons with experimental data and previous calculations are given. For arc currents of 300-600 A, the respective temperatures and velocities in the range 16 700-26 400 K and 2300-6900 m s{sup -1} are obtained at the centre of the nozzle exit. The flow velocity on axis increases by 1-2 km s{sup -1} in the 5 mm long nozzle. Ohmic heating and radiative losses are two competitive processes influencing most the plasma temperature and velocity. The radiative losses represent 39% to 46% of the torch power of 69-174 kW when optical thickness of 3 mm is assumed for the plasma column. In front of the cathode, inside the discharge chamber, a recirculation zone is predicted and discussed. Effects of the temperature dependence of the plasma viscosity and sound velocity and of the optical thickness are examined. It is shown that the results such as waviness of the Mach number isolines are direct consequences of these dependences. Different lengths of 55 and 60 mm of the water vortex stabilized part of the electric arc do not substantially influence the plasma temperature and velocity at the nozzle exit.

  8. Image-Based Modeling of Plants and Trees

    CERN Document Server

    Kang, Sing Bang

    2009-01-01

    Plants and trees are among the most complex natural objects. Much work has been done attempting to model them, with varying degrees of success. In this book, we review the various approaches in computer graphics, which we categorize as rule-based, image-based, and sketch-based methods. We describe our approaches for modeling plants and trees using images. Image-based approaches have the distinct advantage that the resulting model inherits the realistic shape and complexity of a real plant or tree. We use different techniques for modeling plants (with relatively large leaves) and trees (with re

  9. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  10. Soil and water assessment tool model calibration results for different catchment sizes in poland.

    Science.gov (United States)

    Ostojski, Mieczyslaw S; Niedbala, Jerzy; Orlinska-Wozniak, Paulina; Wilk, Pawel; Gębala, Joanna

    2014-01-01

    The watershed model SWAT (Soil and Water Assessment Tool) can be used to implement the requirements of international agreements that Poland has ratified. Among these requirements are the establishment of catchment-based, rather than administrative-based, management plans and spatial information systems. Furthermore, Polish law requires that management of water resources be based on catchment systems. This article explores the use of the SWAT model in the implementation of catchment-based water management in Poland. Specifically, the impacts of basin size on calibration and on the results of the simulation process were analyzed. SWAT was set up and calibrated for three Polish watersheds of varying sizes: (i) Gąsawka, a small basin (593.7 km²), (ii) Rega, a medium-sized basin (2766.8 km²), and (iii) Warta, a large basin (54,500 km²) representing about 17.4% of Polish territory. The results indicated that the size of the catchment has an impact on the calibration process and simulation outputs. Several factors influenced by the size of the catchment affected the modeling results. Among these factors are the number of measurement points within the basin and the length of the measuring period and data quality at checkpoints as determined by the position of the measuring station. It was concluded that the SWAT model is a suitable tool for the implementation of catchment-based water management in Poland regardless of watershed size. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  11. Men's responses to HPV test results: development of a theory-based survey.

    Science.gov (United States)

    Daley, Ellen M; Buhi, Eric R; Baldwin, Julie; Lee, Ji-Hyun; Vadaparampil, Susan; Abrahamsen, Martha; Vamos, Cheryl A; Kolar, Stephanie; Chandler, Rasheeta; Anstey, Erica Hesch; Giuliano, Anna

    2009-01-01

    To develop and perform psychometric testing on an instrument designed to assess cognitive/emotional responses among men receiving HPV testing. Men enrolled in an HPV natural history study (N = 139) completed a computer-assisted survey instrument based on Leventhal's parallel processing/common-sense model. Data were analyzed using SPSS and Mplus. Reliability analyses resulted in Cronbach alpha of 0.72 (knowledge), 0.86 (perceived threat), 0.83 (self-efficacy), and 0.55 (response efficacy). A revised measurement model exhibited evidence of construct validity, as indicated by acceptable model fit statistics. To our knowledge, this is the only validated instrument assessing men's reactions to an HPV test result.
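
    The reliability figures quoted above are Cronbach's alpha values, which can be computed directly from an item-score matrix. A small sketch with hypothetical survey data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 4-item scale: internally consistent answers give alpha near 1.
scores = np.array([[4, 4, 5, 4],
                   [2, 2, 2, 3],
                   [5, 5, 4, 5],
                   [1, 2, 1, 1],
                   [3, 3, 3, 3]])
alpha = cronbach_alpha(scores)
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is why the 0.55 for response efficacy stands out in the abstract.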

  12. INTRUSION DETECTION BASED ON THE SECOND-ORDER STOCHASTIC MODEL

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a new method based on a second-order stochastic model for computer intrusion detection. The results show that the performance of the second-order stochastic model is better than that of a first-order stochastic model. In this study, different window sizes are also used to test the performance of the model. The detection results show that the second-order stochastic model is not as sensitive to the window size as the first-order stochastic model and models from previous research. The detection results for window sizes 6 and 10 are the same.
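
    A second-order stochastic model of this kind conditions each event on the two preceding events and scores sliding windows by their likelihood under the trained model. A toy sketch; the event alphabet, probability floor, and scoring scheme are illustrative assumptions rather than the paper's exact formulation:

```python
from collections import defaultdict
import math

def train_second_order(seq):
    """Estimate second-order transition probabilities P(c | a, b)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[(a, b)][c] += 1
    probs = {}
    for pair, nxt in counts.items():
        total = sum(nxt.values())
        probs[pair] = {c: n / total for c, n in nxt.items()}
    return probs

def window_score(probs, window, floor=1e-6):
    """Average log-likelihood of a window; low values flag anomalies."""
    ll, n = 0.0, 0
    for a, b, c in zip(window, window[1:], window[2:]):
        ll += math.log(probs.get((a, b), {}).get(c, floor))
        n += 1
    return ll / n

normal = list("abcabcabc" * 20)          # hypothetical normal event trace
model = train_second_order(normal)
ok = window_score(model, list("abcabc"))   # seen transitions: high score
bad = window_score(model, list("abacba"))  # unseen transitions: low score
```

Thresholding the window score separates normal traces from anomalous ones; the abstract's claim is that this second-order scoring is less sensitive to the window length than the first-order variant.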

  13. Behavior and Design Intent Based Product Modeling

    Directory of Open Access Journals (Sweden)

    László Horváth

    2004-11-01

    Full Text Available A knowledge-based modeling of mechanical products is presented for industrial CAD/CAM systems. An active model is proposed that comprises knowledge from modeling procedures, generic part models and engineers. Present-day models of mechanical systems do not contain data about the background of human decisions. This situation motivated the authors in their investigations into the exchange of design intent information between engineers. Their concept was to extend product models to be capable of describing design intent information. Several human-computer and human-human communication issues were considered. The complex communication problem has been divided into four sub-problems, namely communication of the human intent source with the computer system, representation of human intent, exchange of intent data between modeling procedures, and communication of the represented intent with humans. The paper discusses the scenario of intelligent-modeling-based engineering. Then key concepts for the application of computational intelligence in computer-model-based engineering systems are detailed, including knowledge-driven models as well as areas of their application. Next, behavior-based models with intelligent content involving specifications and knowledge for the design processes are emphasized, and an active part modeling is proposed and possibilities for its application are outlined. Finally, design-intent-supported intelligent modeling is discussed.

  14. Physiologically Based Pharmacokinetic (PBPK) Modeling of ...

    Science.gov (United States)

    Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, inter-individual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data.Objectives: To evaluate the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and one hybrid mouse strains to calibrate and extend existing physiologically-based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). A Bayesian population analysis of inter-strain variability was used to quantify variability in TCE metabolism. Results: Concentration-time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation was less variable (5-fold range) than that through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production was less variable (2-fold range) than DCA production (5-fold range), although uncertainty bounds for DCA exceeded the predicted variability. Conclusions:

  15. Model-based phase-shifting interferometer

    Science.gov (United States)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high precision and quantitative surface tests. In the MPI, the partial null lens (PNL) is employed to implement the non-null test. With some alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling technique, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for the accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in forms of Zernike polynomials by the ROR method. Experiments of the spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI would possess large potential in modern optical shop testing.
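
    Extracting figure error "in forms of Zernike polynomials" amounts to a linear least-squares fit of the reconstructed wavefront against a Zernike basis. A sketch using only the first four terms (piston, tip, tilt, defocus) and synthetic wavefront data; a real interferometer fit would use many more terms and measured data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample points on the unit pupil (area-uniform in radius).
n = 500
r = np.sqrt(rng.uniform(0.0, 1.0, n))
t = rng.uniform(0.0, 2.0 * np.pi, n)
x, y = r * np.cos(t), r * np.sin(t)

# Basis columns: piston, tip, tilt, defocus (one common Zernike ordering).
Z = np.column_stack([np.ones(n), x, y, 2.0 * r**2 - 1.0])

# Synthetic wavefront: known coefficients plus small measurement noise.
coeffs_true = np.array([0.0, 0.3, -0.2, 0.5])
wavefront = Z @ coeffs_true + 0.001 * rng.standard_normal(n)

# Least-squares fit recovers the Zernike coefficients.
coeffs, *_ = np.linalg.lstsq(Z, wavefront, rcond=None)
```

The fitted coefficients match the true ones to well within the noise level, which is the sense in which figure error is "extracted in forms of Zernike polynomials".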

  16. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM has been proposed as a profile for UML models of the Web Ontology Language (OWL. In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions to properties of the transformation (invariants. The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER to Relational Model (RM transformation.

  17. A mathematical model and simulation results of plasma enhanced chemical vapor deposition of silicon nitride films

    Science.gov (United States)

    Konakov, S. A.; Krzhizhanovskaya, V. V.

    2015-01-01

    We developed a mathematical model of Plasma Enhanced Chemical Vapor Deposition (PECVD) of silicon nitride thin films from SiH4-NH3-N2-Ar mixture, an important application in modern materials science. Our multiphysics model describes gas dynamics, chemical physics, plasma physics and electrodynamics. The PECVD technology is inherently multiscale, from macroscale processes in the chemical reactor to atomic-scale surface chemistry. Our macroscale model is based on Navier-Stokes equations for a transient laminar flow of a compressible chemically reacting gas mixture, together with the mass transfer and energy balance equations, Poisson equation for electric potential, electrons and ions balance equations. The chemical kinetics model includes 24 species and 58 reactions: 37 in the gas phase and 21 on the surface. A deposition model consists of three stages: adsorption to the surface, diffusion along the surface and embedding of products into the substrate. A new model has been validated on experimental results obtained with the "Plasmalab System 100" reactor. We present the mathematical model and simulation results investigating the influence of flow rate and source gas proportion on silicon nitride film growth rate and chemical composition.

  18. Comparing repetition-based melody segmentation models

    NARCIS (Netherlands)

    Rodríguez López, M.E.; de Haas, Bas; Volk, Anja

    2014-01-01

    This paper reports on a comparative study of computational melody segmentation models based on repetition detection. For the comparison we implemented five repetition-based segmentation models, and subsequently evaluated their capacity to automatically find melodic phrase boundaries in a corpus of 2

  19. A Role-Based Fuzzy Assignment Model

    Institute of Scientific and Technical Information of China (English)

    ZUO Bao-he; FENG Shan

    2002-01-01

    It is very important to dynamically assign tasks to the corresponding actors in a workflow management system, especially in complex applications; this improves the flexibility of workflow systems. In this paper, a role-based workflow model with fuzzy optimized intelligent assignment is proposed and applied in an investment management system. A groupware-based software model is also proposed.

  20. Comparison of measurements and model results for airborne sulphur and nitrogen components with kriging

    Energy Technology Data Exchange (ETDEWEB)

    Schaug, J.; Iversen, T.; Pedersen, U. (Norwegian Institute for Air Research, Lillestroem (Norway). Chemical Coordinating Centre of EMEP)

    1993-04-01

    Comparisons have been made between calculations from the Lagrangian model for acid deposition at Meteorological Synthesizing Centre-West (MSC-W) of EMEP and measurements at EMEP sites. Annual averages of aerosol sulphate, sulphate in precipitation and nitrate in precipitation were calculated and compared for selected sites. Site selection was based on data completeness and on results from EMEP interlaboratory exercises. The comparison for sulphates in precipitation and air led to a model underestimation in the north and model overestimation in a belt through the major source regions in central Europe. The comparisons also indicate irregularities at some sites which may be due to influence from local sources, or the data quality, although this is not substantiated. The model estimates of nitrate in precipitation compare well with the measurements, although some characteristic differences occur also for this component. 21 refs., 11 figs., 2 tabs.

  1. Key-Based Data Model

    Science.gov (United States)

    1994-05-16

    progressive repetition. It is used, principally, to train small units to perform tasks requiring a high degree of teamwork, such as fire and maneuver actions in...an administrative structure that has a mission. An established need based on a valid deficiency in an administrative structure with a mission. Person A

  2. Genetic programming-based chaotic time series modeling

    Institute of Scientific and Technical Information of China (English)

    张伟; 吴智铭; 杨根科

    2004-01-01

    This paper proposes a Genetic Programming-Based Modeling (GPM) algorithm on chaotic time series. GP is used here to search for appropriate model structures in function space, and the Particle Swarm Optimization (PSO) algorithm is used for Nonlinear Parameter Estimation (NPE) of dynamic model structures. In addition, GPM integrates the results of Nonlinear Time Series Analysis (NTSA) to adjust the parameters and takes them as the criteria of established models. Experiments showed the effectiveness of such improvements on chaotic time series modeling.

  3. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causality relationships and feedback loops from different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  4. Python-Based Applications for Hydrogeological Modeling

    Science.gov (United States)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. 
The
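
    The convolution step described above is simple to state: the receptor concentration time series is the source release history convolved with the unit response function (URF). The URF and release values below are invented for illustration; in the study the URFs came from the ATRANS code:

```python
import numpy as np

# Hypothetical URF: receptor response to a unit release at the source,
# one value per time step (would come from a transport code like ATRANS).
urf = np.array([0.0, 0.05, 0.20, 0.30, 0.25, 0.15, 0.05])

# Hypothetical source release history (mass released per time step).
release = np.array([1.0, 1.0, 0.0, 0.0, 2.0])

# Superposition: total receptor concentration is the convolution of the
# release history with the unit response function.
receptor = np.convolve(release, urf)
```

Because the system is assumed linear, any candidate release history can be tested against observed receptor concentrations just by repeating this one-line convolution.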

  5. ALC: automated reduction of rule-based models

    Directory of Open Access Journals (Sweden)

    Gilles Ernst

    2008-10-01

    Full Text Available Abstract Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.

  6. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    CERN Document Server

    Brewe, Eric; de la Garza, Jorge; Kramer, Laird H

    2013-01-01

    We report on a multi-year study of student attitudes measured with the Colorado Learning Attitudes about Science Survey (CLASS) in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed improved attitudes from pre- to post-course. Cohen's d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d = 0.45, with a 95% confidence interval of (0.26 to 0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
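
    Cohen's d for a pre/post attitude shift is the mean difference divided by the pooled standard deviation. A sketch with hypothetical percent-favorable scores for one section (not the study's data):

```python
import statistics as st

def cohens_d(pre, post):
    """Effect size for a pre/post shift, pooled-standard-deviation form."""
    n1, n2 = len(pre), len(post)
    s1, s2 = st.variance(pre), st.variance(post)   # sample variances
    pooled = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (st.mean(post) - st.mean(pre)) / pooled

# Hypothetical percent-favorable attitude scores.
pre = [55, 60, 62, 58, 65, 59]
post = [63, 68, 70, 61, 72, 66]
d = cohens_d(pre, post)
```

By the usual rough benchmarks, d near 0.2 is a small effect, 0.5 medium, and 0.8 large, which gives context to the 0.08 to 0.95 range reported above.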

  7. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  8. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.

  9. Error statistics of hidden Markov model and hidden Boltzmann model results

    Science.gov (United States)

    Newberg, Lee A

    2009-01-01

    Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results. PMID:19589158
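The naive sampling baseline that the abstract's new approach improves on can be sketched: score many random sequences with the forward algorithm and take the empirical fraction exceeding a threshold as the false-positive rate. The toy two-state HMM below is purely illustrative (the parameters are assumptions, not HMMER's profile-HMMs):

```python
import math
import random

# Toy 2-state HMM over the alphabet {0, 1}; all parameters are illustrative.
STATES = (0, 1)
INIT = [0.5, 0.5]
TRANS = [[0.9, 0.1], [0.2, 0.8]]
EMIT = [[0.8, 0.2], [0.3, 0.7]]   # EMIT[state][symbol]
BG = [0.5, 0.5]                   # i.i.d. background model

def log_odds(seq):
    """Forward algorithm: log P(seq | HMM) - log P(seq | background)."""
    alpha = [INIT[s] * EMIT[s][seq[0]] for s in STATES]
    for x in seq[1:]:
        alpha = [sum(alpha[p] * TRANS[p][s] for p in STATES) * EMIT[s][x]
                 for s in STATES]
    log_p_hmm = math.log(sum(alpha))
    log_p_bg = sum(math.log(BG[x]) for x in seq)
    return log_p_hmm - log_p_bg

def false_positive_rate(threshold, n=2000, length=50, seed=1):
    """Naive sampling estimate of P(score >= threshold | random data)."""
    rng = random.Random(seed)
    hits = sum(
        log_odds([rng.randint(0, 1) for _ in range(length)]) >= threshold
        for _ in range(n))
    return hits / n
```

The cost of this baseline grows with the rarity of the event being estimated, which is exactly why a more efficient estimator of the score tail is useful.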

  10. Implementing the Simple Biosphere Model (SiB) in a general circulation model: Methodologies and results

    Science.gov (United States)

    Sato, N.; Sellers, P. J.; Randall, D. A.; Schneider, E. K.; Shukla, J.; Kinter, J. L., III; Hou, Y.-T.; Albertazzi, E.

    1989-01-01

    The Simple Biosphere Model (SiB) of Sellers et al. (1986) was designed to simulate the interactions between the Earth's land surface and the atmosphere by treating the vegetation explicitly and realistically, thereby incorporating the biophysical controls on the exchanges of radiation, momentum, sensible and latent heat between the two systems. The steps taken to implement SiB in a modified version of the National Meteorological Center's spectral GCM are described. The coupled model (SiB-GCM) was used to produce summer and winter simulations. The same GCM was used with a conventional hydrological model (Ctl-GCM) to produce comparable 'control' summer and winter simulations. It was found that SiB-GCM produced a more realistic partitioning of energy at the land surface than Ctl-GCM. Generally, SiB-GCM produced more sensible heat flux and less latent heat flux over vegetated land than did Ctl-GCM, and this resulted in the development of a much deeper daytime planetary boundary layer and reduced precipitation rates over the continents in SiB-GCM. In the summer simulation, the 200 mb jet stream and the wind speed at 850 mb were slightly weakened in the SiB-GCM relative to the Ctl-GCM results and equivalent analyses from observations.

  11. Lessons from wet gas flow metering systems using differential measurements devices: Testing and flow modelling results

    Energy Technology Data Exchange (ETDEWEB)

    Cazin, J.; Couput, J.P.; Dudezert, C. et al

    2005-07-01

    A significant number of wet gas meters used for high GVF and very high GVF are based on differential pressure (DP) measurements. Recent high pressure tests performed on a variety of different DP devices on different flow loops are presented. Application of existing correlations is discussed for several DP devices including Venturi meters. For Venturi meters, deviations vary from 9% when using the Murdock correlation to less than 3% with physically based models. The use of DP systems over a large domain of conditions (Water Liquid Ratio, WLR), especially for liquid estimation, will require information on the WLR. This obviously raises the question of the gas and liquid flow metering accuracy in wet gas meters and highlights the need to understand DP system behaviour in wet gas flows (annular / mist / annular mist). As an example, experimental results obtained on the influence of liquid film characteristics on a Venturi meter are presented. Visualizations of the film upstream and inside the Venturi meter are shown. They are completed by film characterization. The DP measurements indicate that, for the same Lockhart-Martinelli parameter, the characteristics of the two-phase flow have a major influence on the correlation coefficient. A 1D model is defined and the results are compared with the experiments. These results indicate that the flow regime influences the DP measurements and that a better modelling of the flow phenomena is needed even for allocation purposes. Based on that, lessons and the way forward in improving wet gas metering systems for allocation and well metering are discussed and proposed.
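The Murdock correlation mentioned above corrects the over-reading of a DP meter in wet gas using the Lockhart-Martinelli parameter. A minimal sketch follows; the coefficient 1.26 is Murdock's original orifice value, which, as the deviations of up to 9% reported above suggest, is only approximate for Venturi meters:

```python
import math

def lockhart_martinelli(m_liq, m_gas, rho_gas, rho_liq):
    """X = (m_liquid / m_gas) * sqrt(rho_gas / rho_liquid) for wet-gas flow."""
    return (m_liq / m_gas) * math.sqrt(rho_gas / rho_liq)

def murdock_corrected_gas_flow(m_gas_indicated, x_lm, k=1.26):
    """Murdock (1962): the DP meter over-reads by a factor (1 + k * X);
    dividing the indicated gas mass flow by it gives the corrected value."""
    return m_gas_indicated / (1.0 + k * x_lm)
```

Physically based models replace the single empirical coefficient k with terms derived from the two-phase flow structure, which is why they depend on the flow regime (annular / mist) discussed in the abstract.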

  12. 3D Object Recognition Based on Linear Lie Algebra Model

    Institute of Scientific and Technical Information of China (English)

    LI Fang-xing; WU Ping-dong; SUN Hua-fei; PENG Lin-yu

    2009-01-01

    A surface model called the fibre bundle model and a 3D object model based on the linear Lie algebra model are proposed. Then an algorithm for 3D object recognition using the linear Lie algebra models is presented. It is a convenient recognition method for objects which are symmetric about some axis. By using the presented algorithm, the representation matrices of the fibre or the base curve can be obtained from only finitely many points of the linear Lie algebra model. At last, some recognition results on real objects are given.

  13. Coupling Landform Evolution and Soil Pedogenesis - Initial Results From the SSSPAM5D Model

    Science.gov (United States)

    Willgoose, G. R.; Welivitiya, W. D. D. P.; Hancock, G. R.; Cohen, S.

    2015-12-01

    Evolution of soil on a dynamic landform is a crucial next step in landscape evolution modelling. Some attempts have been made, such as MILESD by Vanwalleghem et al., to develop a first model capable of simultaneously evolving both the soil profile and the landform. In previous work we have presented physically based models for soil pedogenesis, mARM and SSSPAM. In this study we present the results of coupling a landform evolution model with our SSSPAM5D soil pedogenesis model. In previous work the SSSPAM5D soil evolution model was used to identify trends of soil profile evolution on a static landform. Two pedogenetic processes, namely (1) armouring due to erosion, and (2) physical and chemical weathering, were used in those simulations to evolve the soil profile. By incorporating elevation changes (due to erosion and deposition) we have advanced the SSSPAM5D modelling framework into the realm of landscape evolution. Simulations have been run using elevation and soil grading data of the engineered landform (spoil heap) at the Ranger Uranium Mine, Northern Territory, Australia. The results obtained for the coupled landform-soil evolution simulations predict the erosion of high slope areas, the development of rudimentary channel networks in the landform and the deposition of sediments in lowland areas, and are qualitatively consistent with landform evolution models on their own. Examination of the soil profile characteristics revealed that hill crests are weathering dominated and tend to develop a thick soil layer. The steeper hillslopes at the edge of the landform are erosion dominated with shallow soils, while the foot slopes are deposition dominated with thick soil layers. The simulation results of our coupled landform and soil evolution model provide a qualitatively correct and timely characterization of soil evolution on a dynamic landscape. Finally we will compare the characteristics of erosion and deposition predicted by the coupled landform-soil SSSPAM

  14. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  15. Results comparison and model validation for flood loss functions in Australian geographical conditions

    Science.gov (United States)

    Hasanzadeh Nafari, R.; Ngo, T.; Lehman, W.

    2015-06-01

    Rapid urbanisation, climate change and unsustainable development are increasing flood risk, in terms of both frequency and intensity. Floods are a frequent natural hazard with significant financial consequences for Australia. The emergency response system in Australia is very successful and has saved many lives over the years. However, preparedness for natural disaster impacts in terms of loss reduction and damage mitigation has been less successful. This study aims to quantify the direct physical damage to residential structures that are prone to flood phenomena in Australia. In this paper, the physical consequences of two floods in Queensland have been simulated, and the results have been compared with the performance of two selected methodologies and one newly derived model. Based on this analysis, the adaptability and applicability of the selected methodologies are assessed in terms of Australian geographical conditions. Results obtained from the new empirically based function and the non-adapted methodologies indicate that the precision of flood damage models is strongly dependent on the selected stage-damage curves, and that flood damage estimation without model validation results in inaccurate prediction of losses. Therefore, it is very important to be aware of the associated uncertainties in flood risk assessment, especially if models have not been adapted with real damage data.
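A stage-damage curve of the kind this comparison hinges on maps inundation depth to a damage fraction of the building value; a minimal sketch with linear interpolation follows. The curve values are purely illustrative, not the calibrated Australian function derived in the paper:

```python
def stage_damage(depth, curve):
    """Linear interpolation on a stage-damage curve.

    curve: sorted list of (water depth in m, damage fraction of value)."""
    if depth <= curve[0][0]:
        return curve[0][1]
    if depth >= curve[-1][0]:
        return curve[-1][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if d0 <= depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)

# Illustrative example curve (hypothetical values).
CURVE = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.35), (2.0, 0.55), (3.0, 0.7)]

def loss(depth, building_value, curve=CURVE):
    """Direct physical damage to one structure for a given flood depth."""
    return stage_damage(depth, curve) * building_value
```

The paper's point is that the choice of `curve` dominates the loss estimate, so a curve must be validated against recorded damage data before being transferred between regions.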

  16. Mars 2020 Model Based Systems Engineering Pilot

    Science.gov (United States)

    Dukes, Alexandra Marie

    2017-01-01

    The pilot study is led by the Integration Engineering group in NASA's Launch Services Program (LSP). The Integration Engineering (IE) group is responsible for managing the interfaces between the spacecraft and launch vehicle. This pilot investigates the utility of Model-Based Systems Engineering (MBSE) with respect to managing and verifying interface requirements. The main objectives of the pilot are to model several key aspects of the Mars 2020 integrated operations and interface requirements based on the design and verification artifacts from Mars Science Laboratory (MSL) and to demonstrate how MBSE could be used by LSP to gain further insight on the interface between the spacecraft and launch vehicle as well as to enhance how LSP manages the launch service. The method used to accomplish this pilot started through familiarization of SysML, MagicDraw, and the Mars 2020 and MSL systems through books, tutorials, and NASA documentation. MSL was chosen as the focus of the model since its processes and verifications translate easily to the Mars 2020 mission. The study was further focused by modeling specialized systems and processes within MSL in order to demonstrate the utility of MBSE for the rest of the mission. The systems chosen were the In-Flight Disconnect (IFD) system and the Mass Properties process. The IFD was chosen as a system of focus since it is an interface between the spacecraft and launch vehicle which can demonstrate the usefulness of MBSE from a system perspective. The Mass Properties process was chosen as a process of focus since the verifications for mass properties occur throughout the lifecycle and can demonstrate the usefulness of MBSE from a multi-discipline perspective. Several iterations of both perspectives have been modeled and evaluated. While the pilot study will continue for another 2 weeks, pros and cons of using MBSE for LSP IE have been identified. 
A pro of using MBSE includes an integrated view of the disciplines, requirements, and

  17. IP Network Management Model Based on NGOSS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jin-yu; LI Hong-hui; LIU Feng

    2004-01-01

    This paper addresses a management model for IP networks based on the Next Generation Operation Support System (NGOSS). It bases network management on all the operational actions of the ISP and provides QoS to user services through end-to-end Service Level Agreement (SLA) management along the whole path. Based on web and coordination technology, the paper gives an implementation architecture for this model.

  18. Satellite data for systematic validation of wave model results in the Black Sea

    Science.gov (United States)

    Behrens, Arno; Staneva, Joanna

    2017-04-01

    With regard to the availability of traditional in situ wave measurements recorded by waverider buoys, the Black Sea is a data-sparse semi-enclosed sea. The only possibility for systematic validation of wave model results in such a regional area is the use of satellite data. In the frame of the COPERNICUS Marine Evolution System for the Black Sea, which requires wave predictions, the third-generation spectral wave model WAM is used. The operational system is demonstrated on the basis of four years of systematic comparisons with satellite data. The aim of this investigation was to answer two questions: is the wave model able to provide a reliable description of the wave conditions in the Black Sea, and are the satellite measurements suitable for validation purposes on such a regional scale? Detailed comparisons between measured data and computed model results for the Black Sea, including yearly statistics, have been done for about 300 satellite overflights per year. The results are discussed for the different verification schemes needed to review the forecasting skill of the operational system. The good agreement between measured and modelled data supports the expectation that the wave model provides reasonable results and that the satellite data are of good quality and offer an appropriate validation alternative to buoy measurements. This is the required step towards further use of those satellite data for assimilation into the wave fields to improve the wave predictions. Additional support for the good quality of the wave predictions is provided by comparisons with ADCP measurements that are available for a short time period in February 2012 and the corresponding model results at a location near the Bulgarian coast in the western Black Sea. Sensitivity tests with different wave model options and different driving wind fields have been done, which identify the appropriate model configuration that provides the best wave predictions.
In addition to the comparisons between measured
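The yearly statistics used in such satellite validations are conventionally bias, root-mean-square error and scatter index. A minimal sketch, assuming the model and satellite significant wave heights have already been collocated into paired samples:

```python
import math

def validation_stats(model_hs, obs_hs):
    """Standard wave-model verification statistics against observed Hs.

    model_hs, obs_hs: equal-length sequences of collocated wave heights."""
    n = len(obs_hs)
    bias = sum(m - o for m, o in zip(model_hs, obs_hs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model_hs, obs_hs)) / n)
    mean_obs = sum(obs_hs) / n
    scatter_index = rmse / mean_obs   # RMSE normalized by the observed mean
    return {"bias": bias, "rmse": rmse, "si": scatter_index}
```

Collocation itself (matching altimeter ground-track points to the nearest model grid cell and output time) is the harder part of the workflow and is not shown here.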

  19. The response of an equatorial ocean to simple wind stress patterns. I - Model formulation and analytic results. II - Numerical results

    Science.gov (United States)

    Cane, M. A.

    1979-01-01

    A time-dependent, primitive equation, beta plane model that is two-dimensional in the horizontal has been developed to model wind-driven equatorial ocean circulation. A simple vertical structure consisting of two layers above the thermocline with the same constant density permits a steady-state undercurrent in the model. An analytical study of the linear dynamics of the model suggests that the addition of inertial effects is needed to simulate the undercurrent properly. Also, both linear and nonlinear dynamics of the model are investigated numerically. Such nonlinear response to wind stress as a strong eastward equatorial undercurrent and an intense eastward 'countercurrent' at three deg N are noted in the numerical results.

  20. Action versus result-oriented schemes in a grassland agroecosystem: a dynamic modelling approach.

    Science.gov (United States)

    Sabatier, Rodolphe; Doyen, Luc; Tichit, Muriel

    2012-01-01

    Effects of agri-environment schemes (AES) on biodiversity remain controversial. While most AES are action-oriented, result-oriented and habitat-oriented schemes have recently been proposed as a solution to improve AES efficiency. The objective of this study was to compare action-oriented, habitat-oriented and result-oriented schemes in terms of ecological and productive performance as well as in terms of management flexibility. We developed a dynamic modelling approach based on the viable control framework to carry out a long term assessment of the three schemes in a grassland agroecosystem. The model explicitly links grazed grassland dynamics to bird population dynamics. It is applied to lapwing conservation in wet grasslands in France. We ran the model to assess the three AES scenarios. The model revealed the grazing strategies respecting ecological and productive constraints specific to each scheme. Grazing strategies were assessed by both their ecological and productive performance. The viable control approach made it possible to obtain the whole set of viable grazing strategies and therefore to quantify the management flexibility of the grassland agroecosystem. Our results showed that habitat and result-oriented scenarios led to much higher ecological performance than the action-oriented one. Differences in both ecological and productive performance between the habitat and result-oriented scenarios were limited. Flexibility of the grassland agroecosystem in the result-oriented scenario was much higher than in the habitat-oriented scenario. Our model confirms the higher flexibility as well as the better ecological and productive performance of result-oriented schemes. A larger use of result-oriented schemes in conservation may also allow farmers to adapt their management to local conditions and to climatic variations.

  1. Action versus result-oriented schemes in a grassland agroecosystem: a dynamic modelling approach.

    Directory of Open Access Journals (Sweden)

    Rodolphe Sabatier

    Full Text Available Effects of agri-environment schemes (AES) on biodiversity remain controversial. While most AES are action-oriented, result-oriented and habitat-oriented schemes have recently been proposed as a solution to improve AES efficiency. The objective of this study was to compare action-oriented, habitat-oriented and result-oriented schemes in terms of ecological and productive performance as well as in terms of management flexibility. We developed a dynamic modelling approach based on the viable control framework to carry out a long term assessment of the three schemes in a grassland agroecosystem. The model explicitly links grazed grassland dynamics to bird population dynamics. It is applied to lapwing conservation in wet grasslands in France. We ran the model to assess the three AES scenarios. The model revealed the grazing strategies respecting ecological and productive constraints specific to each scheme. Grazing strategies were assessed by both their ecological and productive performance. The viable control approach made it possible to obtain the whole set of viable grazing strategies and therefore to quantify the management flexibility of the grassland agroecosystem. Our results showed that habitat and result-oriented scenarios led to much higher ecological performance than the action-oriented one. Differences in both ecological and productive performance between the habitat and result-oriented scenarios were limited. Flexibility of the grassland agroecosystem in the result-oriented scenario was much higher than in the habitat-oriented scenario. Our model confirms the higher flexibility as well as the better ecological and productive performance of result-oriented schemes. A larger use of result-oriented schemes in conservation may also allow farmers to adapt their management to local conditions and to climatic variations.

  2. A Multiobjective Optimization Including Results of Life Cycle Assessment in Developing Biorenewables-Based Processes.

    Science.gov (United States)

    Helmdach, Daniel; Yaseneva, Polina; Heer, Parminder K; Schweidtmann, Artur M; Lapkin, Alexei A

    2017-09-22

    A decision support tool has been developed that uses global multiobjective optimization based on 1) the environmental impacts, evaluated within the framework of full life cycle assessment; and 2) process costs, evaluated by using rigorous process models. This approach is particularly useful in developing biorenewable-based energy solutions and chemicals manufacturing, for which multiple criteria must be evaluated and optimization-based decision-making processes are particularly attractive. The framework is demonstrated by using a case study of the conversion of terpenes derived from biowaste feedstocks into reactive intermediates. A two-step chemical conversion/separation sequence was implemented as a rigorous process model and combined with a life cycle model. A life cycle inventory for crude sulfate turpentine was developed, as well as a conceptual process of its separation into pure terpene feedstocks. The performed single- and multiobjective optimizations demonstrate the functionality of the optimization-based process development and illustrate the approach. The most significant advance is the ability to perform multiobjective global optimization, resulting in identification of a region of Pareto-optimal solutions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Flood forecasting for River Mekong with data-based models

    Science.gov (United States)

    Shahzad, Khurram M.; Plate, Erich J.

    2014-09-01

    In many regions of the world, the task of flood forecasting is made difficult because only a limited database is available for generating a suitable forecast model. This paper demonstrates that in such cases parsimonious data-based hydrological models for flood forecasting can be developed if the special conditions of climate and topography are used to advantage. As an example, the middle reach of River Mekong in South East Asia is considered, where a database of discharges from seven gaging stations on the river and 31 rainfall stations on the subcatchments between gaging stations is available for model calibration. Special conditions existing for River Mekong are identified and used in developing first a network connecting all discharge gages and then models for forecasting discharge increments between gaging stations. Our final forecast model (Model 3) is a linear combination of two structurally different basic models: a model (Model 1) using linear regressions for forecasting discharge increments, and a model (Model 2) using rainfall-runoff models. Although the model based on linear regressions works reasonably well for short times, better results are obtained with rainfall-runoff modeling. However, forecast accuracy of Model 2 is limited by the quality of rainfall forecasts. For best results, both models are combined by taking weighted averages to form Model 3. Model quality is assessed by means of both persistence index PI and standard deviation of forecast error.
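The persistence index used here compares a forecast against the naive assumption that discharge stays at its last observed value, and Model 3 is a weighted average of the two basic forecasts. A sketch of both, where the weight `w` is a free parameter rather than the paper's fitted value:

```python
def persistence_index(obs, forecast, lead=1):
    """PI = 1 - SSE(forecast) / SSE(naive persistence at the forecast lead).

    PI = 1 for a perfect forecast; PI = 0 means no better than persistence."""
    num = sum((o - f) ** 2 for o, f in zip(obs[lead:], forecast[lead:]))
    den = sum((obs[t] - obs[t - lead]) ** 2 for t in range(lead, len(obs)))
    return 1.0 - num / den

def combine(fc_regression, fc_rainfall_runoff, w):
    """Model 3: weighted average of the regression-based and
    rainfall-runoff-based discharge forecasts."""
    return [w * a + (1 - w) * b
            for a, b in zip(fc_regression, fc_rainfall_runoff)]
```

Because rivers like the Mekong are strongly autocorrelated, persistence is a demanding baseline, which is why PI is a more honest skill measure here than raw error alone.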

  4. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is by a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that the blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%; similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics.
The major advantage of a
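The partition-based tissue binding described above can be sketched as a minimal flow-limited PBPK model with a blood pool, one lipid-rich tissue, and liver clearance. All parameter values below are illustrative assumptions, not those fitted by PKQuest:

```python
def pbpk_infusion(dose_rate, t_end, dt=0.01,
                  v_blood=5.0, v_fat=10.0, q_fat=0.3,
                  p_fat=50.0, cl_liver=1.5):
    """Minimal flow-limited two-compartment PBPK sketch (illustrative
    parameters). Volumes in L, flows and clearance in L/min, dose_rate
    in mg/min. Returns (blood conc, fat conc) in mg/L after t_end min."""
    a_blood = a_fat = 0.0      # drug amounts (mg)
    t = 0.0
    while t < t_end:           # simple Euler integration
        c_blood = a_blood / v_blood
        c_fat = a_fat / v_fat
        # Flow-limited exchange: tissue venous concentration = c_fat / p_fat,
        # where p_fat plays the role of the lipid partition coefficient.
        flux = q_fat * (c_blood - c_fat / p_fat)
        a_blood += dt * (dose_rate - flux - cl_liver * c_blood)
        a_fat += dt * flux
        t += dt
    return a_blood / v_blood, a_fat / v_fat
```

The large partition coefficient makes the fat compartment fill very slowly, which is the mechanism behind propofol's long terminal phase in the full model.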

  5. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some...... more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...
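The email-specific stylometric features mentioned above (last punctuation mark, capitalization at the start, punctuation after a greeting) are straightforward to extract. A sketch of one possible implementation, not the authors' code:

```python
import re

def email_features(text):
    """Extract a few email-specific stylometric features
    (illustrative implementation)."""
    stripped = text.strip()
    punct = [c for c in stripped if c in ".!?,;:"]
    first_line = stripped.splitlines()[0] if stripped else ""
    return {
        # Last punctuation mark used anywhere in the email body.
        "last_punctuation": punct[-1] if punct else "",
        # Does the author capitalize the start of the email?
        "starts_capitalized": first_line[:1].isupper(),
        # Punctuation immediately following a greeting word.
        "punct_after_greeting": bool(
            re.match(r"^(hi|hello|dear)\b\s*\w*\s*[,!:]",
                     first_line, re.IGNORECASE)),
    }
```

Feature vectors of this kind would then be filtered with Info Gain selection and fed to the cluster-based classifier, as the abstract describes.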

  6. An Enhanced Box-Wing Solar Radiation pressure model for BDS and initial results

    Science.gov (United States)

    Zhao, Qunhe; Wang, Xiaoya; Hu, Xiaogong; Guo, Rui; Shang, Lin; Tang, Chengpan; Shao, Fan

    2016-04-01

    Solar radiation pressure forces are the largest non-gravitational perturbations acting on GNSS satellites, and they are difficult to model accurately due to the complicated and changing satellite attitude and unknown surface material characteristics. By the end of 2015, there were more than 50 stations of the Multi-GNSS Experiment (MGEX) set up by the IGS. The simple box-plate model relies on coarse assumptions about the dimensions and optical properties of the satellite due to a lack of more detailed information. We therefore developed a more sophisticated physical model based on the box-wing model, in which a more detailed physical structure is taken into account and pressure forces are calculated according to the geometric relations between light rays and surfaces. All the MGEX stations and IGS core stations were processed for precise orbit determination tests with GPS and BDS observations. The calculation range covers both eclipsing and non-eclipsing periods in 2015, and we adopted the un-differenced observation mode and more accurate values of satellite phase centers. At first we tried a nine-parameter model, then eliminated the parameters with strong correlations between them, leaving five model parameters: a solar scale factor, y-bias, and three material coefficients for the solar panels and the x-axis and z-axis panels. Initial results showed that, in the period of yaw-steering mode, use of the enhanced ADBOXW model results in a small improvement for IGSO and MEO satellites, and the Root-Mean-Square (RMS) error of the one-day arc orbit decreased by about 10%~30%, except for C08 and C14. The new model mainly improved the along-track acceleration, by up to 30%, while the improvement in the radial direction was not obvious. The Satellite Laser Ranging (SLR) validation showed, however, that this model has higher prediction accuracy in the period of orbit-normal mode, compared to GFZ multi-GNSS orbit products, as well with relative post
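The building block of any box-wing model is the radiation force on a single illuminated flat plate with absorbed, specularly reflected and diffusely reflected fractions. A sketch of the standard flat-plate formula follows (this is the generic textbook form, not the paper's ADBOXW parameterization, and the flux and optical coefficients are illustrative):

```python
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU (approximate)
C = 299792458.0          # speed of light, m/s

def plate_srp_accel(e_sun, n, area, mass, alpha, rho, delta):
    """SRP acceleration of one flat plate (box-wing building block).

    e_sun: unit vector satellite -> Sun; n: unit plate normal;
    alpha + rho + delta = 1 (absorbed / specular / diffuse fractions)."""
    cos_t = sum(a * b for a, b in zip(n, e_sun))
    if cos_t <= 0.0:                   # plate faces away: not illuminated
        return (0.0, 0.0, 0.0)
    k = SOLAR_FLUX * area * cos_t / (C * mass)
    cs = alpha + delta                 # component along photon travel (-e_sun)
    cn = 2.0 * (delta / 3.0 + rho * cos_t)   # component along -n
    return tuple(-k * (cs * e + cn * nn) for e, nn in zip(e_sun, n))
```

Summing this expression over the bus faces and solar panels, each with its own normal and optical coefficients, gives the total box-wing acceleration; the estimated "material coefficients" in the abstract scale exactly these per-surface terms.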

  7. Method for evaluating prediction models that apply the results of randomized trials to individual patients

    Directory of Open Access Journals (Sweden)

    Kattan Michael W

    2007-06-01

    Full Text Available Abstract Introduction The clinical significance of a treatment effect demonstrated in a randomized trial is typically assessed by reference to differences in event rates at the group level. An alternative is to make individualized predictions for each patient based on a prediction model. This approach is growing in popularity, particularly for cancer. Despite its intuitive advantages, it remains plausible that some prediction models may do more harm than good. Here we present a novel method for determining whether predictions from a model should be used to apply the results of a randomized trial to individual patients, as opposed to using group level results. Methods We propose applying the prediction model to a data set from a randomized trial and examining the results of patients for whom the treatment arm recommended by a prediction model is congruent with allocation. These results are compared with the strategy of treating all patients through use of a net benefit function that incorporates both the number of patients treated and the outcome. We examined models developed using data sets regarding adjuvant chemotherapy for colorectal cancer and Dutasteride for benign prostatic hypertrophy. Results For adjuvant chemotherapy, we found that patients who would opt for chemotherapy even for small risk reductions, and, conversely, those who would require a very large risk reduction, would on average be harmed by using a prediction model; those with intermediate preferences would on average benefit by allowing such information to help their decision making. Use of prediction could, at worst, lead to the equivalent of an additional death or recurrence per 143 patients; at best it could lead to the equivalent of a reduction in the number of treatments of 25% without an increase in event rates. 
In the Dutasteride case, where the average benefit of treatment is more modest, there is a small benefit of prediction modelling, equivalent to a reduction of

  8. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts: foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling’s segregation model and Axelrod’s spatial game. The essence of the foundation part is the network-based agent-based models, in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  9. Higher plant modelling for life support applications: first results of a simple mechanistic model

    Science.gov (United States)

    Hezard, Pauline; Dussap, Claude-Gilles; Sasidharan L, Swathy

    2012-07-01

    In the case of closed ecological life support systems, air and water regeneration and food production are performed using microorganisms and higher plants. Wheat, rice, soybean, lettuce, tomato or other edible annual plants produce fresh food while recycling CO2 into breathable oxygen. Additionally, they evaporate a large quantity of water, which can be condensed and used as potable water. This shows that the recycling functions of air revitalization and food production are completely linked. Consequently, the control of a growth chamber for higher plant production has to be performed with efficient mechanistic models, in order to ensure a realistic prediction of plant behaviour and of water and gas recycling whatever the environmental conditions. Purely mechanistic models of plant production in controlled environments are not available yet. This is the reason why new models must be developed and validated. This work concerns the design and test of a simplified version of a mathematical model coupling plant architecture and mass balance, in order to compare its results with available data on lettuce grown in closed and controlled chambers. The carbon exchange rate, water absorption and evaporation rate, biomass fresh weight and leaf surface are modelled and compared with available data. The model consists of four modules. The first one evaluates plant architecture, such as total leaf surface, leaf area index and stem length. The second one calculates the rates of matter and energy exchange depending on architectural and environmental data: light absorption in the canopy, CO2 uptake or release, water uptake and evapotranspiration. The third module evaluates which of the previous rates is limiting overall biomass growth; the last one calculates the biomass growth rate depending on the matter exchange rates, using a global stoichiometric equation. These rates form a set of differential equations, which are integrated with time in order to provide
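
The limiting-rate structure of modules 3 and 4 can be sketched as a simple Euler integration; all names, rates and the yield coefficient below are illustrative assumptions, not values from the paper:

```python
def grow_biomass(biomass0, rates, yield_coeff, dt, steps):
    """Euler integration of biomass growth limited by the most restrictive
    exchange rate, in the spirit of modules 3 and 4 described above.

    rates: function time -> dict of exchange rates (e.g. light, CO2, water).
    """
    biomass = biomass0
    t = 0.0
    for _ in range(steps):
        limiting = min(rates(t).values())        # module 3: find the limiting rate
        biomass += yield_coeff * limiting * dt   # module 4: stoichiometric growth
        t += dt
    return biomass

# With constant rates, water uptake (0.5) limits growth throughout.
final = grow_biomass(1.0, lambda t: {"light": 2.0, "co2": 1.0, "water": 0.5},
                     yield_coeff=0.8, dt=0.1, steps=100)
print(round(final, 6))  # → 5.0, i.e. 1.0 + 0.8 * 0.5 * 0.1 * 100
```

A real implementation would make the rates depend on the architectural and environmental state computed by modules 1 and 2.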

  10. Simulation-based Manufacturing System Modeling

    Institute of Scientific and Technical Information of China (English)

    卫东; 金烨; 范秀敏; 严隽琪

    2003-01-01

    In recent years, computer simulation has proven to be a very advantageous technique for studying resource-constrained manufacturing systems. This paper presents an object-oriented simulation modeling method, which combines the merits of traditional methods such as IDEF0 and Petri nets. A four-layer-one-angle hierarchical modeling framework based on OOP is defined, and the modeling description of these layers is expounded, e.g., hybrid production control modeling and human resource dispatch modeling. To validate the modeling method, a case study of an auto product line in a motor manufacturing company has been carried out.

  11. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for intelligent information processing and for harmonious communication between human beings and computers. A new emotional agent model is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it handles simple emotions efficiently.

  12. Discrete Element Simulation of Asphalt Mastics Based on Burgers Model

    Institute of Scientific and Technical Information of China (English)

    LIU Yu; FENG Shi-rong; HU Xia-guang

    2007-01-01

    In order to investigate the viscoelastic performance of asphalt mastics, a micro-mechanical model of asphalt mastics was built by applying the Burgers model to discrete element simulation and constructing a Burgers contact model. A numerical simulation of creep tests was then conducted, and the results from the simulation were compared with the analytical solution of the Burgers model. The comparison showed that the two results agreed well with each other, suggesting that a discrete element model based on the Burgers model can be employed in numerical simulations of asphalt mastics.
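
For reference, the analytical creep strain of a Burgers model (a Maxwell element in series with a Kelvin-Voigt element) under constant stress, which the discrete element results were compared against, can be sketched as follows; the parameter values are illustrative:

```python
import math

def burgers_creep_strain(t, stress, e1, eta1, e2, eta2):
    """Analytical creep strain of a Burgers model under constant stress.

    e1, eta1: Maxwell spring modulus and dashpot viscosity (in series).
    e2, eta2: Kelvin-Voigt spring modulus and dashpot viscosity.
    """
    instantaneous = stress / e1                       # elastic jump of the Maxwell spring
    viscous = stress * t / eta1                       # steady flow of the Maxwell dashpot
    delayed = (stress / e2) * (1.0 - math.exp(-e2 * t / eta2))  # delayed elasticity
    return instantaneous + viscous + delayed

# At t = 0 only the instantaneous elastic strain of the Maxwell spring remains.
print(burgers_creep_strain(0.0, 100.0, 1e3, 1e5, 2e3, 1e4))  # → 0.1
```

Fitting the four parameters to a measured creep curve is the usual way to characterize a mastic before feeding the contact model into the discrete element simulation.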

  13. Daily air quality forecast (gases and aerosols) over Switzerland. Modeling tool description and first results analysis.

    Science.gov (United States)

    Couach, O.; Kirchner, F.; Porchet, P.; Balin, I.; Parlange, M.; Balin, D.

    2009-04-01

    Map3D, the acronym for "Mesoscale Air Pollution 3D modelling", was developed at the EFLUM laboratory (EPFL) and received an INNOGRANTS award in Summer 2007 in order to move from a research phase to a professional product giving daily air quality forecasts. It is intended to give an objective basis for political decisions addressing the improvement of regional air quality. This tool is a permanent modelling system which provides daily forecasts of the local meteorology and the air pollutant (gas and particle) concentrations. Map3D has been successfully developed and calculates each day at the EPFL site a three-day air quality forecast over Europe and the Alps with 50 km and 15 km resolution, respectively (see http://map3d.epfl.ch). The Map3D user interface is a web-based application with a PostgreSQL database. It is written in object-oriented PHP5 on an MVC (Model-View-Controller) architecture. Our prediction system has been operational since August 2008. A first validation of the calculations for Switzerland was performed for the period August 2008 - January 2009, comparing the model results for O3, NO2 and particulates with the results of the NABEL measurement stations. The subject of air pollution regimes (NOx/VOC) and the application of specific indicators with the forecast will also be addressed.

  14. Modeling Web-based Educational Systems: Process Design Teaching Model

    Directory of Open Access Journals (Sweden)

    Elena Rokou

    2004-01-01

    Full Text Available Using modeling languages is essential to the construction of educational systems based on software engineering principles and methods. Furthermore, instructional design is undoubtedly the cornerstone of the design and development of educational systems. Although several methodologies and languages have been proposed for the specification of isolated educational multimedia systems, none gives optimal results for the description of these systems, especially for their pedagogical aspect. This is due primarily to how these systems function and are applied rather than to the languages themselves, although their special characteristics contribute substantially to the development of these systems, sometimes positively and sometimes negatively. In this paper, we briefly describe the introduction of stereotypes into the pedagogical design of educational systems and appropriate modifications of the existing package diagrams of UML (Unified Modeling Language). The main objective of these new stereotypes is to describe sufficiently the mechanisms for generating, monitoring and re-adapting the teaching and student models that can be used in educational applications.

  15. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  16. Geometric deviation modeling by kinematic matrix based on Lagrangian coordinate

    Science.gov (United States)

    Liu, Weidong; Hu, Yueming; Liu, Yu; Dai, Wanyi

    2015-09-01

    Typical representations of dimensional and geometric accuracy are limited to the self-representation of dimensional and geometric deviation based on geometry-variation thinking, while the interactive effect of geometric variation and posture variation of multiple rigid bodies is not included. In this paper, a kinematic matrix model based on Lagrangian coordinates is introduced, with the purpose of providing a unified model for geometric variation and posture variation and their interactive, integrated analysis. A kinematic model with a joint, a local base and a movable base is built. The ideal feature of the functional geometry is treated as the base body and the fitting feature as the adjacent movable body; the local base of the kinematic model is fixed onto the ideal geometry, and the movable base onto the fitting geometry. The geometric deviation is then treated as a relative location or rotation variation between the movable base and the local base, and it is expressed in Lagrangian coordinates. Moreover, kinematic matrices based on Lagrangian coordinates for different types of geometric tolerance zones are constructed, and the total freedom of each kinematic model is discussed. Finally, the Lagrangian coordinate library and the kinematic matrix library for geometric deviation modeling are illustrated, and an example of a block and piston fit is introduced. The dimensional and geometric tolerances of the shaft and hole fitting features are constructed by the kinematic matrix and Lagrangian coordinates, and the results indicate that the proposed kinematic matrix is capable and robust for modeling dimensional and geometric tolerances.
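
The idea of expressing a small geometric deviation as a relative transform between the movable base and the local base can be sketched with a first-order small-displacement matrix; the six coordinates and the numeric values below are illustrative, not the paper's actual formulation:

```python
def deviation_matrix(du, dv, dw, da, db, dg):
    """First-order homogeneous transform between the local base (ideal
    feature) and the movable base (fitting feature). The six parameters
    play the role of Lagrangian coordinates of the deviation: translations
    du, dv, dw and small rotations da, db, dg about the x, y, z axes."""
    return [
        [1.0, -dg,  db, du],
        [ dg, 1.0, -da, dv],
        [-db,  da, 1.0, dw],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(T, point):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return [sum(T[i][j] * v[j] for j in range(4)) for i in range(3)]

# A pure translational deviation along z shifts the point by dw.
T = deviation_matrix(0.0, 0.0, 0.01, 0.0, 0.0, 0.0)
p = transform_point(T, (1.0, 2.0, 3.0))
print(p)  # point shifted by dw along z
```

A tolerance zone then becomes a set of bounds on the admissible Lagrangian coordinates, with some coordinates frozen depending on the tolerance type.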

  17. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models and the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed

  18. A New Model of Ultracapacitors Based on Fractal Fundamentals

    Directory of Open Access Journals (Sweden)

    Xiaodong Zhang

    2014-01-01

    Full Text Available An intuitive model is proposed in this paper to describe the electrical behavior of certain ultracapacitors. The model is based on a simple expression that can be fully characterized by five real numbers. In this paper, the measured impedances of three ultracapacitors as a function of frequency are compared to model results. There is good agreement between the model and measurements. Results presented in a previous study are also reviewed and the paper demonstrates that those results are also consistent with the newly described model.
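
The paper's five-parameter expression is not reproduced here; as a hedged stand-in, a common fractional-order idealization of ultracapacitor impedance is a series resistance plus a constant-phase element (CPE), which captures the same kind of non-ideal frequency dependence:

```python
import math

def cpe_ultracap_impedance(f, r_s, q, alpha):
    """Impedance of a series resistance plus constant-phase element (CPE),
    a common fractional-order idealization of ultracapacitor behavior
    (a hypothetical stand-in for the paper's five-parameter model):

        Z(jw) = r_s + 1 / (q * (jw)**alpha),  0 < alpha <= 1.

    alpha = 1 recovers an ideal capacitor of capacitance q."""
    w = 2.0 * math.pi * f
    return r_s + 1.0 / (q * (1j * w) ** alpha)

# With alpha = 1 the CPE reduces to an ideal capacitor: Z = r_s - j/(w*q).
z = cpe_ultracap_impedance(1.0, 0.05, 2.0, 1.0)
print(round(z.real, 6), round(z.imag, 6))  # → 0.05 -0.079577
```

Fitting r_s, q and alpha (plus any extra terms) to measured impedance spectra is the usual way such models are characterized.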

  19. Application of the family-based nursing model in patients with urinary tract stoma resulting from radical operation for bladder cancer

    Institute of Scientific and Technical Information of China (English)

    洪含霞; 向爱华

    2016-01-01

    Objective: To investigate the effect of the family-based nursing model in patients with urinary tract stoma resulting from radical operation for bladder cancer. Methods: 81 patients with urinary tract stoma after radical operation for bladder cancer were randomly divided into an intervention group (n = 41) and a control group (n = 40). The traditional nursing model was implemented in the control group and the family-based nursing model was adopted in the intervention group; the health-related quality of life (HRQOL) of the patients was compared between the two groups. Results: The HRQOL score was higher in the intervention group than in the control group at 3 and 6 months after operation (P < 0.05). Conclusion: The family-based nursing model can significantly improve the HRQOL of patients with urinary tract stoma after radical operation for bladder cancer.

  20. Power-Based Setpoint Control : Experimental Results on a Planar Manipulator

    NARCIS (Netherlands)

    Dirksz, D. A.; Scherpen, J. M. A.

    2012-01-01

    In recent years the power-based modeling framework, developed in the sixties to model nonlinear electrical RLC networks, has been extended to the modeling and control of a larger class of physical systems. In this brief we apply power-based integral control to a planar manipulator experimental setup.

  1. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or on heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, which is obtained from statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological sense.
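
The statistical-thermodynamic treatment of binding affinity can be illustrated with a minimal two-state occupancy model; the function name, units and parameter values are illustrative assumptions, not the paper's actual formulation:

```python
import math

R_KJ = 8.314e-3  # gas constant in kJ/(mol*K)

def tf_occupancy(conc, delta_g, temp=310.0):
    """Equilibrium occupancy of a single transcription-factor binding site.

    Boltzmann-weighted two-state model: the site is bound with probability
    K*[TF] / (1 + K*[TF]), where K = exp(-delta_g / (R*T)) and delta_g is
    the binding free energy in kJ/mol (negative = favourable binding)."""
    K = math.exp(-delta_g / (R_KJ * temp))
    return K * conc / (1.0 + K * conc)

# A strongly favourable site is almost always occupied at micromolar [TF].
print(tf_occupancy(1e-6, -50.0))  # close to 1
print(tf_occupancy(1e-6, 0.0))    # close to 0
```

In sequence-based models the free energy delta_g is itself computed from the promoter sequence, e.g. by summing per-position energy contributions of the binding motif.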

  2. Instance-Based Generative Biological Shape Modeling.

    Science.gov (United States)

    Peng, Tao; Wang, Wei; Rohde, Gustavo K; Murphy, Robert F

    2009-01-01

    Biological shape modeling is an essential task that is required for systems biology efforts to simulate complex cell behaviors. Statistical learning methods have been used to build generative shape models based on reconstructive shape parameters extracted from microscope image collections. However, such parametric modeling approaches are usually limited to simple shapes and easily-modeled parameter distributions. Moreover, to maximize the reconstruction accuracy, significant effort is required to design models for specific datasets or patterns. We have therefore developed an instance-based approach to model biological shapes within a shape space built upon diffeomorphic measurement. We also designed a recursive interpolation algorithm to probabilistically synthesize new shape instances using the shape space model and the original instances. The method is quite generalizable and therefore can be applied to most nuclear, cell and protein object shapes, in both 2D and 3D.
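
A toy sketch of instance-based synthesis by recursive interpolation is shown below; it blends flat coordinate vectors linearly, whereas the paper interpolates in a diffeomorphic shape space, so this is only a structural illustration with made-up data:

```python
import random

def synthesize_shape(instances, depth, rng):
    """Recursively interpolate between randomly chosen shape instances,
    here represented as flat coordinate vectors. Linear blending is an
    illustrative simplification of the diffeomorphic interpolation used
    in the paper."""
    if depth == 0:
        return list(rng.choice(instances))
    a = synthesize_shape(instances, depth - 1, rng)
    b = synthesize_shape(instances, depth - 1, rng)
    w = rng.random()  # random interpolation weight in [0, 1)
    return [w * ai + (1.0 - w) * bi for ai, bi in zip(a, b)]

# Synthesized shapes stay within the coordinate-wise range of the originals.
rng = random.Random(0)
shapes = [[0.0, 1.0, 2.0], [1.0, 3.0, 4.0], [0.5, 2.0, 3.0]]
new_shape = synthesize_shape(shapes, depth=3, rng=rng)
print(new_shape)
```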

  3. Models of adaptation of convicts to imprisonment conditions – the results of my own research

    Directory of Open Access Journals (Sweden)

    Dorota Kanarek-Lizik

    2013-06-01

    Full Text Available Convicts who are sent to penitentiary units to serve a sentence of imprisonment are obliged to choose a technique (model) of coping with the discomfort of imprisonment and, at the same time, a way of minimizing the discrepancy between the restricted and the outer world. To identify these techniques, a special questionnaire was developed concerning models of adaptation of convicts to imprisonment conditions. The questionnaire is based on the types of adaptation enumerated by E. Goffman: withdrawal, rebellion, settling down, cold calculation and conversion. In this article I present the results of my own research concerning the models of adaptation of convicts to imprisonment conditions. The survey included recidivists and adults serving a sentence of imprisonment for the first time.

  4. ORIGEN2 model and results for the Clinch River Breeder Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A G; Bjerke, M A

    1982-06-01

    Reactor physics calculations and literature information acquisition have led to the development of a Clinch River Breeder Reactor (CRBR) model for the ORIGEN2 computer code. The model is based on cross sections taken directly from physics codes. Details are presented concerning the physical description of the fuel assemblies, the fuel management scheme, irradiation parameters, and initial material compositions. The ORIGEN2 model for the CRBR has been implemented, resulting in the production of graphical and tabular characteristics (radioactivity, thermal power, and toxicity) of CRBR spent fuel, high-level waste, and fuel-assembly structural material waste as a function of decay time. Characteristics for pressurized water reactors (PWRs), commercial liquid-metal fast breeder reactors (LMFBRs), and the Fast Flux Test Facility (FFTF) have also been included in this report for comparison with the CRBR data.
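
The tabulated characteristics (e.g. radioactivity as a function of decay time) follow from exponential decay of the nuclide inventory; the sketch below is a generic illustration with made-up nuclide data, not ORIGEN2's actual method, which tracks full build-up and decay chains:

```python
import math

LN2 = math.log(2.0)
SECONDS_PER_YEAR = 3.156e7

def activity_bq(inventory, t_years):
    """Total activity (decays per second) of a nuclide inventory after
    t_years of decay, ignoring daughter in-growth.

    inventory: list of (atoms, half_life_years) pairs -- an illustrative
    stand-in for a computed spent-fuel composition."""
    total = 0.0
    for atoms, half_life in inventory:
        lam = LN2 / half_life                    # decay constant, 1/yr
        n_t = atoms * math.exp(-lam * t_years)   # surviving atoms at t
        total += (lam / SECONDS_PER_YEAR) * n_t  # A = lambda * N, in Bq
    return total

inv = [(1e24, 30.0), (1e22, 2.0)]  # e.g. a Cs-137-like and a short-lived species
print(activity_bq(inv, 0.0) > activity_bq(inv, 100.0))  # → True
```

Thermal power and toxicity tables follow the same pattern, weighting each nuclide's decay rate by its decay energy or dose factor.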

  5. Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results

    Directory of Open Access Journals (Sweden)

    Robert V. Mayer

    2016-06-01

    Full Text Available The use of simulation (imitational) modeling to study the formation of empirical knowledge in the pupil's mind is discussed. The offered model is based on the division of physical facts into three categories: 1 facts established in everyday life; 2 facts which the pupil can establish experimentally in a physics lesson; 3 facts which are studied only at the theoretical level (speculatively or ideally). The forgetting coefficients of the facts of the first, second and third categories are determined, and the imitating model is coordinated with the distribution of empirical information in the school physics course and with testing results. Graphs of the dependence of empirical knowledge on time for various physics sections and fact categories are given.
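
The role of the forgetting coefficients can be illustrated with a one-category linear model of knowledge growth and decay; the equation and parameter values are a hedged simplification, not the article's actual model:

```python
import math

def knowledge_after(days, learn_rate, forget_coeff, z0=0.0):
    """Closed-form solution of the linear ODE dZ/dt = learn_rate - forget_coeff*Z:
    knowledge saturates at learn_rate / forget_coeff during steady teaching and
    decays exponentially with rate forget_coeff once teaching stops."""
    z_inf = learn_rate / forget_coeff
    return z_inf + (z0 - z_inf) * math.exp(-forget_coeff * days)

# With the same teaching intensity, a smaller forgetting coefficient (e.g.
# everyday facts) sustains a higher knowledge level than a larger one
# (e.g. purely theoretical facts).
print(round(knowledge_after(100.0, 1.0, 0.01), 1))  # → 63.2
print(round(knowledge_after(100.0, 1.0, 0.1), 1))
```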

  6. Numerical simulation of base flow with hot base bleed for two jet models

    Institute of Scientific and Technical Information of China (English)

    Wen-jie YU; Yong-gang YU; Bin NI

    2014-01-01

    In order to improve the benefits of base bleed for the base flow field, the base flow with hot base bleed is studied for two jet models. The two-dimensional axisymmetric Navier-Stokes equations are computed using a finite volume scheme. The base flow of a cylindrical afterbody with base bleed is simulated. The simulation results are validated against experimental data, and the experimental results are well reproduced. On this basis, the base flow fields with base bleed for a circular jet model and an annular jet model are investigated for injection temperatures from 830 K to 2200 K. The results show that the base pressure of the annular jet model is higher than that of the circular jet model over the range of injection parameters and injection temperatures. For the circular jet model, the hot gases are concentrated in the vicinity of the base. For the annular jet model, the bleed gases flow directly into the shear layer, so the hot gases are concentrated in the shear layer. The latter temperature distribution is more favourable for increasing the base pressure.

  7. Energetics based spike generation of a single neuron: simulation results and analysis

    Directory of Open Access Journals (Sweden)

    Nagarajan eVenkateswaran

    2012-02-01

    Full Text Available Existing current-based models that capture spike activity, though useful in studying the information processing capabilities of neurons, fail to throw light on their internal functioning. It is imperative to develop a model that captures the spike train of a neuron as a function of its intracellular parameters for non-invasive diagnosis of diseased neurons. This is the first article to present such an integrated model, which quantifies the inter-dependency between spike activity and intracellular energetics. The spike trains generated by our integrated model will throw greater light on intracellular energetics than existing current-based models. An abnormality in the spike of a diseased neuron can now be linked to, and hence effectively analyzed at, the energetics level. Spectral analysis of the generated spike trains in a time-frequency domain will help identify abnormalities in the internals of a neuron. As a case study, the parameters of our model are tuned for Alzheimer's disease, and the resulting spike trains are studied and presented.

  8. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model which interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and which is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare it to available observations. We focus on stratospheric dynamics and chemistry relevant to describing the stratospheric ozone layer. The current model version ECHAM4.L39(DLR)/CHEM can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to formerly applied model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  10. Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results

    Science.gov (United States)

    Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef

    2017-04-01

    The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and, in particular, GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results, with focus on the geodetic results.

  11. Implementing a continuum of care model for older people - results from a Swedish case study

    Directory of Open Access Journals (Sweden)

    Anna Duner

    2011-11-01

    Full Text Available Introduction: There is a need for integrated care and smooth collaboration between care-providing organisations and professions to create a continuum of care for frail older people. However, collaboration between organisations and professions is often problematic. The aim of this study was to examine the process of implementing a new continuum of care model in a complex organisational context, and illuminate some of the challenges involved. The introduced model strived to connect three organisations responsible for delivering health and social care to older people: the regional hospital, primary health care and municipal eldercare. Methods: The actions of the actors involved in the process of implementing the model were understood to be shaped by the actors' understanding, commitment and ability. This article is based on 44 qualitative interviews performed on four occasions with 26 key actors at three organisational levels within these three organisations. Results and conclusions: The results point to the importance of paying regard to the different cultures of the organisations when implementing a new model. The role of upper management emerged as very important. Furthermore, to be accepted, the model has to be experienced as effectively dealing with real problems in the everyday practice of the actors in the organisations, from the bottom to the top.

  13. A SPICE model for a phase-change memory cell based on the analytical conductivity model

    Science.gov (United States)

    Yiqun, Wei; Xinnan, Lin; Yuchao, Jia; Xiaole, Cui; Jin, He; Xing, Zhang

    2012-11-01

    For the peripheral circuit design of phase-change memory, an accurate compact model of a phase-change memory cell is needed for circuit simulation. Compared with existing models, the model presented in this work includes an analytical conductivity model, deduced from carrier transport theory rather than fitted to measurements. In addition, the model includes an analytical temperature model based on the 1D heat-transfer equation and a phase-transition dynamic model based on the JMA equation to simulate the phase-change process. These models are integrated using the Verilog-A language, and results show that the resulting model accurately simulates both the I-V characteristics and the programming characteristics.
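    The phase-transition dynamics cited above follow the JMA (Johnson-Mehl-Avrami) equation, which gives the crystallized fraction as X(t) = 1 - exp(-(k*t)^n). A minimal sketch; the rate constant k and Avrami exponent n below are illustrative values, not parameters from the paper:

```python
import math

def jma_fraction(t, k, n):
    """Crystallized fraction X(t) = 1 - exp(-(k*t)**n) per the JMA equation.

    t: time; k: rate constant (illustrative); n: Avrami exponent (illustrative).
    """
    return 1.0 - math.exp(-((k * t) ** n))

# Illustrative parameters only, not taken from the modeled device.
for t in (0.0, 0.1, 0.5, 1.0):
    print(t, jma_fraction(t, k=2.0, n=3.0))
```

    In the compact model, this fraction would drive the analytical conductivity between its amorphous and crystalline limits at each simulation step.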

  14. Experimental results and numerical simulations for transonic flow over the ONERA M4R model

    Directory of Open Access Journals (Sweden)

    Marius Gabriel COJOCARU

    2013-06-01

    Full Text Available This paper presents a comparison between experimental results for transonic flow over the ONERA M4R calibration model, obtained in the INCAS Trisonic wind tunnel, and numerical results. The first purpose, emphasized in this paper, is to compare and validate computational fluid dynamics (CFD) techniques for internal transonic flows and to find the numerical methodology best suited to these flows in terms of both accuracy and computational resources. The second purpose is to develop a general method for experimental data correction and flight Reynolds number extrapolation, using numerical simulations for both global and local pressure coefficients as a replacement for the classical vortex-lattice-based method; that method will be developed in a future paper. Besides the computational work, periodic wind tunnel calibration is required as a quality assurance operation, and a numerical model is developed so that future hardware modifications can be included and their impact properly considered.

  15. Solar Deployment System (SolarDS) Model: Documentation and Sample Results

    Energy Technology Data Exchange (ETDEWEB)

    Denholm, P.; Drury, E.; Margolis, R.

    2009-09-01

    The Solar Deployment System (SolarDS) model is a bottom-up, market penetration model that simulates the potential adoption of photovoltaics (PV) on residential and commercial rooftops in the continental United States through 2030. NREL developed SolarDS to examine the market competitiveness of PV based on regional solar resources, capital costs, electricity prices, utility rate structures, and federal and local incentives. The model uses the projected financial performance of PV systems to simulate PV adoption by building type and region, then aggregates adoption to the state and national levels. The main components of SolarDS are a PV performance simulator, a PV annual revenue calculator, a PV financial performance calculator, a PV market share calculator, and a regional aggregator. The model simulates a range of installed PV capacities for user-specified input parameters; preliminary model runs produced PV market penetration levels from 15 to 193 GW by 2030. SolarDS results are primarily driven by three model assumptions: (1) future PV cost reductions, (2) the maximum PV market share assumed for systems with a given financial performance, and (3) PV financing parameters and policy-driven assumptions, such as the possible future cost of carbon emissions.
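    The chain from financial performance to adoption described above can be sketched in a few lines. The exponential payback-to-market-share curve and all parameter values below are illustrative placeholders, not SolarDS's calibrated relationships:

```python
import math

def max_market_share(payback_years, sensitivity=0.3):
    # Illustrative curve: shorter payback -> larger achievable market share.
    return math.exp(-sensitivity * payback_years)

def simulate_adoption(payback_years, years=20, uptake_rate=0.25):
    """Each year, cumulative adoption closes a fraction of the gap to the ceiling."""
    ceiling = max_market_share(payback_years)
    share, history = 0.0, []
    for _ in range(years):
        share += uptake_rate * (ceiling - share)
        history.append(share)
    return history

trajectory = simulate_adoption(payback_years=5.0)
```

    The regional aggregator stage would then sum such trajectories across building types and regions, weighted by available rooftop area.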

  16. Workflow-Based Dynamic Enterprise Modeling

    Institute of Scientific and Technical Information of China (English)

    黄双喜; 范玉顺; 罗海滨; 林慧萍

    2002-01-01

    Traditional systems for enterprise modeling and business process control are often static and cannot adapt to the changing environment. This paper presents a workflow-based method to dynamically execute the enterprise model. This method gives an explicit representation of the business process logic and the relationships between the elements involved in the process. An execution-oriented integrated enterprise modeling system is proposed in combination with other enterprise views. The enterprise model can be established and executed dynamically in the actual environment due to the dynamic properties of the workflow model.

  17. Model Based Testing for Agent Systems

    Science.gov (United States)

    Zhang, Zhiyong; Thangarajah, John; Padgham, Lin

    Although agent technology is gaining world wide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent based systems. The testing framework is a model based approach using the design models of the Prometheus agent development methodology. In this paper we focus on model based unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus the approach is suitable for any plan and event based agent system.

  18. Model Based Control of Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Finn Sloth

    of the supermarket refrigeration systems therefore greatly relies on a human operator to detect and accommodate failures, and to optimize system performance under varying operational conditions. Today these functions are maintained by monitoring centres located all over the world. Initiated by the growing need...... for automation of these procedures, that is to incorporate some "intelligence" in the control system, this project was started up. The main emphasis of this work has been on model based methods for system optimizing control in supermarket refrigeration systems. The idea of implementing a system optimizing.......e. by degrading the performance. The method has been successfully applied on a test refrigeration system for minimization of the power consumption; the experimental results gained hereby will be presented. The present control structure in a supermarket refrigeration system is distributed, which means...

  19. Grey-theory based intrusion detection model

    Institute of Scientific and Technical Information of China (English)

    Qin Boping; Zhou Xianwei; Yang Jun; Song Cunyi

    2006-01-01

    To address the problem that current intrusion detection models require large-scale data for model formulation in real-time use, an intrusion detection system model based on grey theory (GTIDS) is presented. Grey theory has the merits of requiring less original data, imposing fewer restrictions on the distribution pattern, and using a simpler modeling algorithm. With these merits, GTIDS builds its model from a partial time sequence for rapid detection of intrusive acts in a secure system. In this detection model, the false drop and false retrieval rates are effectively reduced through two-stage modeling and repeated detection of target data. Furthermore, the GTIDS framework and the specific modeling algorithm are presented. The effectiveness of GTIDS is demonstrated through emulated experiments comparing it with Snort and the next-generation intrusion detection expert system (NIDES) from SRI International.
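    The core of grey-theory modeling from sparse data is the GM(1,1) model: accumulate the raw series, fit the whitening equation dx1/dt + a*x1 = b by least squares, forecast, and difference back. A minimal sketch of the textbook GM(1,1); the abstract does not give GTIDS's exact formulation, so this is not the paper's two-stage variant:

```python
import numpy as np

def gm11_predict(x0, steps=1):
    """Fit a GM(1,1) grey model to a short positive, non-constant series
    and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean) sequence
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x0)
    return [float(x1_hat(k) - x1_hat(k - 1)) for k in range(n, n + steps)]

# Grey models need only a handful of observations to extrapolate a trend.
preds = gm11_predict([1.0, 1.1, 1.21, 1.331, 1.4641])
```

    In an intrusion detection setting, a forecast like this would be compared against the observed traffic statistic, with large deviations flagged as potential intrusions.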

  20. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  1. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and on digital signal processing methods. For each stage of the hearing perception system there is a corresponding computational model simulating its function, and based on this model speech features are extracted at different levels. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract still more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations of speech signals.
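    Lateral inhibition, the mechanism the abstract applies to the primary auditory spectrum, can be sketched as convolution with a center-surround kernel that sharpens spectral contrasts. The kernel shape and inhibition strength below are illustrative, not the paper's:

```python
import numpy as np

def lateral_inhibition(spectrum, strength=0.4):
    """Center-surround filtering: each channel is inhibited by its neighbors."""
    kernel = np.array([-strength, 1.0, -strength])
    return np.convolve(spectrum, kernel, mode="same")

# A flat spectral plateau: its edges are enhanced relative to the interior.
out = lateral_inhibition(np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]))
```

    The edge emphasis is what makes the resulting representation less sensitive to broadband noise than the raw spectrum.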

  2. Component-based event composition modeling for CPS

    Science.gov (United States)

    Yin, Zhonghai; Chu, Yanan

    2017-06-01

    In order to combine an event-driven model with component-based architecture design, this paper proposes a component-based event composition model to realize event processing in cyber-physical systems (CPS). Firstly, formal representations of components and attribute-oriented events are defined. Every component consists of subcomponents and the corresponding event sets, and the attribute "type" is added to the attribute-oriented event definition to describe its responsiveness to the component. Secondly, the component-based event composition model is constructed: a concept-lattice-based event algebra system is built to describe the relations between events, and the rules for drawing the Hasse diagram are discussed. Thirdly, as there are redundancies among composite events, two simplification methods are proposed. Finally, a communication-based train control system is simulated to verify the event composition model. Results show that the constructed event composition model can express composite events correctly and effectively.

  3. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn, Ellen-Wien; Doldersum, Tom; Useya, Juliana; Augustijn, Denie

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse

  4. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn, Ellen-Wien; Doldersum, Tom; Useya, Juliana; Augustijn, Denie

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse d

  5. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn-Beckers, Petronella; Doldersum, Tom; Useya, Juliana; Augustijn, Dionysius C.M.

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse

  6. New global ICT-based business models

    DEFF Research Database (Denmark)

    House Case The Nano Solar Case The Master Cat Case The Pitfalls Of The Blue Ocean Strategy - Implications Of "The Six Paths Framework" Network-Based Innovation - Combining Exploration and Exploitation? Innovating New Business Models in Inter-firm Collaboration NEW Global Business Models - What Did...

  7. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a

  8. Rule-based Modelling and Tunable Resolution

    Directory of Open Access Journals (Sweden)

    Russ Harmer

    2009-11-01

    Full Text Available We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.

  9. Rule-based Modelling and Tunable Resolution

    CERN Document Server

    Harmer, Russ

    2009-01-01

    We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.

  10. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a

  11. Gradient based filtering of digital elevation models

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Andersen, Rune Carbuhn

    We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention with the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based on ...... in the landscape are washed out and misrepresented....

  12. Gradient based filtering of digital elevation models

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Andersen, Rune Carbuhn

    We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention with the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based...

  13. Model based sustainable production of biomethane

    OpenAIRE

    Biernacki, Piotr

    2014-01-01

    The main intention of this dissertation was to evaluate sustainable production of biomethane with use of mathematical modelling. To achieve this goal, widely acknowledged models like Anaerobic Digestion Model No.1 (ADM1), describing anaerobic digestion, and electrolyte Non-Random Two Liquid Model (eNRTL), for gas purification, were utilized. The experimental results, batch anaerobic digestion of different substrates and carbon dioxide solubility in 2-(Ethylamino)ethanol, were used to determin...

  14. V-SUIT Model Validation Using PLSS 1.0 Test Results

    Science.gov (United States)

    Olthoff, Claas

    2015-01-01

    The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB™-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data, with special focus on absolute values during the steady state phases and on dynamic behavior during the transitions between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and which are sufficiently close to the test results. Finally, lessons learned from the modelling and validation process are given in combination

  15. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modelin

  16. Towards more accurate isoscapes: encouraging results from wine, water and marijuana data/model and model/model comparisons

    Science.gov (United States)

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterning over space as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes, and as such, if modeled correctly over Earth's surface, they allow us insight into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment, driven with spatially continuous global rasters of precipitation and climate normals, largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across

  17. Cost Benefit Analysis Modeling Tool for Electric vs. ICE Airport Ground Support Equipment – Development and Results

    Energy Technology Data Exchange (ETDEWEB)

    James Francfort; Kevin Morrow; Dimitri Hochard

    2007-02-01

    This report documents efforts to develop a computer tool for modeling the economic payback of comparative airport ground support equipment (GSE) propelled either by electric motors or by gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains the modeling assumptions, methodology, a user's manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.
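    In its simplest form, a payback comparison of the kind described above divides the electric GSE's capital premium by its annual operating savings. The figures below are hypothetical placeholders, not values from the report:

```python
def simple_payback_years(capital_premium, annual_fuel_savings,
                         annual_maintenance_savings):
    """Years until the electric GSE's extra purchase cost is recovered."""
    annual_savings = annual_fuel_savings + annual_maintenance_savings
    if annual_savings <= 0:
        return float("inf")  # never pays back
    return capital_premium / annual_savings

# Hypothetical electric baggage tractor: $20k capital premium,
# $4k/yr fuel savings and $1k/yr maintenance savings.
years = simple_payback_years(20_000, 4_000, 1_000)
```

    The actual tool goes further, discounting cash flows over the equipment's life and crediting avoided tailpipe emissions; this sketch omits both.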

  18. Multiagent-Based Model For ESCM

    OpenAIRE

    Delia MARINCAS

    2011-01-01

    Web-based applications for Supply Chain Management (SCM) are now a necessity for every company in order to meet increasing customer demands, face global competition and make a profit. A multiagent-based approach is appropriate for eSCM because it shows many of the characteristics a SCM system should have. For this reason, we have proposed a multiagent-based eSCM model which configures a virtual SC and automates the SC activities: selling, purchasing, manufacturing, planning, inventory,...

  19. New displaying models of bibliographic data and resources: cataloguing/resource description and search results

    Directory of Open Access Journals (Sweden)

    Antonella Trombone

    2014-07-01

    Full Text Available The logical model outlined in Functional Requirements for Bibliographic Records in 1998 has led to a theoretical reflection on the function of data and their organization into catalogues that has not yet found stable expression in the representation of information. A consequence of the wide theoretical resonance of the FRBR report was the revision of cataloguing codes and standards for the electronic recording of bibliographic data. The first code that partly implements the FRBR model is the Italian one, published in 2009: the Italian cataloguing rules, REICAT. The revision of the Anglo-American cataloguing rules resulted in a new tool based on the FRBR model and not set up as a cataloguing code: RDA, Resource Description and Access, released in 2010. Alongside the changing models of information and content media, a new information environment is available to users, who are accustomed to using search engines as powerful, generalist information retrieval tools. Today's electronic catalogues are based on MARC formats for encoding information, aimed at sharing and exchanging bibliographic records. However, library data encoded in MARC exchange formats are invisible to search engines. Gradually, over the last few years, software modules devoted to cataloguing have been separated from those for consultation, with data visualization interfaces dedicated to users and aimed at simplifying search mechanisms. One of the open issues relating to the new display systems concerns the selection and presentation of data. The sorting order is based on relevance criteria, computed from scores that the software assigns to each record according to the weight or importance of the words entered in the search string. The new display systems for users' searches, the discovery platforms that simultaneously query heterogeneous databases for content and location, including OPACs, no longer use the languages of librarianship. The final display of search results

  20. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R [ORNL]; Nutaro, James J [ORNL]

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu and leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
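    The equation-based counterpart in such a comparison is typically a compartmental SIR system, dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I. A minimal forward-Euler sketch; beta, gamma and the initial conditions are illustrative, not the paper's 1918-flu calibration:

```python
def sir_simulate(beta, gamma, s0, i0, days, dt=0.1):
    """Integrate the SIR equations with forward Euler; population fractions."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# Illustrative parameters: basic reproduction number R0 = beta/gamma = 4.
s, i, r = sir_simulate(beta=0.4, gamma=0.1, s0=0.99, i0=0.01, days=100)
```

    An agent-based version replaces these aggregate rates with per-agent infection and recovery events, which is precisely the resolution difference the paper's model-selection discussion turns on.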