WorldWideScience

Sample records for model results based

  1. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    Science.gov (United States)

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized on a specific test bed. A mathematical model of a small-scale PV system, intended mainly for residential use, has been developed and its expected performance simulated. The proposed PV model is based on three parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against several benchmarks. The results show that the proposed model fits the experimental data, including the I-V characteristic curve, more accurately than the other models considered. The results of this study are valuable for the installation of grid-connected PV systems under fluctuating climatic conditions. PMID:27035575
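
    The three-parameter single-diode relation named in the abstract can be sketched directly. The sketch below assumes an explicit model without series or shunt resistance, and the parameter values (IL, Io, n, 36 series cells) are illustrative placeholders, not values from the paper.

```python
import math

K_BOLTZMANN = 1.380649e-23    # J/K
Q_ELECTRON = 1.602176634e-19  # C

def pv_current(v, il, io, n, t_cell=298.15, n_cells=36):
    """Module current from the three-parameter single-diode model:
    I = IL - Io * (exp(V / (n * Ns * Vt)) - 1),
    ignoring series and shunt resistance."""
    vt = K_BOLTZMANN * t_cell / Q_ELECTRON  # thermal voltage, ~25.7 mV
    return il - io * (math.exp(v / (n * n_cells * vt)) - 1.0)

def open_circuit_voltage(il, io, n, t_cell=298.15, n_cells=36):
    """Solve I(V) = 0 analytically: Voc = n * Ns * Vt * ln(IL/Io + 1)."""
    vt = K_BOLTZMANN * t_cell / Q_ELECTRON
    return n * n_cells * vt * math.log(il / io + 1.0)

# Illustrative parameter values (not taken from the paper):
IL, IO, N = 8.2, 1e-9, 1.3
isc = pv_current(0.0, IL, IO, N)       # short-circuit current equals IL here
voc = open_circuit_voltage(IL, IO, N)  # voltage at which I(V) crosses zero
```

    Sweeping `v` between 0 and `voc` reproduces the I-V characteristic curve the abstract compares against experiments.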

  3. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low-level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).
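
    As a hedged illustration of the Ebers-Moll formalism the abstract builds on, the sketch below evaluates the textbook npn Ebers-Moll terminal currents. The saturation currents and alpha_F are made-up values, and the paper's own expressions for the illuminated TJC are more elaborate (they add photo-generated terms).

```python
import math

VT = 0.02585  # thermal voltage at ~300 K, volts

def ebers_moll(v_be, v_bc, i_es=1e-14, i_cs=2e-14, alpha_f=0.99):
    """Classic Ebers-Moll terminal currents of an npn transistor.
    Reciprocity fixes alpha_r through alpha_f * I_ES = alpha_r * I_CS."""
    alpha_r = alpha_f * i_es / i_cs
    f = math.exp(v_be / VT) - 1.0  # forward (emitter-base) diode term
    r = math.exp(v_bc / VT) - 1.0  # reverse (collector-base) diode term
    i_e = i_es * f - alpha_r * i_cs * r
    i_c = alpha_f * i_es * f - i_cs * r
    return i_e, i_c

# Forward-active bias: V_BE > 0, V_BC < 0, so I_C/I_E is close to alpha_F.
ie, ic = ebers_moll(0.65, -5.0)
```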

  4. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  5. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    Science.gov (United States)

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles…

  6. Global Monthly CO2 Flux Inversion Based on Results of Terrestrial Ecosystem Modeling

    Science.gov (United States)

    Deng, F.; Chen, J.; Peters, W.; Krol, M.

    2008-12-01

    Most of our understanding of the sources and sinks of atmospheric CO2 has come from inverse studies of atmospheric CO2 concentration measurements. However, the number of currently available observation stations and our ability to simulate the diurnal planetary boundary layer evolution over continental regions essentially limit the number of regions that can be reliably inverted globally, especially over continental areas. In order to overcome these restrictions, a nested inverse modeling system was developed based on the Bayesian principle for estimating carbon fluxes of 30 regions in North America and 20 regions for the rest of the globe. Inverse modeling was conducted in monthly steps using CO2 concentration measurements from 2000 to 2005 with the following two models: (a) an atmospheric transport model (TM5) is used to generate the transport matrix, in which the diurnal variation of atmospheric CO2 concentration is considered to enhance the use of afternoon-hour average CO2 concentration measurements over the continental sites; (b) a process-based terrestrial ecosystem model (BEPS) is used to produce hourly carbon fluxes as the background of our inversion, which mitigates the limitation imposed by our inability to solve the inverse problem at high resolution. We will present our recent results achieved through a combination of bottom-up modeling with BEPS and top-down modeling based on TM5 driven by offline meteorological fields generated by the European Centre for Medium-Range Weather Forecasts (ECMWF).
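
    The Bayesian synthesis step underlying such inversions can be sketched as a linear Gaussian update of prior fluxes against concentration observations. All matrices below are toy placeholders (two regions, two observations), not quantities from the study.

```python
import numpy as np

def bayesian_flux_inversion(x_prior, B, H, y, R):
    """Linear Bayesian update used in synthesis inversions:
    x_post = x_prior + K (y - H x_prior), with gain
    K = B H^T (H B H^T + R)^-1, and posterior covariance B - K H B."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    A_post = B - K @ H @ B
    return x_post, A_post

# Toy two-region setup (illustrative numbers only):
x_prior = np.array([1.0, -0.5])          # prior regional fluxes
B = np.diag([0.5**2, 0.5**2])            # prior flux error covariance
H = np.array([[0.6, 0.4], [0.2, 0.8]])   # transport operator (flux -> conc.)
y = np.array([0.2, -0.4])                # observed concentration anomalies
R = np.diag([0.1**2, 0.1**2])            # observation error covariance

x_post, A_post = bayesian_flux_inversion(x_prior, B, H, y, R)
```

    The update both shrinks the flux uncertainties and moves the modeled concentrations toward the observations, which is the behaviour the nested system exploits region by region.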

  7. Exploring the uncertainties of early detection results: model-based interpretation of mayo lung project

    Directory of Open Access Journals (Sweden)

    Berman Barbara

    2011-03-01

    Abstract. Background: The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival since the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (an initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method: Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results: Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions: Our conclusion is that the hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. The model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.

  8. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    Science.gov (United States)

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. Our aim was to develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value CRC prediction models which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
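
    The AUC values quoted above can be computed from model risk scores with the rank-based (Mann-Whitney) definition: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with made-up scores:

```python
def auc_mann_whitney(case_scores, control_scores):
    """AUC as the fraction of case/control pairs where the case scores
    higher than the control; ties count as half a win."""
    wins = 0.0
    for s in case_scores:
        for t in control_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical risk scores from a fitted model (values are made up):
cases = [0.9, 0.8, 0.7, 0.55]
controls = [0.6, 0.5, 0.4, 0.3]
auc = auc_mann_whitney(cases, controls)  # 15 wins out of 16 pairs = 0.9375
```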

  9. Dynamic analysis of ITER tokamak. Based on results of vibration test using scaled model

    International Nuclear Information System (INIS)

    Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka

    2005-01-01

    Vibration experiments on support structures with flexible plates for the ITER major components, such as the toroidal field coil (TF coil) and vacuum vessel (VV), were performed using small-sized flexible plates, aiming to obtain basic mechanical characteristics such as the dependence of stiffness on loading angle. The experimental results were compared with analytical ones in order to establish an adequate analytical model for the ITER support structure with flexible plates. As a result, the bolted connection of the flexible plates to the base plate strongly affected the stiffness of the flexible plates. After studies of modeling the bolted connection, it was found that analytical results modeling the bolts with finite stiffness only in the axial direction, and infinite stiffness in the other directions, agree well with the experimental ones. Based on this, numerical analysis of the actual support structures of the ITER VV and TF coil was performed. The support structure composed of flexible plates and connection bolts was modeled as a spring with only two spring elements, simulating the in-plane and out-of-plane stiffness of the support structure with flexible plates, including the effect of the connection bolts. The stiffness of both spring models for the VV and TF coil agrees well with that of shell models simulating the actual structures, such as flexible plates and connection bolts, based on the experimental results. It was therefore found that the spring model, with only two values of stiffness, makes it possible to simplify the complicated support structure with flexible plates for dynamic analysis of the VV and TF coil. Using the proposed spring model, dynamic analyses of the VV and TF coil for ITER were performed to assess their integrity under the design earthquake. As a result, it was found that the maximum relative displacement of 8.6 mm between the VV and TF coil is much less than 100 mm, so that the integrity of the VV and TF coil of the

  10. Financial analysis and forecasting of the results of small businesses performance based on regression model

    Directory of Open Access Journals (Sweden)

    Svetlana O. Musienko

    2017-03-01

    Objective: to develop an economic-mathematical model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies. Methods: using comparative analysis, the article studies the existing approaches to the construction of company management models. Applying regression analysis and the least squares method, which is widely used for financial management of enterprises in Russia and abroad, the author builds a model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies, which can be used in the financial analysis and prediction of small enterprises' performance. Results: the article states the need to identify factors affecting financial management efficiency. The author analyzed scientific research and revealed the lack of comprehensive studies on the methodology for assessing small enterprises' management, while the methods used for large companies are not always suitable for the task. The systematized approaches of various authors to the formation of regression models describe the influence of certain factors on company activity. It is revealed that the resulting indicators in the studies were revenue, profit, or the company's relative profitability. The main drawback of most models is the mathematical, not economic, approach to the definition of the dependent and independent variables. Based on the analysis, it was determined that the most correct is a model of the dependence between revenue and total assets of the company using the decimal logarithm. The model was built using data on the activities of 507 small businesses operating in three spheres of economic activity. Using the presented model, it was proved that there is a direct dependence between sales proceeds and the main items of the asset balance, as well as differences in the degree of this effect depending on the economic activity of small
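
    The decimal-logarithm dependence between revenue and total assets described in the abstract amounts to a least-squares fit in log10 space. The sketch below uses synthetic firms obeying an exact power law, not the article's data on 507 small businesses.

```python
import numpy as np

def fit_loglog(assets, revenue):
    """Least-squares fit of log10(revenue) = a + b * log10(assets),
    i.e. a power law revenue = 10**a * assets**b."""
    x = np.log10(np.asarray(assets, dtype=float))
    y = np.log10(np.asarray(revenue, dtype=float))
    b, a = np.polyfit(x, y, 1)  # polyfit returns slope first, then intercept
    return a, b

def predict_revenue(assets, a, b):
    """Invert the log10 model back to revenue units."""
    return 10.0 ** (a + b * np.log10(np.asarray(assets, dtype=float)))

# Synthetic firms following revenue = 2 * assets**0.8 exactly:
assets = [10.0, 50.0, 200.0, 1000.0, 5000.0]
revenue = [2.0 * t**0.8 for t in assets]
a, b = fit_loglog(assets, revenue)  # recovers the generating coefficients
```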

  11. Pedestrian simulation model based on principles of bounded rationality: results of validation tests

    NARCIS (Netherlands)

    Zhu, W.; Timmermans, H.J.P.; Lo, H.P.; Leung, Stephen C.H.; Tan, Susanna M.L.

    2009-01-01

    Over the years, different modelling approaches to simulating pedestrian movement have been suggested. The majority of pedestrian decision models are based on the concept of utility maximization. To explore alternatives, we developed the heterogeneous heuristic model (HHM), based on principles of

  12. Deep Learning Based Solar Flare Forecasting Model. I. Results for Line-of-sight Magnetograms

    Science.gov (United States)

    Huang, Xin; Wang, Huaning; Xu, Long; Liu, Jinfu; Li, Rong; Dai, Xinghua

    2018-03-01

    Solar flares originate from the release of the energy stored in the magnetic field of solar active regions; the triggering mechanism for these flares, however, remains unknown. For this reason, conventional solar flare forecasting is essentially based on the statistical relationship between solar flares and measures extracted from observational data. In the current work, a deep learning method is applied to set up the solar flare forecasting model, in which forecasting patterns can be learned from line-of-sight magnetograms of solar active regions. In order to obtain a large amount of observational data to train the forecasting model and test its performance, a data set is created from line-of-sight magnetograms of active regions observed by SOHO/MDI and SDO/HMI from 1996 April to 2015 October and corresponding soft X-ray solar flares observed by GOES. The testing results of the forecasting model indicate that (1) the forecasting patterns can be automatically reached with the MDI data and they can also be applied to the HMI data; furthermore, these forecasting patterns are robust to the noise in the observational data; (2) the performance of the deep learning forecasting model is not sensitive to the given forecasting periods (6, 12, 24, or 48 hr); (3) the performance of the proposed forecasting model is comparable to that of the state-of-the-art flare forecasting models, even if the duration of the total magnetograms continuously spans 19.5 years. Case analyses demonstrate that the deep learning based solar flare forecasting model pays attention to areas with the magnetic polarity-inversion line or the strong magnetic field in magnetograms of active regions.

  13. Vulnerability of hydropower generation to climate change in China: Results based on Grey forecasting model

    International Nuclear Information System (INIS)

    Wang, Bing; Liang, Xiao-Jie; Zhang, Hao; Wang, Lu; Wei, Yi-Ming

    2014-01-01

    This paper analyzes the long-term relationships between hydropower generation, climate factors (precipitation), and hydropower generation capacity (installed capacity of hydropower stations) to quantify the vulnerability of renewable energy production in China for the case of hydropower generation. Furthermore, this study applies a Grey forecasting model to forecast precipitation in different provinces, and then sets up different scenarios for precipitation based on the IPCC Special Report on Emission Scenarios and results from the PRECIS (Providing Regional Climate projections for Impacts Studies) model. The most important result found in this research is the increasing hydropower vulnerability of the poorest regions and the main hydropower generation provinces of China to climate change. Other empirical results reveal that the impacts of climate change on the supply of hydropower generation in China will be noteworthy for society. Different scenarios have different effects on hydropower generation, of which the A2 scenario (pessimistic, high emission) has the largest. Meanwhile, the impacts of climate change on the hydropower generation of each province are distinctly different: the Southwest has higher vulnerability than the average level, while the central part has lower. - Highlights: • The hydropower vulnerability will be enlarged with the rapid increase of hydropower capacity. • Modeling the vulnerability of hydropower in different scenarios and different provinces. • The increasing hydropower vulnerability of the poorest regions to climate change. • The increasing hydropower vulnerability of the main hydropower generation provinces. • Rainfall pattern changes caused by climate change would be the reason for the increasing vulnerability
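
    The Grey forecasting model referred to here is commonly the GM(1,1) model. A minimal sketch of its standard fit-and-forecast procedure follows; the input series is synthetic, not provincial precipitation data.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey GM(1,1): fit the whitening equation dx1/dt + a*x1 = b on the
    accumulated series x1 = cumsum(x0), then forecast and de-accumulate."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])              # mean-generated background values
    A = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(A, x0[1:], rcond=None)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)      # de-accumulate to the original scale
    return x0_hat[len(x0):]                    # out-of-sample forecasts only

series = [100.0 * 1.05**i for i in range(6)]   # synthetic 5%-growth series
forecast = gm11_forecast(series, steps=1)
```

    GM(1,1) reproduces near-exponential trends from very short series, which is why it suits sparse provincial records; it is not appropriate for strongly oscillating data without preprocessing.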

  14. Encouraging Sustainable Transport Choices in American Households: Results from an Empirically Grounded Agent-Based Model

    Directory of Open Access Journals (Sweden)

    Davide Natalini

    2013-12-01

    The transport sector needs to go through an extended process of decarbonisation to counter the threat of climate change. Unfortunately, the International Energy Agency forecasts an enormous growth in the number of cars and greenhouse gas emissions by 2050. Two issues can thus be identified: (1) the need for a new methodology that could evaluate policy performances ex-ante, and (2) the need for more effective policies. To help address these issues, we developed an Agent-Based Model called Mobility USA aimed at: (1) testing whether this could be an effective approach in analysing ex-ante policy implementation in the transport sector; and (2) evaluating the effects of alternative policy scenarios on commuting behaviours in the USA. In particular, we tested the effects of two sets of policies, namely market-based and preference-change ones. The model results suggest that this type of agent-based approach provides a useful tool for testing policy interventions and their effectiveness.

  15. An Outcrop-based Detailed Geological Model to Test Automated Interpretation of Seismic Inversion Results

    NARCIS (Netherlands)

    Feng, R.; Sharma, S.; Luthi, S.M.; Gisolf, A.

    2015-01-01

    Previously, Tetyukhina et al. (2014) developed a geological and petrophysical model based on the Book Cliffs outcrops that contained eight lithotypes. For reservoir modelling purposes, this model is judged to be too coarse because in the same lithotype it contains reservoir and non-reservoir

  16. Effects of naloxone distribution to likely bystanders: Results of an agent-based model.

    Science.gov (United States)

    Keane, Christopher; Egan, James E; Hawk, Mary

    2018-05-01

    Opioid overdose deaths in the US rose dramatically in the past 16 years, creating an urgent national health crisis with no signs of immediate relief. In 2017, the President of the US officially declared the opioid epidemic to be a national emergency and called for additional resources to respond to the crisis. Distributing naloxone to community laypersons and people at high risk for opioid overdose can prevent overdose death, but optimal distribution methods have not yet been pinpointed. We conducted a sequential exploratory mixed methods design using qualitative data to inform an agent-based model to improve understanding of effective community-based naloxone distribution to laypersons to reverse opioid overdose. The individuals in the model were endowed with cognitive and behavioral variables and accessed naloxone via community sites such as pharmacies, hospitals, and urgent-care centers. We compared overdose deaths over a simulated 6-month period while varying the number of distribution sites (0, 1, and 10) and number of kits given to individuals per visit (1 versus 10). Specifically, we ran thirty simulations for each of thirteen distribution models and report average overdose deaths for each. The baseline comparator was no naloxone distribution. Our simulations explored the effects of distribution through syringe exchange sites with and without secondary distribution, which refers to distribution of naloxone kits by laypersons within their social networks and enables ten additional laypersons to administer naloxone to reverse opioid overdose. Our baseline model with no naloxone distribution predicted there would be 167.9 deaths in a six month period. A single distribution site, even with 10 kits picked up per visit, decreased overdose deaths by only 8.3% relative to baseline. However, adding secondary distribution through social networks to a single site resulted in 42.5% fewer overdose deaths relative to baseline. That is slightly higher than the 39
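
    A drastically simplified Monte Carlo sketch of the naloxone-coverage idea follows. This is not the authors' agent-based model: it has no social networks, distribution sites, or cognitive variables, and all probabilities are invented placeholders chosen only to show how coverage changes simulated deaths.

```python
import random

def simulate_overdoses(n_events=2000, kit_coverage=0.0,
                       p_death_untreated=0.5, p_death_treated=0.1, seed=42):
    """Toy simulation: each overdose is witnessed by one bystander who
    carries a naloxone kit with probability kit_coverage; naloxone lowers
    the per-event death probability. All rates are illustrative."""
    rng = random.Random(seed)
    deaths = 0
    for _ in range(n_events):
        has_kit = rng.random() < kit_coverage
        p_death = p_death_treated if has_kit else p_death_untreated
        if rng.random() < p_death:
            deaths += 1
    return deaths

baseline = simulate_overdoses(kit_coverage=0.0)       # no distribution
with_distribution = simulate_overdoses(kit_coverage=0.6)
```

    Even this toy version shows the qualitative effect reported above: raising bystander coverage (as secondary distribution through social networks does) sharply reduces simulated deaths relative to baseline.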

  17. Does folic acid supplementation prevent or promote colorectal cancer? Results from model-based predictions.

    Science.gov (United States)

    Luebeck, E Georg; Moolgavkar, Suresh H; Liu, Amy Y; Boynton, Alanna; Ulrich, Cornelia M

    2008-06-01

    Folate is essential for nucleotide synthesis, DNA replication, and methyl group supply. Low folate status has been associated with increased risks of several cancer types, suggesting a chemopreventive role of folate. However, recent findings on giving folic acid to patients with a history of colorectal polyps raise concerns about the efficacy and safety of folate supplementation and the long-term health effects of folate fortification. Results suggest that undetected precursor lesions may progress under folic acid supplementation, consistent with the role of folate in nucleotide synthesis and cell proliferation. To better understand the possible trade-offs between the protective effects of folic acid, due to decreased mutation rates, and possibly concomitant detrimental effects, due to increased cell proliferation, we used a biologically based mathematical model of colorectal carcinogenesis. We predict changes in cancer risk based on the timing of treatment start and the potential effect of folic acid on cell proliferation and mutation rates. Changes in colorectal cancer risk in response to folic acid supplementation are likely a complex function of treatment start, duration, and effect on cell proliferation and mutation rates. Predicted colorectal cancer incidence rates under supplementation are mostly higher than rates without folic acid supplementation unless supplementation is initiated early in life (before age 20 years). To the extent to which this model predicts reality, it indicates that the effect on cancer risk when starting folic acid supplementation late in life is small, yet mostly detrimental. Experimental studies are needed to provide direct evidence for this dual role of folate in colorectal cancer and to validate and improve the model predictions.

  18. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    In this paper, a proposed car-following driver model, taking into account features of both the compensatory and anticipatory models representing human pedal operation, has been verified by driving simulator experiments with several real drivers. The comparison of computer simulations performed with the identified model parameters against the experimental results confirms the correctness of this mathematical driver model and of the identified parameters. The driver model is then coupled to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamic response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car following, Driving simulator/hybrid electric vehicle (B1

  19. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE AND FREQUENCY DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    Directory of Open Access Journals (Sweden)

    Christopher Heine

    2014-08-01

    A detailed description of the properties of rubber parts is gaining in importance in current multi-body simulation models. One application example is a multi-body simulation of washing machine movement. Inside the washing machine there are different force transmission elements, which consist completely or partly of rubber. Rubber parts, or elastomers in general, usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are used to describe these properties. A method for characterization of the amplitude and frequency dependence of such a rheological model is presented within this paper. Within this method, the rheological model used can be reduced or expanded in order to capture various non-linear effects. An original contribution is the automated parameter identification, which is fully implemented in Matlab. The identified rheological models are intended for subsequent implementation in a multi-body model, which allows a significant enhancement of the overall model quality.
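
    As a minimal illustration of frequency-dependent rheological behaviour and automated parameter identification, the sketch below fits a single Kelvin-Voigt element (stiffness k, damper c) to synthetic dynamic-stiffness data. Real elastomer models with amplitude dependence need additional friction elements, and all values here are invented.

```python
import numpy as np

def dynamic_stiffness(omega, k, c):
    """Kelvin-Voigt element under harmonic excitation: complex stiffness
    K*(w) = k + i*w*c, so the measured amplitude is sqrt(k**2 + (w*c)**2)."""
    return np.sqrt(k**2 + (omega * c)**2)

def identify_parameters(omega, amplitude):
    """Least-squares identification of (k, c) from measured amplitudes,
    using the linear relation |K*|**2 = k**2 + c**2 * w**2."""
    A = np.column_stack([np.ones_like(omega), omega**2])
    (k2, c2), *_ = np.linalg.lstsq(A, amplitude**2, rcond=None)
    return np.sqrt(k2), np.sqrt(c2)

# Synthetic "measurement" generated from known parameters k=1000 N/m, c=5 N*s/m:
w = np.linspace(1.0, 100.0, 50)
amp = dynamic_stiffness(w, 1000.0, 5.0)
k_id, c_id = identify_parameters(w, amp)  # recovers the generating parameters
```

    Expanding the model (extra Maxwell branches, friction elements) adds columns to the regression matrix in the same way, which is what makes the identification easy to automate.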

  20. Experimental and modelling results of a parallel-plate based active magnetic regenerator

    DEFF Research Database (Denmark)

    Tura, A.; Nielsen, Kaspar Kirstein; Rowe, A.

    2012-01-01

    The performance of a permanent magnet magnetic refrigerator (PMMR) using gadolinium parallel plates is described. The configuration and operating parameters are described in detail. Experimental results are compared to simulations using an established two-dimensional model of an active magnetic

  1. Atmospheric greenhouse gases retrieved from SCIAMACHY: comparison to ground-based FTS measurements and model results

    Directory of Open Access Journals (Sweden)

    O. Schneising

    2012-02-01

    SCIAMACHY onboard ENVISAT (launched in 2002) enables the retrieval of global long-term column-averaged dry air mole fractions of the two most important anthropogenic greenhouse gases, carbon dioxide and methane (denoted XCO2 and XCH4). In order to assess the quality of the greenhouse gas data obtained with the recently introduced v2 of the scientific retrieval algorithm WFM-DOAS, we present validations with ground-based Fourier Transform Spectrometer (FTS) measurements and comparisons with model results at eight Total Carbon Column Observing Network (TCCON) sites, providing realistic error estimates of the satellite data. Such validation is a prerequisite to assess the suitability of data sets for their use in inverse modelling.

    It is shown that there are generally no significant differences between the carbon dioxide annual increases of SCIAMACHY and the assimilation system CarbonTracker (2.00 ± 0.16 ppm yr−1 compared to 1.94 ± 0.03 ppm yr−1 on global average). The XCO2 seasonal cycle amplitudes derived from SCIAMACHY are typically larger than those from TCCON, which are in turn larger than those from CarbonTracker. The absolute values of the northern hemispheric TCCON seasonal cycle amplitudes are closer to SCIAMACHY than to CarbonTracker, and the corresponding differences are not significant when compared with SCIAMACHY, whereas they can be significant for a subset of the analysed TCCON sites when compared with CarbonTracker. At Darwin we find discrepancies of the seasonal cycle derived from SCIAMACHY compared to the other data sets, which can probably be ascribed to occurrences of undetected thin clouds. Based on the comparison with the reference data, we conclude that the carbon dioxide data set can be characterised by a regional relative precision (mean standard deviation of the differences) of about 2.2 ppm and a relative accuracy (standard deviation of the mean differences

  2. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results

    International Nuclear Information System (INIS)

    Demichev, A.; Kryukov, A.; Rodionov, A.

    2002-01-01

    We propose an XML-based standard for formulation of field theoretical models. The goal of creation of such a standard is to provide a way for an unambiguous exchange and cross-checking of results of computer calculations in high energy physics. At the moment, the suggested standard implies that models under consideration are of the SM or MSSM type (i.e., they are just SM or MSSM, their submodels, smooth modifications or straightforward generalizations). (author)

  3. Modelling Inter-Particle Forces and Resulting Agglomerate Sizes in Cement-Based Materials

    DEFF Research Database (Denmark)

    Kjeldsen, Ane Mette; Geiker, Mette Rica

    2005-01-01

    The theory of inter-particle forces versus external shear in cement-based materials is reviewed. On this basis, calculations on maximum agglomerate size present after the combined action of superplasticizers and shear are carried out. Qualitative experimental results indicate that external shear ...

  4. Combustion synthesis of TiB2-based cermets: modeling and experimental results

    International Nuclear Information System (INIS)

    Martinez Pacheco, M.; Bouma, R.H.B.; Katgerman, L.

    2008-01-01

    TiB2-based cermets are prepared by combustion synthesis followed by a pressing stage in a granulate medium. Products obtained by combustion synthesis are characterized by a large remaining porosity (typically 50%). To produce dense cermets, a subsequent densification step is performed after the combustion process, while the reacted material is still hot. To design the process, numerical simulations are carried out and compared to experimental results. In addition, physical and electrical properties of the products related to electrical contact applications are evaluated. (orig.)

  5. Modelling an exploited marine fish community with 15 parameters - results from a simple size-based model

    NARCIS (Netherlands)

    Pope, J.G.; Rice, J.C.; Daan, N.; Jennings, S.; Gislason, H.

    2006-01-01

    To measure and predict the response of fish communities to exploitation, it is necessary to understand how the direct and indirect effects of fishing interact. Because fishing and predation are size-selective processes, the potential response can be explored with size-based models. We use a

  6. AN ANIMAL MODEL OF SCHIZOPHRENIA BASED ON CHRONIC LSD ADMINISTRATION: OLD IDEA, NEW RESULTS

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D.; Nichols, David E.

    2011-01-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as “clearly a paranoid state.” We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT2A receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. PMID:21352832

  7. An animal model of schizophrenia based on chronic LSD administration: old idea, new results.

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D; Nichols, David E

    2011-09-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as "clearly a paranoid state." We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT(2A) receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Spreading of intolerance under economic stress: Results from a reputation-based model

    Science.gov (United States)

    Martinez-Vaquero, Luis A.; Cuesta, José A.

    2014-08-01

    When a population is engaged in successive prisoner's dilemmas, indirect reciprocity through reputation fosters cooperation through the emergence of moral and action rules. A simplified model has recently been proposed where individuals choose between helping others or not and are judged good or bad for it by the rest of the population. The reputation so acquired will condition future actions. In this model, eight strategies (referred to as "leading eight") enforce a high level of cooperation, generate high payoffs, and are therefore resistant to invasions by other strategies. Here we show that, by assigning each individual one of two labels that peers can distinguish (e.g., political ideas, religion, and skin color) and allowing moral and action rules to depend on the label, intolerant behaviors can emerge within minorities under sufficient economic stress. We analyze the sets of conditions where this can happen and also discuss the circumstances under which tolerance can be restored. Our results agree with empirical observations that correlate intolerance and economic stress and predict a correlation between the degree of tolerance of a population and its composition and ethical stance.
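    The reputation mechanism underlying this class of models can be illustrated with a deliberately stripped-down sketch: plain image scoring in a one-shot donation game. The leading-eight moral rules, the group labels and the economic-stress parameter of the paper are all omitted here, and every name and value below is illustrative only.

```python
import random

def donation_game(n_agents=50, rounds=5000, benefit=2.0, cost=1.0, seed=7):
    # Plain image scoring: a donor helps only a recipient currently judged
    # "good"; helping is judged good, refusing is judged bad.
    random.seed(seed)
    good = [i % 2 == 0 for i in range(n_agents)]  # start with mixed reputations
    payoff = [0.0] * n_agents
    helps = 0
    for _ in range(rounds):
        donor, recipient = random.sample(range(n_agents), 2)
        if good[recipient]:
            payoff[donor] -= cost
            payoff[recipient] += benefit
            good[donor] = True
            helps += 1
        else:
            good[donor] = False
    # fraction of encounters that ended in cooperation, and mean payoff
    return helps / rounds, sum(payoff) / n_agents
```

    The paper's model extends this by letting both the action rule and the judgment rule depend on the labels of donor and recipient, which is what allows intolerant strategies to invade under economic stress.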

  9. Comparison of rate theory based modeling calculations with the surveillance test results of Korean light water reactors

    International Nuclear Information System (INIS)

    Lee, Gyeong Geun; Lee, Yong Bok; Kim, Min Chul; Kwon, Junh Yun

    2012-01-01

    Neutron irradiation of reactor pressure vessel (RPV) steels causes a decrease in fracture toughness and an increase in yield strength while in service. It is generally accepted that the growth of point defect clusters (PDC) and copper-rich precipitates (CRP) drives the radiation hardening of RPV steels. A number of models have been proposed to account for the embrittlement of RPV steels. Rate-theory-based modeling mathematically describes the evolution of radiation-induced microstructures in ferritic steels under neutron irradiation. In this work, we compared rate-theory-based modeling calculations with the surveillance test results of Korean Light Water Reactors (LWRs).
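    Rate-theory models of this kind integrate balance equations for defect cluster densities under irradiation. A deliberately minimal sketch follows: a single forward-Euler balance for one cluster population, with all coefficients, units and values illustrative placeholders rather than numbers from the paper.

```python
def precipitate_density(flux=1e-7, k_nucleation=1e-3, k_sink=1e-2,
                        dt=1.0, steps=5000):
    # Toy balance equation dN/dt = k_nucleation*flux - k_sink*N,
    # integrated with forward Euler; coefficients are illustrative only.
    n = 0.0
    for _ in range(steps):
        n += dt * (k_nucleation * flux - k_sink * n)
    return n

# the analytic steady state is k_nucleation * flux / k_sink = 1e-8
```

    A real rate-theory calculation couples many such equations (vacancies, interstitials, solute clusters of every size) and feeds the resulting microstructure into a hardening correlation.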

  10. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE- AND FREQUENCY-DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    OpenAIRE

    Christopher Heine; Markus Plagemann

    2014-01-01

    A detailed description of the properties of rubber parts is gaining in importance in current multi-body simulation models. One application example is a multi-body simulation of the washing machine movement. Inside the washing machine, there are different force transmission elements, which consist completely or partly of rubber. Rubber parts or, more generally, elastomers usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are u...

  11. Experimental checking results of mathematical modeling of the radiation environment sensor based on diamond detectors

    International Nuclear Information System (INIS)

    Gladchenkov, E V; Kolyubin, V A; Nedosekin, P G; Zaharchenko, K V; Ibragimov, R F; Kadilin, V V; Tyurin, E M

    2017-01-01

    A series of experiments was conducted to verify the mathematical model of the radiation environment sensor. The theoretical count rate of beta particles from a 90Sr-90Y source registered by the radiation environment sensor was compared with the experimental one. The theoretical (calculated) count rate of beta particles was obtained using the developed mathematical model of the sensor. The deviation of the calculated beta-particle count rate from the experimental values does not exceed 10%. (paper)

  12. Model based monitoring of urban traffic noise : Field test results for road side and shielded sides

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Lutgendorf, D.; Wessels, P.W.; Basten, T.G.H.

    2012-01-01

    Urban traffic noise can be a major issue for people and (local) governments. On a local scale the use of measurements is increasing, especially when measures or changes to the local infrastructure are proposed. However, measuring (only) urban traffic noise is a challenging task. By using a model

  13. Does folic-acid supplementation prevent or promote colorectal cancer? Results from model-based predictions

    OpenAIRE

    Luebeck, EG; Moolgavkar, SH; Liu, AY; Boynton, A; Ulrich, CM

    2008-01-01

    Folate is essential for nucleotide synthesis, DNA-replication and methyl-group supply. Low-folate status has been associated with increased risks of several cancer types, suggesting a chemopreventive role of folate. However, recent findings on giving folic acid (FA) to patients with a history of colorectal polyps raise concerns about the efficacy and safety of folate supplementation and the long-term health effects of folate fortification. Results suggest that undetected precursor lesions may...

  14. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    Science.gov (United States)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black boxes, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.
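    The trend-and-seasonality analysis suggested above can be sketched for a monthly hydrological series: an ordinary least-squares slope plus calendar-month mean deviations. This is a minimal illustration of the general idea, not the authors' method.

```python
def trend_and_seasonality(monthly):
    # Ordinary least-squares slope per time step for a monthly series,
    # then mean calendar-month deviations of the detrended series.
    n = len(monthly)
    t_mean = (n - 1) / 2.0
    y_mean = sum(monthly) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(monthly))
             / sum((t - t_mean) ** 2 for t in range(n)))
    detrended = [y - slope * t for t, y in enumerate(monthly)]
    base = sum(detrended) / n
    # deviation of each calendar month from the detrended mean
    seasonal = [sum(detrended[m::12]) / len(detrended[m::12]) - base
                for m in range(12)]
    return slope, seasonal
```

    Comparing the slope and seasonal pattern across stations with different land use or soil type is exactly the kind of comparison the abstract argues is easier to communicate than model output.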

  15. An Investigation of the Influence of Leadership and Processes on Basic Performance Results Using a Decision Model Based on EFQM

    Directory of Open Access Journals (Sweden)

    Ahmet Talat İnan

    2013-06-01

    Full Text Available The EFQM Excellence Model is a quality approach from which companies benefit in achieving success. It is an assessment tool that helps to determine the competences and missing aspects on the way to excellence. In this study, based on the EFQM Excellence Model, the influence of the leadership and processes variables on basic performance results was investigated for a firm providing maintenance and repair services to a large-scale company. A survey covering the company's employees and managers was conducted. The data obtained from this survey were analyzed with the SPSS 16.0 statistics software by means of factor analysis, reliability analysis, and correlation and regression analysis. The relations between the variables were evaluated taking the results of the analysis into account.

  16. Modelling of plasma-based dry reforming: how do uncertainties in the input data affect the calculation results?

    Science.gov (United States)

    Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie

    2018-05-01

    One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
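    The uncertainty propagation described above — sampling the rate coefficients from a log-normal distribution and reading confidence bounds off the distribution of model outputs — can be sketched as follows. The toy "model" and all numbers are placeholders, not the paper's plasma chemistry set.

```python
import math
import random

def propagate(k_nominal, uncertainty_factor, model, n_samples=2000, seed=42):
    # Draw each rate coefficient from a log-normal distribution centred on
    # its nominal value and collect the spread of the model output.
    random.seed(seed)
    sigma = math.log(uncertainty_factor)
    outputs = sorted(
        model([k * math.exp(random.gauss(0.0, sigma)) for k in k_nominal])
        for _ in range(n_samples))
    # roughly a 70% confidence interval: 15th to 85th percentile
    return outputs[int(0.15 * n_samples)], outputs[int(0.85 * n_samples)]

# toy "conversion" model: proportional to the product of two coefficients
lo, hi = propagate([1e-9, 2e-9], 1.5, model=lambda k: k[0] * k[1] * 1e18)
```

    Ranking reactions by how strongly their sampled coefficient correlates with the output would give the sensitivity information the authors use to identify the dominant sources of uncertainty.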

  17. SAT-MAP-CLIMATE project results[SATellite base bio-geophysical parameter MAPping and aggregation modelling for CLIMATE models

    Energy Technology Data Exchange (ETDEWEB)

    Bay Hasager, C.; Woetmann Nielsen, N.; Soegaard, H.; Boegh, E.; Hesselbjerg Christensen, J.; Jensen, N.O.; Schultz Rasmussen, M.; Astrup, P.; Dellwik, E.

    2002-08-01

    Earth Observation (EO) data from imaging satellites are analysed with respect to albedo, land and sea surface temperatures, land cover types and vegetation parameters such as the Normalized Difference Vegetation Index (NDVI) and the leaf area index (LAI). The observed parameters are used in the DMI-HIRLAM-D05 weather prediction model in order to improve the forecasting. The effect of introducing actual sea surface temperatures from NOAA AVHRR, compared to climatological mean values, is a more pronounced land-sea breeze effect, which is also observable in field observations. The albedo maps from NOAA AVHRR are rather similar to the climatological mean values, so for the HIRLAM model this is insignificant, yet most likely of some importance in the HIRHAM regional climate model. Land cover type maps are assigned local roughness values determined from meteorological field observations. Only maps with a spatial resolution around 25 m can adequately map the roughness variations of the typical patch size distribution in Denmark. A roughness map covering Denmark is aggregated (i.e. area-averaged non-linearly) by a microscale aggregation model that takes the non-linear turbulent response of each roughness step change between patches in an arbitrary pattern into account. The effective roughnesses are calculated on a 15 km by 15 km grid for the HIRLAM model. The effect of hedgerows is included as an added roughness effect as a function of hedge density mapped from a digital vector map. Introducing the new effective roughness maps into the HIRLAM model appears to remedy the seasonal wind speed bias over land and sea in spring. A new parameterisation of the effective roughness for scalar surface fluxes is developed and tested on synthetic data. Further, a method for estimating evapotranspiration from albedo, surface temperatures and NDVI is successfully compared to field observations. The HIRLAM predictions of water vapour at 12 GMT are used for atmospheric correction of
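    Non-linear area-averaging of roughness lengths can be illustrated with a common textbook approximation: average the neutral drag coefficient, proportional to 1/ln²(h/z₀), over the patches at a blending height h, then invert. This is a generic sketch, not the microscale aggregation model used in the project, and all values are illustrative.

```python
import math

def effective_roughness(patches, blending_height=60.0):
    # Non-linear area average: average the neutral drag coefficient
    # ~ 1/ln^2(h/z0) over the patches at a blending height h, then invert.
    # patches: list of (z0 in metres, area fraction); fractions sum to 1.
    mean_drag = sum(frac / math.log(blending_height / z0) ** 2
                    for z0, frac in patches)
    return blending_height * math.exp(-1.0 / math.sqrt(mean_drag))

# 70% smooth grassland, 30% rougher hedged/forested patches (made-up values)
z0_eff = effective_roughness([(0.03, 0.7), (0.5, 0.3)])
```

    Because drag depends on z₀ logarithmically, the effective value lands between the patch values but well above their geometric mean — the reason a simple linear average of roughness maps is not adequate.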

  18. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) has produced public domain datasets of such indices for data from a variety of sources, including output from global climate models (GCM) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and can be more or less biased by the adopted algorithm. Such biases will in turn be propagated to the final results of the indices. CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, some problems remain in the CLIMDEX datasets, namely the overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases, and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addresses the aforementioned issues and comes out with consistent results.
The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP
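    The exceedance-rate indices can be illustrated with a much-simplified TX90p computation: a single percentile from the pooled base period, without the calendar-day windows, bootstrap resampling or quantile regression discussed above. The synthetic data are purely illustrative.

```python
def tx90p(daily_tmax_by_year, base_years):
    # Percentile from the pooled base-period data (simplified: no 5-day
    # calendar windows, no bootstrap), then per-year exceedance rate in %.
    base = sorted(t for y in base_years for t in daily_tmax_by_year[y])
    q90 = base[int(0.9 * len(base))]
    return {year: 100.0 * sum(t > q90 for t in temps) / len(temps)
            for year, temps in daily_tmax_by_year.items()}

# synthetic example: a base year and a uniformly warmer year
rates = tx90p({2000: list(range(365)),
               2001: [t + 50 for t in range(365)]}, base_years=[2000])
```

    The biases the abstract describes enter precisely in this percentile step: estimating q90 per calendar day from short, autocorrelated windows inflates the sampling noise that the semiparametric quantile regression is designed to remove.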

  19. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery depends on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are considerations for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation `atlas' containing potential deformation solutions derived from a biomechanical model that account for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory `atlas' solution fit to best match the measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, the subsurface shift of targets between the preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the reported results are encouraging.
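    The inverse step — finding the combination of precomputed atlas deformations that best matches sparse surface measurements — can be sketched as a constrained least-squares fit. The projected-gradient solver with nonnegative, normalized weights below is a generic illustration, not the authors' solver, and the matrices are toy data.

```python
import numpy as np

def fit_atlas_weights(atlas_surface, measured, n_iter=5000, lr=0.01):
    # atlas_surface: (m measurements x n atlas solutions) matrix of
    # predicted surface displacements; measured: length-m vector.
    # Projected gradient descent keeps the weights nonnegative and
    # normalized, i.e. a convex combination of atlas solutions.
    m, n = atlas_surface.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        grad = atlas_surface.T @ (atlas_surface @ w - measured)
        w = np.clip(w - lr * grad, 0.0, None)
        w /= w.sum()
    return w
```

    The same weights would then be applied to the volumetric atlas solutions to produce the deformation field that updates the preoperative image.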

  20. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    Science.gov (United States)

    Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team

    2017-12-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands establishing reliable profile control routines in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits the knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based solely on RAPTOR model predictions, due to the absence of internal current density measurements in TCV. These results encourage further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
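    The receding-horizon idea behind a model predictive controller — optimize a plan over a short horizon subject to actuator limits, apply only its first input, repeat — can be illustrated on a scalar toy system with bounded inputs. The real controller uses RAPTOR-based models and constrained optimization; everything here, including the brute-force enumeration, is illustrative.

```python
import itertools

def mpc_step(x, target, candidates=(-1.0, 0.0, 1.0), horizon=3, a=0.9, b=0.5):
    # Enumerate every bounded input sequence over the horizon, simulate the
    # model x' = a*x + b*u, and apply only the first input of the best plan.
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(candidates, repeat=horizon):
        xp, cost = x, 0.0
        for u in seq:
            xp = a * xp + b * u
            cost += (xp - target) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# closed loop: drive x from 0 toward a target of 2 with inputs in {-1, 0, 1}
x = 0.0
for _ in range(30):
    x = 0.9 * x + 0.5 * mpc_step(x, target=2.0)
```

    Because the actuator limits are built into the candidate set, the planned trajectory never assumes inputs the system cannot deliver — the same reason the TCV controller achieves fast target transitions without overshoot.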

  1. Implications of the Abolition of Milk Quota System for Polish Agriculture – Simulation Results Based on the AGMEMOD Model

    Directory of Open Access Journals (Sweden)

    Mariusz Hamulczuk

    2009-09-01

    Full Text Available The objective of the study was to assess the economic effects of the dairy policy reform sanctioned by the CAP Health Check on the agricultural market in Poland. The paper presents a theoretical study of the production control program as well as a model-based quantitative analysis of the implications of the reform for the agricultural markets. The partial equilibrium model AGMEMOD was used for the simulation. The results obtained indicate that the expansion and subsequent elimination of the milk quota system lead to growth of milk production and consumption in Poland, which confirms the hypothesis derived from the theoretical study. As a consequence, growth in the production of most dairy products and a decrease in their prices are expected. As the growth of dairy consumption is smaller than the growth of milk production, an increase of self-sufficiency in the dairy market is predicted. The comparison of the scale of price adjustment resulting from the dairy reform with the market price changes observed recently leads to the conclusion that global market factors will probably be more important for the future development of milk production and prices in Poland than the milk quota abolition. Nevertheless, the reform constitutes a significant change in business conditions for producers and consumers of milk and dairy products. As a consequence, milk production will become more market based, as far as market prices, production costs and milk yields are concerned. Simulation results from the AGMEMOD model confirm the opinion of other authors that the abolition of milk quotas will lead to a decline in dairy farmer income. The main beneficiaries of the reform would be consumers, who could take advantage of the decline in prices of dairy products.
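    The qualitative effect of removing a binding production quota can be reproduced with a minimal linear partial-equilibrium sketch. All curves and numbers below are illustrative, not AGMEMOD's.

```python
def equilibrium(supply_a, supply_b, demand_a, demand_b, quota=None):
    # Linear curves: supply q = supply_a + supply_b * p (capped by quota),
    # demand q = demand_a - demand_b * p; returns the clearing (price, qty).
    p = (demand_a - supply_a) / (supply_b + demand_b)
    q = supply_a + supply_b * p
    if quota is not None and q > quota:
        q = quota                              # the production quota binds
        p = (demand_a - quota) / demand_b      # price consumers pay for it
    return p, q

p_quota, q_quota = equilibrium(2.0, 1.0, 12.0, 1.0, quota=5.0)
p_free, q_free = equilibrium(2.0, 1.0, 12.0, 1.0)
```

    With these toy curves, abolishing the quota moves the market from (price 7, quantity 5) to (price 5, quantity 7): output grows and prices fall, matching the direction of the simulation results above, with the price decline accruing to consumers at the expense of producer income.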

  2. Effects of Problem-Based Learning Model versus Expository Model and Motivation to Achieve for Student's Physic Learning Result of Senior High School at Class XI

    Science.gov (United States)

    Prayekti

    2016-01-01

    "Problem-based learning" (PBL) is one of an innovative learning model which can provide an active learning to student, include the motivation to achieve showed by student when the learning is in progress. This research is aimed to know: (1) differences of physic learning result for student group which taught by PBL versus expository…

  3. Using Evidence Based Practice in LIS Education: Results of a Test of a Communities of Practice Model

    Directory of Open Access Journals (Sweden)

    Joyce Yukawa

    2010-03-01

    Full Text Available Objective - This study investigated the use of a communities of practice (CoP) model for blended learning in library and information science (LIS) graduate courses. The purposes were to: (1) test the model's efficacy in supporting student growth related to core LIS concepts, practices, professional identity, and leadership skills, and (2) develop methods for formative and summative assessment using the model. Methods - Using design-based research principles to guide the formative and summative assessments, pre-, mid-, and post-course questionnaires were constructed to test the model and administered to students in three LIS courses taught by the author. Participation was voluntary and anonymous. A total of 34 students completed the three courses; the response rate for the questionnaires ranged from 47% to 95%. The pre-course questionnaire addressed attitudes toward technology and the use of technology for learning. The mid-course questionnaire addressed strengths and weaknesses of the course and suggestions for improvement. The post-course questionnaire addressed what students valued about their learning and any changes in attitude toward technology for learning. Data were analyzed on three levels. Micro-level analysis addressed technological factors related to usability and participant skills and attitudes. Meso-level analysis addressed social and pedagogical factors influencing community learning. Macro-level analysis addressed CoP learning outcomes, namely, knowledge of core concepts and practices, and the development of professional identity and leadership skills. Results - The students can be characterized as adult learners who were neither early nor late adopters of technology. At the micro-level, responses indicate that the online tools met high standards of usability and effectively supported online communication and learning. Moreover, the increase in positive attitudes toward the use of technology for learning at

  4. Applying 3-PG, a simple process-based model designed to produce practical results, to data from loblolly pine experiments

    Science.gov (United States)

    Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand

    2001-01-01

    3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...

  5. Developing confidence in a coupled TH model based on the results of experiment by using engineering scale test facility, 'COUPLE'

    International Nuclear Information System (INIS)

    Fujisaki, Kiyoshi; Suzuki, Hideaki; Fujita, Tomoo

    2008-03-01

    It is necessary to understand quantitative changes of near-field conditions and processes over time and space when modeling the near-field evolution after emplacement of engineered barriers. However, the coupled phenomena in the near field are complicated, because thermal, hydraulic, mechanical and chemical processes interact with each other. The question is, therefore, whether the applied model represents the coupled behavior adequately or not. In order to develop confidence in the modeling, it is necessary to compare model predictions with the results of coupled-behavior experiments in the laboratory or in situ. In this report, we evaluated the applicability of a coupled T-H model under simulated near-field conditions against the results of a coupled T-H laboratory experiment. As a result, it has been shown that the fit of the model to the measured data is reasonable under these conditions. (author)

  6. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  7. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    Directory of Open Access Journals (Sweden)

    Morris Denise

    2007-09-01

    Full Text Available Abstract Background The first step in handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far, the possibility of and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic registration of clinical HP procedures in daily practice. Methods The multicentre project was carried out in 19 departments/hospitals in 6 countries in a clinical setup. The study consisted of three parts, in accordance with the objectives. A: Individual test. 20 consecutive medical records from each participating department/hospital were coded by the coding specialists at the local department/hospital exclusively (n = 5,529 of 5,700 possible tests in total). B: Common test. 14 standardized medical records were coded by all the specialists from 17 departments/hospitals, who returned 3,046 of 3,570 tests. C: Specialist evaluation. The specialists from the 19 departments/hospitals evaluated whether the codes were useful, applicable and sufficient for the registration in their own department/hospital (239 of 285). Results A: In 97 to 100% of the local patient pathways the specialists were able to evaluate whether there was documentation of HP activities in the medical record to be coded. B: Inter-rater reliability on the use of the codes was 93% (57 to 100%) and 71% (31
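    A pairwise percent-agreement figure such as the inter-rater reliability reported above can be computed as follows. This is a generic sketch with hypothetical codings; the study's exact statistic may differ.

```python
from itertools import combinations

def percent_agreement(codings):
    # codings: one list of assigned codes per rater, all for the same
    # sequence of records; returns pairwise percent agreement.
    agree = total = 0
    for r1, r2 in combinations(codings, 2):
        agree += sum(a == b for a, b in zip(r1, r2))
        total += min(len(r1), len(r2))
    return 100.0 * agree / total

# two hypothetical raters coding four records
score = percent_agreement([["A", "B", "C", "A"], ["A", "B", "D", "A"]])
```

    Raw percent agreement does not correct for agreement expected by chance; chance-corrected statistics such as Cohen's kappa are often reported alongside it.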

  8. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (elevations range from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipality areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), and the median ks (10⁻⁶ m/s) value, are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating input maps of parameters for HIRESS (static data). Other static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
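
    HIRESS itself is not documented in this abstract, but distributed slope-stability simulators of this kind typically evaluate an infinite-slope factor of safety on each grid cell, with root cohesion added to the effective cohesion as described above. A minimal sketch, with illustrative parameter values drawn from the measured ranges:

```python
import math

def factor_of_safety(c_eff, c_root, phi_deg, gamma, z, beta_deg, m):
    """Infinite-slope factor of safety with root cohesion.

    c_eff, c_root: effective and root cohesion [Pa]; phi_deg: effective
    friction angle [deg]; gamma: soil unit weight [N/m^3]; z: soil depth [m];
    beta_deg: slope angle [deg]; m: fraction of the soil column saturated.
    """
    gamma_w = 9810.0  # unit weight of water [N/m^3]
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    u = m * gamma_w * z * math.cos(beta) ** 2            # pore water pressure
    resisting = c_eff + c_root + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative cell: c' = 5 kPa, root cohesion 2 kPa, phi' = 30 deg,
# 35 deg slope, 1.5 m soil depth, half-saturated column
fs = factor_of_safety(5000.0, 2000.0, 30.0, 18000.0, 1.5, 35.0, 0.5)
```

    A cell is flagged unstable when the factor of safety drops below 1; raising the saturation fraction m (e.g. during heavy rainfall) lowers it.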

  9. The ITER magnets: Preparation for full size construction based on the results of the model coil programme

    International Nuclear Information System (INIS)

    Huguet, M.

    2003-01-01

    The ITER magnets are long lead-time items and the preparation of their construction is the subject of a major and coordinated effort of the ITER International Team and Participant Teams. The results of the ITER model coil programme constitute the basis and the main source of data for the preparation of the technical specifications for the procurement of the ITER magnets. A review of the salient results of the ITER model coil programme is given and the significance of these results for the preparation of full size industrial production is explained. The model coil programme has confirmed the validity of the design and the manufacturer's ability to produce the coils with the required quality level. The programme has also allowed the optimisation of the conductor design and the identification of further development which would lead to cost reductions of the toroidal field coil case. (author)

  10. Determination of High-Frequency Current Distribution Using EMTP-Based Transmission Line Models with Resulting Radiated Electromagnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mork, B; Nelson, R; Kirkendall, B; Stenvig, N

    2009-11-30

    Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.
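
    Once the per-segment current phasors are available from the EMTP line model, the radiated field can be approximated by superposing Hertzian-dipole far fields, one per segment. A simplified sketch (z-directed segments, far-field theta-component only; the geometry and values are illustrative, not from the paper):

```python
import cmath
import math

ETA0 = 376.73  # free-space wave impedance [ohm]
C0 = 2.998e8   # speed of light [m/s]

def e_field(segments, freq, obs):
    """Approximate |E| at observation point obs by superposing the
    far-field theta-component of Hertzian dipoles, one per segment.

    segments: list of (x, y, z, I_phasor, dl) with z-directed currents.
    freq: frequency [Hz]; obs: (x, y, z) observation point [m].
    """
    k = 2 * math.pi * freq / C0
    total = 0 + 0j
    for (x, y, z, current, dl) in segments:
        dx, dy, dz = obs[0] - x, obs[1] - y, obs[2] - z
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        sin_theta = math.sqrt(dx * dx + dy * dy) / r  # angle from segment axis
        total += (1j * ETA0 * k * current * dl * sin_theta
                  * cmath.exp(-1j * k * r) / (4 * math.pi * r))
    return abs(total)

# One 1 m segment carrying 1 A at 1 MHz, observed broadside at 1 km
e = e_field([(0.0, 0.0, 0.0, 1.0 + 0j, 1.0)], 1.0e6, (1000.0, 0.0, 0.0))
```

    The phasor sum captures the interference between segments that makes the detailed current distribution, rather than a single lumped current, necessary for field prediction.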

  11. On-line monitoring and modelling based process control of high rate nitrification - lab scale experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Pirsing, A. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Wiesmann, U. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Kelterbach, G. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Schaffranietz, U. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Roeck, H. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Eichner, B. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Szukal, S. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Schulze, G. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie

    1996-09-01

    This paper presents a new concept for the control of nitrification in highly polluted waste waters. The approach is based on mathematical modelling. To determine the substrate degradation rates of the microorganisms involved, a mathematical model using gas measurement is used. A fuzzy-controller maximises the capacity utilisation efficiencies. The experiments carried out in a lab-scale reactor demonstrate that even with highly varying ammonia concentrations in the influent, the nitrogen concentrations in the effluent can be kept within legal limits. (orig.). With 11 figs.
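
    The abstract does not specify the fuzzy controller's rule base, but the general mechanism, membership functions over the measured nitrogen concentration and a defuzzified flow adjustment, can be sketched as follows (triangular memberships, weighted-average defuzzification; all thresholds and actions are illustrative, not the authors' rules):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flow_adjustment(nh4_effluent):
    """Illustrative fuzzy rule base for load maximisation:
    LOW effluent NH4 -> increase influent flow, OK -> hold, HIGH -> decrease.
    Returns a relative flow change in percent.
    """
    mu_low = tri(nh4_effluent, -5.0, 0.0, 5.0)
    mu_ok = tri(nh4_effluent, 2.0, 7.0, 12.0)
    mu_high = tri(nh4_effluent, 9.0, 20.0, 31.0)
    actions = {+10.0: mu_low, 0.0: mu_ok, -10.0: mu_high}  # % flow change
    den = sum(actions.values())
    return sum(a * m for a, m in actions.items()) / den if den else 0.0
```

    For example, a low effluent concentration of 1 mg/l fires only the "increase flow" rule, pushing the reactor toward full capacity utilisation, which is the stated goal of the controller.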

  12. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    Science.gov (United States)

    Bordogna, Clelia María; Albano, Ezequiel V.

    2007-02-01

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.
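
    The competition between a strong leader and a uniform mass-media field can be caricatured in a few lines: agents on a ring adopt the opinion with the larger total social impact, where neighbour influence decays with distance squared, one strong leader supports opinion +1 and the media field supports -1. All parameter values below are illustrative, not those of the paper:

```python
import random

def simulate(n=50, steps=30, leader_strength=25.0, media_field=0.5, seed=1):
    """Minimal sketch of a Latané-style social impact model on a ring.

    Opinions are +1 (leader's view) or -1 (media's view). Each step, every
    agent (except the fixed leader at site 0) adopts the sign of the total
    impact: neighbour strength / distance^2, plus the uniform media field.
    """
    rng = random.Random(seed)
    opinion = [rng.choice([-1, 1]) for _ in range(n)]
    opinion[0] = 1  # the leader holds opinion +1 and never changes
    for _ in range(steps):
        new = opinion[:]
        for i in range(1, n):
            impact = -media_field  # the media supports opinion -1
            for j in range(n):
                if j == i:
                    continue
                d = min(abs(i - j), n - abs(i - j))  # ring distance
                strength = leader_strength if j == 0 else 1.0
                impact += strength * opinion[j] / d ** 2
            new[i] = 1 if impact > 0 else -1
        opinion = new
    return opinion

final = simulate()
print(sum(1 for o in final if o == 1), "agents share the leader's opinion")
```

    Sweeping leader_strength against media_field in such a toy model reproduces the qualitative finding above: in small groups a strong leader can locally dominate the media field.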

  13. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    International Nuclear Information System (INIS)

    Bordogna, Clelia Maria; Albano, Ezequiel V

    2007-01-01

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.

  14. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical countercurrent. A review of, and comparison with, other models in the literature regarding (i) are also given.
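
    Assessing forecast skill against altimetry typically reduces to a few summary statistics such as root-mean-square error and correlation between the modelled and observed series. A sketch with toy sea-surface-height values (the numbers are invented, not ATOP output):

```python
import math

def skill(model, obs):
    """RMSE and Pearson correlation between a modelled and an observed
    series (e.g. sea-surface height sampled along an altimeter track)."""
    n = len(model)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    var = math.sqrt(sum((m - mm) ** 2 for m in model)
                    * sum((o - mo) ** 2 for o in obs))
    corr = cov / var if var else float("nan")
    return rmse, corr

# Toy SSH anomalies [m] at four track points
rmse, corr = skill([0.10, 0.05, -0.02, 0.07], [0.12, 0.04, -0.05, 0.08])
```

    Operational evaluations usually add anomaly correlation against a long-term mean and persistence baselines, but the same two-statistic core applies.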

  15. Channels Coordination Game Model Based on Result Fairness Preference and Reciprocal Fairness Preference: A Behavior Game Forecasting and Analysis Method

    OpenAIRE

    Ding, Chuan; Wang, Kaihong; Huang, Xiaoying

    2014-01-01

    In a distribution channel, channel members are not always self-interested, but altruistic in some conditions. Based on this assumption, this paper adopts a behavior game method to analyze and forecast channel members’ decision behavior based on result fairness preference and reciprocal fairness preference by embedding a fair preference theory in channel research of coordination. The behavior game forecasts that a channel can achieve coordination if channel members consider behavior elemen...

  16. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders.

  17. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Directory of Open Access Journals (Sweden)

    Alvaro G Gutiérrez

    Full Text Available Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the model proposed focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years-old) and an old-growth forest (OG, >500 years-old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than YS under current climate. Increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. Drier climate will alter forest structure, leading to decreases in above ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.
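
    The coupled rainfall/soil-moisture/growth model is not reproduced in the abstract, but its hydrologic core can be sketched as a single-store "bucket" in which actual evapotranspiration scales with relative soil moisture and excess above capacity drains away. All parameter values below are illustrative, not the paper's calibration; halving the rain, as in the roughly 50% summer-rain decline projected above, visibly dries the store:

```python
def bucket(rain_mm, s0=0.5, s_max=150.0, pet=3.0):
    """Single-layer soil-moisture bucket sketch.

    rain_mm: daily rainfall series [mm]; s0: initial relative saturation;
    s_max: storage capacity [mm]; pet: potential evapotranspiration [mm/day].
    Returns the daily relative soil moisture (0..1).
    """
    s = s0 * s_max
    out = []
    for rain in rain_mm:
        aet = pet * (s / s_max)        # actual ET limited by soil moisture
        s = s + rain - aet
        s = max(0.0, min(s, s_max))    # excess above capacity drains instantly
        out.append(s / s_max)
    return out

wet = bucket([10.0] * 30)
dry = bucket([2.0] * 30)  # strongly reduced rainfall scenario
print(round(wet[-1], 2), round(dry[-1], 2))
```

    A full model would feed the resulting moisture into individual-tree growth and transpiration, closing the loop the abstract describes.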

  18. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Science.gov (United States)

    Gutiérrez, Alvaro G; Armesto, Juan J; Díaz, M Francisca; Huth, Andreas

    2014-01-01

    Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the model proposed focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years-old) and an old-growth forest (OG, >500 years-old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than YS under current climate. Increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. Drier climate will alter forest structure, leading to decreases in above ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  19. Spatial organization of mesenchymal stem cells in vitro--results from a new individual cell-based model with podia.

    Directory of Open Access Journals (Sweden)

    Martin Hoffmann

    Full Text Available Therapeutic application of mesenchymal stem cells (MSC) requires their extensive in vitro expansion. MSC in culture typically grow to confluence within a few weeks. They show spindle-shaped fibroblastoid morphology and align to each other in characteristic spatial patterns at high cell density. We present an individual cell-based model (IBM) that is able to quantitatively describe the spatio-temporal organization of MSC in culture. Our model substantially improves on previous models by explicitly representing cell podia and their dynamics. It employs podia-generated forces for cell movement and adjusts cell behavior in response to cell density. At the same time, it is simple enough to simulate thousands of cells with reasonable computational effort. Experimental sheep MSC cultures were monitored under standard conditions. Automated image analysis was used to determine the location and orientation of individual cells. Our simulations quantitatively reproduced the observed growth dynamics and cell-cell alignment assuming cell density-dependent proliferation, migration, and morphology. In addition to cell growth on plain substrates our model captured cell alignment on micro-structured surfaces. We propose a specific surface micro-structure that according to our simulations can substantially enlarge cell culture harvest. The 'tool box' of cell migratory behavior newly introduced in this study significantly enhances the bandwidth of IBM. Our approach is capable of accommodating individual cell behavior and collective cell dynamics of a variety of cell types and tissues in computational systems biology.

  20. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network: a linear, series network element, as for example a transmission line, is switched, and the resulting harmonic increments are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one: it allows switching a series element that contains a shunt branch. Both methods require that harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.
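
    The increment idea behind the first method can be sketched per harmonic order: the transfer impedance is the ratio of the voltage increment at one node to the current increment caused by switching the series element, assuming synchronized phasor measurements. A sketch of the idea, not the paper's exact estimator, with illustrative phasors:

```python
def transfer_impedance(u_before, u_after, i_before, i_after):
    """Transfer harmonic impedance Z_h = dU_h / dI_h per harmonic order h.

    Inputs are dicts {harmonic_order: complex phasor} measured before and
    after switching the series element; voltage at the remote node, current
    through the switched element. Returns {harmonic_order: complex Z}.
    """
    z = {}
    for h in u_before:
        du = u_after[h] - u_before[h]
        di = i_after[h] - i_before[h]
        z[h] = du / di if di != 0 else float("nan")
    return z

# 5th harmonic only, illustrative phasors [V] and [A]
z = transfer_impedance({5: 1.0 + 0j}, {5: 1.2 + 0.1j},
                       {5: 0.01 + 0j}, {5: 0.05 + 0j})
```

    Comparing the measured Z against the impedance computed from the network's computer model, harmonic by harmonic, is the validation step the paper describes.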

  1. Three-dimensional model of plate geometry and velocity model for Nankai Trough seismogenic zone based on results from structural studies

    Science.gov (United States)

    Nakanishi, A.; Shimomura, N.; Kodaira, S.; Obana, K.; Takahashi, T.; Yamamoto, Y.; Yamashita, M.; Takahashi, N.; Kaneda, Y.

    2012-12-01

    In the Nankai Trough subduction seismogenic zone, the Nankai and Tonankai earthquakes have often occurred simultaneously, causing a great event. In order to reduce the damage to coastal areas from both strong ground motion and tsunami generation, it is necessary to understand rupture synchronization and segmentation of the Nankai megathrust earthquake. For a precise estimate of the rupture zone of the Nankai megathrust event based on knowledge of the realistic earthquake cycle and variation of magnitude, it is important to know the geometry and properties of the plate boundary of the subduction seismogenic zone. To improve a physical model of the Nankai Trough seismogenic zone, a large-scale high-resolution wide-angle and multi-channel reflection (MCS) seismic study and long-term observation have been conducted since 2008. Marine active-source seismic data have been acquired along grid two-dimensional profiles with a total length of ~800 km every year. A three-dimensional seismic tomography using active and passive seismic data observed at both land and ocean-bottom stations has also been performed. From those data, we found several strong lateral variations of the subducting Philippine Sea plate and the overriding plate corresponding to margins of the coseismic rupture zones of historical large events that occurred along the Nankai Trough. In particular, a possible prominent reflector for the forearc Moho has recently been imaged on the offshore side of the Kii channel at a depth of ~18 km, which is shallower than in other areas along the Nankai Trough. Such a drastic variation of the overriding plate might be related to the existence of the segmentation of the Nankai megathrust earthquake. Based on our results derived from seismic studies, we have tried to make a geometrical model of the Philippine Sea plate and a three-dimensional velocity structure model of the Nankai Trough seismogenic zone. 
In this presentation, we will summarize major results of our seismic studies, and

  2. Preliminary results of ⁴⁰Ca(e,e'c) reaction analysis (c = p, α), based on statistical model

    International Nuclear Information System (INIS)

    Herdade, S.B.; Emrich, H.J.

    1990-01-01

    Statistical model calculations relative to the reactions ⁴⁰Ca(e,e'p)³⁹K and ⁴⁰Ca(e,e'p₀)³⁹K(g.s.), using a modified version of the program STAPRE, are compared with experimental results obtained from coincidence experiments carried out at the Mainz microtron MAMI A. Preliminary results indicate that the statistical decay of a 1⁻ level in the ⁴⁰Ca compound nucleus, at an excitation energy of about 20 MeV, to the ground state of the ³⁹K residual nucleus is only about 15% of the total decay, indicating that direct and/or semi-direct mechanisms contribute the major part of the decay. (author)

  3. Tundra shrubification and tree-line advance amplify arctic climate warming: results from an individual-based dynamic vegetation model

    Science.gov (United States)

    Zhang, Wenxin; Miller, Paul A.; Smith, Benjamin; Wania, Rita; Koenigk, Torben; Döscher, Ralf

    2013-09-01

    One major challenge to the improvement of regional climate scenarios for the northern high latitudes is to understand land surface feedbacks associated with vegetation shifts and ecosystem biogeochemical cycling. We employed a customized, Arctic version of the individual-based dynamic vegetation model LPJ-GUESS to simulate the dynamics of upland and wetland ecosystems under a regional climate model-downscaled future climate projection for the Arctic and Subarctic. The simulated vegetation distribution (1961-1990) agreed well with a composite map of actual arctic vegetation. In the future (2051-2080), a poleward advance of the forest-tundra boundary, an expansion of tall shrub tundra, and a dominance shift from deciduous to evergreen boreal conifer forest over northern Eurasia were simulated. Ecosystems continued to sink carbon for the next few decades, although the size of these sinks diminished by the late 21st century. Hot spots of increased CH4 emission were identified in the peatlands near Hudson Bay and western Siberia. In terms of their net impact on regional climate forcing, positive feedbacks associated with the negative effects of tree-line, shrub cover and forest phenology changes on snow-season albedo, as well as the larger sources of CH4, may potentially dominate over negative feedbacks due to increased carbon sequestration and increased latent heat flux.

  4. Exergy analysis of an industrial unit of catalyst regeneration based on the results of modeling and simulation

    International Nuclear Information System (INIS)

    Toghyani, Mahboubeh; Rahimi, Amir

    2015-01-01

    An industrial process is synthesized and developed for decoking of the dehydrogenation catalyst used in LAB (Linear Alkyl Benzene) production. A multi-tube fixed bed reactor with short tubes is designed for decoking of the catalyst as the main equipment of the process. This study provides a microscopic exergy analysis for the decoking reactor and a macroscopic exergy analysis for the synthesized regeneration process. Dynamic mathematical modeling and simulation of the process with commercial software are applied simultaneously. The model used was previously developed for performance analysis of the decoking reactor. An appropriate exergy model is developed and adopted to estimate the enthalpy, exergetic efficiency and irreversibility. The model is validated with respect to operating data measured in a commercial regeneration unit for variations in gas and particle characteristics along the reactor. In the coke-combustion period, in spite of the high reaction rate, the reactor has low exergetic efficiency due to entropy production during heat and mass transfer processes. The effects of inlet gas flow rate, temperature and oxygen concentration on the exergetic efficiency and irreversibilities are investigated. Macroscopic results indicate that the fan has the highest irreversibilities among the equipment. Applying proper operating variables reduces the cycle irreversibilities by at least 20%. - Highlights: • A microscopic exergy analysis for a multi-tube fixed bed reactor is conducted. • Controlling the O2 concentration upgrades the reactor exergetic performance. • A macroscopic exergy analysis for the synthesized regeneration process is conducted. • The fan is one of the main sources of the regeneration cycle irreversibility. • The proposed strategies can reduce the cycle irreversibilities by at least 20%.
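
    The quantity underlying such exergetic-efficiency figures is the specific flow exergy of the gas stream. A sketch using ideal-gas air properties with the dead state at 25 °C and 1 atm; the reactor temperatures and pressures below are illustrative, not the plant's data:

```python
import math

def flow_exergy(T, p, T0=298.15, p0=101325.0, cp=1005.0, R=287.0):
    """Specific flow exergy of an ideal-gas stream [J/kg] relative to the
    dead state (T0, p0). Defaults are air properties; kinetic, potential
    and chemical exergy are neglected in this sketch.
    """
    return cp * (T - T0) - T0 * (cp * math.log(T / T0) - R * math.log(p / p0))

ex_in = flow_exergy(823.15, 3.0e5)   # hot gas entering (illustrative: 550 C, 3 bar)
ex_out = flow_exergy(723.15, 2.8e5)  # gas leaving (illustrative: 450 C, 2.8 bar)
eta = ex_out / ex_in                 # exergetic efficiency of the stream
irrev = ex_in - ex_out               # specific irreversibility [J/kg]
```

    Summing such stream exergies over every unit (reactor, fan, heat exchangers) gives the macroscopic cycle balance in which the fan shows the largest irreversibility.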

  5. Benefits of using customized instrumentation in total knee arthroplasty: results from an activity-based costing model.

    Science.gov (United States)

    Tibesku, Carsten O; Hofer, Pamela; Portegies, Wesley; Ruys, C J M; Fennema, Peter

    2013-03-01

    The growing demand for total knee arthroplasty (TKA) associated with the efforts to contain healthcare expenditure by advanced economies necessitates the use of economically effective technologies in TKA. The present analysis based on activity-based costing (ABC) model was carried out to estimate the economic value of patient-matched instrumentation (PMI) compared to standard surgical instrumentation in TKA. The costs of the two approaches, PMI and standard instrumentation in TKA, were determined by the use of ABC which measures the cost of a particular procedure by determining the activities involved and adding the cost of each activity. Improvement in productivity due to increased operating room (OR) turn-around times was determined and potential additional revenue to the hospital by the efficient utilization of gained OR time was estimated. Increased efficiency in the usage of OR and utilization of surgical trays were noted with patient-specific approach. Potential revenues to the hospital were estimated with the use of PMI by efficient utilization of time saved in OR. Additional revenues of 78,240 per year were estimated considering utilization of gained OR time to perform surgeries other than TKA. The analysis suggests that use of PMI in TKA is economically effective when compared to standard instrumentation.
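
    Activity-based costing itself is mechanically simple: a procedure's cost is the sum, over its activities, of the quantity consumed times the unit cost of the activity driver. A toy comparison of standard versus patient-matched instrumentation (all figures are hypothetical, not the paper's data):

```python
def procedure_cost(activities):
    """Activity-based cost of one procedure.

    activities: list of (activity name, quantity consumed, unit cost).
    Returns the total cost as the sum of quantity * unit cost.
    """
    return sum(qty * unit_cost for _, qty, unit_cost in activities)

# Hypothetical activity quantities and driver rates
standard = [("OR time [min]", 90, 20.0),
            ("instrument trays sterilized", 6, 60.0)]
pmi = [("OR time [min]", 76, 20.0),            # shorter turn-around
       ("instrument trays sterilized", 3, 60.0),  # fewer trays
       ("patient-matched guide", 1, 250.0)]       # extra consumable

saving = procedure_cost(standard) - procedure_cost(pmi)
```

    Whether PMI pays off then hinges on whether the OR time and tray savings exceed the per-case cost of the guide, plus any revenue from additional cases performed in the freed OR time.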

  6. DESIGN OF LOW CYTOTOXICITY DIARYLANILINE DERIVATIVES BASED ON QSAR RESULTS: AN APPLICATION OF ARTIFICIAL NEURAL NETWORK MODELLING

    Directory of Open Access Journals (Sweden)

    Ihsanul Arief

    2016-11-01

    Full Text Available Study on the cytotoxicity of diarylaniline derivatives by using quantitative structure-activity relationship (QSAR) analysis has been done. The structures and cytotoxicities of diarylaniline derivatives were obtained from the literature. Calculation of molecular and electronic parameters was conducted using Austin Model 1 (AM1), Parameterized Model 3 (PM3), Hartree-Fock (HF), and density functional theory (DFT) methods. Artificial neural network (ANN) analysis was used to produce the best equation, with a configuration of input data-hidden node-output data = 5-8-1, value of r2 = 0.913 and PRESS = 0.069. The best equation was used to design and predict new diarylaniline derivatives. The result shows that compound N1-(4′-Cyanophenyl-5-(4″-cyanovinyl-2″,6″-dimethyl-phenoxy-4-dimethylether benzene-1,2-diamine is the best-proposed compound, with a cytotoxicity value (CC50) of 93.037 μM.
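
    The reported 5-8-1 configuration means five molecular descriptors feed eight hidden sigmoid nodes and one output (the predicted activity). A forward-pass sketch with placeholder random weights; a real QSAR model would train these weights against the CC50 data:

```python
import math
import random

def mlp_5_8_1(x, weights=None, seed=0):
    """Forward pass of a 5-8-1 feed-forward network: 5 inputs (molecular
    descriptors), 8 hidden sigmoid nodes, 1 linear output.

    weights: optional (w1, b1, w2, b2); random placeholders are generated
    deterministically from `seed` when omitted.
    """
    rng = random.Random(seed)
    if weights is None:
        w1 = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(8)]
        b1 = [rng.uniform(-1, 1) for _ in range(8)]
        w2 = [rng.uniform(-1, 1) for _ in range(8)]
        b2 = rng.uniform(-1, 1)
        weights = (w1, b1, w2, b2)
    w1, b1, w2, b2 = weights
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    hidden = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Five illustrative (hypothetical) descriptor values for one molecule
y = mlp_5_8_1([0.2, -0.1, 0.5, 0.3, -0.4])
```

    With only 5-8-1 = 57 trainable parameters, such a network is small enough to train by plain backpropagation on a modest QSAR dataset.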

  7. Blast-cooling of beef-in-sauce catering meals: numerical results based on a dynamic zero-order model

    Directory of Open Access Journals (Sweden)

    Jose A. Rabi

    2014-10-01

    Full Text Available Beef-in-sauce catering meals under blast-cooling have been investigated in a research project which aims at quantitative HACCP (hazard analysis critical control point). In view of its prospective coupling to a predictive microbiology model proposed in the project, zero-order spatial dependence has proved to suitably predict meal temperatures in response to temperature variations in the cooling air. This approach has modelled heat transfer rates via the a priori unknown convective coefficient hc, which is allowed to vary due to uncertainty and variability in the actual modus operandi of the chosen case-study hospital kitchen. Implemented in MS Excel®, the numerical procedure has successfully combined the 4th-order Runge-Kutta method, to solve the governing equation, with non-linear optimization, via the built-in Solver, to determine the coefficient hc. In this work, the coefficient hc was assessed for 119 distinct recently-cooked meal samples whose temperature-time profiles were recorded in situ after 17 technical visits to the hospital kitchen over a year. The average value and standard deviation results were hc = 12.0 ± 4.1 W m-2 K-1, whilst the lowest values (associated with the worst cooling scenarios) were about hc ≈ 6.0 W m-2 K-1.
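
    The numerical core described above, RK4 integration of a lumped (zero-order) cooling law plus optimization of hc, can be sketched as follows. The meal mass, surface area and specific heat are illustrative, and a brute-force scan stands in for Excel's Solver:

```python
def cool(t_end, T0, T_air, hc, A=0.05, m=0.35, c=3500.0, dt=10.0):
    """Lumped (zero-order) cooling: dT/dt = -hc*A/(m*c) * (T - T_air),
    integrated with classical 4th-order Runge-Kutta up to time t_end [s].
    A [m^2], m [kg], c [J/(kg K)] are illustrative meal properties.
    """
    f = lambda T: -hc * A / (m * c) * (T - T_air)
    T, t = T0, 0.0
    while t < t_end:
        k1 = f(T)
        k2 = f(T + 0.5 * dt * k1)
        k3 = f(T + 0.5 * dt * k2)
        k4 = f(T + dt * k3)
        T += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return T

def fit_hc(times, temps, T0, T_air):
    """Estimate hc [W m-2 K-1] by least squares over a coarse grid
    (a stand-in for the non-linear Solver optimization)."""
    best, best_err = None, float("inf")
    for hc in [x * 0.1 for x in range(10, 300)]:  # scan 1.0 .. 29.9
        err = sum((cool(t, T0, T_air, hc) - Tm) ** 2
                  for t, Tm in zip(times, temps))
        if err < best_err:
            best, best_err = hc, err
    return best
```

    Feeding the fit a temperature-time profile generated at a known hc recovers that value, which is the same self-consistency check one would run before fitting the 119 recorded profiles.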

  8. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  9. Result-Based Public Governance

    DEFF Research Database (Denmark)

    Boll, Karen

    Within the public sector, many institutions are steered either by governance by targets or by result-based governance. The former sets up quantitative internal production targets, while the latter advocates that production is planned according to outcomes, defined as institution-produced effects on individuals or businesses in society; effects which are often produced by 'nudging' the citizenry in a certain direction. With point of departure in these two governance-systems, the paper explores a case of controversial inspection of businesses' negative VAT accounts, and it describes how and why this state of affairs appears and problematizes the widespread use of result-based governance and nudging-techniques by public sector institutions.

  10. Tundra shrubification and tree-line advance amplify arctic climate warming: results from an individual-based dynamic vegetation model

    International Nuclear Information System (INIS)

    Zhang Wenxin; Miller, Paul A; Smith, Benjamin; Wania, Rita; Koenigk, Torben; Döscher, Ralf

    2013-01-01

    One major challenge to the improvement of regional climate scenarios for the northern high latitudes is to understand land surface feedbacks associated with vegetation shifts and ecosystem biogeochemical cycling. We employed a customized, Arctic version of the individual-based dynamic vegetation model LPJ-GUESS to simulate the dynamics of upland and wetland ecosystems under a regional climate model–downscaled future climate projection for the Arctic and Subarctic. The simulated vegetation distribution (1961–1990) agreed well with a composite map of actual arctic vegetation. In the future (2051–2080), a poleward advance of the forest–tundra boundary, an expansion of tall shrub tundra, and a dominance shift from deciduous to evergreen boreal conifer forest over northern Eurasia were simulated. Ecosystems continued to sink carbon for the next few decades, although the size of these sinks diminished by the late 21st century. Hot spots of increased CH4 emission were identified in the peatlands near Hudson Bay and western Siberia. In terms of their net impact on regional climate forcing, positive feedbacks associated with the negative effects of tree-line, shrub cover and forest phenology changes on snow-season albedo, as well as the larger sources of CH4, may potentially dominate over negative feedbacks due to increased carbon sequestration and increased latent heat flux. (letter)

  11. Assessment of offshore wind power potential in the Aegean and Ionian Seas based on high-resolution hindcast model results

    Directory of Open Access Journals (Sweden)

    Takvor Soukissian

    2017-03-01

    Full Text Available In this study, long-term wind data obtained from high-resolution hindcast simulations are used to analytically assess offshore wind power potential in the Aegean and Ionian Seas and to provide wind climate and wind power potential characteristics at selected locations where offshore wind farms are at the concept/planning phase. After ensuring good model performance through detailed validation against buoy measurements, offshore wind speed and wind direction at 10 m above sea level are statistically analyzed on the annual and seasonal time scales. The spatial distributions of the mean wind speed and wind direction are provided at the appropriate time scales, along with the mean annual and the inter-annual variability; these statistical quantities are useful in the offshore wind energy sector for the preliminary identification of favorable sites for exploitation of offshore wind energy. Moreover, the offshore wind power potential and its variability are also estimated at 80 m height above sea level. The obtained results reveal that there are specific areas in the central and eastern Aegean Sea that combine intense annual winds with low variability; the annual offshore wind power potential in these areas reaches values close to 900 W/m2, suggesting that a detailed assessment of offshore wind energy would be worthwhile and could lead to attractive investments. Furthermore, as a rough estimate of the availability factor, the equiprobable contours of the event [4 m/s ≤ wind speed ≤ 25 m/s] are also estimated and presented. The selected lower and upper bounds of wind speed correspond to typical cut-in and cut-out wind speed thresholds, respectively, for commercial offshore wind turbines. Finally, for seven offshore wind farms that are at the concept/planning phase, the main wind climate and wind power density characteristics are also provided.
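    The availability factor sketched in this record (the probability that wind speed lies between cut-in and cut-out) has a closed form if wind speed is taken as Weibull-distributed, a common assumption in offshore wind assessment; the scale and shape parameters below are illustrative, not values from the study.

    ```python
    # Availability factor P(v_in <= v <= v_out) and mean wind power density
    # under a Weibull wind-speed law (scale c, shape k).
    import math

    def availability(c, k, v_in=4.0, v_out=25.0):
        """P(v_in <= v <= v_out) from the Weibull CDF 1 - exp(-(v/c)^k)."""
        cdf = lambda v: 1.0 - math.exp(-((v / c) ** k))
        return cdf(v_out) - cdf(v_in)

    def mean_power_density(c, k, rho=1.225):
        """E[0.5*rho*v^3] in W/m^2 via the Weibull third raw moment."""
        return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

    c, k = 9.0, 2.0   # assumed scale/shape for an energetic site
    print(round(availability(c, k), 3))
    print(round(mean_power_density(c, k)))
    ```

    The 4 m/s and 25 m/s defaults mirror the typical cut-in and cut-out thresholds cited in the abstract.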

  12. Results from software-based empirical models of standing biomass for poplar and willow grown as short rotation coppice

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, M.E.; Morgan, G.W.; Brewer, A.C. (Forest Research Biometrics, Surveys and Statistics Division, Forest Research, Wrecclesham (United Kingdom))

    2007-07-01

    Statistical analysis was used to create a model for estimating the quantity of biomass produced by crops of poplar and willow grown as short rotation coppice. This model was converted into a software system as described here. The software is currently available for scientific demonstration. (orig.)

  13. Comparison of results from dispersion models for regulatory purposes based on Gaussian- and Lagrangian-algorithms: an evaluating literature study

    International Nuclear Information System (INIS)

    Walter, H.

    2004-01-01

    Powerful tools to describe atmospheric transport processes for radiation protection can be provided by meteorology; these are atmospheric flow and dispersion models. Concerning dispersion models, Gaussian plume models have been used for a long time to describe atmospheric dispersion processes. Advantages of the Gaussian plume models are short computation time, good validation and broad acceptance worldwide. However, some limitations and their implications for the interpretation of model results have to be taken into account, as the mathematical derivation of an analytic solution of the equations of motion leads to severe constraints. In order to minimise these constraints, various dispersion models for scientific and regulatory purposes have been developed and applied. Among these, the Lagrangian particle models are of special interest, because they are able to simulate atmospheric transport processes close to reality, e.g. the influence of orography, topography, wind shear and other meteorological phenomena. Within this study, the characteristics and computational results of Gaussian dispersion models as well as of Lagrangian models have been compared and evaluated on the basis of numerous papers and reports published in the literature. Special emphasis has been laid on the intention that dispersion models should comply with EU requirements (Richtlinie 96/29/Euratom, 1996) on a more realistic assessment of the radiation exposure to the population. (orig.)

  14. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver

    2007-01-01

    The first step of handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far, the possibility and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprises intervention, rehabilitation and after treatment (8 codes). The objective was to evaluate in an international study the usefulness, applicability and sufficiency of a simple model for the systematic...

  15. Trajectories of Heroin Addiction: Growth Mixture Modeling Results Based on a 33-Year Follow-Up Study

    Science.gov (United States)

    Hser, Yih-Ing; Huang, David; Chou, Chih-Ping; Anglin, M. Douglas

    2007-01-01

    This study investigates trajectories of heroin use and subsequent consequences in a sample of 471 male heroin addicts who were admitted to the California Civil Addict Program in 1964-1965 and followed over 33 years. Applying a two-part growth mixture modeling strategy to heroin use level during the first 16 years of the addiction careers since…

  16. Profile control simulations and experiments on TCV : A controller test environment and results using a model-based predictive controller

    NARCIS (Netherlands)

    Maljaars, E.; Felici, F.; Blanken, T.C.; Galperti, C.; Sauter, O.; de Baar, M.R.; Carpanese, F.; Goodman, T.P.; Kim, D.; Kim, S.H.; Kong, M.G.; Mavkov, B.; Merle, A.; Moret, J.M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A.A.; Vu, N.M.T.

    2017-01-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety

  17. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    NARCIS (Netherlands)

    Maljaars, B.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J.; Nouailletas, R.; Scheffer, M.; Teplukhina, A.; Vu, T.

    2017-01-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety

  18. Instantaneous emission modeling with GPS-based vehicle activity data: results of diesel trucks for one-day trips

    NARCIS (Netherlands)

    Feng, T.; Arentze, T.A.; Timmermans, H.J.P.

    2011-01-01

    This paper presents an instantaneous analysis for traffic emissions using GPS-based vehicle activity data. The different driving conditions, including real-time and average speed, short-time stops and long-time stops, acceleration and deceleration, etc., are extracted from GPS data. The hot

  19. Results and Lessons Learned from a Coupled Social and Physical Hydrology Model: Testing Alternative Water Management Policies and Institutional Structures Using Agent-Based Modeling and Regional Hydrology

    Science.gov (United States)

    Murphy, J.; Lammers, R. B.; Prousevitch, A.; Ozik, J.; Altaweel, M.; Collier, N. T.; Kliskey, A. D.; Alessa, L.

    2015-12-01

    Water Management in the U.S. Southwest is under increasing scrutiny as many areas endure persistent drought. The impact of these prolonged dry conditions is a product of regional climate and hydrological conditions, but also of a highly engineered water management infrastructure and a complex web of social arrangements whereby water is allocated, shared, exchanged, used, re-used, and finally consumed. We coupled an agent-based model with a regional hydrological model to understand the dynamics in one richly studied and highly populous area: southern Arizona, U.S.A., including metropolitan Phoenix and Tucson. There, multiple management entities, representing an array of municipalities and other water providers and customers, including private companies and Native American tribes, are enmeshed in a complex legal and economic context in which water is bought, leased, banked, and exchanged in a variety of ways and on multiple temporal and physical scales. A recurrent question in the literature of adaptive management is the impact of management structure on overall system performance. To explore this, we constructed an agent-based model to capture this social complexity, and coupled it with a physical hydrological model that we used to drive the system under a variety of water stress scenarios and to assess the regional impact of the social system's performance. We report the outcomes of ensembles of runs in which a variety of alternative policy constraints and management strategies are considered. We hope to contribute to policy discussions in this area and in connected, legislatively similar areas (such as California) as current conditions change and existing legal and policy structures are revised. Additionally, we comment on the challenges of integrating models that ostensibly are in different domains (physical and social) but that independently represent a system in which physical processes and human actions are closely intertwined and difficult to disentangle.

  20. Effects of earthquakes on the deep repository for spent fuel in Sweden based on case studies and preliminary model results

    International Nuclear Information System (INIS)

    Baeckblom, Goeran; Munier, Raymond

    2002-06-01

    their original values within a few months. The density of the buffer around the canister is high enough to prevent liquefaction due to shaking. The predominant brittle deformation of a rock mass will be reactivation of pre-existing fractures. The data emanating from faults intersecting tunnels show that creation of new fractures is confined to the immediate vicinity of the reactivated faults and that deformation in the host rock decreases rapidly with distance from the fault. By selection of appropriate respect distances, the probability of canister damage due to faulting is further lowered. Data from deep South African mines show that rocks in an environment with non-existent faults, low fracture densities and high stresses might generate faults in a previously unfractured rock mass. The Swedish repository will be located in fractured bedrock at intermediate depth, 400 - 700 m, where stresses are moderate. The conditions to create these peculiar mining-induced features will not prevail in the repository environment. Should these faults be created anyhow, the canister is designed to withstand a shear deformation of at least 0.1 m. This corresponds to a magnitude 6 earthquake along a fault with a length of at least 1 km, which is highly unlikely. Respect distance has to be site- and fault-specific. Field evidence gathered in this study indicates that respect distances may be considerably smaller (tens to hundreds of m) than predicted by numerical modelling (thousands of m). However, the accumulated deformation during repeated future seismic events has to be accounted for

  1. Overview of fuel behaviour and core degradation, based on modelling analyses

    International Nuclear Information System (INIS)

    Massara, Simone

    2013-01-01

    Since the very first hours after the accident at Fukushima-Daiichi, numerical simulations by means of severe accident codes have been carried out, aiming at highlighting the key physical phenomena that allow a correct understanding of the sequence of events and, on a long enough timeline, at improving models and methods in order to reduce the discrepancy between calculated and measured data. A final long-term objective is to support the future decommissioning phase. The presentation summarises some of the available elements on the role of the fuel/cladding-water interaction, which became accessible only through modelling because of the absence of measured data directly related to the cladding-steam interaction. It also aims at drawing some conclusions on the status of the modelling capabilities of current tools, particularly for the purpose of the foreseen application to ATF fuels: - analyses with MELCOR, MAAP, THALES2 and RELAP5 are presented; - input data are taken from BWR Mark-I Fukushima-Daiichi Units 1, 2 and 3, completed with operational data published by TEPCO. In the case of missing or incomplete data or hypotheses, these are adjusted to reduce the calculation/measurement discrepancy. The behaviour of the accident is well understood on a qualitative level (major trends in RPV pressure and water level, dry-wet and PCV pressure are well represented), allowing a certain level of confidence in the results of the analysis of the zirconium-steam reaction, which is accessible only through numerical simulation. These show an extremely fast sequence of events (here for Unit 1): - the top of the fuel is uncovered in 3 hours (after the tsunami); - the steam line breaks at 6.5 hours. The vessel dries out at 10 hours, with a heat-up rate at first driven by the decay heat only (∼7 K/min) and afterwards by the chemical heat from Zr oxidation (over 30 K/min), associated with massive hydrogen production. It appears that the level of uncertainty increases with

  2. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamson, S. [Wisconsin Univ., Madison, WI (United States); Bender, M.A. [Brookhaven National Lab., Upton, NY (United States); Boecker, B.B.; Scott, B.R. [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States). Inhalation Toxicology Research Inst.; Gilbert, E.S. [Pacific Northwest Lab., Richland, WA (United States)

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985, and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation, as well as acute and chronic exposure to low-LET beta and gamma radiations, is a reasonable extension of the health effects models.

  3. EFFECTS OF COOPERATIVE LEARNING MODEL TYPE STAD JUST-IN TIME BASED ON THE RESULTS OF LEARNING TEACHING PHYSICS COURSE IN PHYSICS SCHOOL IN PHYSICS PROGRAM FACULTY UNIMED

    Directory of Open Access Journals (Sweden)

    Teguh Febri Sudarma

    2013-06-01

    Full Text Available Research was aimed to determine: (1) students' learning outcomes when taught with the just-in-time-teaching-based STAD cooperative learning method versus the plain STAD cooperative learning method; (2) students' outcomes on the Physics subject for those with high learning activity compared with low learning activity. The research sample was selected randomly by raffling four classes to obtain two classes. The first class was taught with the just-in-time-teaching-based STAD cooperative learning method, while the second class was taught with the STAD cooperative learning method. The instrument used was a conceptual understanding test of 7 validated essay questions. The average gain value of students' learning results with the just-in-time-teaching-based STAD cooperative learning method was 0.47, higher than the average gain value with the STAD cooperative learning method. High learning activity and low learning activity gave different learning results; here too, the average gain value with the just-in-time-teaching-based STAD method, 0.48, was higher than with the STAD method. There was an interaction between learning model and learning activity on the physics learning result test in students

  4. Data bases for LDEF results

    Science.gov (United States)

    Bohnhoff-Hlavacek, Gail

    1993-01-01

    The Long Duration Exposure Facility (LDEF) carried 57 experiments and 10,000 specimens for some 200 LDEF experiment investigators. The external surface of LDEF had a large variety of materials exposed to the space environment which were tested preflight, during flight, and post flight. Thermal blankets, optical materials, thermal control paints, aluminum, and composites are among the materials flown. The investigations have produced an abundance of analysis results. One of the responsibilities of the Boeing Support Contract, Materials and Systems Special Investigation Group, is to collate and compile that information in an organized fashion. The databases developed at Boeing to accomplish this task are described.

  5. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads

  6. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    Full Text Available In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to offer detailed information to a numerical fog model. As part of additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operating at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently of the model-related aims of the campaign, the MWRP observations were used to study their capability to work in operational meteorological networks. Over the past decade a growing number of MWRPs has been introduced and a user community (MWRnet) was established to encourage activities directed at the set-up of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any applications in climatology and numerical weather forecast. In practice, however, systematic temperature and humidity differences (bias) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by applying a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at just any site, even if no radiosondes are available. 
The motivation of this work is directed to a verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD) for the calculation
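    The measurement-based regression method mentioned in this record can be illustrated with a toy example: regressing coincident "radiosonde" values on biased "MWRP" retrievals yields a linear operator that removes the systematic offset. All numbers below are synthetic, not Lindenberg data.

    ```python
    # Ordinary least squares used as a bias-correction operator:
    # fit sonde = a * mwrp + b on coincident pairs, then apply a, b
    # to future retrievals.
    def linfit(x, y):
        """OLS fit y = a*x + b; returns (a, b)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
        return a, my - a * mx

    # Synthetic "truth" (radiosonde, K) and an MWRP retrieval with a
    # +1.5 K offset and a 2 % gain error.
    sonde = [250.0, 260.0, 270.0, 280.0, 290.0]
    mwrp  = [1.02 * t + 1.5 for t in sonde]

    a, b = linfit(mwrp, sonde)                  # the regression operator
    corrected = [a * t + b for t in mwrp]

    bias_before = sum(m - s for m, s in zip(mwrp, sonde)) / len(sonde)
    bias_after  = sum(c - s for c, s in zip(corrected, sonde)) / len(sonde)
    print(round(bias_before, 2), abs(bias_after) < 1e-9)   # → 6.9 True
    ```

    In practice the operator would be trained per channel and per site on many radiosonde/TB pairs, which is exactly why sites without radiosondes pose the comparability problem the abstract raises.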

  7. Ocean EcoSystem Modelling Based on Observations from Satellite and In-Situ Data: First Results from the OSMOSIS Project

    Science.gov (United States)

    Rio, M.-H.; Buongiorno-Nardelli, B.; Calmettes, B.; Conchon, A.; Droghei, R.; Guinehut, S.; Larnicol, G.; Lehodey, P.; Matthieu, P. P.; Mulet, S.; Santoleri, R.; Senina, I.; Stum, J.; Verbrugge, N.

    2015-12-01

    Micronekton organisms are both the prey of large ocean predators and themselves the predators of the eggs and larvae of many species, including most fishes. The micronekton biomass concentration is therefore a key explanatory variable, usually missing in fish population and ecosystem models, for understanding the individual behaviour and population dynamics of large oceanic predators. In that context, the OSMOSIS (Ocean ecoSystem Modelling based on Observations from Satellite and In-Situ data) ESA project aims at demonstrating the feasibility of, and prototyping, an integrated system going from the synergetic use of many different variables measured from space to the modelling of the distribution of micronektonic organisms. In this paper, we present how data from CRYOSAT, GOCE, SMOS and ENVISAT, together with other non-ESA satellites and in-situ data, can be merged to provide the key variables needed as input to the micronekton model. First results from the optimization of the micronekton model are also presented and discussed.

  8. Agent-Based Modelling of Agricultural Water Abstraction in Response to Climate, Policy, and Demand Changes: Results from East Anglia, UK

    Science.gov (United States)

    Swinscoe, T. H. A.; Knoeri, C.; Fleskens, L.; Barrett, J.

    2014-12-01

    Freshwater is a vital natural resource for multiple needs, such as drinking water for the public, industrial processes, hydropower for energy companies, and irrigation for agriculture. In the UK, crop production is the largest in East Anglia, while at the same time the region is also the driest, with average annual rainfall between 560 and 720 mm (1971 to 2000). Many water catchments of East Anglia are reported as over licensed or over abstracted. Therefore, freshwater available for agricultural irrigation abstraction in this region is becoming both increasingly scarce due to competing demands, and increasingly variable and uncertain due to climate and policy changes. It is vital for water users and policy makers to understand how these factors will affect individual abstractors and water resource management at the system level. We present first results of an Agent-based Model that captures the complexity of this system as individual abstractors interact, learn and adapt to these internal and external changes. The purpose of this model is to simulate what patterns of water resource management emerge on the system level based on local interactions, adaptations and behaviours, and what policies lead to a sustainable water resource management system. The model is based on an irrigation abstractor typology derived from a survey in the study area, to capture individual behavioural intentions under a range of water availability scenarios, in addition to farm attributes, and demographics. Regional climate change scenarios, current and new abstraction licence reforms by the UK regulator, such as water trading and water shares, and estimated demand increases from other sectors were used as additional input data. Findings from the integrated model provide new understanding of the patterns of water resource management likely to emerge at the system level.
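    A minimal sketch of the kind of feedback loop such an agent-based abstraction model captures, with all rules and numbers invented here for illustration: licensed abstractors draw on a shared supply, a regulator curtails allocations pro rata when the catchment is over-abstracted, and agents adapt next year's request toward what they actually received.

    ```python
    # Toy abstraction dynamic: aggregate demand relaxes toward the
    # available supply as curtailed agents lower their requests.
    def simulate(n_agents=10, supply=100.0, years=20):
        demands = [15.0] * n_agents           # each starts above a fair share
        history = []
        for _ in range(years):
            total = sum(demands)
            history.append(total)
            scale = min(1.0, supply / total)  # regulator curtails pro rata
            allocated = [d * scale for d in demands]
            # learning rule: if curtailed, move next request halfway
            # toward this year's allocation
            demands = [d if scale == 1.0 else 0.5 * (d + a)
                       for d, a in zip(demands, allocated)]
        return history

    h = simulate()
    print(round(h[0], 1), round(h[-1], 1))   # → 150.0 100.0
    ```

    Real models of this type replace the pro-rata rule with licence conditions, trading and shares, and the fixed supply with hydrological model output; the point of the sketch is only the demand-adaptation loop.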

  9. Assessing knowledge ambiguity in the creation of a model based on expert knowledge and comparison with the results of a landscape succession model in central Labrador. Chapter 10.

    Science.gov (United States)

    Frederik Doyon; Brian Sturtevant; Michael J. Papaik; Andrew Fall; Brian Miranda; Daniel D. Kneeshaw; Christian Messier; Marie-Josee. Fortin; Patrick M.A. James

    2012-01-01

    Sustainable forest management (SFM) recognizes that the spatial and temporal patterns generated at different scales by natural landscape and stand dynamics processes should serve as a guide for managing the forest within its range of natural variability. Landscape simulation modeling is a powerful tool that can help encompass such complexity and support SFM planning....

  10. Falling in the elderly: Do statistical models matter for performance criteria of fall prediction? Results from two large population-based studies.

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier

    2016-01-01

    To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro fuzzy interference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data of studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to other models. The highest performance criteria were reported with NEAT when using PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were non-linear statistical models with the best performance criteria for the prediction of falls but their sensitivity and specificity were unbalanced, underscoring that models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
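    The six performance criteria compared in this study are all derived from a confusion matrix at a given decision threshold. A short reference implementation follows; the counts are invented for illustration, and at a single cut-off, balanced accuracy is used here as a crude stand-in for the area under the ROC curve.

    ```python
    # Performance criteria for a binary fall-prediction model,
    # computed from confusion-matrix counts.
    def criteria(tp, fp, tn, fn):
        sens = tp / (tp + fn)                      # sensitivity (recall)
        spec = tn / (tn + fp)                      # specificity
        ppv  = tp / (tp + fp)                      # positive predictive value
        npv  = tn / (tn + fn)                      # negative predictive value
        acc  = (tp + tn) / (tp + fp + tn + fn)     # accuracy
        bal  = (sens + spec) / 2                   # balanced accuracy
        return sens, spec, ppv, npv, acc, bal

    # Hypothetical counts: 100 fallers, 200 non-fallers.
    print([round(v, 3) for v in criteria(tp=80, fp=40, tn=160, fn=20)])
    # → [0.8, 0.8, 0.667, 0.889, 0.8, 0.8]
    ```

    The study's observation that sensitivity and specificity were unbalanced corresponds to these first two values diverging, which is why it recommends different models for screening (high sensitivity) and diagnosis (high specificity).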

  11. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    Full Text Available The magnetosphere is a major source of energy for the Earth’s ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  12. Modelling Extortion Racket Systems: Preliminary Results

    Science.gov (United States)

    Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria

    Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how legal and social norm-based processes, independently and in combination, help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are unable to change people's mind-set, which leaves the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.

  13. Population Physiologically-Based Pharmacokinetic Modeling for the Human Lactational Transfer of PCB 153 with Consideration of Worldwide Human Biomonitoring Results

    Energy Technology Data Exchange (ETDEWEB)

    Redding, Laurel E.; Sohn, Michael D.; McKone, Thomas E.; Wang, Shu-Li; Hsieh, Dennis P. H.; Yang, Raymond S. H.

    2008-03-01

    We developed a physiologically based pharmacokinetic (PBPK) model of PCB 153 in women and used it to predict transfer via lactation to infants. The model is the first human, population-scale lactational model for PCB 153. Data in the literature provided estimates for model development and for performance assessment. Physiological parameters were taken from a cohort in Taiwan and from reference values in the literature. We estimated partition coefficients based on chemical structure and the lipid content of various body tissues. Using exposure data from Japan, we predicted the acquired body burden of PCB 153 at an average childbearing age of 25 years and compared predictions to measurements from studies in multiple countries. Forward-model predictions agree well with human biomonitoring measurements, as represented by summary statistics and uncertainty estimates. The model successfully describes the range of possible PCB 153 dispositions in maternal milk, suggesting a promising option for back-estimating doses for various populations. One example of reverse dosimetry modeling was attempted using our PBPK model for possible exposure scenarios in Canadian Inuits, who had the highest level of PCB 153 in their milk in the world.
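The accumulation-and-elimination logic behind such a model can be caricatured in a single compartment. The intake rate and half-life below are hypothetical placeholders, not values from the study, and the real PBPK model resolves multiple tissues, partition coefficients, and lactational transfer:

```python
import math

# Hedged one-compartment sketch of body-burden accumulation for a persistent,
# lipophilic chemical. All parameter values are illustrative assumptions.
def body_burden(daily_intake_ug, half_life_years, years):
    """Accumulated burden (ug) after `years` of constant daily intake."""
    k = math.log(2) / (half_life_years * 365.0)  # first-order elimination per day
    burden = 0.0
    for _ in range(int(years * 365)):
        burden = (burden + daily_intake_ug) * (1.0 - k)  # daily intake, then decay
    return burden
```

With a 5-year half-life, 25 years of exposure (five half-lives) brings the burden to within a few percent of its steady state of roughly intake/k.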

  14. Testing a hydraulic trait based model of stomatal control: results from a controlled drought experiment on aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas)

    Science.gov (United States)

    Love, D. M.; Venturas, M.; Sperry, J.; Wang, Y.; Anderegg, W.

    2017-12-01

    Modeling approaches for tree stomatal control often rely on empirical fitting to provide accurate estimates of whole-tree transpiration (E) and assimilation (A), which are limited in their predictive power by the data envelope used to calibrate model parameters. Optimization-based models hold promise as a means to predict stomatal behavior under novel climate conditions. We designed an experiment to test a hydraulic-trait-based optimization model, which predicts stomatal conductance from a gain/risk approach. Optimal stomatal conductance is expected to maximize the potential carbon gain by photosynthesis and minimize the risk to hydraulic transport imposed by cavitation. The modeled risk to the hydraulic network is assessed from cavitation vulnerability curves, a commonly measured physiological trait in woody plant species. Over a growing season, garden-grown plots of aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas) were subjected to three distinct drought treatments (moderate, severe, severe with rehydration) relative to a control plot to test model predictions. Model outputs of predicted E, A, and xylem pressure can be directly compared to both continuous data (whole-tree sap flux, soil moisture) and point measurements (leaf-level E, A, xylem pressure). The model also predicts levels of whole-tree hydraulic impairment expected to increase mortality risk; this threshold is used to estimate survivorship in the drought treatment plots. The model can be run at two scales, either entirely from climate (meteorological inputs, irrigation) or using the physiological measurements as a starting point. These data will be used to study model performance and utility, and to aid in developing the model for larger-scale applications.
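The gain/risk idea can be sketched numerically. The functional forms and constants below are assumptions for illustration, not the authors' parameterization:

```python
# Hedged sketch of a gain/risk stomatal optimum: carbon gain saturates with
# stomatal conductance g while hydraulic risk rises steeply toward the
# cavitation limit g_max. Both curves are toy, normalized forms.
def optimal_conductance(g_max=0.5, steps=2000):
    best_g, best_profit = 0.0, float("-inf")
    for i in range(1, steps + 1):
        g = g_max * i / steps
        gain = g / (g + 0.1)          # saturating, normalized A(g) (assumed form)
        risk = (g / g_max) ** 3       # normalized cavitation risk (assumed form)
        if gain - risk > best_profit:
            best_g, best_profit = g, gain - risk
    return best_g
```

The optimum sits well below g_max: marginal carbon gain falls off while marginal hydraulic risk climbs, which is the qualitative behavior the experiment tests against sap-flux and xylem-pressure data.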

  15. A points-based algorithm for prognosticating clinical outcome of Chiari malformation Type I with syringomyelia: results from a predictive model analysis of 82 surgically managed adult patients.

    Science.gov (United States)

    Thakar, Sumit; Sivaraju, Laxminadh; Jacob, Kuruthukulangara S; Arun, Aditya Atal; Aryan, Saritha; Mohan, Dilip; Sai Kiran, Narayanam Anantha; Hegde, Alangar S

    2018-01-01

    OBJECTIVE Although various predictors of postoperative outcome have been previously identified in patients with Chiari malformation Type I (CMI) with syringomyelia, there is no known algorithm for predicting a multifactorial outcome measure in this widely studied disorder. Using one of the largest preoperative variable arrays used so far in CMI research, the authors attempted to generate a formula for predicting postoperative outcome. METHODS Data from the clinical records of 82 symptomatic adult patients with CMI and altered hindbrain CSF flow who were managed with foramen magnum decompression, C-1 laminectomy, and duraplasty over an 8-year period were collected and analyzed. Various preoperative clinical and radiological variables in the 57 patients who formed the study cohort were assessed in a bivariate analysis to determine their ability to predict clinical outcome (as measured on the Chicago Chiari Outcome Scale [CCOS]) and the resolution of syrinx at the last follow-up. The variables that were significant in the bivariate analysis were further analyzed in a multiple linear regression analysis. Different regression models were tested, and the model with the best prediction of CCOS was identified and internally validated in a subcohort of 25 patients. RESULTS There was no correlation between CCOS score and syrinx resolution (p = 0.24) at a mean ± SD follow-up of 40.29 ± 10.36 months. Multiple linear regression analysis revealed that the presence of gait instability, obex position, and the M-line-fourth ventricle vertex (FVV) distance correlated with CCOS score, while the presence of motor deficits was associated with poor syrinx resolution (p ≤ 0.05). The algorithm generated from the regression model demonstrated good diagnostic accuracy (area under curve 0.81), with a score of more than 128 points demonstrating 100% specificity for clinical improvement (CCOS score of 11 or greater). The model had excellent reliability (κ = 0.85) and was validated with
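The form of such a points-based algorithm can be sketched as follows; the weights are purely hypothetical and are not the published coefficients (the study's predictors were gait instability, obex position, and the M-line-FVV distance, with a validated cutoff of 128 points):

```python
# Hypothetical points score of the same shape as a regression-derived
# algorithm: weighted predictors summed, then compared with a cutoff.
# The weights 40, 5, and 3 are invented for illustration only.
def points_score(gait_instability, obex_mm, mline_fvv_mm):
    score = 0
    score += 40 if gait_instability else 0   # hypothetical weight
    score += 5 * obex_mm                     # hypothetical weight
    score += 3 * mline_fvv_mm                # hypothetical weight
    return score
```

In the published model, a score above the cutoff predicted clinical improvement (CCOS score of 11 or greater) with 100% specificity.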

  16. Flying Training Capacity Model: Initial Results

    National Research Council Canada - National Science Library

    Lynch, Susan

    2005-01-01

    OBJECTIVE: (1) Determine the flying training capacity for six bases: Sheppard AFB, Randolph AFB, Moody AFB, Columbus AFB, Laughlin AFB, and Vance AFB. (2) Develop a versatile flying training capacity simulation model for AETC...

  17. Burden and outcomes of pressure ulcers in cancer patients receiving the Kerala model of home based palliative care in India: Results from a prospective observational study

    Directory of Open Access Journals (Sweden)

    Biji M Sankaran

    2015-01-01

    Full Text Available Aim: To report the prevalence and outcomes of pressure ulcers (PU) seen in a cohort of cancer patients requiring home-based palliative care. Materials and Methods: All patients referred for home care were eligible for this prospective observational study, provided they were living within a distance of 35 km from the institute and gave informed consent. During each visit, caregivers were trained and educated in providing nursing care for the patient. Dressing material for PU care was provided to all patients free of cost and care methods were demonstrated. Factors influencing the occurrence and healing of PUs were analyzed using logistic regression. Duration of healing of PUs was calculated using the Kaplan-Meier method. P < 0.05 was taken as significant. Results: Twenty-one of 108 (19.4%) enrolled patients had PU at the start of home care services. None of the patients developed new PU during the course of home care. Complete healing of PU was seen in 9 (42.9%) patients. The median duration for healing of PU was found to be 56 days. Median expenditure incurred in patients with PU was Rs. 2323.40, with a median daily expenditure of Rs. 77.56. Conclusions: The present model of home care service delivery was found to be effective in the prevention and management of PUs. The high prevalence of PU in this cohort indicates a need for greater awareness of this complication. Clinical Trial Registry Number: CTRI/2014/03/004477
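The Kaplan-Meier estimate of median healing time mentioned above can be sketched as follows, with toy data rather than the study's records:

```python
# Minimal Kaplan-Meier sketch: `times` in days, `events` flags
# (1 = ulcer healed, 0 = censored, e.g. patient died before healing).
def km_median(times, events):
    data = sorted(zip(times, events))
    surv, at_risk = 1.0, len(data)
    for t, event in data:
        if event:
            surv *= (at_risk - 1) / at_risk  # step down only at event times
        at_risk -= 1
        if surv <= 0.5:
            return t  # first time survival drops to 0.5 or below
    return None       # median not reached
```

Censored observations leave the survival curve flat but still shrink the risk set, which is why the median differs from a naive median of healing times.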

  18. New results in the Dual Parton Model

    International Nuclear Information System (INIS)

    Van, J.T.T.; Capella, A.

    1984-01-01

    In this paper, the similarity between the x distributions for particle production and the fragmentation functions observed in e+e- collisions and in deep inelastic scattering is presented. Based on this observation, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high-energy collisions. 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topologies contribute to the cross sections in a way that preserves unitarity. 3. Regge behaviour and duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts in the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions

  19. Exploratory modeling and simulation to support development of motesanib in Asian patients with non-small cell lung cancer based on MONET1 study results.

    Science.gov (United States)

    Claret, L; Bruno, R; Lu, J-F; Sun, Y-N; Hsu, C-P

    2014-04-01

    The motesanib phase III MONET1 study failed to show improvement in overall survival (OS) in non-small cell lung cancer, but a subpopulation of Asian patients had a favorable outcome. We performed exploratory modeling and simulations based on MONET1 data to support further development of motesanib in Asian patients. A model-based estimate of time to tumor growth was the best of tested tumor size response metrics in a multivariate OS model (P Simulations indicated that a phase III study in 500 Asian patients would exceed 80% power to confirm superior efficacy of motesanib combination therapy (expected HR: 0.74), suggesting that motesanib combination therapy may benefit Asian patients.

  20. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. Results show that, in this model, Roper resonances do not appear. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a long-range force. However, the model does have strengths. It is free from center-of-mass spuriousness and allows a democratic handling of flavor. It has few parameters, and its predictions are very good [fr

  1. Medium-dose-rate brachytherapy of cancer of the cervix: preliminary results of a prospectively designed schedule based on the linear-quadratic model

    International Nuclear Information System (INIS)

    Leborgne, Felix; Fowler, Jack F.; Leborgne, Jose H.; Zubizarreta, Eduardo; Curochquin, Rene

    1999-01-01

    Purpose: To compare results and complications of our previous low-dose-rate (LDR) brachytherapy schedule for early-stage cancer of the cervix with a prospectively designed medium-dose-rate (MDR) schedule based on the linear-quadratic (LQ) model. Methods and Materials: A combination of brachytherapy and external beam pelvic and parametrial irradiation was used in 102 consecutive Stage Ib-IIb LDR-treated patients (1986-1990) and 42 equally staged MDR-treated patients (1994-1996). The planned MDR schedule consisted of three insertions on three treatment days, with six 8-Gy brachytherapy fractions to Point A, two on each treatment day with an interfraction interval of 6 hours, plus an 18-Gy external whole-pelvic dose followed by additional parametrial irradiation. The calculated biologically effective dose (BED) for tumor was 90 Gy10 and for rectum below 125 Gy3. Results: In practice the MDR brachytherapy schedule achieved a tumor BED of 86 Gy10 and a rectal BED of 101 Gy3. The latter was better than originally planned due to a reduction from 85% to 77% in the percentage of the mean dose to the rectum relative to Point A. The mean overall treatment time was 10 days shorter for MDR than for LDR. The 3-year actuarial central control for LDR and MDR was 97% and 98% (p = NS), respectively. The Grade 2 and 3 late complications (scale 0 to 3) were 1% and 2.4%, respectively, for LDR (3-year) and MDR (2-year). Conclusions: LQ is a reliable tool for designing new schedules with altered fractionation and dose rates. The MDR schedule has proven to be an equivalent treatment schedule compared with LDR, with the additional advantage of a shorter overall treatment time. The mean rectal BED in Gy3 was lower than expected.
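The BED values quoted above follow from the standard LQ formula BED = n·d·(1 + d/(α/β)). A minimal check with the schedule's brachytherapy component:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) from the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Six 8-Gy brachytherapy fractions at a tumor alpha/beta of 10 Gy:
# bed(6, 8, 10) ≈ 86.4 Gy10, consistent with the ~86 Gy10 reported above.
```

Dose-rate and repair corrections refine this for clinical use, but the fractionation term alone reproduces the quoted tumor BED.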

  2. Evaluation of a comprehensive EHR based on the DeLone and McLean model for IS success: approach, results, and success factors.

    Science.gov (United States)

    Bossen, Claus; Jensen, Lotte Groth; Udsen, Flemming Witt

    2013-10-01

    difficult, but was required because a key role of the evaluation was to inform decision-making upon enrollment at other hospitals and to systematically identify barriers in this respect. The strength of the evaluation is the mixed-methods approach. Further, the evaluation was based on assessments from staff in two departments that comprise around 50% of hospital staff. A weakness may be that staff assessment plays a major role in the interviews and the survey; these, though, are supplemented by performance data and observation. Also, the evaluation primarily reports upon the dimension 'user satisfaction', since use of the EHR is mandatory. Finally, generalizability may be low, since the evaluation was not based on a validated survey. All in all, however, the evaluation proposes an evaluation design for constrained circumstances. Despite inherent limitations, evaluation of a comprehensive EHR shortly after implementation may be necessary, can be conducted, and may inform political decision-making. The updated DeLone and McLean framework was constructive in the overall design of the evaluation of the EHR implementation and allowed the model to be adapted to the health care domain by being methodologically flexible. The mixed-methods case study produced valid and reliable results, and was accepted by staff, system providers, and political decision makers. The successful implementation may be attributed to the configurability of the EHR and to factors such as an experienced, competent implementation organization at the hospital, upgraded soft- and hardware, and a high degree of user involvement. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Prospects of an alternative treatment against Trypanosoma cruzi based on abietic acid derivatives show promising results in Balb/c mouse model.

    Science.gov (United States)

    Olmo, F; Guardia, J J; Marin, C; Messouri, I; Rosales, M J; Urbanová, K; Chayboun, I; Chahboun, R; Alvarez-Manzaneda, E J; Sánchez-Moreno, M

    2015-01-07

    Chagas disease, caused by the protozoan parasite Trypanosoma cruzi, is an example of extended parasitaemia with unmet medical needs. Current treatments based on the dated drugs benznidazole (Bz) and nifurtimox are expensive and do not fulfil the criteria of effectiveness and lack of toxicity expected of modern drugs. In this work, a group of abietic acid derivatives that are chemically stable and well characterised are introduced as candidates for the treatment of Chagas disease. In vitro and in vivo assays were performed in order to test the effectiveness of these compounds. Finally, those which showed the best activity underwent additional studies in order to elucidate the possible mechanism of action. In vitro results indicated that some compounds combine low toxicity (i.e. >150 μM against Vero cells) with high efficacy (i.e. <20 μM) against some forms of T. cruzi. Further in vivo studies in mouse models confirmed the expected improvements in infected mice. In vivo tests on the acute phase gave parasitaemia inhibition values higher than those of Bz, and a remarkable decrease in the reactivation of parasitaemia was found in the chronic phase after immunosuppression of the mice treated with one of the compounds. The morphological alterations found in parasites treated with our derivatives confirmed extensive damage; energetic metabolism disturbances were also registered by (1)H NMR. The demonstrated in vivo activity and low toxicity, together with the use of affordable starting products and the lack of synthetic complexity, put these abietic acid derivatives in a remarkable position for the development of an anti-Chagasic agent. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  4. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
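The graphical-interpretation steps above rest on computing predicted probabilities from the fitted coefficients. A hedged sketch, with hypothetical coefficients (one vector per non-base outcome, base category's utility normalized to zero):

```python
import math

# Sketch of turning fitted multinomial-logit coefficients into predicted
# probabilities. The coefficient values passed in are illustrative only.
def mnl_probs(x, betas):
    """x: covariate vector; betas: one coefficient vector per non-base outcome."""
    utilities = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(utilities)
    exp_u = [math.exp(u - m) for u in utilities]  # numerically stable softmax
    total = sum(exp_u)
    return [e / total for e in exp_u]
```

Sweeping one covariate in `x` while holding the rest fixed traces out the kind of predicted-probability curve the article recommends plotting.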

  5. Linkage of PRA models. Phase 1, Results

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas related to linking analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty of trying to provide a generic classification scheme to group plants based upon a particular plant attribute.

  7. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

    Auroral precipitation models have been valuable both for space weather applications and for space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of the models.

  8. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  9. Teaching-based research: Models of and experiences with students doing research and inquiry – results from a university-wide initiative in a research-intensive environment

    DEFF Research Database (Denmark)

    Rump, Camilla Østerberg; Damsholt, Tine; Sandberg, Marie

    A two-dimensional model distinguishes between different research-based forms of teaching: Research-led: students are mainly an audience, emphasis on research content • students learn about current research in the discipline. Research-oriented: students are mainly an audience, emphasis on research processes and problems • students develop research skills and techniques. Research-based: students are active, emphasis on research processes and problems • students undertake research and inquiry. Research-tutored: students are active, emphasis on research content • students engage in research discussions. Some case studies relate to forms where students coproduce knowledge together with teachers. Two case studies, (3) and (4), relate to students engaging in research-like activities, where students are engaged in inquiry but do not produce new knowledge as such. One project was done across faculties (3), one was done...

  10. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high-pressure pneumatic test of the SCV model and the results of that test. The results are compared with pretest predictions performed by the sponsoring organizations and by others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified, and plans for posttest analysis are discussed.

  11. Loss of spastin function results in disease-specific axonal defects in human pluripotent stem cell-based models of hereditary spastic paraplegia

    Science.gov (United States)

    Denton, Kyle R.; Lei, Ling; Grenier, Jeremy; Rodionov, Vladimir; Blackstone, Craig; Li, Xue-Jun

    2013-01-01

    Human neuronal models of hereditary spastic paraplegias (HSP) that recapitulate disease-specific axonal pathology hold the key to understanding why certain axons degenerate in patients and to developing therapies. SPG4, the most common form of HSP, is caused by autosomal dominant mutations in the SPAST gene, which encodes the microtubule-severing ATPase spastin. Here, we have generated a human neuronal model of SPG4 by establishing induced pluripotent stem cells (iPSCs) from an SPG4 patient and differentiating these cells into telencephalic glutamatergic neurons. The SPG4 neurons displayed a significant increase in axonal swellings, which stained strongly for mitochondria and tau, indicating the accumulation of axonal transport cargoes. In addition, mitochondrial transport was decreased in SPG4 neurons, revealing that these patient iPSC-derived neurons recapitulate disease-specific axonal phenotypes. Interestingly, spastin protein levels were significantly decreased in SPG4 neurons, supporting a haploinsufficiency mechanism. Furthermore, cortical neurons derived from spastin-knockdown human embryonic stem cells (hESCs) exhibited similar axonal swellings, confirming that the axonal defects can be caused by loss of spastin function. These spastin-knockdown hESCs serve as an additional model for studying HSP. Finally, levels of stabilized acetylated-tubulin were significantly increased in SPG4 neurons. Vinblastine, a microtubule-destabilizing drug, rescued this axonal swelling phenotype in neurons derived from both SPG4 iPSCs and spastin-knockdown hESCs. Thus, this study demonstrates the successful establishment of human pluripotent stem cell-based neuronal models of SPG4, which will be valuable for dissecting the pathogenic cellular mechanisms and screening compounds to rescue the axonal degeneration in HSP. PMID:24123785

  12. The EURAD model: Design and first results

    International Nuclear Information System (INIS)

    1989-01-01

    The contributions are abridged versions of lectures delivered on the occasion of the presentation meeting of the EURAD project on the 20th and 21st of February 1989 in Cologne. EURAD stands for European Acid Deposition Model. The project takes one of the possible and necessary ways to search for scientific answers to the questions raised by the anthropogenic modification of the atmosphere. One of the objectives is to develop a realistic numerical model of the long-distance transport of harmful substances in the troposphere over Europe and to use this model not only for the investigation of pollutant distributions but also in support of their experimental study. The EURAD model consists of two parts: a meteorological mesoscale model and a chemical transport model. In the first part of the presentation, these parts are introduced and questions concerning the implementation of the entire model on the CRAY X-MP/22 computer system are discussed. Afterwards, results of the test calculations for the cases 'Chernobyl' and 'Alpex' are reported. Thereafter, selected problems concerning the treatment of meteorological and air-chemistry processes, as well as the parametrization of subscale processes within the model, are discussed. The presentation concludes with two lectures on emission evaluations and emission scenarios. (orig./KW) [de

  13. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  14. Blood gas sample spiking with total parenteral nutrition, lipid emulsion, and concentrated dextrose solutions as a model for predicting sample contamination based on glucose result.

    Science.gov (United States)

    Jara-Aguirre, Jose C; Smeets, Steven W; Wockenfus, Amy M; Karon, Brad S

    2018-05-01

    Evaluate the effects of blood gas sample contamination with total parenteral nutrition (TPN)/lipid emulsion and dextrose 50% (D50) solutions on blood gas and electrolyte measurement; and determine whether glucose concentration can predict blood gas sample contamination with TPN/lipid emulsion or D50. Residual lithium heparin arterial blood gas samples were spiked with TPN/lipid emulsion (0 to 15%) and D50 solutions (0 to 2.5%). Blood gases (pH, pCO2, pO2), electrolytes (Na+, K+, ionized calcium) and hemoglobin were measured with a Radiometer ABL90. Glucose concentration was measured in separated plasma by Roche Cobas c501. A chart review of neonatal blood gas results with glucose >300 mg/dL (>16.65 mmol/L) over a seven-month period was performed to determine whether repeat (within 4 h) blood gas results suggested pre-analytical errors in blood gas results. Results were used to determine whether a glucose threshold could predict contamination resulting in blood gas and electrolyte results with greater than laboratory-defined allowable error. Samples spiked with 5% or more TPN/lipid emulsion solution or 1% D50 showed glucose concentrations >500 mg/dL (>27.75 mmol/L) and produced blood gas (pH, pO2, pCO2) results with greater than laboratory-defined allowable error. TPN/lipid emulsion, but not D50, produced greater than allowable error in electrolyte (Na+, K+, Ca++, Hb) results at these concentrations. Based on a chart review of 144 neonatal blood gas results with glucose >250 mg/dL received over seven months, four of ten neonatal intensive care unit (NICU) patients with glucose results >500 mg/dL and repeat blood gas results within 4 h had results highly suggestive of pre-analytical error. Only 3 of 36 NICU patients with glucose results 300-500 mg/dL and repeat blood gas results within 4 h had clear pre-analytical errors in blood gas results. Glucose concentration can be used as an indicator of significant blood sample contamination with either TPN
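The spiking arithmetic behind the glucose threshold is simple volume-fraction mixing. A sketch, using the fact that D50 is 50 g/dL (50,000 mg/dL) of glucose; the baseline value is an assumed normal sample:

```python
# Glucose of a sample after a volume fraction `fraction` of additive mixes in.
def spiked_glucose(baseline_mg_dl, additive_mg_dl, fraction):
    return (1 - fraction) * baseline_mg_dl + fraction * additive_mg_dl

D50_GLUCOSE = 50_000  # mg/dL: dextrose 50% = 50 g glucose per dL

# 1% D50 contamination of a sample with a baseline glucose of 100 mg/dL
# yields about 599 mg/dL, well above the study's 500 mg/dL flag.
```

Because the additive's glucose concentration is hundreds of times the physiological level, even a 1% contamination dominates the measured value, which is why glucose works as a contamination flag.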

  15. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  16. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
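
    As a reminder of the structure of the models mentioned, the USLE estimates long-term average annual soil loss as a product of factors. A minimal sketch follows; the variable names are the standard USLE notation, but the factor values are hypothetical, chosen only to illustrate the calculation.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (e.g. t/ha/yr) as the USLE factor product:
    rainfall erosivity (R), soil erodibility (K), slope length-steepness (LS),
    cover-management (C) and support practice (P)."""
    return R * K * LS * C * P

# Hypothetical factor values for a bare fallow plot (C = P = 1 by definition
# for bare fallow; the other numbers are illustrative only):
A = usle_soil_loss(R=1500.0, K=0.03, LS=1.2, C=1.0, P=1.0)
print(A)  # 54.0
```

    Climate change enters such models mainly through the erosivity factor R, which is driven by rainfall amount and intensity.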

  17. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with a ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure yielding error estimates of the parameters, correlation coefficients and also, as a measure for the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual porosity media models fit the experimental data significantly better than the single porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm that is responsible for the tailing part of the breakthrough curves. The extracted parameter values for some models for both the linear and non-linear (Freundlich) sorption isotherms are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
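
    The fitting procedure described, minimizing a χ² merit function with a Marquardt-Levenberg algorithm, can be sketched generically. The example below fits a power-law "tail" to synthetic data; the model form, parameter values and noise level are assumptions for illustration, not the INTRAVAL data or models.

```python
import numpy as np

def lm_fit(model, p0, t, y, sigma, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt fit minimizing the chi^2 merit function
    chi^2 = sum(((y_i - model(t_i, p)) / sigma_i)**2)."""
    p = np.asarray(p0, dtype=float)

    def residuals(p):
        return (y - model(t, p)) / sigma

    r = residuals(p)
    for _ in range(n_iter):
        # forward-difference Jacobian of the residuals
        J = np.empty((t.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-7 * max(1.0, abs(p[j]))
            J[:, j] = (residuals(p + dp) - r) / dp[j]
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        r_new = residuals(p + step)
        if r_new @ r_new < r @ r:      # accept step: damp less
            p, r, lam = p + step, r_new, lam * 0.3
        else:                          # reject step: damp more
            lam *= 10.0
    return p, float(r @ r)

# Synthetic "breakthrough tail" c(t) = a * t**-b, a classic matrix-diffusion
# signature (all values here are illustrative, not experimental data):
rng = np.random.default_rng(0)
t = np.linspace(1.0, 20.0, 40)
sigma = np.full_like(t, 0.02)
y = 5.0 * t ** -1.5 + rng.normal(0.0, 0.02, t.size)

p_fit, chi2 = lm_fit(lambda t, p: p[0] * t ** -p[1], [1.0, 1.0], t, y, sigma)
print(p_fit.round(2))  # close to [5.0, 1.5]
```

    The minimum χ² value plays the role of the goodness-of-fit measure cited in the abstract, and the curvature matrix JᵀJ at the minimum yields the parameter error estimates and correlation coefficients.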

  18. Results-based Rewards - Leveraging Wage Increases?

    DEFF Research Database (Denmark)

    Bregn, Kirsten

    2005-01-01

    A good seven years ago, as a part of a large-scale pay reform, the Danish public sector introduced results-based rewards (RBR), i.e. a pay component awarded for achieving or exceeding targets set in advance. RBR represent a possibility for combining wage-earners' interests in higher wages with a ... limited use of RBR, illustrated with examples. The Danish experiences should give food for thought, given that pay systems used by the public sector are currently under transformation in practically all OECD countries.

  19. [The PROPRESE trial: results of a new health care organizational model in primary care for patients with chronic coronary heart disease based on a multifactorial intervention].

    Science.gov (United States)

    Ruescas-Escolano, Esther; Orozco-Beltran, Domingo; Gaubert-Tortosa, María; Navarro-Palazón, Ana; Cordero-Fort, Alberto; Navarro-Pérez, Jorge; Carratalá-Munuera, Concepción; Pertusa-Martínez, Salvador; Soler-Bahilo, Enrique; Brotons-Muntó, Francisco; Bort-Cubero, Jose; Núñez-Martínez, Miguel A; Bertomeu-Martínez, Vicente; López-Pineda, Adriana; Gil-Guillén, Vicente F

    2014-06-01

    Comparison of the results from the EUROASPIRE I to the EUROASPIRE III, in patients with coronary heart disease, shows that the prevalence of uncontrolled risk factors remains high. The aim of the study was to evaluate the effectiveness of a new multifactorial intervention in order to improve health care for chronic coronary heart disease patients in primary care. In this randomized clinical trial with a 1-year follow-up period, we recruited patients with a diagnosis of coronary heart disease (145 for the intervention group and 1461 for the control group). An organizational intervention on the patient-professional relationship (centered on the Chronic Care Model, the Stanford Expert Patient Programme and the Kaiser Permanente model) and a training strategy for professionals were carried out. The main outcomes were smoking control, low-density lipoprotein cholesterol (LDL-C), systolic blood pressure (SBP) and diastolic blood pressure (DBP). A multivariate analysis was performed. The characteristics of patients were: age (68.4±11.8 years), male (71.6%), having diabetes mellitus (51.3%), dyslipidemia (68.5%), arterial hypertension (76.7%), non-smokers (76.1%); LDL-C ... cardiovascular risk factors control (smoking, LDL-C and SBP). Chronic care strategies may be an effective tool to help clinicians involve patients with a diagnosis of CHD and reach better outcomes. Copyright © 2014 Elsevier España, S.L. All rights reserved.

  20. Spatial variability and trends in Younger Dryas equilibrium line altitudes across the European Alps using a hypsometrically based ELA model: results and implications

    Science.gov (United States)

    Keeler, D. G.; Rupper, S.; Schaefer, J. M.; Finkel, R. C.; Maurer, J. M.

    2016-12-01

    Alpine glaciers constitute an important component of terrestrial paleoclimate records due to, among other characteristics, their high sensitivity to climate change, near global extent, and their integration of myriad climate variables into a single, easily detected signal. Because the glacier equilibrium line altitude (ELA) provides a more explicit representation of climate than many other glacier properties, ELA methods allow for more direct comparisons of multiple glaciers within or between regions. Such comparisons allow for more complete investigations of the ultimate causes of mountain glaciation during specific events. Many studies however tend to focus on a limited number of sites, and employ a large variety of different techniques for ELA reconstruction between studies, making wider climate implications more tenuous. Methods of ELA reconstruction that can be rapidly and consistently applied to an arbitrary number of paleo-glaciers would provide a more accurate portrayal of the changes in climate across a given region. Here we present ELA reconstructions from Egesen Stadial moraines across the European Alps using an ELA model accounting for differences in glacier width, glacier shape, bed topography, ice thickness, and glacier length, including several glaciers constrained to the Younger Dryas using surface exposure dating techniques. We compare reconstructed Younger Dryas ELA values to modern ELA values using the same model, or using end of summer snowline estimates where no glacier is currently present. We further provide uncertainty estimates on the ΔELA using bootstrapped Monte Carlo simulations for the various input parameters. Preliminary results compare favorably to previous glacier studies of the European Younger Dryas, but provide greater context from many glaciers across the region as a whole. 
Such results allow for a more thorough investigation of the spatial variability and trends in climate during the Younger Dryas across the European Alps, and
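
    The bootstrapped Monte Carlo treatment of input-parameter uncertainty can be illustrated with a simpler ELA estimator. The sketch below uses the accumulation-area ratio (AAR) method on a toy hypsometry; the hypsometry, the AAR value and its spread are assumptions for illustration, not the study's model or data.

```python
import numpy as np

def aar_ela(band_alts, band_areas, aar):
    """ELA by the accumulation-area ratio method: the altitude at which the
    cumulative area, summed from the glacier top downward, first reaches
    `aar` of the total area. `band_alts` are band midpoints (descending)."""
    frac = np.cumsum(band_areas) / np.sum(band_areas)
    return band_alts[np.searchsorted(frac, aar)]

# Toy hypsometry: 100 m bands from 3000 m down to 2100 m (areas in km^2).
alts = np.arange(3000, 2000, -100)
areas = np.array([0.2, 0.5, 0.9, 1.2, 1.4, 1.2, 0.8, 0.5, 0.3, 0.1])

# Monte Carlo over an assumed AAR uncertainty (0.65 +/- 0.05):
rng = np.random.default_rng(1)
elas = [aar_ela(alts, areas, a) for a in rng.normal(0.65, 0.05, 5000)]
lo, hi = np.percentile(elas, [2.5, 97.5])
print(int(np.median(elas)), int(lo), int(hi))
```

    In the same way, resampling each uncertain input of a more complete ELA model yields a distribution of ΔELA values from which confidence bounds can be read off.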

  1. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches.

  2. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  3. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
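
    The compression idea can be made concrete: instead of testing all 2^n subsets of data points, each point is checked against a small set of candidate models, and points compatible with the same model are treated as related. A toy sketch follows; the line models, tolerance and data points are all hypothetical.

```python
# Group n observations by compatibility with m candidate models in O(n*m)
# checks, rather than examining all 2**n subsets of observations.
# Toy models: lines y = a*x + b, with a fixed compatibility tolerance.

points = [(0, 0.1), (1, 1.0), (2, 2.1), (0, 5.0), (1, 4.1), (2, 2.9)]
models = {"rising": (1.0, 0.0), "falling": (-1.0, 5.0)}

def compatible(model, pt, tol=0.3):
    """A point supports a line model if it lies within `tol` of the line."""
    a, b = model
    x, y = pt
    return abs(y - (a * x + b)) <= tol

groups = {name: [p for p in points if compatible(m, p)]
          for name, m in models.items()}
print({k: len(v) for k, v in groups.items()})  # {'rising': 3, 'falling': 3}
```

    A point that fits no current model is exactly the "unusual or unanticipated" case: either something out of the norm is happening, or the model set needs revision.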

  4. Cryptography based on neural networks - analytical results

    International Nuclear Information System (INIS)

    Rosen-Zvi, Michal; Kanter, Ido; Kinzel, Wolfgang

    2002-01-01

    The mutual learning process between two parity feed-forward networks with discrete and continuous weights is studied analytically, and we find that the number of steps required to achieve full synchronization between the two networks in the case of discrete weights is finite. The synchronization process is shown to be non-self-averaging and the analytical solution is based on random auxiliary variables. The learning time of an attacker that is trying to imitate one of the networks is examined analytically and is found to be much longer than the synchronization time. Analytical results are found to be in agreement with simulations. (letter to the editor)
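
    The setup analyzed, mutual learning between two parity feed-forward networks with discrete bounded weights, can be sketched as follows. The network sizes, weight bound and the specific update variant (a Hebbian rule applied only when the two outputs agree) are illustrative choices, not the exact configuration of the paper.

```python
import numpy as np

# Two tree parity machines with K hidden units, N inputs each, and discrete
# weights in [-L, L] exchange outputs on common random inputs. When their
# outputs agree, both apply a Hebbian update to the hidden units that agree
# with the common output; the weights synchronize after finitely many steps.

K, N, L = 3, 11, 3
rng = np.random.default_rng(42)

def tpm_output(W, X):
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1                 # break ties deterministically
    return sigma, int(np.prod(sigma))      # hidden outputs, parity output

WA = rng.integers(-L, L + 1, (K, N))
WB = rng.integers(-L, L + 1, (K, N))

steps = 0
while not np.array_equal(WA, WB) and steps < 200_000:
    X = rng.choice([-1, 1], (K, N))        # common public input
    sA, tA = tpm_output(WA, X)
    sB, tB = tpm_output(WB, X)
    if tA == tB:                           # learn only on agreement
        for W, s, tau in ((WA, sA, tA), (WB, sB, tB)):
            for k in range(K):
                if s[k] == tau:
                    W[k] = np.clip(W[k] + tau * X[k], -L, L)
    steps += 1

print(np.array_equal(WA, WB))
```

    An attacker observing only the inputs and outputs must imitate one network without influencing the other, which is why its learning time is much longer than the mutual synchronization time.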

  5. Employment Effects of Renewable Energy Expansion on a Regional Level—First Results of a Model-Based Approach for Germany

    Directory of Open Access Journals (Sweden)

    Ulrike Lehr

    2012-02-01

    Full Text Available National studies have shown that both gross and net effects of the expansion of energy from renewable sources on employment are positive for Germany. These modeling approaches also revealed that this holds true for both present and future perspectives under certain assumptions on the development of exports, fossil fuel prices and national politics. Yet how are employment effects distributed within Germany? What components contribute to growth impacts on a regional level? To answer these questions, new methods of regionalization were explored and developed for the example “wind energy onshore” for Germany’s federal states. The main goal was to develop a methodology which is applicable to all renewable energy technologies in future research. For the quantification and projection, it was necessary to distinguish between jobs generated by domestic investments and exports on the one hand, and jobs for operation and maintenance of existing plants on the other hand. Further, direct and indirect employment is analyzed. The results show that gross employment is particularly high in the northwestern regions of Germany. However, especially the indirect effects are spread out over the whole country. Regions in the south not only profit from the delivery of specific components, but also from other industry and service inputs.

  6. Discussion of gas trade model (GTM) results

    International Nuclear Information System (INIS)

    Manne, A.

    1989-01-01

    This is in response to your invitation to comment on the structure of GTM and also upon the differences between its results and those of other models participating in EMF9. First, a word on the structure. GTM was originally designed to provide both regional and sectoral detail within the North American market for natural gas at a single point in time, e.g. the year 2000. It is a spatial equilibrium model in which a solution is obtained by maximizing a nonlinear function, the sum of consumer and producer surplus. Since transport costs are included in producers' costs, this formulation automatically ensures that geographical price differentials will not differ by more than transport costs. For purposes of EMF9, GTM was modified to allow for resource development and depletion over time

  7. Analysis of Current and Future SPEI Droughts in the La Plata Basin Based on Results from the Regional Eta Climate Model

    Directory of Open Access Journals (Sweden)

    Alvaro Sordo-Ward

    2017-11-01

    Full Text Available We identified and analysed droughts in the La Plata Basin (divided into seven sub-basins) for the current period (1961–2005) and estimated their expected evolution under future climate projections for the periods 2011–2040, 2041–2070, and 2071–2099. Future climate projections were analysed from results of the Eta Regional Climate Model (grid resolution of approximately 10 km) forced by the global climate model HadGEM2-ES over the La Plata basin, and considering a RCP4.5 emission scenario. Within each sub-basin, we particularly focused our drought analyses on croplands and grasslands, due to their economic relevance. The three-month Standardized Precipitation Evapotranspiration Index (SPEI3) was used for drought identification and characterization. Droughts were evaluated in terms of time (percentage of time from the total length of each climate scenario), space (percentage of total area), and severity (SPEI3 values of cells characterized by cropland and grassland) for each sub-basin and climate scenario. Drought-severity–area–frequency curves were developed to quantitatively relate the frequency distribution of drought occurrence to drought severity and area. For the period 2011–2040, droughts dominate the northern sub-basins, whereas alternating wet and short dry periods dominate the southern sub-basins. Wet climate spread from south to north within the La Plata Basin as more distant future scenarios were analysed, due to both a greater number of wet periods and fewer droughts. The area of each sub-basin affected by drought in all climate scenarios was highly varied temporally and spatially. The likelihood of the occurrence of droughts differed significantly between the studied cover types in the Lower Paraguay sub-basin, being higher for cropland than for grassland. Mainly in the Upper Paraguay and in the Upper Paraná basins, the climate projections for all scenarios showed an increase of moderate and severe droughts over large regions
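
    The SPEI3 computation can be sketched in simplified form: aggregate the monthly climatic water balance (precipitation minus potential evapotranspiration) over three months, then standardize. The operational SPEI fits a log-logistic distribution per calendar month; the rank-based normal-score transform below is a simplified stand-in, and the input series is synthetic.

```python
import numpy as np
from statistics import NormalDist

def spei_like(balance, window=3):
    """Simplified SPEI: rolling `window`-month sums of the climatic water
    balance (P - PET), transformed to standard-normal scores via ranks.
    (Operational SPEI instead fits a log-logistic distribution per month.)"""
    s = np.convolve(balance, np.ones(window), mode="valid")  # rolling sums
    ranks = s.argsort().argsort() + 1                        # 1..n
    probs = (ranks - 0.44) / (len(s) + 0.12)                 # Gringorten positions
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in probs])

# Synthetic monthly P - PET series in mm (values are illustrative only):
rng = np.random.default_rng(3)
wb = rng.normal(0.0, 30.0, 120)
index = spei_like(wb)
drought_months = int(np.sum(index <= -1.0))  # moderate drought or worse
print(index.shape[0], drought_months)
```

    Thresholds on the index (e.g. SPEI3 ≤ -1 for moderate, ≤ -1.5 for severe drought) then give the time and area fractions in drought for each scenario.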

  8. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  9. Modeling dry and wet deposition of sulfate, nitrate, and ammonium ions in Jiuzhaigou National Nature Reserve, China using a source-oriented CMAQ model: Part I. Base case model results.

    Science.gov (United States)

    Qiao, Xue; Tang, Ya; Hu, Jianlin; Zhang, Shuai; Li, Jingyi; Kota, Sri Harsha; Wu, Li; Gao, Huilin; Zhang, Hongliang; Ying, Qi

    2015-11-01

    A source-oriented Community Multiscale Air Quality (CMAQ) model driven by the meteorological fields generated by the Weather Research and Forecasting (WRF) model was used to study the dry and wet deposition of nitrate (NO3(-)), sulfate (SO4(2-)), and ammonium (NH4(+)) ions in the Jiuzhaigou National Nature Reserve (JNNR), China from June to August 2010 and to identify the contributions of different emission sectors and source regions that were responsible for the deposition fluxes. The model performance is evaluated in this paper and the source contribution analyses are presented in a companion paper. The results show that WRF is capable of reproducing the observed precipitation rates with a Mean Normalized Gross Error (MNGE) of 8.1%. Predicted wet deposition fluxes of SO4(2-) and NO3(-) at the Long Lake (LL) site (3100 m a.s.l.) during the three-month episode are 2.75 and 0.34 kg S(N) ha(-1), which agree well with the observed wet deposition fluxes of 2.42 and 0.39 kg S(N) ha(-1), respectively. Temporal variations in the weekly deposition fluxes at LL are also well predicted. Wet deposition flux of NH4(+) at LL is over-predicted by approximately a factor of 3 (1.60 kg N ha(-1) vs. 0.56 kg N ha(-1)), likely due to missing alkaline earth cations such as Ca(2+) in the current CMAQ simulations. Predicted wet deposition fluxes are also in general agreement with observations at four Acid Deposition Monitoring Network in East Asia (EANET) sites in western China. Predicted dry deposition fluxes of SO4(2-) (including gas deposition of SO2) and NO3(-) (including gas deposition of HNO3) are 0.12 and 0.12 kg S(N) ha(-1) at LL and 0.07 and 0.08 kg S(N) ha(-1) at Jiuzhaigou Bureau (JB) in JNNR, respectively, which are much lower than the corresponding wet deposition fluxes. Dry deposition flux of NH4(+) (including gas deposition of NH3) is 0.21 kg N ha(-1) at LL, and is also much lower than the predicted wet deposition flux.
For both dry and wet deposition fluxes, predictions
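
    The MNGE statistic used to evaluate the precipitation predictions is straightforward to compute. A minimal sketch with hypothetical prediction-observation pairs (the numbers below are illustrative, not data from the study):

```python
import numpy as np

def mnge(pred, obs):
    """Mean Normalized Gross Error: the average of |pred - obs| / obs over
    all prediction-observation pairs, reported here as a percentage."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs(pred - obs) / obs)) * 100.0

# Hypothetical weekly precipitation pairs (mm), for illustration only:
observed  = [10.0, 20.0, 40.0, 8.0]
predicted = [11.0, 22.0, 36.0, 8.0]
print(round(mnge(predicted, observed), 1))  # 7.5
```

    Note that each error is normalized by its own observation, so MNGE requires strictly positive observed values.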

  10. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  11. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  12. Slice-based supine to standing postured deformation for chinese anatomical models and the dosimetric results by wide band frequency electromagnetic field exposure: Morphing

    International Nuclear Information System (INIS)

    Wu, T.; Tan, L.; Shao, Q.; Li, Y.; Yang, L.; Zhao, C.; Xie, Y.; Zhang, S.

    2013-01-01

    Digital human models are frequently obtained from supine-postured medical images or cadaver slices, but many applications require standing models. This paper presents the work of reconstructing standing Chinese adult anatomical models from supine-postured slices. In contrast to previous studies, the deformation works on 2-D segmented slices. The surface profile of the standing posture is adjusted by population measurement data. A non-uniform texture amplification approach is applied on the 2-D slices to recover the skin contour and to redistribute the internal tissues. Internal organ shift due to posture is taken into account. The feet are modified by matrix rotation. Then, the supine and standing models are utilised for the evaluation of electromagnetic field exposure over wide band frequency and different incident directions. (authors)

  13. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model

    Science.gov (United States)

    Everaars, Jeroen; Settele, Josef; Dormann, Carsten F.

    2018-01-01

    Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. 
The spatial coverage of pollination increases steeply in response to this ratio

  14. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model.

    Science.gov (United States)

    Everaars, Jeroen; Settele, Josef; Dormann, Carsten F

    2018-01-01

    Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. 
The spatial coverage of pollination increases steeply in response to this ratio

  15. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model.

    Directory of Open Access Journals (Sweden)

    Jeroen Everaars

    Full Text Available Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. 
The spatial coverage of pollination increases steeply in response

  16. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. Educational models of brain anatomy allow neurosurgeons to train without practicing on real patients and may offer a way to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), inserted at different intracranial locations in a cadaveric model to closely mimic meningiomas and other pathological entities of the skull base, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. SRSP has unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via passages that did not influence the routes used for the surgical approach to resect the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through a standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if removal was planned through the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) evaluated it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on the 9 components evaluated (including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability) was 88% (100% being the best possible

  17. Calculation of limits for significant unidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2015-01-01

    BACKGROUND: Reference change values (RCVs) were introduced more than 30 years ago and provide objective tools for assessment of the significance of differences in two consecutive results from an individual. However, in practice, more results are usually available for monitoring, and using the RCV ... Based on ...,000 simulated data from healthy individuals, a series of up to 20 results from an individual was generated using different values for the within-subject biological variation plus the analytical variation. Each new result in this series was compared to the initial measurement result. These successive serial ... the presented factors. The first result is multiplied by the appropriate factor for increase or decrease, which gives the limits for a significant difference.
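The classical two-result RCV that this record builds on can be computed directly. A minimal sketch (the standard textbook formula, not the paper's simulated factors; the example CVs are assumed):

```python
import math

def rcv_limits(cv_a, cv_i, z=1.96):
    """Classical two-sided reference change value, returned as the factors
    that multiply the first result to give the limits for a significant
    increase or decrease.

    cv_a: analytical coefficient of variation (%)
    cv_i: within-subject biological coefficient of variation (%)
    z:    z-score for the chosen probability (1.96 for 95%, two-sided)
    """
    rcv = math.sqrt(2.0) * z * math.sqrt(cv_a ** 2 + cv_i ** 2)  # in percent
    return 1.0 + rcv / 100.0, 1.0 - rcv / 100.0

up, down = rcv_limits(cv_a=3.0, cv_i=5.0)  # roughly +/-16% of the first result
```

The paper's point is that repeatedly comparing each of up to 20 serial results against the first inflates the false-positive rate, so the simulation-derived factors must be wider than this two-result RCV.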

  18. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will comprise the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 Gaseous Hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  19. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  20. Issues in practical model-based diagnosis

    NARCIS (Netherlands)

    Bakker, R.R.; Bakker, R.R.; van den Bempt, P.C.A.; van den Bempt, P.C.A.; Mars, Nicolaas; Out, D.-J.; Out, D.J.; van Soest, D.C.; van Soes, D.C.

    1993-01-01

    The model-based diagnosis project at the University of Twente has been directed at improving the practical usefulness of model-based diagnosis. In cooperation with industrial partners, the research addressed the modeling problem and the efficiency problem in model-based reasoning. Main results of

  1. Slice-based supine-to-standing posture deformation for chinese anatomical models and the dosimetric results with wide band frequency electromagnetic field exposure: Simulation

    International Nuclear Information System (INIS)

    Wu, T.; Tan, L.; Shao, Q.; Li, Y.; Yang, L.; Zhao, C.; Xie, Y.; Zhang, S.

    2013-01-01

    Standing Chinese adult anatomical models are obtained from supine-postured cadaver slices. This paper presents the dosimetric differences between the supine and the standing postures over wide band frequencies and various incident configurations. Both the body-level and the tissue/organ-level differences are reported for plane wave and the 3T magnetic resonance imaging radiofrequency electromagnetic field exposure. The influence of posture on the whole-body specific absorption rate and tissue-specific specific absorption rate values is discussed. (authors)

  2. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2015-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  3. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2012-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  5. Risk based modelling

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Baker, A.E.

    1993-01-01

    Risk based analysis is a tool becoming available to both engineers and managers to aid decision making concerning plant matters such as In-Service Inspection (ISI). In order to develop a risk based method, some form of Structural Reliability Risk Assessment (SRRA) needs to be performed to provide a probability-of-failure ranking for all sites around the plant. A Probabilistic Risk Assessment (PRA) can then be carried out to combine these possible events with the capability of plant safety systems and procedures, to establish the consequences of failure for the sites. In this way the probabilities of failure are converted into a risk-based ranking which can be used to assist in deciding which sites should be included in an ISI programme. This paper reviews the technique and typical results of a risk based ranking assessment carried out for nuclear power plant pipework. (author)
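The ranking logic the abstract describes, SRRA failure probabilities combined with PRA consequence measures, can be sketched as follows; the site names and numbers are purely hypothetical:

```python
# Hypothetical inspection sites: (probability of failure per year from SRRA,
# consequence score from the PRA). Risk = probability x consequence.
sites = {
    "weld A":  (1e-4, 50.0),
    "weld B":  (5e-6, 900.0),
    "elbow C": (2e-5, 10.0),
}

def risk_ranking(sites):
    """Return site names sorted by decreasing risk, for ISI prioritization."""
    ranked = sorted(sites.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)
    return [name for name, _ in ranked]
```

Note how a low-probability site ("weld B") can still outrank a higher-probability one once consequences are folded in; that is the point of combining SRRA with PRA.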

  6. Experimental Results and Model Calculations of a Hybrid Adsorption-Compression Heat Pump Based on a Roots Compressor and Silica Gel-Water Sorption

    Energy Technology Data Exchange (ETDEWEB)

    Van der Pal, M.; De Boer, R.; Wemmers, A.K.; Smeding, S.F.; Veldhuis, J.B.J.; Lycklama a Nijeholt, J.A.

    2013-10-15

    Thermally driven sorption systems can provide significant energy savings, especially in industrial applications. The driving temperature required to operate such systems limits the operating window and can be a barrier to market introduction. By adding a compressor, the sorption cycle can be run using lower waste-heat temperatures. ECN has recently started the development of such a hybrid heat pump. The final goal is to develop a hybrid heat pump for upgrading low-temperature (<100 C) industrial waste heat to above pinch temperatures. The paper presents the first measurements and model calculations of a hybrid heat pump system using a water-silica gel system combined with a Roots-type compressor. The measurements show that the effect of the compressor depends on where in the cycle it is placed. When placed between the evaporator and the sorption reactor, it has a considerably larger effect than when placed between the sorption reactor and the condenser; the latter hardly improves the performance compared to purely heat-driven operation. This shows the importance of studying the interaction between all components of the system. The model, which shows reasonable correlation with the measurements, could prove to be a valuable tool for determining the optimal hybrid heat pump configuration.

  7. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)

  8. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
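The model-based idea, writing down a probabilistic model and letting the inference code follow mechanically from it, can be illustrated with the simplest possible case: a conjugate Beta-Bernoulli model. This is a hand-worked miniature for illustration, not Infer.NET's factor-graph machinery:

```python
def beta_bernoulli_posterior(data, a=1.0, b=1.0):
    """Posterior over a coin's bias under a Beta(a, b) prior.

    Conjugacy makes inference exact: each 0/1 observation simply updates
    the Beta parameters, mirroring how a model-based framework derives
    the inference algorithm from the model specification itself.
    """
    heads = sum(data)
    tails = len(data) - heads
    a_post, b_post = a + heads, b + tails
    mean = a_post / (a_post + b_post)      # posterior mean of the bias
    return a_post, b_post, mean

a_post, b_post, mean = beta_bernoulli_posterior([1, 1, 0, 1, 1, 0, 1])
```

In a probabilistic programming language, only the model (prior plus likelihood) would be written; the update rule above is what the system generates automatically.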

  9. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solution versus pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  10. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups ...
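The scheme in the abstract, label-free clustering first, then a small per-cluster classifier, can be sketched with hypothetical toy documents; the seed vocabularies and the 1-nearest-neighbour "classifier" by word overlap are illustrative stand-ins, not the paper's actual method:

```python
def overlap(a, b):
    """Number of shared words between two bag-of-words sets."""
    return len(a & b)

def build_clusters(docs, seeds):
    # Assign each (words, label) doc to the seed vocabulary it overlaps most;
    # labels are ignored here, as in the paper's unsupervised clustering step.
    clusters = {i: [] for i in range(len(seeds))}
    for words, label in docs:
        best = max(range(len(seeds)), key=lambda i: overlap(words, seeds[i]))
        clusters[best].append((words, label))
    return clusters

def classify(words, clusters, seeds):
    # Route the test example to the nearest cluster, then apply that
    # cluster's own (smaller) model -- here 1-NN by word overlap.
    c = max(range(len(seeds)), key=lambda i: overlap(words, seeds[i]))
    _, label = max(clusters[c], key=lambda dl: overlap(words, dl[0]))
    return label

docs = [
    ({"transfer", "money", "account"}, "suspicious"),
    ({"free", "prize", "money"}, "suspicious"),
    ({"meeting", "agenda", "minutes"}, "normal"),
    ({"lunch", "meeting", "friday"}, "normal"),
]
seeds = [{"money", "account", "prize"}, {"meeting", "agenda", "lunch"}]
clusters = build_clusters(docs, seeds)
```

Each test example only ever touches the documents in one cluster, which is the source of the claimed simplicity and speed.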

  11. CIEMAT model results for Esthwaite Water

    International Nuclear Information System (INIS)

    Aguero, A.; Garcia-Olivares, A.

    2000-01-01

    This study used the transfer model PRYMA-LO, developed by CIEMAT-IMA, Madrid, Spain, to simulate the transfer of Cs-137 in watershed scenarios. The main processes considered by the model include: transfer of the fallout to the ground, incorporation of the fallout radioisotopes into the water flow, and their removal from the system. The model was tested against observation data obtained in water and sediments of Esthwaite Water, Lake District, UK. This comparison made it possible to calibrate the parameters of the model to the specific scenario
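The transfer processes the abstract lists (fallout input, incorporation into the water flow, removal from the system) are commonly represented as first-order terms. A one-box sketch with assumed rate constants, not the actual PRYMA-LO parameterization:

```python
import math

# Cs-137 activity in lake water after a pulse deposition, with first-order
# removal by outflow, net sedimentation, and radioactive decay.
# All rate constants except the decay constant are illustrative assumptions.
LAMBDA_DECAY = math.log(2) / 30.17   # Cs-137 decay constant, 1/yr
K_OUTFLOW = 2.0                      # hydraulic flushing rate, 1/yr (assumed)
K_SED = 0.5                          # net loss to sediments, 1/yr (assumed)

def activity(a0, t):
    """Activity remaining in the water column t years after a pulse a0 (Bq)."""
    k = LAMBDA_DECAY + K_OUTFLOW + K_SED
    return a0 * math.exp(-k * t)
```

With these rates the water column loses most of a deposition pulse within the first year, which is why calibration against observed water and sediment data, as done here for Esthwaite Water, pins down the removal constants.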

  12. LITHOSPHERIC STRUCTURE OF THE CARPATHIAN-PANNONIAN REGION BASED ON THE GRAVITY MODELING BY INTEGRATING THE CELEBRATION2000 SEISMIC EXPERIMENT AND NEW GEOPHYSICAL RESULTS

    Science.gov (United States)

    Bielik, M.; Alasonati Tašárová, Z.; Zeyen, H. J.; Afonso, J.; Goetze, H.; Dérerová, J.

    2009-12-01

    Two different methods for the 3-D interpretation of the gravity field have been applied to the study of the structure and tectonics of the Carpathian-Pannonian lithosphere. The first method provided a set of different stripped gravity maps; the second, a new lithosphere-thickness map. The contribution presents the interpretation of the gravity field, which takes into account the CELEBRATION2000 seismic as well as new geophysical results. The sediment-stripped gravity map is characterized by gravity minima in the Eastern Alps and Western Carpathians, and gravity maxima in the Pannonian Back-arc Basin system and the European platform. The gravity low in the Eastern Alps is produced by the thick crust (more than 45 km). The Western Carpathian gravity minimum is a result of the interference of two main gravitational effects. The first one comes from the low-density sediments of the Outer Western Carpathians and Carpathian Foredeep. The second one is due to the thick low-density upper and middle crust, reaching up to 25 km. In the Pannonian Back-arc Basin system, a regional gravity high can be observed, resulting from the gravity effect of the anomalously shallow Moho. The most dominant feature of the complete 3-D stripped gravity map (crustal gravity effect map) is the abrupt change of the gravity field along the Klippen Belt zone. While the European platform is characterized by positive anomalies, the Western Carpathian orogen and the Pannonian Back-arc Basin system show a relatively long-wavelength gravity low (several hundred kilometers). The lowest values are associated with the thick low-density upper and middle crust of the Inner Western Carpathians. That is why we suggest that the European platform consists of significantly denser crust than the less dense crust of the ALCAPA and Tisza-Dacia microplates. The contrast in the gravity fields over the European platform and the ALCAPA and Tisza-Dacia microplates also reflects their different crustal

  13. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, notably for fixed grid models, which generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  14. Atlas-based functional radiosurgery: Early results

    Energy Technology Data Exchange (ETDEWEB)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy)

    2009-02-15

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment steroids for radiation-induced edema and medications for dystonia and neuropathic pain were suppressed. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment edema had disappeared. Thus, this work shows promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.

  15. Graphical interpretation of numerical model results

    International Nuclear Information System (INIS)

    Drewes, D.R.

    1979-01-01

    Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements

  16. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM. It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is brokered by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to insert core elements of RM, but absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.
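Since the aggregate map identifies CLV as the end result of RM, with loyalty mediating most elements' impact, a standard discounted-margin CLV formula makes that end result concrete. The formula is the textbook retention-based CLV, not one proposed in this study, and the numbers are illustrative:

```python
def clv(margin, retention, discount, periods):
    """Customer lifetime value: discounted sum of expected per-period margins.

    margin:    contribution margin per period while the customer is retained
    retention: per-period retention probability (the loyalty-like mediator)
    discount:  per-period discount rate
    periods:   planning horizon in periods
    """
    return sum(margin * retention ** t / (1.0 + discount) ** t
               for t in range(1, periods + 1))

value = clv(margin=100.0, retention=0.8, discount=0.1, periods=5)
```

Raising the retention parameter lifts every term of the sum, which mirrors the map's finding that most RM elements reach CLV only through loyalty.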

  17. Ignalina NPP Safety Analysis: Models and Results

    International Nuclear Information System (INIS)

    Uspuras, E.

    1999-01-01

    Research directions of the scientific safety analysis group, linked to safety assessment of the Ignalina NPP, are presented: Thermal-hydraulic analysis of accidents and operational transients; Thermal-hydraulic assessment of the Ignalina NPP Accident Localization System and other compartments; Structural analysis of plant components, piping and other parts of the Main Circulation Circuit; Assessment of the RBMK-1500 reactor core; and others. Models and the main work carried out last year are described. (author)

  18. Tests results of skutterudite based thermoelectric unicouples

    International Nuclear Information System (INIS)

    Saber, Hamed H.; El-Genk, Mohamed S.; Caillat, Thierry

    2007-01-01

    Tests were performed on skutterudite-based unicouples with (MAY-04) and without (MAR-03) metallic coating on the legs near the hot junction to quantify the effect of the coating on reducing performance degradation with operation time. The p-legs in the unicouples were made of CeFe3.5Co0.5Sb12 and the n-legs of CoSb3. The MAY-04 test was performed in vacuum (~9 × 10−7 torr) for ~2000 h at hot and cold junction temperatures of 892.1 ± 11.9 K and 316.1 ± 5.5 K, respectively, while the MAR-03 test was performed in argon cover gas (0.051-0.068 MPa) at 972.61 ± 10.0 K and 301.1 ± 5.1 K, respectively. The argon cover gas decreased antimony loss from the legs in the MAR-03 test, but marked degradation in performance occurred over time. Conversely, the metallic coating in the MAY-04 test was very effective in reducing performance degradation of the unicouple. Because the cross-sectional areas of the legs in MAY-04 were larger than those in MAR-03, the measured electrical power of the former is much higher than that of the latter, but the Beginning of Test (BOT) open-circuit voltages, Voc (204.2 mV), for both unicouples were almost the same. The peak electrical power of the MAY-04 unicouple decreased 12.35%, from 1.62 We at BOT to 1.42 We after ~2000 h of testing, while that of the MAR-03 unicouple decreased 25.37%, from 0.67 to 0.5 We after 261 h of testing at the above temperatures. The estimated peak efficiency of the MAY-04 unicouple shortly after BOT (10.65%) was only ~0.37 percentage points lower than the theoretical value, calculated assuming zero side heat losses and zero contact resistance per leg
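The degradation figures quoted in the abstract follow from a simple relative-change calculation on the reported peak powers:

```python
def pct_degradation(p_start, p_end):
    """Relative power loss in percent, from beginning-of-test to end-of-test."""
    return 100.0 * (p_start - p_end) / p_start

coated = pct_degradation(1.62, 1.42)    # MAY-04, coated legs, ~2000 h
uncoated = pct_degradation(0.67, 0.50)  # MAR-03, uncoated legs, 261 h
```

The coated unicouple loses about 12.35% over ~2000 h versus about 25.37% for the uncoated one in only 261 h, which is the quantitative basis for the coating's benefit.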

  19. Microplasticity of MMC. Experimental results and modelling

    International Nuclear Information System (INIS)

    Maire, E.; Lormand, G.; Gobin, P.F.; Fougeres, R.

    1993-01-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work-hardening rate is higher in compression. These differences are analysed in terms of the maximum of Tresca's shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work-hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.)

  20. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work-hardening rate is higher in compression. These differences are analysed in terms of the maximum of Tresca's shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work-hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.).

  1. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always hold if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis to situations where the information on which it is based is itself suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors in the plant model. (authors). 8 refs., 4 figs
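The constraint-checking task described above can be sketched on a hypothetical plant fragment; the sensors, the mass-balance relation, and the tolerance are all assumptions for illustration, not the paper's plant model:

```python
# Hypothetical fragment: flow sensors f1 and f2 feed a common header
# measured by f3. The component models imply the constraint f1 + f2 = f3;
# a violated constraint implicates the sensors (or components) it involves.
def check_constraints(readings, tol=0.5):
    """Return the list of violated constraints with their suspect sensors."""
    violated = []
    f1, f2, f3 = readings["f1"], readings["f2"], readings["f3"]
    if abs(f1 + f2 - f3) > tol:
        violated.append(("mass balance f1 + f2 = f3", {"f1", "f2", "f3"}))
    return violated

ok = check_constraints({"f1": 10.0, "f2": 5.1, "f3": 15.0})
bad = check_constraints({"f1": 10.0, "f2": 5.1, "f3": 25.0})
```

A real system would intersect the suspect sets of several violated constraints to isolate the faulty sensor, which is where the structural analysis stage pays off.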

  2. Functional results-oriented healthcare leadership: a novel leadership model.

    Science.gov (United States)

    Al-Touby, Salem Said

    2012-03-01

    This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two premises. First, the article argues that ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and second, that leadership must strive to attain effectiveness of care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model embosses the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  3. Space Launch System Base Heating Test: Experimental Operations & Results

    Science.gov (United States)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment at the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values, with gaseous hydrogen/oxygen engines and RSRMV-analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight-duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests, with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified, and comparisons of various configurations and conditions are highlighted. Base pressure and radiometer results are also reported.

  4. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment rigorous cost-benefit and regulatory impact analyses of long-term environmental and infrastructure projects.
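For reference, the classical Ramsey rule that the abstract's consumption-based setting generalizes can be written as follows (this is the standard textbook formula under i.i.d. growth shocks, not the paper's non-Markovian result; the symbols are the usual ones):

```latex
% Extended Ramsey rule for the risk-free discount rate r:
%   \delta   : pure rate of time preference
%   \eta     : coefficient of relative risk aversion (isoelastic power utility)
%   g        : expected growth rate of log consumption
%   \sigma^2 : variance of log-consumption growth shocks
r \;=\; \delta \;+\; \eta\, g \;-\; \tfrac{1}{2}\,\eta^{2}\sigma^{2}
```

The paper's contribution is to relax the i.i.d. assumption behind this formula by allowing exponentially decaying correlations in the growth process, which lowers the long-maturity tail of the discount curve.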

  5. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  6. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  7. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  8. Storm-time ring current: model-dependent results

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2012-01-01

    The main point of the paper is to investigate how much the modeled ring current depends on the representations of magnetic and electric fields and on the boundary conditions used in simulations. Two storm events are modeled: one moderate (SymH minimum of −120 nT, 6–7 November 1997) and one intense (SymH minimum of −230 nT, 21–22 October 1999). A rather simple ring current model is employed, namely the Inner Magnetosphere Particle Transport and Acceleration Model (IMPTAM), in order to make the results most evident. Four different magnetic field representations, two electric field representations and four boundary conditions are used. We find that different combinations of the magnetic and electric field configurations and boundary conditions result in very different modeled ring currents, and therefore the physical conclusions based on simulation results can differ significantly. A time-dependent boundary outside of 6.6 RE makes it possible to take into account the particles in the transition region (between dipolar and stretched field lines) that form the partial ring current and the near-Earth tail current in that region. Calculating the model SymH* by the Biot-Savart law instead of the widely used Dessler-Parker-Sckopke (DPS) relation gives larger and more realistic values, since the currents are calculated in regions with a nondipolar magnetic field. Therefore, the boundary location and the method of SymH* calculation are of key importance for ring current data-model comparisons to be correctly interpreted.

  9. Model-based consensus

    NARCIS (Netherlands)

    Boumans, M.; Martini, C.; Boumans, M.

    2014-01-01

    The aim of the rational-consensus method is to produce "rational consensus", that is, "mathematical aggregation", by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  10. Model-based consensus

    NARCIS (Netherlands)

    Boumans, Marcel

    2014-01-01

    The aim of the rational-consensus method is to produce “rational consensus”, that is, “mathematical aggregation”, by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  11. Comparison of a theory-based (AIDS Risk Reduction Model) cognitive behavioral intervention versus enhanced counseling for abused ethnic minority adolescent women on infection with sexually transmitted infection: results of a randomized controlled trial.

    Science.gov (United States)

    Champion, Jane Dimmitt; Collins, Jennifer L

    2012-02-01

    Ethnic minority adolescent women with a history of sexual or physical abuse and sexually transmitted infection represent a vulnerable population at risk for HIV. Community-based interventions for behavior modification and subsequent risk reduction have not been effective among these women. To evaluate the effects of a theory-based (AIDS Risk Reduction Model) cognitive behavioral intervention versus enhanced counseling for abused ethnic minority adolescent women on infection with sexually transmitted infection at 6 and 12 months of follow-up. Controlled randomized trial with longitudinal follow-up. Southwestern United States, metropolitan community-based clinic. Mexican- and African-American adolescent women aged 14-18 years with a history of abuse or sexually transmitted infection seeking sexual health care. Extensive preliminary study for intervention development was conducted, including individual interviews, focus groups, secondary data analysis, pre-testing and feasibility testing, for modification of an evidence-based intervention prior to testing in the randomized controlled trial. Following informed consent for participation in the trial, detailed interviews concerning demographics, abuse history, sexual risk behavior and sexual health, together with physical exams, were obtained. Randomization into either control or intervention groups was conducted. Intervention participants received workshop, support group and individual counseling sessions. Control participants received abuse and enhanced clinical counseling. Follow-up, including a detailed interview and physical exam, was conducted at 6 and 12 months following study entry to assess for infection. Intention-to-treat analysis was conducted to assess intervention effects using chi-square and multiple regression models. 409 Mexican- (n=342) and African- (n=67) American adolescent women with abuse and sexually transmitted infection histories were enrolled; 90% intervention group attendance; longitudinal follow-up at 6 (93

  12. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, is covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior. We…

  13. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade…

  14. Results of the ITER toroidal field model coil project

    International Nuclear Information System (INIS)

    Salpietro, E.; Maix, R.

    2001-01-01

    In the scope of the ITER EDA, one of the seven largest projects was devoted to the development, manufacture and testing of a Toroidal Field Model Coil (TFMC). The industry consortium AGAN manufactured the TFMC based on a conceptual design developed by the ITER EDA EU Home Team. The TFMC was completed and assembled in the test facility TOSKA of the Forschungszentrum Karlsruhe in the first half of 2001. The first testing phase started in June 2001 and lasted till October 2001. The first results have shown that the main goals of the project have been achieved.

  15. U.S. electric power sector transitions required to achieve 80% reductions in economy-wide greenhouse gas emissions: Results based on a state-level model of the U.S. energy system

    Energy Technology Data Exchange (ETDEWEB)

    Iyer, Gokul C.; Clarke, Leon E.; Edmonds, James A.; Kyle, Gordon P.; Ledna, Catherine M.; McJeon, Haewon C.; Wise, M. A.

    2017-05-01

    The United States has articulated a deep decarbonization strategy for achieving a reduction in economy-wide greenhouse gas (GHG) emissions of 80% below 2005 levels by 2050. Achieving such deep emissions reductions will entail a major transformation of the energy system, and of the electric power sector in particular. This study uses a detailed state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA) to demonstrate pathways for the evolution of the U.S. electric power sector that achieve 80% economy-wide reductions in GHG emissions by 2050. The pathways presented in this report are based on feedback received during a workshop of experts organized by the U.S. Department of Energy's Office of Energy Policy and Systems Analysis. Our analysis demonstrates that achieving deep decarbonization by 2050 will require substantial decarbonization of the electric power sector, resulting in increased deployment of zero-carbon and low-carbon technologies such as renewables and carbon capture, utilization and storage. The results also show that the degree to which the electric power sector will need to decarbonize, and low-carbon technologies will need to deploy, depends on the nature of technological advances in the energy sector, the ability of end-use sectors to electrify, and the level of electricity demand.

  16. EPR-based material modelling of soils

    Science.gov (United States)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced for modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some well-known conventional material models, and it is shown that EPR-based models can provide a better prediction of the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) model. In EPR-based material models there are no material parameters to be identified, and as the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive models is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, so the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
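As a rough illustration of the EPR idea (search over polynomial structures, with coefficients fitted by least squares), the following sketch replaces the genetic algorithm with an exhaustive search over small monomial structures; the data and function names are invented for the example, not taken from the paper:

```python
# Minimal sketch of the EPR idea: search over candidate polynomial
# structures and fit each candidate's coefficients by least squares.
# Real EPR uses a genetic algorithm to explore the structure space;
# here an exhaustive search over small exponent tuples stands in for it.
import itertools
import numpy as np

def fit_epr(X, y, max_exp=2, n_terms=2):
    """Return (best_exponent_tuples, best_coeffs) minimizing squared error."""
    n_samples, n_vars = X.shape
    exponents = list(itertools.product(range(max_exp + 1), repeat=n_vars))
    best = None
    for combo in itertools.combinations(exponents, n_terms):
        # Build the design matrix: one column per monomial term.
        A = np.column_stack([np.prod(X**e, axis=1) for e in combo])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.sum((A @ coeffs - y) ** 2)
        if best is None or err < best[0]:
            best = (err, combo, coeffs)
    return best[1], best[2]

# Toy "stress-strain" data generated from y = 3*x1^2 + 0.5*x2
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))
y = 3 * X[:, 0]**2 + 0.5 * X[:, 1]
terms, coeffs = fit_epr(X, y)
print(terms, np.round(coeffs, 3))
```

On this noise-free toy data the search recovers the generating structure exactly; with real triaxial test data the selection would trade off fit quality against structure complexity.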

  17. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of 'decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed-behaviour assumption, which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of estimation precision.
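To make the mechanics concrete, here is a small sketch (not the authors' implementation) of how an HMM over hidden behavioural states yields a trust estimate via the forward algorithm; the states, probabilities and function name are invented for illustration:

```python
# Sketch: trust as the predicted probability that the next interaction
# with a principal succeeds, given a 0/1 outcome history and a
# hypothetical 2-state HMM (state 0 = "cooperative", state 1 = "unreliable").
import numpy as np

T = np.array([[0.9, 0.1],    # hidden-state transition probabilities
              [0.2, 0.8]])
E = np.array([[0.95, 0.05],  # per-state emission: P(success), P(failure)
              [0.30, 0.70]])
pi = np.array([0.5, 0.5])    # initial state distribution

def predict_success(obs):
    """P(next outcome = success | observed outcome sequence), obs in {0,1}."""
    alpha = pi * E[:, obs[0]]            # forward algorithm
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
    state_dist = alpha / alpha.sum()     # filtered hidden-state estimate
    return float((state_dist @ T) @ E[:, 0])  # propagate one step, emit success

print(predict_success([0, 0, 0]))  # history of successes -> higher trust
print(predict_success([1, 1, 1]))  # history of failures  -> lower trust
```

Unlike a decay-weighted Beta estimate, the filtered state distribution adapts to behaviour changes through the transition matrix rather than through an ad hoc forgetting factor.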

  18. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  19. Structure-Based Turbulence Model

    National Research Council Canada - National Science Library

    Reynolds, W

    2000-01-01

    … Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations…

  20. SR-Site groundwater flow modelling methodology, setup and results

    International Nuclear Information System (INIS)

    Selroos, Jan-Olof; Follin, Sven

    2010-12-01

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  1. Geochemical controls on shale groundwaters: Results of reaction path modeling

    International Nuclear Information System (INIS)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250/degree/C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs

  2. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  3. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article. Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.

  4. The uncertainty analysis of model results a practical guide

    CERN Document Server

    Hofer, Eduard

    2018-01-01

    This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
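The core procedure the book covers can be sketched in a few lines: sample the uncertain inputs, run the model once per sample, and summarize the spread of the outputs. The toy model and input distributions below are invented for illustration:

```python
# Monte Carlo propagation of parameter uncertainty through a computer model.
# The "model" and the input distributions are made up for this sketch.
import random
import statistics

def model(k, c):
    """Toy computer model: steady-state response of a linear system."""
    return c / k

def monte_carlo(n=10_000, seed=42):
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        k = rng.uniform(0.8, 1.2)   # uncertain rate constant
        c = rng.gauss(5.0, 0.5)     # uncertain source term
        results.append(model(k, c))
    return results

out = monte_carlo()
print(round(statistics.fmean(out), 2), round(statistics.stdev(out), 2))
```

The resulting output spread quantifies the combined effect of both input uncertainties; sensitivity measures (e.g., correlating inputs with outputs over the same samples) then indicate which uncertainty is worth reducing first.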

  5. Model-based Abstraction of Data Provenance

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2014-01-01

    Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational…

  6. Urban traffic noise assessment by combining measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Graafland, F.; Wessels, P.W.; Basten, T.G.H.

    2013-01-01

    A model based monitoring system is applied on a local scale in an urban area to obtain a better understanding of the traffic noise situation. The system consists of a scalable sensor network and an engineering model. A better understanding is needed to take appropriate and cost efficient measures,

  7. INTRAVAL Finnsjoen Test - modelling results for some tracer experiments

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1994-09-01

    This report presents the results within Phase II of the INTRAVAL study. Migration experiments performed at the Finnsjoen test site were investigated. The study was done to gain an improved understanding of not only the mechanisms of tracer transport, but also the accuracy and limitations of the model used. The model is based on the concept of a dual porosity medium, taking into account one dimensional advection, longitudinal dispersion, sorption onto the fracture surfaces, diffusion into connected pores of the matrix rock, and sorption onto matrix surfaces. The number of independent water carrying zones, represented either as planar fractures or tubelike veins, may be greater than one, and the sorption processes are described either by linear or non-linear Freundlich isotherms assuming instantaneous sorption equilibrium. The diffusion of the tracer out of the water-carrying zones into connected pore space of the adjacent rock is calculated perpendicular to the direction of the advective/dispersive flow. In the analysis, the fluid flow parameters are calibrated by the measured breakthrough curves for the conservative tracer (iodide). Subsequent fits to the experimental data for the two sorbing tracers strontium and cesium then involve element dependent parameters providing information on the sorption processes and on its representation in the model. The methodology of fixing all parameters except those for sorption with breakthrough curves for non-sorbing tracers generally worked well. The investigation clearly demonstrates the necessity of taking into account pump flow rate variations at both boundaries. If this is not done, reliable conclusions on transport mechanisms or geometrical factors can not be achieved. A two flow path model reproduces the measured data much better than a single flow path concept. (author) figs., tabs., 26 refs
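The transport concept described in the abstract is commonly written as a coupled pair of equations (a generic dual-porosity formulation for a single flow path with linear sorption; the notation is illustrative, not taken from the report):

```latex
% Fracture: 1-D advection-dispersion with diffusive exchange into the matrix
R_f\,\frac{\partial C}{\partial t}
  = -\,v\,\frac{\partial C}{\partial x}
  + D_L\,\frac{\partial^{2} C}{\partial x^{2}}
  + \frac{\epsilon_p D_p}{b}\,
    \left.\frac{\partial C_p}{\partial z}\right|_{z=0}
% Rock matrix: diffusion perpendicular to the advective flow
R_p\,\frac{\partial C_p}{\partial t}
  = D_p\,\frac{\partial^{2} C_p}{\partial z^{2}}
```

Here $C$ and $C_p$ are tracer concentrations in the fracture water and matrix pore water, $v$ the advection velocity, $D_L$ the longitudinal dispersion coefficient, $D_p$ the matrix pore diffusivity, $\epsilon_p$ the matrix porosity, $b$ the fracture half-aperture, and $R_f$, $R_p$ retardation factors for sorption on fracture and matrix surfaces; for sorbing tracers such as strontium and cesium the sorption term may follow a nonlinear Freundlich isotherm instead of a constant $R$.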

  8. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    …is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data; the data is stored for fast access in a hierarchical…

  9. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  10. Verification of aseismic design model by using experimental results

    International Nuclear Information System (INIS)

    Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.

    1985-01-01

    A lattice model is applied as the analysis model for the aseismic design of the Hamaoka nuclear reactor building. In order to verify the validity of this design model, two reinforced concrete blocks are constructed on the ground and forced vibration tests are carried out. The test results are well reproduced by simulation analyses using the lattice model. The damping value of the ground obtained from the test is more conservative than the design value. (orig.)

  11. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
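A quick way to see why a semi-Markov model was needed: for exponential holding times the coefficient of variation (std/mean) is exactly 1, so an empirical CV far from 1 rules out a memoryless Markov model. The sketch below uses Weibull holding times as a stand-in for non-exponential behaviour; the parameters are illustrative, not the measured IBM 3081 data:

```python
# Diagnostic for non-exponential holding times: the coefficient of
# variation (CV = std/mean) is 1 for an exponential distribution.
# Weibull(shape=1) reduces to the exponential; shape < 1 gives CV > 1,
# the kind of signature that forces a semi-Markov model.
import random
import statistics

def holding_time_cv(shape, n=200_000, seed=1):
    """Empirical CV of simulated Weibull(scale=1, shape) holding times."""
    rng = random.Random(seed)
    samples = [rng.weibullvariate(1.0, shape) for _ in range(n)]
    return statistics.stdev(samples) / statistics.fmean(samples)

print(round(holding_time_cv(1.0), 2))  # exponential special case: CV near 1
print(round(holding_time_cv(0.6), 2))  # heavier tail: CV well above 1
```

In a semi-Markov model the state-transition structure stays Markovian, but each state keeps its own (arbitrary) holding-time distribution, which is exactly what such measured data require.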

  12. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed…

  13. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
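For concreteness, linear compartment models of the kind studied here are systems of the standard state-space form (standard notation; the particular matrices depend on the compartment graph, inputs, outputs and leaks):

```latex
% x(t): compartment amounts; u(t): inputs; y(t): measured outputs
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t)
% Off-diagonal entries a_{ij} \ge 0 are transfer rates from compartment j
% to compartment i; diagonal entries collect outflows and leak rates.
```

Identifiability then asks which entries of $A$ are uniquely determined by the input-output map $u \mapsto y$; adding an input or output corresponds to adding a column to $B$ or a row to $C$, which is the kind of modification the paper shows can turn an unidentifiable cycle model into an identifiable one.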

  14. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...

  15. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

GENERAL ARTICLE. Computer Based … universities, and later did system analysis, … personal computers (PC) and low-cost software packages and tools. They can serve as a useful learning experience through student projects. Models are … Let us consider a numerical example: to calculate the velocity of a trainer aircraft …

  16. Influence of Hydraulic Design on Stability and on Pressure Pulsations in Francis Turbines at Overload, Part Load and Deep Part Load based on Numerical Simulations and Experimental Model Test Results

    International Nuclear Information System (INIS)

    Magnoli, M V; Maiwald, M

    2014-01-01

Francis turbines have been running more and more frequently in part load conditions, in order to satisfy the new market requirements for more dynamic and flexible energy generation, ancillary services and grid regulation. The turbines should be able to be operated for longer durations with flows below the optimum point, going from part load to deep part load and even speed-no-load. These operating conditions are characterised by important unsteady flow phenomena taking place at the draft tube cone and in the runner channels, in the respective cases of part load and deep part load. The current expectations are that new Francis turbines present appropriate hydraulic stability and moderate pressure pulsations at overload, part load, deep part load and speed-no-load with high efficiency levels at normal operating range. This study presents a series of investigations performed by Voith Hydro with the objective of improving the hydraulic stability of Francis turbines at overload, part load and deep part load, reducing pressure pulsations and enlarging the know-how about the transient fluid flow through the turbine at these challenging conditions. Model test measurements showed that distinct runner designs were able to influence the pressure pulsation level in the machine. Extensive experimental investigations focused on the runner deflector geometry, on runner features and how they could reduce the pressure oscillation level. The impact of design variants and machine configurations on the vortex rope at the draft tube cone at overload and part load and on the runner channel vortex at deep part load was experimentally observed and evaluated based on the measured pressure pulsation amplitudes. Numerical investigations were employed for improving the understanding of such dynamic fluid flow effects. As an example of the design and experimental investigations, model test observations and pressure pulsation curves for Francis machines in the mid specific speed range, around n_q,opt = 50

  17. Some results on ethnic conflicts based on evolutionary game simulation

    Science.gov (United States)

    Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin

    2014-07-01

The force of ethnic separatism, essentially originating from the negative effect of ethnic identity, is damaging the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and set up a harmonious ethnic relationship, some scholars have proposed a viewpoint: ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint is discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic group and ethnic identity are products of evolution and ethnic identity is the parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict based on evolutionary game theory. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by killing all ethnic members once and for all, and it also cannot be reduced by forcible pressure, i.e., increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can stay at a low level by promoting civic identity periodically and persistently.
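A toy version of such a simulation can be written in a few lines. This is not the authors' model; it merely illustrates result (1): agents carry an ethnicity and an identity strategy, "civic" agents never fight, and a conflict occurs only when two ethnic-identity agents of different ethnicities meet.

```python
import random

def conflict_frequency(civic_ratio, n_agents=1000, n_rounds=200, seed=42):
    """Toy sketch (hypothetical parameters, not the paper's model):
    each agent is a pair (ethnicity in {0, 1}, is_civic). One random
    pairwise encounter per round; count cross-ethnic fights between
    two non-civic agents. Returns conflicts per round."""
    rng = random.Random(seed)
    agents = [(rng.randint(0, 1), rng.random() < civic_ratio)
              for _ in range(n_agents)]
    conflicts = 0
    for _ in range(n_rounds):
        a, b = rng.sample(agents, 2)
        if (not a[1]) and (not b[1]) and a[0] != b[0]:
            conflicts += 1
    return conflicts / n_rounds
```

With a fixed seed, raising the civic ratio from 0.1 to 0.9 sharply lowers the conflict frequency, in line with the paper's first finding.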

  18. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1995-09-01

It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs

  19. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and {rho} together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  20. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1996-01-01

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)

  1. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented … on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions …

  2. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  3. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  4. Result diversification based on query-specific cluster ranking

    NARCIS (Netherlands)

    He, J.; Meij, E.; de Rijke, M.

    2011-01-01

    Result diversification is a retrieval strategy for dealing with ambiguous or multi-faceted queries by providing documents that cover as many facets of the query as possible. We propose a result diversification framework based on query-specific clustering and cluster ranking, in which diversification

  5. Result Diversification Based on Query-Specific Cluster Ranking

    NARCIS (Netherlands)

    J. He (Jiyin); E. Meij; M. de Rijke (Maarten)

    2011-01-01

Result diversification is a retrieval strategy for dealing with ambiguous or multi-faceted queries by providing documents that cover as many facets of the query as possible. We propose a result diversification framework based on query-specific clustering and cluster ranking,

  6. A subchannel based annular flow dryout model

    International Nuclear Information System (INIS)

    Hammouda, Najmeddine; Cheng, Zhong; Rao, Yanfei F.

    2016-01-01

Highlights: • A modified annular flow dryout model for subchannel thermalhydraulic analysis. • Implementation of the model in the Canadian subchannel code ASSERT-PV. • Assessment of the model against tube CHF experiments. • Assessment of the model against CANDU-bundle CHF experiments. - Abstract: This paper assesses a popular tube-based mechanistic critical heat flux model (Hewitt and Govan’s annular flow model, based on the model of Whalley et al.), and modifies and implements the model for bundle geometries. It describes the results of the ASSERT subchannel code predictions using the modified model, as applied to a single tube and the 28-element, 37-element and 43-element (CANFLEX) CANDU bundles. A quantitative comparison between the model predictions and experimental data indicates good agreement for a wide range of flow conditions. The comparison has resulted in an overall average error of −0.15% and an overall root-mean-square error of 5.46% with tube data representing annular film dryout type critical heat flux, and in an overall average error of −0.9% and an overall RMS error of 9.9% with Stern Laboratories’ CANDU-bundle data.
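The comparison metrics quoted above, an overall average (bias) error and a root-mean-square error in percent, can be computed as follows (an illustrative helper, not ASSERT-PV code):

```python
import math

def error_stats(predicted, measured):
    """Average (bias) and root-mean-square relative error, in percent,
    between model predictions and experimental data."""
    rel = [100.0 * (p - m) / m for p, m in zip(predicted, measured)]
    avg = sum(rel) / len(rel)
    rms = math.sqrt(sum(r * r for r in rel) / len(rel))
    return avg, rms
```

A symmetric over/under-prediction pair yields a near-zero average error but a nonzero RMS error, which is why both numbers are reported.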

  7. Analysis of inelastic neutron scattering results on model compounds ...

    Indian Academy of Sciences (India)

Vibrational spectroscopy; nitrogenous bases; inelastic neutron scattering. PACS No. … obtain good quality, high-resolution results in this region. Here the … knowledge of the character of each molecular transition as well as the calculated …

  8. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security...

  9. The effect of bathymetric filtering on nearshore process model results

    Science.gov (United States)

    Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.

    2009-01-01

    Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by over smoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
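The kind of smoothing filter applied to the survey bathymetry can be sketched as a simple moving average over a 1-D depth profile (the study's actual filters operate on 2-D grids; this is a minimal illustration of how progressive filtering removes sandbar-scale variability):

```python
def smooth(depths, window):
    """Centered moving-average filter over a 1-D depth profile.
    The window shrinks at the profile ends so no padding is needed."""
    half = window // 2
    out = []
    for i in range(len(depths)):
        lo, hi = max(0, i - half), min(len(depths), i + half + 1)
        seg = depths[lo:hi]
        out.append(sum(seg) / len(seg))
    return out
```

Applying `smooth` repeatedly, or with a wider window, produces the progressively filtered bathymetric surfaces against which the wave and flow sensitivities were assessed.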

  10. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  11. Dosage-based parameters for characterization of puff dispersion results.

    Science.gov (United States)

    Berbekar, Eva; Harms, Frank; Leitl, Bernd

    2015-01-01

    A set of parameters is introduced to characterize the dispersion of puff releases based on the measured dosage. These parameters are the dosage, peak concentration, arrival time, peak time, leaving time, ascent time, descent time and duration. Dimensionless numbers for the scaling of the parameters are derived from dimensional analysis. The dimensionless numbers are tested and confirmed based on a statistically representative wind tunnel dataset. The measurements were carried out in a 1:300 scale model of the Central Business District in Oklahoma City. Additionally, the effect of the release duration on the puff parameters is investigated. Copyright © 2014 Elsevier B.V. All rights reserved.
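The dosage-based parameters listed above can be computed directly from a concentration time series. The sketch below takes the arrival and leaving times as the instants when the cumulative dosage crosses fixed low/high fractions; that convention and the 5%/95% thresholds are assumptions for illustration, not necessarily the paper's exact definitions:

```python
def puff_parameters(t, c, lo=0.05, hi=0.95):
    """Dosage-based puff parameters from times t and concentrations c.
    Dosage is the trapezoidal time integral of concentration; arrival
    and leaving times are where the cumulative dosage crosses the
    lo/hi fractions (assumed thresholds)."""
    dosage = sum(0.5 * (c[i] + c[i + 1]) * (t[i + 1] - t[i])
                 for i in range(len(t) - 1))
    peak = max(c)
    peak_time = t[c.index(peak)]
    cum, acc = [0.0], 0.0
    for i in range(len(t) - 1):
        acc += 0.5 * (c[i] + c[i + 1]) * (t[i + 1] - t[i])
        cum.append(acc)
    arrival = next(t[i] for i, d in enumerate(cum) if d >= lo * dosage)
    leaving = next(t[i] for i, d in enumerate(cum) if d >= hi * dosage)
    return {"dosage": dosage, "peak": peak, "peak_time": peak_time,
            "arrival": arrival, "leaving": leaving,
            "duration": leaving - arrival}
```

Nondimensionalizing these quantities with a reference length and velocity scale, as done in the paper, then allows puffs measured at model scale to be compared across release conditions.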

  12. Reconstructing Holocene climate using a climate model: Model strategy and preliminary results

    Science.gov (United States)

    Haberkorn, K.; Blender, R.; Lunkeit, F.; Fraedrich, K.

    2009-04-01

An Earth system model of intermediate complexity (Planet Simulator; PlaSim) is used to reconstruct Holocene climate based on proxy data. The Planet Simulator is a user-friendly general circulation model (GCM) suitable for palaeoclimate research. Its easy handling and modular structure allow for fast and problem-dependent simulations. The spectral model is based on the moist primitive equations conserving momentum, mass, energy and moisture. Besides the atmospheric part, a mixed-layer ocean with sea ice and a land surface with biosphere are included. The present-day climate of PlaSim, based on an AMIP II control run (T21/10L resolution), shows reasonable agreement with ERA-40 reanalysis data. Combining PlaSim with a socio-technological model (GLUES; DFG priority project INTERDYNAMIK) provides improved knowledge on the shift from hunting-gathering to agropastoral subsistence societies. This is achieved by a data assimilation approach, incorporating proxy time series into PlaSim to initialize palaeoclimate simulations during the Holocene. For this, the following strategy is applied: the sensitivities of the terrestrial PlaSim climate are determined with respect to sea surface temperature (SST) anomalies. Here, the focus is the impact of regionally varying SST both in the tropics and the Northern Hemisphere mid-latitudes. The inverse of these sensitivities is used to determine the SST conditions necessary for the nudging of land and coastal proxy climates. Preliminary results indicate the potential, the uncertainty and the limitations of the method.

  13. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
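The flavor of the physics-based unit tests can be illustrated with the textbook Poisson-limited SNR relation for integration time. This is a sketch of the testing style only, not EXOSIMS source code, and the function name and signature are hypothetical:

```python
import unittest

def integration_time(count_rate, background_rate, snr):
    """Time to reach a target SNR for a Poisson-limited detection:
    SNR = C*t / sqrt((C + B)*t)  =>  t = snr**2 * (C + B) / C**2,
    where C is the planet count rate and B the background rate."""
    return snr ** 2 * (count_rate + background_rate) / count_rate ** 2

class TestIntegrationTime(unittest.TestCase):
    def test_background_free(self):
        # with no background, t = snr**2 / C
        self.assertAlmostEqual(integration_time(10.0, 0.0, 5.0), 2.5)

    def test_background_increases_time(self):
        self.assertGreater(integration_time(10.0, 5.0, 5.0),
                           integration_time(10.0, 0.0, 5.0))

if __name__ == "__main__":
    unittest.main()
```

Fixing the known RV planets at quadrature, as described above, makes such calculations deterministic, so each test can assert an exact expected value rather than a statistical range.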

  14. Design of an impact evaluation using a mixed methods model--an explanatory assessment of the effects of results-based financing mechanisms on maternal healthcare services in Malawi.

    Science.gov (United States)

    Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela

    2014-04-22

In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders. In this explanatory design, comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements.
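The difference-in-difference identification strategy described above reduces, at its core, to one subtraction of group-mean changes. A minimal sketch with hypothetical numbers (group means only, no covariate adjustment or standard errors):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences point estimate from four group means:
    (treated post - treated pre) - (control post - control pre).
    Under the parallel-trends assumption, this isolates the
    intervention effect from the shared time trend."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
```

For example, if facility deliveries rise from 50% to 70% in intervention catchments but only from 48% to 58% in control catchments, the estimated impact is 10 percentage points.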

  15. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding of their origin, evolution, and internal structures. Technological improvements of remote sensors and associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues for planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and petrologic studies of Martian SNC meteorites. Thermal emission data were used to suggest two lithologies occurring on Mars surface: basalt with abundant plagioclase and clinopyroxene and andesite, dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely-acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, to chiefly particulate samples and as a result cannot be considered as objective techniques for predicting the compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria in the modeling are the next required steps. This is the focus of the present investigation. 
We present results of

  16. Circulation in the Gulf of Trieste: measurements and model results

    International Nuclear Information System (INIS)

    Bogunovici, B.; Malacic, V.

    2008-01-01

The study presents the seasonal variability of currents in the southern part of the Gulf of Trieste. A time series analysis of currents and wind stress for the period 2003-2006, measured by the coastal oceanographic buoy, was conducted. A comparison between these data and results obtained from a numerical model of circulation in the Gulf was performed to validate the model results. Three different approaches were applied to the wind data to determine the wind stress. Similarities were found between the Kondo and Smith approaches, while the method of Vera shows differences that were particularly noticeable for lower (≤ 1 m/s) and higher wind speeds (≥ 15 m/s). Mean currents in the surface layer are generally outflow currents from the Gulf due to wind forcing (bora). However, in all other depth layers inflow currents are dominant. With principal component analysis (PCA), major and minor axes were determined for all seasons. The major axis of maximum variance in the years between 2003 and 2006 prevails in the NE-SW direction, which is parallel to the coastline. Comparison of observations and model results shows that currents are similar (in direction) for the surface and bottom layers but are significantly different for the middle layer (5-13 m). At depths between 14 and 21 m, velocities are comparable in direction as well as in magnitude, even though model values are higher. Higher values of modelled currents at the surface and near the bottom are explained by higher values of wind stress that were used in the model as driving input with respect to the stress calculated from the measured winds. Larger values of modelled currents near the bottom are related to the larger inflow that needs to compensate for the larger modelled outflow at the surface.
However, inspection of the vertical structure of temperature, salinity and density shows that the model is reproducing a weaker density gradient which enables the penetration of the outflow surface currents to larger depths.
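The principal component analysis used above to extract major and minor current axes is the standard eigen-decomposition of the 2-D velocity covariance matrix; a generic sketch (not the authors' code):

```python
import math

def principal_axes(u, v):
    """Major/minor variance axes of a 2-D current record (u, v) via PCA.
    Returns (major variance, minor variance, major-axis angle in radians,
    measured counterclockwise from the u-axis)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    suu = sum((x - mu) ** 2 for x in u) / n
    svv = sum((y - mv) ** 2 for y in v) / n
    suv = sum((x - mu) * (y - mv) for x, y in zip(u, v)) / n
    # eigenvalues of the covariance matrix [[suu, suv], [suv, svv]]
    tr, det = suu + svv, suu * svv - suv ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    theta = 0.5 * math.atan2(2 * suv, suu - svv)
    return lam1, lam2, theta
```

A record aligned with the coastline therefore yields a major axis parallel to it and a small minor-axis variance, exactly the NE-SW pattern reported for the Gulf.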

  17. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that the normal distribution with an average of μ = 2.85 Hz and standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, the normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to the previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces and different mechanisms of human-structure and human-environment interaction at the same time.
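Given the identified distributions, a multi-pedestrian traffic can be populated by drawing each walker's SDOF parameters independently; a minimal sketch (the walker's mass is deliberately left out, since it comes from the measured population rather than these distributions):

```python
import random

def sample_walker(rng):
    """Draw one walker's SDOF parameters from the distributions
    identified in the study: natural frequency ~ N(2.85 Hz, 0.34 Hz),
    damping ratio ~ N(0.295, 0.047)."""
    return {"f_n_hz": rng.gauss(2.85, 0.34),
            "zeta": rng.gauss(0.295, 0.047)}
```

In an agent-based simulation, each pedestrian agent would carry one such parameter set for the duration of its crossing.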

  18. Melt coolability modeling and comparison to MACE test results

    International Nuclear Information System (INIS)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1992-01-01

An important question in the assessment of severe accidents in light water nuclear reactors is the ability of water to quench a molten corium-concrete interaction and thereby terminate the accident progression. As part of the Melt Attack and Coolability Experiment (MACE) Program, phenomenological models of the corium quenching process are under development. The modeling approach considers both bulk cooldown and crust-limited heat transfer regimes, as well as criteria for the pool thermal hydraulic conditions which separate the two regimes. The model is then compared with results of the MACE experiments.

  19. A model for hot electron phenomena: Theory and general results

    International Nuclear Information System (INIS)

    Carrillo, J.L.; Rodriquez, M.A.

    1988-10-01

We propose a model for the description of hot electron phenomena in semiconductors. Based on this model we are able to reproduce accurately the main characteristics observed in experiments on electric field transport, optical absorption, steady-state photoluminescence and relaxation processes. Our theory does not contain free or adjustable parameters, it is computationally very fast, and it incorporates the main collision mechanisms including screening and phonon heating effects. Our description is based on a set of nonlinear rate equations in which the interactions are represented by coupling coefficients or effective frequencies. We calculate these coefficients from the characteristic constants and the band structure of the material. (author). 22 refs, 5 figs, 1 tab

  20. Results from Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, K.

    2012-08-01

Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  1. Results From Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, Kevin [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-08-01

Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  2. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  3. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  4. Agent Based Modelling for Social Simulation

    NARCIS (Netherlands)

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course

  5. The 3D Reference Earth Model: Status and Preliminary Results

    Science.gov (United States)

    Moulik, P.; Lekic, V.; Romanowicz, B. A.

    2017-12-01

    In the 20th century, seismologists constructed models of how average physical properties (e.g. density, rigidity, compressibility, anisotropy) vary with depth in the Earth's interior. These one-dimensional (1D) reference Earth models (e.g. PREM) have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, new datasets motivated more sophisticated efforts that yielded models of how properties vary both laterally and with depth in the Earth's interior. Though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and dataset upon which they are based have prevented the creation of 3D community reference models. As part of the REM-3D project, we are compiling and reconciling reference seismic datasets of body wave travel-time measurements, fundamental mode and overtone surface wave dispersion measurements, and normal mode frequencies and splitting functions. These reference datasets are being inverted for a long-wavelength, 3D reference Earth model that describes the robust long-wavelength features of mantle heterogeneity. As a community reference model with fully quantified uncertainties and tradeoffs and an associated publicly available dataset, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. Here, we summarize progress made in the construction of the reference long period dataset and present a preliminary version of REM-3D in the upper mantle. In order to determine the level of detail warranted for inclusion in REM-3D, we analyze the spectrum of discrepancies between models inverted with different subsets of the

  6. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions differ across phases. Given the best hazard-based model for each incident time phase, predictions are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
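    As a sketch of the simplest member of the model family above, the following fits a Weibull distribution to durations by maximum likelihood, using the standard fixed-point iteration for the shape parameter. The data are synthetic stand-ins for the Beijing incident records, and the covariate (accelerated failure time regression) structure is omitted.

```python
import math
import random

# Maximum-likelihood Weibull fit: the shape k solves
#   1/k = sum(x^k ln x)/sum(x^k) - mean(ln x),
# iterated to a fixed point; the scale then follows in closed form.
def fit_weibull(x, iters=200):
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)
    k = 1.0
    for _ in range(iters):
        xk = [v ** k for v in x]
        weighted = sum(v * lg for v, lg in zip(xk, logs)) / sum(xk)
        k_new = 1.0 / (weighted - mean_log)
        if abs(k_new - k) < 1e-12:
            k = k_new
            break
        k = k_new
    scale = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, scale

random.seed(1)
# Synthetic "durations": Weibull(shape 2.0, scale 45 minutes) via the inverse CDF.
durations = [45.0 * (-math.log(1.0 - random.random())) ** (1.0 / 2.0)
             for _ in range(2000)]
k_hat, scale_hat = fit_weibull(durations)
# With 2000 samples the estimates land close to the true shape 2.0 and scale 45.
```

A full AFT model would additionally regress the log-scale on incident covariates; libraries such as R's `survival` or Python's `lifelines` provide that machinery.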

  7. Recent shell-model results for exotic nuclei

    Directory of Open Access Journals (Sweden)

    Utsuno Yusuke

    2014-03-01

    Full Text Available We report on our recent advancement in the shell model and its applications to exotic nuclei, focusing on the shell evolution and large-scale calculations with the Monte Carlo shell model (MCSM). First, we test the validity of the monopole-based universal interaction (VMU) as a shell-model interaction by performing large-scale shell-model calculations in two different mass regions using effective interactions which partly comprise VMU. Those calculations are successful and provide a deeper insight into the shell evolution beyond the single-particle model, in particular showing that the evolution of the spin-orbit splitting due to the tensor force plays a decisive role in the structure of the neutron-rich N ∼ 28 region and antimony isotopes. Next, we give a brief overview of recent developments in MCSM, and show that it is applicable to exotic nuclei that involve many valence orbits. As an example of its applications to exotic nuclei, shape coexistence in 32Mg is examined.

  8. Test results of the SMES model coil. Pulse performance

    International Nuclear Information System (INIS)

    Hamajima, Takataro; Shimada, Mamoru; Ono, Michitaka

    1998-01-01

    A model coil for superconducting magnetic energy storage (SMES model coil) has been developed to establish the component technologies needed for a small-scale 100 kWh SMES device. The SMES model coil was fabricated, and then performance tests were carried out in 1996. The coil was successfully charged up to around 30 kA and down to zero at the same ramp rate of magnetic field experienced in a 100 kWh SMES device. AC loss in the coil was measured by an enthalpy method as parameters of ramp rate and flat top current. The results were evaluated by an analysis and compared with short-sample test results. The measured hysteresis loss is in good agreement with that estimated from the short-sample results. It was found that the coupling loss of the coil consists of two major coupling time constants. One is a short time constant of about 200 ms, which is in agreement with the test results of a short real conductor. The other is a long time constant of about 30 s, which could not be expected from the short sample test results. (author)

  9. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  10. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  11. Agent Based Modelling for Social Simulation

    OpenAIRE

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...

  12. Nonlinear system modeling based on bilinear Laguerre orthonormal bases.

    Science.gov (United States)

    Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani

    2013-05-01

    This paper proposes a new representation of the discrete bilinear model by developing its coefficients associated with the input, the output and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, entitled the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, such a reduction is still constrained by an optimal choice of the Laguerre pole characterizing each basis. To do so, we develop a pole optimization algorithm which constitutes an extension of that proposed by Tanguy et al. The bilinear-Laguerre model as well as the proposed pole optimization algorithm are illustrated and tested in numerical simulations and validated on the Continuous Stirred Tank Reactor (CSTR) system. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
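    The building block of such a representation can be sketched as follows: the impulse responses of a discrete Laguerre orthonormal basis, generated from a first-order seed by repeated all-pass filtering. The pole value and basis sizes below are illustrative; the paper's pole-optimization step is not reproduced.

```python
import math

# Discrete Laguerre orthonormal basis: l_0 is a scaled decaying exponential,
# and each subsequent function is the previous one passed through the all-pass
# filter (z^-1 - a)/(1 - a z^-1). The pole a (|a| < 1) is the free parameter
# that a pole-optimization algorithm would tune for each basis.
def laguerre_basis(a, n_funcs, n_samples):
    scale = math.sqrt(1.0 - a * a)
    basis = [[scale * a ** n for n in range(n_samples)]]
    for _ in range(1, n_funcs):
        x = basis[-1]
        y = [0.0] * n_samples
        y[0] = -a * x[0]
        for n in range(1, n_samples):
            y[n] = a * y[n - 1] + x[n - 1] - a * x[n]  # all-pass difference equation
        basis.append(y)
    return basis

B = laguerre_basis(a=0.5, n_funcs=4, n_samples=2000)
# The functions are (numerically) orthonormal: sum_n l_i[n] * l_j[n] = delta_ij.
```

Expanding each coefficient sequence of a model on such a basis is what yields the parameter-number reduction the abstract refers to.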

  13. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to construct a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal quenching mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
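    The role of the quenching rate can be illustrated with a toy calculation (all rate constants are assumed, not taken from the paper): a reversible reaction whose product is favoured at plasma temperature retains more product when cooled quickly, because a slow quench lets the reverse reaction re-equilibrate on the way down.

```python
import math

# Toy reversible reaction A <-> B with Arrhenius kinetics. B is favoured at
# the initial plasma temperature T0 and disfavoured at the final temperature
# TL; tau_q sets how fast the mixture is quenched. All constants illustrative.
def quench(tau_q, T0=5.0, TL=0.15, t_end=50.0, dt=1e-3):
    Af, Ab, E1, E2 = 100.0, 1.0, 5.0, 1.0   # prefactors and activation energies
    ratio0 = (Af / Ab) * math.exp((E2 - E1) / T0)
    B = ratio0 / (1.0 + ratio0)              # start at equilibrium at T0
    for n in range(int(t_end / dt)):
        T = TL + (T0 - TL) * math.exp(-n * dt / tau_q)
        kf = Af * math.exp(-E1 / T)          # A -> B
        kb = Ab * math.exp(-E2 / T)          # B -> A
        B += dt * (kf * (1.0 - B) - kb * B)  # forward Euler on the rate equation
    return B

B_fast = quench(tau_q=0.05)   # rapid quench freezes in the hot composition
B_slow = quench(tau_q=20.0)   # slow cooling tracks equilibrium back down
```

The fast quench ends with a substantially larger product fraction than the slow one, which is the qualitative point the abstract makes about quenching rate.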

  14. Modeling vertical loads in pools resulting from fluid injection

    International Nuclear Information System (INIS)

    Lai, W.; McCauley, E.W.

    1978-01-01

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena

  15. Results of the eruptive column model inter-comparison study

    Science.gov (United States)

    Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza

    2016-01-01

    This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
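    The 1D models compared here descend from the classic Morton-Taylor-Turner entrainment formulation, which can be sketched in reduced flux variables (volume flux Q, momentum flux M, buoyancy flux F) for a uniformly stratified atmosphere; the parameter values below are illustrative, not the inter-comparison inputs.

```python
import math

# Morton-Taylor-Turner plume equations, integrated upward with forward Euler.
# alpha is the entrainment coefficient, N the (uniform) buoyancy frequency.
def integrate_plume(alpha=0.1, N=1.0, Q0=0.01, M0=0.01, F0=1.0,
                    dz=1e-3, z_max=100.0):
    Q, M, F = Q0, M0, F0
    z, z_neutral = 0.0, None
    while M > 0.0 and z < z_max:
        dQ = 2.0 * alpha * math.sqrt(M)   # entrainment widens the plume
        dM = F * Q / M                    # buoyancy accelerates/decelerates it
        dF = -N * N * Q                   # stratification erodes buoyancy flux
        Q += dz * dQ
        M += dz * dM
        F += dz * dF
        z += dz
        if z_neutral is None and F <= 0.0:
            z_neutral = z                 # neutral buoyancy level
    return z_neutral, z                   # z: where upward momentum is exhausted

z_nb, z_top = integrate_plume()
# The plume overshoots its neutral buoyancy level before stopping: z_nb < z_top.
```

The gap between the neutral buoyancy level and the plume top is exactly the kind of global descriptor whose spread across models the inter-comparison quantifies.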

  16. Risk-based technical specifications program: Site interview results

    International Nuclear Information System (INIS)

    Andre, G.R.; Baker, A.J.; Johnson, R.L.

    1991-08-01

    The Electric Power Research Institute and Pacific Gas and Electric Company are sponsoring a program directed at improving Technical Specifications using risk-based methods. The major objectives of the program are to develop risk-based approaches to improve Technical Specifications and to develop an Interactive Risk Advisor (IRA) prototype. The IRA is envisioned as an interactive system that is available to plant personnel to assist in controlling plant operation. Use of an IRA is viewed as a method to improve plant availability while maintaining or improving plant safety. In support of the program, interviews were conducted at several PWR and BWR plant sites, to elicit opinions and information concerning risk-based approaches to Technical Specifications and IRA requirements. This report presents the results of these interviews, including the functional requirements of an IRA. 2 refs., 6 figs., 2 tabs

  17. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this work is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective: that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve the model’s behavior qualitatively and quantitatively.

  18. Recent results in mirror based high power laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove; Nielsen, Jakob Skov; Elvang, Mads

    2004-01-01

    In this paper, recent results in high power laser cutting, obtained in research and development projects, are presented. Two types of mirror based focussing systems for laser cutting have been developed and applied in laser cutting studies on CO2-lasers up to 12 kW. In a shipyard environment, a cutting speed increase of over 100 % relative to state-of-the-art cutting has been achieved.

  19. Some Results On The Modelling Of TSS Manufacturing Lines

    Directory of Open Access Journals (Sweden)

    Viorel MÎNZU

    2000-12-01

    Full Text Available This paper deals with the modelling of a particular class of manufacturing lines, governed by a decentralised control strategy so that they balance themselves. Such lines are known as “bucket brigades” and also as “TSS lines”, after their first implementation, at Toyota, in the 70’s. A first study of their behaviour was based upon modelling them as stochastic dynamic systems, which emphasised, in the frame of the so-called “Normative Model”, a sufficient condition for self-balancing, that is, for autonomous functioning at a steady production rate (stationary behaviour). Under some particular conditions, a simulation analysis of TSS lines could be made on non-linear block diagrams, showing that the state trajectories are piecewise continuous between occurrences of certain discrete events, which determine their discontinuity. TSS lines may therefore be modelled as hybrid dynamic systems, more specifically with autonomous switching and autonomous impulses (jumps). A stability analysis of such manufacturing lines is allowed by modelling them as hybrid dynamic systems with discontinuous motions.

  20. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof) to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models each differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes) which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  1. Use of results from microscopic methods in optical model calculations

    International Nuclear Information System (INIS)

    Lagrange, C.

    1985-11-01

    A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In a second part, we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels [fr

  2. First experiments results about the engineering model of Rapsodie

    International Nuclear Information System (INIS)

    Chalot, A.; Ginier, R.; Sauvage, M.

    1964-01-01

    This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ - The difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...). 2/ - The experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design of the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr

  3. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    Energy Technology Data Exchange (ETDEWEB)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H. [Danish Meteorological Institute, Copenhagen (Denmark)] [and others

    2013-08-15

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
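    Once such an ensemble of dispersion runs exists, per-receptor uncertainty can be summarised as exceedance probabilities, as in this minimal sketch (the member values are invented, not MUD output):

```python
# For each receptor, the probability that the concentration exceeds a
# threshold is estimated as the fraction of ensemble members exceeding it.
def exceedance_probability(ensemble, threshold):
    """ensemble: list of members, each a list of per-receptor concentrations."""
    n_members = len(ensemble)
    n_receptors = len(ensemble[0])
    probs = []
    for r in range(n_receptors):
        hits = sum(1 for member in ensemble if member[r] > threshold)
        probs.append(hits / n_members)
    return probs

# Four ensemble members, three receptors (e.g. ground-level concentrations).
ensemble = [
    [1.2, 0.4, 0.0],
    [0.9, 0.6, 0.1],
    [1.5, 0.2, 0.0],
    [1.1, 0.9, 0.3],
]
probs = exceedance_probability(ensemble, threshold=0.5)
# probs -> [1.0, 0.5, 0.0]: all members exceed at receptor 0, half at 1, none at 2.
```

Percentile maps and probability-of-rain-style products for deposition follow the same member-counting logic.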

  4. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    International Nuclear Information System (INIS)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.

    2013-08-01

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)

  5. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...

  6. NASA Air Force Cost Model (NAFCOM): Capabilities and Results

    Science.gov (United States)

    McAfee, Julie; Culver, George; Naderi, Mahmoud

    2011-01-01

    NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs to mission characteristics, to predict new project costs, and it is based on historical NASA and Air Force space projects. It is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component level and estimates development and production costs. NAFCOM is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a government version that is restricted and a contractor-releasable version.
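    A minimal sketch of what a cost estimating relationship is: a power law, cost = a · mass^b, fitted to historical projects by least squares in log-log space. The data points below are hypothetical, not NAFCOM's database.

```python
import math

# Fit cost = a * mass^b by ordinary least squares on log(cost) vs log(mass).
def fit_cer(masses, costs):
    xs = [math.log(m) for m in masses]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical historical subsystems: (dry mass in kg, development cost in $M).
masses = [50.0, 120.0, 300.0, 800.0, 2000.0]
costs = [2.0 * m ** 0.8 for m in masses]   # noise-free, so the fit is exact
a, b = fit_cer(masses, costs)
new_estimate = a * 500.0 ** b              # prediction for a 500 kg subsystem
```

Real CER construction adds more drivers than mass and quantifies the residual scatter; this shows only the core regression step.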

  7. Acoustic results of the Boeing model 360 whirl tower test

    Science.gov (United States)

    Watts, Michael E.; Jordan, David

    1990-09-01

    An evaluation is presented of whirl tower test results for the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system, intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of the whirl tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.

  8. Exact results for the one dimensional asymmetric exclusion model

    International Nuclear Information System (INIS)

    Derrida, B.; Evans, M.R.; Pasquier, V.

    1993-01-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author)
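    The steady state that the matrix-product solution describes exactly can be cross-checked for a small system directly from the master equation. The sketch below builds the open-boundary, totally asymmetric version on three sites (rates are illustrative) and verifies the hallmark of stationarity: the particle current is the same across every bond.

```python
from itertools import product

L, alpha, beta = 3, 0.6, 0.4        # sites, injection rate, extraction rate
configs = list(product((0, 1), repeat=L))
index = {c: i for i, c in enumerate(configs)}

def transitions(c):
    """Yield (target_config, rate) for every allowed move out of config c."""
    if c[0] == 0:                                   # injection at site 1
        yield (1,) + c[1:], alpha
    if c[-1] == 1:                                  # extraction at site L
        yield c[:-1] + (0,), beta
    for i in range(L - 1):                          # bulk hops i -> i+1 at rate 1
        if c[i] == 1 and c[i + 1] == 0:
            yield c[:i] + (0, 1) + c[i + 2:], 1.0

# Relax the master equation p' = p + dt * (inflow - outflow) to stationarity.
p = [1.0 / len(configs)] * len(configs)
dt = 0.05
for _ in range(50000):
    q = list(p)
    for i, c in enumerate(configs):
        for c2, rate in transitions(c):
            flow = dt * rate * p[i]
            q[i] -= flow
            q[index[c2]] += flow
    p = q

# In the steady state the current through every bond and boundary is equal.
J_in = alpha * sum(p[i] for i, c in enumerate(configs) if c[0] == 0)
J_out = beta * sum(p[i] for i, c in enumerate(configs) if c[-1] == 1)
J_bonds = [sum(p[i] for i, c in enumerate(configs) if c[k] == 1 and c[k + 1] == 0)
           for k in range(L - 1)]
```

The matrix-product representation reproduces exactly these stationary weights for arbitrary system size, which is what makes the exact results possible.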

  9. Review of Current Standard Model Results in ATLAS

    CERN Document Server

    Brandt, Gerhard; The ATLAS collaboration

    2018-01-01

    This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p-p$ collisions at $\\sqrt{s}=7,8$~TeV in LHC Run-1, as well as results using data at $\\sqrt{s}=13$~TeV in LHC Run-2, are covered. The status of cross section measurements from soft QCD processes, jet production and photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and further constrain it. In addition, they probe new physics which would manifest itself through extra gauge couplings, or Standard Model gauge couplings deviating from their predicted values.

  10. Sensor-based interior modeling

    International Nuclear Information System (INIS)

    Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.

    1995-01-01

    Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation.
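As a sketch of one ingredient of such model building, the following fits a plane to a cloud of 3-D range points by least squares. The SVD-based method is chosen here for illustration; the paper does not specify its fitting algorithm.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3-D points.

    Returns (centroid, unit normal): the plane passes through the
    centroid, and the normal is the right singular vector associated
    with the smallest singular value of the centred point cloud,
    i.e. the direction of least variance.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                  # rows of vt are sorted by singular value
    return centroid, normal
```

For noisy rangefinder data the same routine applies unchanged; robust variants (e.g. RANSAC around this fit) would be needed in cluttered scenes.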

  11. Core damage frequency perspectives based on IPE results

    International Nuclear Information System (INIS)

    Dingman, S.E.; Camp, A.L.; LaChance, J.L.; Drouin, M.T.

    1996-01-01

    In November 1988, the US Nuclear Regulatory Commission (NRC) issued Generic Letter 88-20 requesting that all licensees perform an Individual Plant Examination (IPE) to identify any plant-specific vulnerabilities to severe accidents and report the results to the Commission. This paper provides perspectives gained from reviewing 75 IPE submittals covering 108 nuclear power plant units. Variability both within and among reactor types is examined to provide perspectives on the plant-specific design and operational features and modeling assumptions that play a significant role in the estimates of core damage frequency in the IPEs.

  12. Clinical results of proton beam therapy for skull base chordoma

    International Nuclear Information System (INIS)

    Igaki, Hiroshi; Tokuuye, Koichi; Okumura, Toshiyuki; Sugahara, Shinji; Kagei, Kenji; Hata, Masaharu; Ohara, Kiyoshi; Hashimoto, Takayuki; Tsuboi, Koji; Takano, Shingo; Matsumura, Akira; Akine, Yasuyuki

    2004-01-01

    Purpose: To evaluate clinical results of proton beam therapy for patients with skull base chordoma. Methods and materials: Thirteen patients with skull base chordoma who were treated with proton beams with or without X-rays at the University of Tsukuba between 1989 and 2000 were retrospectively reviewed. A median total tumor dose of 72.0 Gy (range, 63.0-95.0 Gy) was delivered. The patients were followed for a median period of 69.3 months (range, 14.6-123.4 months). Results: The 5-year local control rate was 46.0%. Cause-specific, overall, and disease-free survival rates at 5 years were 72.2%, 66.7%, and 42.2%, respectively. The local control rate was higher, without statistical significance, for those with preoperative tumors <30 mL. Partial or subtotal tumor removal did not yield better local control rates than for patients who underwent biopsy only as the latest surgery. Conclusion: Proton beam therapy is effective for patients with skull base chordoma, especially for those with small tumors. For a patient with a tumor of <30 mL with no prior treatment, biopsy without tumor removal seems to be appropriate before proton beam therapy.

  13. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium-cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The load history was obtained from an ICECO analysis, using the equations of state for the source and the water. A detailed check of this solution was made to ensure that the derived loading provided the correct input. The DYNAPCON code was then used for the analysis of the prestressed concrete containment model. This analysis required the simulation of prestressing and of the model's response to the applied transient load. The calculations correctly predict the magnitudes of displacements of the PCRV model. In addition, the displacement time histories obtained from the calculations reproduce the general features of the experimental records: the period elongation and amplitude increase relative to an elastic solution, and the absence of permanent displacement. However, the calculated period still underestimates that of the experiment, while the amplitude is generally somewhat too large.

  14. Thermal-Chemical Model Of Subduction: Results And Tests

    Science.gov (United States)

    Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.

    2005-12-01

    Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence is dependent on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases, and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin for mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifest by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.

  15. Differential Geometry Based Multiscale Models

    Science.gov (United States)

    Wei, Guo-Wei

    2010-01-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that
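For orientation, the classical Poisson–Nernst–Planck (PNP) system, to which the generalized equations described above reduce when the geometric-flow coupling is absent, can be written in its standard textbook form (not the paper's generalized derivation):

```latex
% Classical PNP system: drift-diffusion of ionic species coupled to
% the electrostatic potential (the generalized equations add
% geometry-driven interface terms not reproduced here).
\begin{align}
  \frac{\partial c_i}{\partial t}
    &= \nabla \cdot \left[ D_i \left( \nabla c_i
       + \frac{q_i c_i}{k_B T}\, \nabla \Phi \right) \right], \\
  -\nabla \cdot \left( \epsilon \, \nabla \Phi \right)
    &= \rho_f + \sum_i q_i c_i ,
\end{align}
```

where $c_i$, $D_i$, and $q_i$ are the concentration, diffusivity, and charge of ionic species $i$, $\Phi$ is the electrostatic potential, $\epsilon$ the permittivity, and $\rho_f$ the fixed (macromolecular) charge density.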

  16. Differential geometry based multiscale models.

    Science.gov (United States)

    Wei, Guo-Wei

    2010-08-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are

  17. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, whereas IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
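The core point, that response patterns with the same sum score can receive different IRT scores, can be demonstrated with a toy two-parameter logistic (2PL) model; the item parameters below are invented for illustration.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_theta(responses, a, b, grid=None):
    """Grid-search maximum-likelihood ability estimate."""
    if grid is None:
        grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for x, ai, bi in zip(responses, a, b):
            p = p_correct(theta, ai, bi)
            ll += math.log(p) if x else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# three hypothetical items, equal difficulty but different discrimination
a = [0.5, 1.0, 2.0]
b = [0.0, 0.0, 0.0]
# two response patterns with the same sum score (2 of 3 correct)
theta_low = ml_theta([1, 1, 0], a, b)   # missed the sharpest item
theta_high = ml_theta([0, 1, 1], a, b)  # solved the sharpest item
```

Both patterns have a sum score of 2, yet the pattern that answers the highly discriminating item correctly receives a higher maximum-likelihood ability estimate, which is exactly the information a sum score discards.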

  18. CIM5 Phase III base process development results

    International Nuclear Information System (INIS)

    Witt, D.C.

    2000-01-01

    Integrated Demonstration Runs for the Am/Cm vitrification process were initiated in the Coupled 5-inch Cylindrical Induction Melter (CIM5) on 11/30/98 and completed on 12/9/98. Four successful runs at 60 wt% lanthanide loading were completed, which met or exceeded all established criteria. The operating parameters used in these runs established the base conditions for the CIM5 process and were summarized in the 5-inch CIM design basis, SRT-AMC-99-OO01 (1). In subsequent tests, a total of fourteen CIM5 runs were performed using various power inputs, ramp rates, and target temperatures to define the preferred processing conditions (2). Process stability and process flexibility were the key criteria used in assessing the results of each run. A preferred set of operating parameters was defined for the CIM5 batch process, and these conditions were used to generate a pre-programmed, automatic processing cycle that was used for the last six CIM5 runs (3). These operational tests were successfully completed in the January-February time frame and were summarized in SRT-AMC-99-00584. The recommended set of operating conditions defined in Runs No. 1 through No. 14 was used as the starting point for further pilot system runs to determine the robustness of the process, evaluate a bubbler, and investigate off-normal conditions. CIM5 Phase III Runs No. 15 through No. 60 were conducted utilizing the pre-programmed, automatic processing cycle to investigate system performance. This report summarizes the results of these tests and provides a recommendation for the base process, as well as a processing modification for minimizing volume expansions if americium and/or curium are subject to a thermal reduction reaction like cerium. This document summarizes the results of the base process development tests conducted in the Am/Cm Pilot Facility located in Building 672-T.

  19. Loss of spent fuel pool cooling PRA: Model and results

    International Nuclear Information System (INIS)

    Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.

    1996-09-01

    This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 × 10^-5 and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 × 10^-3. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either: (a) core damage due to spent fuel pool-associated flooding, or (b) spent fuel damage due to pool dryout, may not be negligible.

  20. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of building the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  1. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of the model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved extended Kalman filter (EKF) is also proposed to track and predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
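The point-to-line-segment distance underlying the proposed similarity metric can be sketched as follows (a standard construction; the paper's full metric aggregates such distances over the projected model edges):

```python
import math

def point_to_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b (2-D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:            # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # parameter of the orthogonal projection, clamped onto the segment
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
    return math.hypot(px - cx, py - cy)
```

Clamping `t` to [0, 1] is what distinguishes the segment distance from the distance to the infinite line: points beyond an endpoint are measured to that endpoint.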

  2. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophysiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual based model (PLACO - PLAnt COmpetition) that

  3. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods, comparing with a sample of existing plants. This paper presents a model based approach for benchmarking of energy intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, heat loss equations for the different zones, and empirical equations based on operating practices. The model is checked with field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by the glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%.
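The energy-balance bookkeeping behind such a benchmark can be illustrated with a toy balance per kilogram of glass. The flue-gas and structure-loss figures below are invented for illustration; only the 3830 kJ/kg minimum and the 53% useful-heat fraction come from the abstract.

```python
def furnace_balance(q_fuel, q_glass, q_flue, q_structure):
    """Close a per-kg energy balance for a glass furnace.

    All arguments in kJ per kg of glass pulled. Whatever is not
    accounted for by the glass, the flue gas, or the structure losses
    is lumped into an 'other' term so the fractions sum to one.
    """
    q_other = q_fuel - (q_glass + q_flue + q_structure)
    return {
        "useful_fraction": q_glass / q_fuel,      # heat carried by the glass
        "flue_fraction": q_flue / q_fuel,         # stack losses
        "structure_fraction": q_structure / q_fuel,
        "other_fraction": q_other / q_fuel,       # balance residual
    }
```

For the reported achievable minimum of 3830 kJ/kg with 53% useful heat, the heat carried by the glass is roughly 0.53 × 3830 ≈ 2030 kJ/kg; the remaining terms split the other 47% between stack, structure, and miscellaneous losses.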

  4. Introducing Waqf Based Takaful Model in India

    Directory of Open Access Journals (Sweden)

    Syed Ahmed Salman

    2014-03-01

    Objective – Waqf is a unique feature of the socioeconomic system of Islam in a multi-religious and developing country like India. India is a country rich in waqf assets. The history of waqf in India can be traced back 800 years. Most researchers suggest how waqf can be used as a tool to mitigate the poverty of Muslims. India has the third-highest Muslim population after Indonesia and Pakistan. However, the majority of Muslims belong to the low-income group and are in need of help. It is believed that waqf can be utilized for the betterment of the Indian Muslim community. Among the available uses of waqf assets, the main objective of this paper is to introduce a waqf-based takaful model in India. In addition, how this proposed model can be adopted in India is highlighted. Methods – Library research is applied, since this paper relies on secondary data gathered by thoroughly reviewing the most relevant literature. Result – India, as a country rich in waqf assets, should fully utilize these resources to help Muslims through takaful. Conclusion – In this study, we propose a waqf-based takaful model for India that combines the concepts of mudarabah and wakalah. We recommend this model based on the background and circumstances of the country. Since we have not tested the viability of this model in India, future research should continue with such testing. Keywords: Waqf, Takaful, Poverty, India

  5. Preliminary results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Matsumoto, T.; Komine, K.; Arai, S.

    1997-01-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.

  6. A satellite-based global landslide model

    Directory of Open Access Journals (Sweden)

    A. Farahmand

    2013-05-01

    Landslides are devastating phenomena that cause huge damage around the world. This paper presents a quasi-global landslide model derived using satellite precipitation data, land-use/land-cover maps, and 250 m topography information. The suggested landslide model is based on Support Vector Machines (SVM), a machine learning algorithm. The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) landslide inventory data are used as observations and reference data. In all, 70% of the data are used for model development and training, whereas 30% are used for validation and verification. The results of 100 random subsamples of available landslide observations revealed that the suggested landslide model can predict historical landslides reliably. The average error over 100 iterations of landslide prediction is estimated to be approximately 7%, while approximately 2% false landslide events are observed.
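The 70/30 repeated random subsampling protocol described above can be sketched independently of the classifier. Here a trivial threshold rule on synthetic one-dimensional data stands in for the SVM (which would normally come from a machine learning library); the data and the 100-iteration count mirror the validation scheme, not the paper's actual features.

```python
import random

def split_70_30(data, rng):
    """Random 70/30 split into training and validation sets."""
    data = data[:]
    rng.shuffle(data)
    cut = int(0.7 * len(data))
    return data[:cut], data[cut:]

def midpoint_classifier(train):
    """Stand-in for the SVM: threshold halfway between the class means."""
    m0 = [x for x, y in train if y == 0]
    m1 = [x for x, y in train if y == 1]
    thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2.0
    return lambda x: 1 if x > thr else 0

def repeated_subsampling_error(data, iterations=100, seed=0):
    """Average validation error over repeated random 70/30 splits."""
    rng = random.Random(seed)
    errors = []
    for _ in range(iterations):
        train, valid = split_70_30(data, rng)
        clf = midpoint_classifier(train)
        wrong = sum(1 for x, y in valid if clf(x) != y)
        errors.append(wrong / len(valid))
    return sum(errors) / iterations
```

Averaging over many random splits, as the paper does, gives an error estimate that is less sensitive to any single lucky or unlucky partition of the inventory.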

  7. Electrochemistry-based Battery Modeling for Prognostics

    Science.gov (United States)

    Daigle, Matthew J.; Kulkarni, Chetan Shrikant

    2013-01-01

    Batteries are used in a wide variety of applications. In recent years, they have become popular as a source of power for electric vehicles such as cars, unmanned aerial vehicles, and commercial passenger aircraft. In such application domains, it becomes crucial to both monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. To implement such technologies, it is crucial to understand how batteries work and to capture that knowledge in the form of models that can be used by monitoring, diagnosis, and prognosis algorithms. In this work, we develop electrochemistry-based models of lithium-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable accuracy for reliable EOD prediction in a variety of usage profiles. This paper reports on the progress of such a model, with results demonstrating the model validity and accurate EOD predictions.
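For a flavor of how a battery model supports EOD prediction, the following simulates a constant-current discharge with a simple equivalent-circuit stand-in (a linear OCV curve and a single series resistance). This is far cruder than the paper's electrochemistry-based model, and all parameter values are illustrative.

```python
def simulate_discharge(capacity_ah, current_a, r_ohm, v_cutoff, dt_s=1.0):
    """Predict time to end of discharge (EOD) under constant current.

    EOD is declared when the terminal voltage (open-circuit voltage
    minus the IR drop) falls below the cutoff voltage.
    """
    def ocv(soc):                    # hypothetical linear open-circuit voltage
        return 3.0 + 1.2 * soc       # 3.0 V empty, 4.2 V full

    soc, t = 1.0, 0.0                # start fully charged
    while ocv(soc) - current_a * r_ohm > v_cutoff and soc > 0.0:
        soc -= current_a * dt_s / (capacity_ah * 3600.0)  # coulomb counting
        t += dt_s
    return t                         # predicted seconds until EOD
```

As expected, EOD arrives sooner at higher discharge currents, both because charge is drawn faster and because the IR drop pushes the terminal voltage to the cutoff at a higher state of charge.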

  8. Guide to APA-Based Models

    Science.gov (United States)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-based models were scored using data from three airports. This report is a guide to the APA-based models.

  9. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The tests are very similar to the series of tests made for the COVA experimental program, but the vessel here is the prestressed concrete container. (orig.)

  10. The physical model of a terraced plot: first results

    Science.gov (United States)

    Perlotto, Chiara; D'Agostino, Vincenzo; Buzzanca, Giacomo

    2017-04-01

    Terrace building expanded in the 19th century because of increased demographic pressure and the need to crop additional areas on steeper slopes. Terraces are also important in regulating the hydrological behavior of the hillslope. Few studies are available in the literature on rainfall-runoff processes and flood-risk mitigation in terraced areas. Bench terraces, by reducing the terrain slope and the length of the overland flow, control the runoff flow velocity, facilitating drainage and thus reducing soil erosion. The study of the hydrologic-hydraulic function of terraced slopes is essential in order to evaluate their possible contribution to flood-risk mitigation while preserving the landscape value. This research aims to better characterize the times of the hydrological response of a hillslope plot bounded by a dry-stone wall, considering both the overland flow and the groundwater. A physical model at quasi-real scale has been built to reproduce the behavior of a 3% outward-sloped terrace under bare-soil conditions. The model consists of a steel box (1 m wide, 3.3 m long, 2 m high) containing the hillslope terrain. The terrain is equipped with two piezometers, 9 TDR sensors measuring the volumetric water content, a surface spillway at the head releasing the steady discharge under test, and a scale at the wall base to measure the outflowing discharge. The experiments deal with different initial moisture conditions (non-saturated and saturated) and discharges of 19.5, 12.0 and 5.0 l/min. Each experiment has been replicated, for a total of 12 tests. The volumetric water content analysis produced by the 9 TDR sensors provided a quite satisfactory representation of the soil moisture during the runs. Different lag times at the outlet since the inflow initiation were then measured for both runoff and groundwater. Moreover, the time of depletion and the piezometer

  11. Intelligent-based Structural Damage Detection Model

    International Nuclear Information System (INIS)

    Lee, Eric Wai Ming; Yu, K.F.

    2010-01-01

    This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure and so changes the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data are employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.
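The GRNN half of the hybrid model is essentially a kernel-weighted regression over stored training patterns. The sketch below shows only that core (the FA clustering and noise-removal stages are omitted), and the smoothing parameter `sigma` is illustrative.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """General Regression Neural Network prediction.

    The output is the Gaussian-kernel-weighted average of the training
    targets: patterns (e.g. modal parameter changes) close to the query
    dominate the predicted value (e.g. a damage location index).
    """
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2.0 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

No iterative training is needed: the stored patterns are the network, which is why GRNN training is fast and stable, and why the FA stage matters for pruning noisy or redundant patterns.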

  12. Intelligent-based Structural Damage Detection Model

    Science.gov (United States)

    Lee, Eric Wai Ming; Yu, Kin Fung

    2010-05-01

    This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure and so changes the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data are employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.

  13. Model-Based Power Plant Master Control

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Katarina; Thomas, Jean; Funkquist, Jonas

    2010-08-15

    The main goal of the project has been to evaluate the potential of a coordinated master control for a solid fuel power plant in terms of tracking capability, stability and robustness. The control strategy has been model-based predictive control (MPC), and the plant used in the case study has been the Vattenfall power plant Idbaecken in Nykoeping. A dynamic plant model based on nonlinear physical models was used to imitate the true plant in MATLAB/SIMULINK simulations. The basis for this model was already developed in previous Vattenfall internal projects, along with a simulation model of the existing control implementation with traditional PID controllers. The existing PID control is used as a reference performance, and it has been thoroughly studied and tuned in these previous Vattenfall internal projects. A turbine model was developed with characteristics based on the results of steady-state simulations of the plant using the software EBSILON. Using the derived model as a representative for the actual process, an MPC control strategy was developed using linearization and gain-scheduling. Constraints on the control signals (and their rates of change) and on the outputs were implemented to comply with plant constraints. After tuning the MPC control parameters, a number of simulation scenarios were performed to compare the MPC strategy with the existing PID control structure. The simulation scenarios also included cases highlighting the robustness properties of the MPC strategy. From the study, the main conclusions are: - The proposed Master MPC controller shows excellent set-point tracking performance even though the plant has strong interactions and non-linearities, and the controls and their rates of change are bounded. - The proposed Master MPC controller is robust and remains stable in the presence of disturbances and parameter variations. Even though the current study only considered a very small number of the possible disturbances and modelling errors, the considered cases are
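    The receding-horizon idea behind MPC can be shown in a few lines. The sketch below regulates a hypothetical first-order plant x_{k+1} = a·x_k + b·u_k to a set-point by repeatedly solving an unconstrained finite-horizon least-squares problem and applying only the first control move; the plant parameters are invented, and the rate and output constraints used in the study are omitted:

```python
import numpy as np

def mpc_step(x0, a, b, r, N=10, rho=0.01):
    """One receding-horizon step for x_{k+1} = a*x_k + b*u_k (unconstrained)."""
    # Prediction over steps 1..N: x_pred = F*x0 + G @ u
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Minimize ||G u - (r - F x0)||^2 + rho ||u||^2  via normal equations
    H = G.T @ G + rho * np.eye(N)
    u = np.linalg.solve(H, G.T @ (r - F * x0))
    return u[0]                       # apply only the first move, then re-plan

a, b, r = 0.9, 0.5, 1.0               # toy plant and set-point (illustrative values)
x = 0.0
for _ in range(30):
    x = a * x + b * mpc_step(x, a, b, r)
print(round(x, 3))                    # state settles near the set-point r = 1.0
```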

  14. PARTICIPATION BASED MODEL OF SHIP CREW MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Toni Bielić

    2014-10-01

    Full Text Available This paper analyses the participation-based model on board the ship as a possibly optimal leadership model in the shipping industry, with an accent on the decision-making process. The authors have tried to define the master's behaviour model and management style, identifying the drawbacks and disadvantages of a vertical, pyramidal organization with the master on top. The paper describes the efficiency of decision making within a team organization and the optimization of a ship's organisation by introducing teamwork on board the ship. Three examples of ship accidents are studied and evaluated through the "Leader-participation" model. The model of participation-based management, as a model of teamwork, has been applied in studying the cause-and-effect of the accidents, with a critical review of communication and human resource management on a ship. The results show that the cause of all three accidents was the autocratic behaviour of the leaders and a lack of communication within the teams.

  15. Portfolio Effects of Renewable Energies - Basics, Models, Exemplary Results

    Energy Technology Data Exchange (ETDEWEB)

    Wiese, Andreas; Herrmann, Matthias

    2007-07-01

    The combination of sites and technologies into so-called renewable energy portfolios, which are developed and implemented under the same financing umbrella, is currently the subject of intense discussion in the finance world. The resulting portfolio effect may allow the prediction of a higher return with the same risk, or the same return with a lower risk - always in comparison with the investment in a single project. Models are currently being developed to analyse this subject and derive the portfolio effect. In particular, the effect of the spatial distribution, as well as the effects of using different technologies, suppliers and cost assumptions with different levels of uncertainty, are of importance. Wind parks, photovoltaic, biomass, biogas and hydropower plants are being considered. The status of the model development and first results are presented in the current paper. In a first example, the portfolio effect has been calculated and analysed using selected parameters for a wind energy portfolio of 39 sites distributed over Europe. It has been shown that the predicted yield, at predetermined probabilities between 75 and 90%, is 3-8% higher than the sum of the yields of the individual wind parks at the same probabilities. (auth)
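    The portfolio effect on probability-of-exceedance yields can be reproduced with a small Monte Carlo experiment. The sketch below uses 39 hypothetical, statistically independent wind parks; the means, standard deviations and independence are all assumptions (real sites are spatially correlated, which shrinks the effect):

```python
import numpy as np

rng = np.random.default_rng(42)
n_parks, n_sims = 39, 100_000

# Hypothetical independent annual yields (GWh): mean 100, st. dev. 15 per park
yields = rng.normal(100.0, 15.0, size=(n_sims, n_parks))

# P75 yield: the level exceeded with 75% probability, i.e. the 25th percentile
p75_single = np.percentile(yields, 25, axis=0)         # per-park P75
p75_portfolio = np.percentile(yields.sum(axis=1), 25)  # P75 of the combined portfolio

gain = p75_portfolio / p75_single.sum() - 1.0
print(f"portfolio effect on P75 yield: {gain:+.1%}")
```

Because the portfolio's relative spread shrinks with diversification, its P75 exceeds the sum of the individual P75 values, which is the effect the paper quantifies.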

  16. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
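    The weighted least squares step at the heart of the forward-modeling technique projects the observed field onto fixed basin patterns. Below is a minimal sketch with synthetic data; the basin patterns, noise level and mass rates are invented, and real GRACE processing works in spherical harmonics with full error covariances:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_basins = 500, 4

# Hypothetical "leakage" patterns: unit mass in each basin maps to smeared observations
G = rng.random((n_obs, n_basins))
m_true = np.array([-120.0, -40.0, 10.0, 0.0])      # Gt/yr per basin (made up)
noise_sd = 5.0
d = G @ m_true + rng.normal(0.0, noise_sd, n_obs)  # simulated GRACE-like observations

# Weighted least squares: m_hat = (G^T W G)^-1 G^T W d, W = inverse data variance
W = np.eye(n_obs) / noise_sd ** 2
m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
print(np.round(m_hat, 1))                           # recovers m_true to within noise
```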

  17. Some exact results for the three-layer Zamolodchikov model

    International Nuclear Information System (INIS)

    Boos, H.E.; Mangazeev, V.V.

    2001-01-01

    In this paper we continue the study of the three-layer Zamolodchikov model started in our previous works (H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 3041-3054 and H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298). We analyse numerically the solutions to the Bethe ansatz equations obtained in the second of these papers. We consider two regimes, I and II, which differ by the signs of the spherical sides: (a1, a2, a3) → (-a1, -a2, -a3). We accept the two-line hypothesis for regime I and the one-line hypothesis for regime II. In the thermodynamic limit we derive integral equations for the distribution densities and solve them exactly. We calculate the partition function for the three-layer Zamolodchikov model and check the compatibility of this result with the functional relations obtained in the same paper. We also perform some numerical checks of our results

  18. Preliminary time-phased TWRS process model results

    International Nuclear Information System (INIS)

    Orme, R.M.

    1995-01-01

    This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that will further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on the tank retrieval sequence will be made in September 1995

  19. New global ICT-based business models

    DEFF Research Database (Denmark)

    The New Global Business Model (NEWGIBM) book describes the background, theory references, case studies, results and learning imparted by the NEWGIBM project, which is supported by ICT, to a research group during the period from 2005-2011. The book is a result of the efforts and the collaborative ... The NEWGIBM Cases Show? The Strategy Concept in Light of the Increased Importance of Innovative Business Models; Successful Implementation of Global BM Innovation; Globalisation Of ICT Based Business Models: Today And In 2020 ... The NEWGIBM book serves as a part of the final evaluation and documentation of the NEWGIBM project and is supported by results from the following projects: M-commerce, Global Innovation, Global Ebusiness & M-commerce, The Blue Ocean project, International Center for Innovation and Women in Business, NEFFICS...

  20. Cloud model construct for transaction-based cooperative systems ...

    African Journals Online (AJOL)

    Cloud model construct for transaction-based cooperative systems. ... procure cutting edge Information Technology infrastructure are some of the problems faced ... Results also reveal that credit cooperatives will benefit from the model by taking ...

  1. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  2. Results-Based Organization Design for Technology Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Chris McPhee

    2012-05-01

    Full Text Available Faced with considerable uncertainty, entrepreneurs would benefit from clearly defined objectives, a plan to achieve these objectives (including a reasonable expectation that this plan will work), as well as a means to measure progress and make requisite course corrections. In this article, the author combines the benefits of results-based management with the benefits of organization design to describe a practical approach that technology entrepreneurs can use to design their organizations so that they deliver desired outcomes. This approach links insights from theory and practice, builds logical connections between entrepreneurial activities and desired outcomes, and measures progress toward those outcomes. This approach also provides a mechanism for entrepreneurs to make continual adjustments and improvements to their design and direction in response to data, customer and stakeholder feedback, and changes in their business environment.

  3. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory; Pierrard, Viviane [BELGIUM; Goldstein, Jerry [SWRI; André, Nicolas [ESTEC/ESA; Kotova, Galina A [SRI, RUSSIA; Lemaire, Joseph F [BELGIUM; Liemohn, Mike W [U OF MICHIGAN; Matsui, H [UNIV OF NEW HAMPSHIRE

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of the plasmaspheric wind and of ultra low frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  4. Graph and model transformation tools for model migration : empirical results from the transformation tool contest

    NARCIS (Netherlands)

    Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.M.E.; Buchwald, S.; Horn, T.; Kalnina, E.; Koch, A.; Lano, K.; Schätz, B.; Wimmer, M.

    2014-01-01

    We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically

  5. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological framework for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
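    The Bayesian updating step mentioned in the abstract - recalculating a component-availability figure when new information arrives - is a one-line application of Bayes' theorem. All the probabilities below are illustrative, not taken from the paper:

```python
# Bayes' theorem for updating a component-availability estimate (illustrative numbers)
p_avail = 0.95                 # prior probability the component is available
p_alarm_given_avail = 0.02     # false-alarm rate of a fault indicator
p_alarm_given_fail = 0.90      # detection rate when the component has failed

# New information arrives: the fault indicator fires
p_alarm = p_alarm_given_avail * p_avail + p_alarm_given_fail * (1 - p_avail)
p_avail_posterior = p_alarm_given_avail * p_avail / p_alarm
print(round(p_avail_posterior, 3))   # → 0.297
```

A single alarm drops the availability estimate from 0.95 to about 0.30, which is the kind of recalculation the decision element would feed into its strategy planning.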

  6. On Process Modelling Using Physical Oriented And Phenomena Based Principles

    Directory of Open Access Journals (Sweden)

    Mihai Culea

    2000-12-01

    Full Text Available This work presents a modelling framework based on phenomena description of the process. The approach is taken to easy understand and construct process model in heterogeneous possible distributed modelling and simulation environments. A simplified case study of a heat exchanger is considered and Modelica modelling language to check the proposed concept. The partial results are promising and the research effort will be extended in a computer aided modelling environment based on phenomena.

  7. Process-Based Modeling of Constructed Wetlands

    Science.gov (United States)

    Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.

    2007-12-01

    Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water-quality standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (nitrogen and phosphorus primarily) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
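    The Monod-type kinetics and first-order oxygen mass transfer described above can be sketched as a small explicit-Euler simulation. All parameter values are invented for illustration and are not from CW2D, ASM or PHWAT; oxygen limitation of growth is deliberately ignored to keep the sketch short:

```python
# Monod substrate oxidation plus first-order oxygen re-aeration (toy parameters)
mu_max, K_s, Y = 4.0, 20.0, 0.5     # max growth rate (1/d), half-saturation (mg/L), yield
k_La, O2_sat = 2.0, 9.0             # mass-transfer coefficient (1/d), saturation (mg/L)
dt, T = 0.001, 5.0                  # time step and horizon (days)

S, X, O2 = 100.0, 10.0, 9.0         # substrate, biomass, dissolved oxygen (mg/L)
for _ in range(int(T / dt)):
    mu = mu_max * S / (K_s + S)     # Monod-type specific growth rate
    dS = -mu * X / Y                # substrate oxidation
    dX = mu * X                     # biomass growth
    dO2 = k_La * (O2_sat - O2) - mu * X   # re-aeration minus consumption
    S = max(S + dS * dt, 0.0)
    X += dX * dt
    O2 = max(O2 + dO2 * dt, 0.0)
print(round(S, 2), round(X, 1))     # substrate nearly depleted; biomass ≈ X0 + Y·ΔS
```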

  8. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    Full Text Available In the past, proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate the different aging properties and stochastic comparisons of the model.

  9. Tapering of the CHESS-APS undulator: Results and modelling

    International Nuclear Information System (INIS)

    Lai, B.; Viccaro, P.J.; Dejus, R.; Gluskin, E.; Yun, W.B.; McNulty, I.; Henderson, C.; White, J.; Shen, Q.; Finkelstein, K.

    1992-01-01

    When the magnetic gap of an undulator is tapered along the beam direction, the slowly varying peak field B0 introduces a spread in the value of the deflection parameter K. The result is a broad energy-band undulator that still maintains a high degree of spatial collimation. These properties are very useful for EXAFS and energy-dispersive techniques. We have characterized the CHESS-APS undulator (λu = 3.3 cm) at one tapered configuration (10% change of the magnetic gap from one end of the undulator to the other). Spatial distributions and energy spectra of the first three harmonics through a pinhole were measured. The on-axis first-harmonic width increased from 0.27 keV to 0.61 keV (FWHM) at the central energy of E1 = 6.6 keV (average K = 0.69). Broadening in the angular distribution due to tapering was minimal. These results will be compared with computer modelling which simulates the actual electron trajectory in the tapered case
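    The reported central energy can be checked against the standard on-axis undulator equation, E1[keV] ≈ 0.95·E²[GeV²] / (λu[cm]·(1 + K²/2)). The sketch below assumes a 5.3 GeV ring energy (CHESS) and an illustrative K range for the taper; both are assumptions, not values stated in the record:

```python
# On-axis first-harmonic energy of a planar undulator (standard formula).
# The 5.3 GeV ring energy (CHESS) is an assumption for illustration.
def first_harmonic_keV(E_GeV, lambda_u_cm, K):
    return 0.9496 * E_GeV ** 2 / (lambda_u_cm * (1.0 + K ** 2 / 2.0))

E1 = first_harmonic_keV(5.3, 3.3, 0.69)
print(round(E1, 1))   # ≈ 6.5 keV, consistent with the measured 6.6 keV central energy

# A taper spreads K along the device, which spreads the first-harmonic energy:
for K in (0.62, 0.69, 0.76):   # hypothetical K range along a tapered gap
    print(K, round(first_harmonic_keV(5.3, 3.3, K), 2))
```

The resulting ~0.5 keV spread between the extreme K values is of the same order as the measured broadening from 0.27 to 0.61 keV FWHM.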

  10. Modeling oil production based on symbolic regression

    International Nuclear Information System (INIS)

    Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju

    2015-01-01

    Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak
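    A least-squares fit of the Hubbert curve - the functional form the paper reports symbolic regression rediscovering - can be sketched with a brute-force parameter search. The production series below is synthetic; only the form P(t) = 2·P_peak / (1 + cosh(b·(t − t_peak))) is standard:

```python
import numpy as np

def hubbert(t, p_peak, b, t_peak):
    """Hubbert curve: logistic-derivative production profile."""
    return 2.0 * p_peak / (1.0 + np.cosh(b * (t - t_peak)))

rng = np.random.default_rng(7)
t = np.arange(1960, 2015)
true = dict(p_peak=30.0, b=0.08, t_peak=2021.0)   # made-up "world production" truth
obs = hubbert(t, **true) * (1.0 + rng.normal(0.0, 0.02, t.size))  # 2% noise

# Brute-force search over (b, t_peak); p_peak solved linearly for each pair
best = (np.inf, None)
for b in np.linspace(0.02, 0.2, 60):
    for tp in np.arange(2000, 2041):
        shape = 2.0 / (1.0 + np.cosh(b * (t - tp)))
        p_peak = (shape @ obs) / (shape @ shape)  # closed-form linear least squares
        rss = np.sum((obs - p_peak * shape) ** 2)
        if rss < best[0]:
            best = (rss, (p_peak, b, tp))

tp_hat = best[1][2]
print(tp_hat)   # estimated peak year, recovered from pre-peak data only
```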

  11. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

    Full Text Available The calibration and execution of large hydrological models, such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, require not only long execution times but also high computation resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid-based high-computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  12. Model-based DSL frameworks

    NARCIS (Netherlands)

    Ivanov, Ivan; Bézivin, J.; Jouault, F.; Valduriez, P.

    2006-01-01

    More than five years ago, the OMG proposed the Model Driven Architecture (MDA™) approach to deal with the separation of platform dependent and independent aspects in information systems. Since then, the initial idea of MDA evolved and Model Driven Engineering (MDE) is being increasingly promoted to

  13. Bridge Testing With Ground-Based Interferometric Radar: Experimental Results

    International Nuclear Information System (INIS)

    Chiara, P.; Morelli, A.

    2010-01-01

    The research of innovative non-contact techniques aimed at the vibration measurement of civil engineering structures (also for damage detection and structural health monitoring) is continuously directed to the optimization of measures and methods. Ground-Based Radar Interferometry (GBRI) represents the most recent technique available for static and dynamic control of structures and ground movements. Dynamic testing of bridges and buildings in operational conditions is currently performed: (a) to assess the conformity of the structure to the project design at the end of construction; (b) to identify the modal parameters (i.e. natural frequencies, mode shapes and damping ratios) and to check the variation of any modal parameters over the years; (c) to evaluate the amplitude of the structural response to special load conditions (i.e. strong winds, earthquakes, heavy railway or roadway loads). If such tests are carried out by using a non-contact technique (like GBRI), the classical issues of contact sensors (like accelerometers) are easily overcome. This paper presents and discusses the results of various tests carried out on full-scale bridges by using a Stepped-Frequency Continuous-Wave radar system.

  14. Bridge Testing With Ground-Based Interferometric Radar: Experimental Results

    Science.gov (United States)

    Chiara, P.; Morelli, A.

    2010-05-01

    The research of innovative non-contact techniques aimed at the vibration measurement of civil engineering structures (also for damage detection and structural health monitoring) is continuously directed to the optimization of measures and methods. Ground-Based Radar Interferometry (GBRI) represents the most recent technique available for static and dynamic control of structures and ground movements. Dynamic testing of bridges and buildings in operational conditions is currently performed: (a) to assess the conformity of the structure to the project design at the end of construction; (b) to identify the modal parameters (i.e. natural frequencies, mode shapes and damping ratios) and to check the variation of any modal parameters over the years; (c) to evaluate the amplitude of the structural response to special load conditions (i.e. strong winds, earthquakes, heavy railway or roadway loads). If such tests are carried out by using a non-contact technique (like GBRI), the classical issues of contact sensors (like accelerometers) are easily overcome. This paper presents and discusses the results of various tests carried out on full-scale bridges by using a Stepped-Frequency Continuous-Wave radar system.

  15. The design, results and future development of the National Energy Strategy Environmental Analysis Model (NESEAM)

    International Nuclear Information System (INIS)

    Fisher, R.E.; Boyd, G.A.; Breed, W.S.

    1991-01-01

    The National Energy Strategy Environmental Analysis Model (NESEAM) has been developed to project emissions for the National Energy Strategy (NES). Two scenarios were evaluated for the NES, a Current Policy Base Case and a NES Action Case. The results from the NES Action Case project much lower emissions than the Current Policy Base Case. Future enhancements to NESEAM will focus on fuel cycle analysis, including future technologies and additional pollutants to model. NESEAM's flexibility will allow it to model other future legislative issues. 7 refs., 4 figs., 2 tabs

  16. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.
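    The core ABM ingredients listed above - autonomous heterogeneous agents, interaction with a shared environment, and emergent macro-scale outcomes - fit in a few lines. The sketch below is a toy commons-harvesting model in the spirit of the commons-dilemma applications mentioned; all rules and parameter values are invented:

```python
import random

random.seed(3)

class Agent:
    """Harvester with a heterogeneous, adaptive extraction rate."""
    def __init__(self):
        self.rate = random.uniform(0.5, 2.0)   # heterogeneous initial effort
        self.payoff = 0.0

    def act(self, stock, n_agents):
        harvest = min(self.rate, stock / n_agents)   # cannot take more than a share
        self.payoff += harvest
        # Simple behavioral adaptation: back off when the commons is scarce
        self.rate *= 0.9 if stock < 50 else 1.02
        return harvest

agents = [Agent() for _ in range(20)]
stock, history = 100.0, []
for step in range(200):
    stock -= sum(a.act(stock, len(agents)) for a in agents)
    stock = min(stock * 1.25 + 5.0, 200.0)   # regrowth plus small inflow, capped
    history.append(stock)

print(round(stock, 1), round(sum(a.payoff for a in agents), 1))
```

The emergent outcome - the resource hovering near the scarcity threshold rather than collapsing - arises from individual adaptation rules, not from any equation imposed at the macroscale.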

  17. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can then be simulated with the real world sensor data, and the output from the simulated digital control system compared to the old analog control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  18. Demixing in a metal halide lamp, results from modelling

    NARCIS (Netherlands)

    Beks, M.L.; Hartgers, A.; Mullen, van der J.J.A.M.

    2006-01-01

    Convection and diffusion in the discharge region of a metal halide lamp are studied using a computer model built with the plasma modelling package Plasimo. A model lamp containing mercury and sodium iodide is studied. The effects of the total lamp pressure on the degree of segregation of the light

  19. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.

  20. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  1. Uncertainty in parameterisation and model structure affect simulation results in coupled ecohydrological models

    Directory of Open Access Journals (Sweden)

    S. Arnold

    2009-10-01

    Full Text Available In this paper we develop and apply a conceptual ecohydrological model to investigate the effects of model structure and parameter uncertainty on the simulation of vegetation structure and hydrological dynamics. The model is applied for a typical water-limited riparian ecosystem along an ephemeral river: the middle section of the Kuiseb River in Namibia. We modelled this system by coupling an ecological model with a conceptual hydrological model. The hydrological model is storage-based with stochastic forcing from the flood. The ecosystem is modelled with a population model, and represents three dominating riparian plant populations. To account for uncertainty about population dynamics, we applied three model versions with increasing complexity. Population parameters were found by Latin hypercube sampling of the parameter space and with the constraint that three species should coexist as observed. Two of the three models were able to reproduce the observed coexistence. However, both models relied on different coexistence mechanisms, and reacted differently to change of long-term memory in the flood forcing. The coexistence requirement strongly constrained the parameter space for both successful models. Only very few parameter sets (0.5% of 150 000 samples) allowed for coexistence in a representative number of repeated simulations (at least 10 out of 100), and the success of the coexistence mechanism was controlled by the combination of population parameters. The ensemble statistics of average values of hydrologic variables like transpiration and depth to groundwater were similar for both models, suggesting that they were mainly controlled by the applied hydrological model. The ensemble statistics of the fluctuations of depth to groundwater and transpiration, however, differed significantly, suggesting that they were controlled by the applied ecological model and coexistence mechanisms. Our study emphasizes that uncertainty about ecosystem
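
    The Latin hypercube sampling step described above can be sketched as follows (a minimal generic implementation on the unit cube; the function name, seed, and scaling are illustrative assumptions, not taken from the paper):

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Return n_samples points in [0,1)^n_params with exactly one
    sample per stratum for every parameter (basic LHS design)."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across parameters
        # jitter each sample uniformly within its assigned stratum
        columns.append([(k + rng.random()) / n_samples for k in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]
```

    In a workflow like the paper's, each point would then be rescaled to the physical parameter ranges and simulations run, keeping only parameter sets that satisfy the constraint (here, coexistence of the three populations).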

  2. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in infrastructure and built environments. This paper discusses the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, and by introducing Geographic Information Systems (GIS) and open source models.

  3. Models for Rational Number Bases

    Science.gov (United States)

    Pedersen, Jean J.; Armbruster, Frank O.

    1975-01-01

    This article extends number bases to negative integers, then to positive rationals and finally to negative rationals. Methods and rules for operations in positive and negative rational bases greater than one or less than negative one are summarized in tables. Sample problems are explained and illustrated. (KM)
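
    The negative-integer-base part of this extension uses the standard digit-extraction algorithm: repeatedly divide by the base and shift any negative remainder into range. A small sketch (covering negative integer bases only, not the rational bases also treated in the article; the function name is invented):

```python
def to_negative_base(n, base):
    """Digit string of integer n in a negative integer base (base <= -2).

    Repeated division by the base, adjusting each remainder into
    0 .. |base|-1; negative n is representable without a sign.
    """
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:          # Python floors toward -inf, so r may be negative
            n += 1
            r -= base      # shift remainder into 0 .. |base|-1
        digits.append(str(r))
    return "".join(reversed(digits))
```

    For example, `to_negative_base(7, -2)` gives `"11011"`, since 16 − 8 + 0 − 2 + 1 = 7.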

  4. Waste glass corrosion modeling: Comparison with experimental results

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-01-01

    Models for borosilicate glass dissolution must account for the processes of (1) kinetically-controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models are necessary before the models can be used in repository performance assessment calculations

  5. INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...

    Science.gov (United States)

    Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  6. Néron models and base change

    CERN Document Server

    Halle, Lars Halvard

    2016-01-01

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions of abelian varieties. The final chapter contains a list of challenging open questions. This book is a...

  7. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  8. Argonne Fuel Cycle Facility ventilation system -- modeling and results

    International Nuclear Information System (INIS)

    Mohr, D.; Feldman, E.E.; Danielson, W.F.

    1995-01-01

    This paper describes an integrated study of the Argonne-West Fuel Cycle Facility (FCF) interconnected ventilation systems during various operations. Analyses and test results include first a nominal condition reflecting balanced pressures and flows, followed by several infrequent and off-normal scenarios. This effort is the first study of the FCF ventilation systems as an integrated network wherein the hydraulic effects of all major air systems have been analyzed and tested. The FCF building consists of many interconnected regions in which nuclear fuel is handled, transported and reprocessed. The ventilation systems comprise a large number of ducts, fans, dampers, and filters which together must provide clean, properly conditioned air to the worker-occupied spaces of the facility while preventing the spread of airborne radioactive materials to clean areas or the atmosphere. This objective is achieved by keeping the FCF building at a partial vacuum in which the contaminated areas are kept at lower pressures than the other worker-occupied spaces. The ventilation systems of FCF and the EBR-II reactor are analyzed as an integrated whole, as demonstrated by the network model shown in Fig. 2, which we developed for the TORAC code. The scope of this study was to assess the measured results from the acceptance/flow-balancing testing and to predict the effects of power failures, hatch and door openings, single-failure faulted conditions, EBR-II isolation, and other infrequent operations. The studies show that the FCF ventilation systems are very controllable and remain stable following off-normal events. In addition, the FCF ventilation system complex is essentially immune to reverse flows and spread of contamination to clean areas during normal and off-normal operation

  9. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma "Tor Vergata", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma "Tor Vergata", Rome (Italy)]; Belli, P. [INFN, sez. Roma "Tor Vergata", Rome (Italy)]; Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma "La Sapienza", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy)]; Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy)]; Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China)]; Incicchitti, A. [INFN, sez. Roma, Rome (Italy)]; Montecchia, F. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Universita di Roma "Tor Vergata", Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy)]; Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)]

    2013-12-15

    The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature by using highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is: (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
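
    The model-independent annual-modulation signature exploited here is conventionally parameterized as follows (standard form from the dark-matter literature, with the usual symbols; not transcribed from this record):

```latex
% S_{0,k}: unmodulated rate in energy bin k
% S_{m,k}: modulation amplitude in bin k
% T ~ 1 yr (Earth's orbital period), t_0 ~ 152.5 d (~ June 2)
S_k(t) = S_{0,k} + S_{m,k}\,
         \cos\!\left(\frac{2\pi}{T}\,(t - t_0)\right)
```

    The amplitude, phase and period quoted in the abstract are the fitted values of S_m, t_0 and T for the (2-6) keV single-hit events.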

  10. Innovation ecosystem model for commercialization of research results

    Directory of Open Access Journals (Sweden)

    Vlăduţ Gabriel

    2017-07-01

    Full Text Available Innovation means Creativity and Added value recognise by the market. The first step in creating a sustainable commercialization of research results, Technological Transfer – TT mechanism, on one hand is to define the “technology” which will be transferred and on other hand to define the context in which the TT mechanism work, the ecosystem. The focus must be set on technology as an entity, not as a science or a study of the practical industrial arts and certainly not any specific applied science. The transfer object, the technology, must rely on a subjectively determined but specifiable set of processes and products. Focusing on the product is not sufficient to the transfer and diffusion of technology. It is not merely the product that is transferred but also knowledge of its use and application. The innovation ecosystem model brings together new companies, experienced business leaders, researchers, government officials, established technology companies, and investors. This environment provides those new companies with a wealth of technical expertise, business experience, and access to capital that supports innovation in the early stages of growth.

  11. Model based risk assessment - the CORAS framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune; Thunem, Atoosa P-J.

    2004-04-15

    Traditional risk analysis and assessment is based on failure-oriented models of the system. In contrast to this, model-based risk assessment (MBRA) utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The target models are then used as input sources for complementary risk analysis and assessment techniques, as well as a basis for the documentation of the assessment results. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tested with successful outcome through a series of seven trials within the telemedicine and e-commerce areas. The CORAS project in general and the CORAS application of MBRA in particular have contributed positively to the visibility of model-based risk assessment and thus to the disclosure of several potentials for further exploitation of various aspects within this important research field. In that connection, the CORAS methodology's possibilities for further improvement towards utilization in more complex architectures and also in other application domains such as the nuclear field can be addressed. The latter calls for adapting the framework to address nuclear standards such as IEC 60880 and IEC 61513. For this development we recommend applying a trial-driven approach within the nuclear field. The tool-supported approach for combining risk analysis and system development also fits well with the HRP proposal for developing an Integrated Design Environment (IDE) providing efficient methods and tools to support control room systems design. (Author)

  12. A computer-based measure of resultant achievement motivation.

    Science.gov (United States)

    Blankenship, V

    1987-08-01

    Three experiments were conducted to develop a computer-based measure of individual differences in resultant achievement motivation (RAM) on the basis of level-of-aspiration, achievement motivation, and dynamics-of-action theories. In Experiment 1, the number of atypical shifts and greater responsiveness to incentives on 21 trials with choices among easy, intermediate, and difficult levels of an achievement-oriented game were positively correlated and were found to differentiate the 62 subjects (31 men, 31 women) on the amount of time they spent at a nonachievement task (watching a color design) 1 week later. In Experiment 2, test-retest reliability was established with the use of 67 subjects (15 men, 52 women). Point and no-point trials were offered in blocks, with point trials first for half the subjects and no-point trials first for the other half. Reliability was higher for the atypical-shift measure than for the incentive-responsiveness measure and was higher when points were offered first. In Experiment 3, computer anxiety was manipulated by creating a simulated computer breakdown in the experimental condition. Fifty-nine subjects (13 men, 46 women) were randomly assigned to the experimental condition or to one of two control conditions (an interruption condition and a no-interruption condition). Subjects with low RAM, as demonstrated by a low number of typical shifts, took longer to choose the achievement-oriented task, as predicted by the dynamics-of-action theory. The difference was evident in all conditions and most striking in the computer-breakdown condition. A change of focus from atypical to typical shifts is discussed.

  13. Some important results from the air pollution distribution model STACKS (1988-1992)

    International Nuclear Information System (INIS)

    Erbrink, J.J.

    1993-01-01

    Attention is paid to the results of the study on the distribution of air pollutants by high chimney-stacks of electric power plants. An important product of the study is the integrated distribution model STACKS (Short Term Air-pollutant Concentrations Kema modelling System). The improvements and the extensions of STACKS are described in relation to the National Model, which has been used to estimate the environmental effects of individual chimney-stacks. The National Model shows unacceptable variations for high pollutant sources. Based on the results of STACKS, revision of the National Model has been taken into consideration. By means of the revised National Model, a more realistic estimation of the environmental effects of electric power plants can be carried out

  14. PV panel model based on datasheet values

    DEFF Research Database (Denmark)

    Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro

    2007-01-01

    This work presents the construction of a model for a PV panel using the single-diode five-parameter model, based exclusively on data-sheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell....... Based on these equations, a PV panel model, which is able to predict the panel behavior in different temperature and irradiance conditions, is built and tested....
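
    The single-diode five-parameter model referenced here is commonly written as I = I_L − I_0·[exp((V + I·R_s)/(n·V_t)) − 1] − (V + I·R_s)/R_sh, which is implicit in I; a minimal Newton-iteration sketch of solving it (generic textbook form, with invented parameter values — nothing here is taken from the paper's datasheet extraction procedure):

```python
import math

def pv_current(v, i_l, i_0, n_vt, r_s, r_sh, iters=50):
    """Solve the implicit single-diode equation for panel current I
    at terminal voltage v, using Newton's method.

    i_l:  photocurrent, i_0: diode saturation current,
    n_vt: ideality factor times thermal voltage (times series cells),
    r_s / r_sh: series / shunt resistance.
    """
    i = i_l  # the short-circuit current is a good starting guess
    for _ in range(iters):
        e = math.exp((v + i * r_s) / n_vt)
        f = i_l - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
        df = -i_0 * e * r_s / n_vt - r_s / r_sh - 1.0
        i -= f / df
    return i
```

    Sweeping v from zero up to the open-circuit voltage then traces the I-V characteristic curve that such panel models are validated against.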

  15. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorology data obtained at SRL during 1975 to 1977. Individual laboratories used models to calculate daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations

  16. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  17. gis-based hydrological model based hydrological model upstream

    African Journals Online (AJOL)

    eobe

    (GIS) environment to simulate various parame attributed to a ... water, sediment and agricultural chem large complex .... The 90m resolution topography data (see Figure 2) used for this .... results of the calibration and validation exercise are as.

  18. Model-based explanation of plant knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Huuskonen, P.J. [VTT Electronics, Oulu (Finland). Embedded Software

    1997-12-31

    This thesis deals with computer explanation of knowledge related to design and operation of industrial plants. The needs for explanation are motivated through case studies and literature reviews. A general framework for analysing plant explanations is presented. Prototypes demonstrate key mechanisms for implementing parts of the framework. Power plants, steel mills, paper factories, and high energy physics control systems are studied to set requirements for explanation. The main problems are seen to be either lack or abundance of information. Design knowledge in particular is found missing at plants. Support systems and automation should be enhanced with ways to explain plant knowledge to the plant staff. A framework is formulated for analysing explanations of plant knowledge. It consists of three parts: 1. a typology of explanation, organised by the class of knowledge (factual, functional, or strategic) and by the target of explanation (processes, automation, or support systems), 2. an identification of explanation tasks generic for the plant domain, and 3. an identification of essential model types for explanation (structural, behavioural, functional, and teleological). The tasks use the models to create the explanations of the given classes. Key mechanisms are discussed to implement the generic explanation tasks. Knowledge representations based on objects and their relations form a vocabulary to model and present plant knowledge. A particular class of models, means-end models, are used to explain plant knowledge. Explanations are generated through searches in the models. Hypertext is adopted to communicate explanations over dialogue based on context. The results are demonstrated in prototypes. The VICE prototype explains the reasoning of an expert system for diagnosis of rotating machines at power plants. The Justifier prototype explains design knowledge obtained from an object-oriented plant design tool. Enhanced access mechanisms into on-line documentation are

  20. Firm Based Trade Models and Turkish Economy

    Directory of Open Access Journals (Sweden)

    Nilüfer ARGIN

    2015-12-01

    Full Text Available Among all international trade models, only Firm Based Trade Models explain firms' actions and behavior in world trade. Firm Based Trade Models focus on the trade behavior of the individual firms that actually conduct intra-industry trade, and can therefore describe the globalization process faithfully. These approaches also cover multinational corporations, supply chains and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of Firm Based Trade Models. We use UNCTAD data on exports by SITC Rev. 3 categorization to explain total exports and 255 products, and calculate the intensive and extensive margins of Turkish firms.

  1. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space-time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed....

  2. Waste glass corrosion modeling: Comparison with experimental results

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1993-11-01

    A chemical model of glass corrosion will be used to predict the rates of release of radionuclides from borosilicate glass waste forms in high-level waste repositories. The model will be used both to calculate the rate of degradation of the glass, and also to predict the effects of chemical interactions between the glass and repository materials such as spent fuel, canister and container materials, backfill, cements, grouts, and others. Coupling between the degradation processes affecting all these materials is expected. Models for borosilicate glass dissolution must account for the processes of (1) kinetically-controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models are necessary before the models can be used in repository performance assessment calculations

  3. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F.

    1999-07-01

    A dynamical downscaling is presented that allows an estimation of potential effects of climate change on the North Sea. Therefore, the ocean general circulation model OPYC is adapted for application on a shelf by adding a lateral boundary formulation and a tide model. In this set-up the model is forced, first, with data from the ECMWF reanalysis for model validation and the study of the natural variability, and, second, with data from climate change experiments to estimate the effects of climate change on the North Sea. (orig.)

  4. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  5. Particle-based model for skiing traffic.

    Science.gov (United States)

    Holleczek, Thomas; Tröster, Gerhard

    2012-05-01

    We develop and investigate a particle-based model for ski slope traffic. Skiers are modeled as particles with a mass that are exposed to social and physical forces, which define the riding behavior of skiers during their descents on ski slopes. We also report position and speed data of 21 skiers recorded with GPS-equipped cell phones on two ski slopes. A comparison of these data with the trajectories resulting from computer simulations of our model shows a good correspondence. A study of the relationship among the density, speed, and flow of skiers reveals that congestion does not occur even with arrival rates of skiers exceeding the maximum ski lift capacity. In a sensitivity analysis, we identify the kinetic friction coefficient of skis on snow, the skier mass, the range of repelling social forces, and the arrival rate of skiers as the crucial parameters influencing the simulation results. Our model allows for the prediction of speed zones and skier densities on ski slopes, which is important in the prevention of skiing accidents.
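
    The force ingredients described above (gravity along the fall line, kinetic friction of skis on snow, and a repelling social force with a finite range) can be illustrated with a toy update step; every constant and functional form below is invented for illustration, not the paper's calibrated model:

```python
import math

def repulsion(p, q, strength=5.0, b=2.0):
    """Social repulsive force on the skier at p exerted by the skier
    at q, decaying exponentially with distance (assumed form)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    d = math.hypot(dx, dy) or 1e-9
    mag = strength * math.exp(-d / b)
    return (mag * dx / d, mag * dy / d)

def step(pos, vel, others, mass=80.0, g=9.81, slope=0.2, mu=0.05, dt=0.1):
    """One explicit-Euler step: down-slope gravity component (+x),
    kinetic friction opposing motion, plus social repulsion."""
    fx = mass * g * slope              # driving force along the fall line
    speed = math.hypot(*vel) or 1e-9
    fn = mass * g                      # small-angle normal-force approximation
    fx -= mu * fn * vel[0] / speed     # friction acts against velocity
    fy = -mu * fn * vel[1] / speed
    for q in others:
        rx, ry = repulsion(pos, q)
        fx += rx
        fy += ry
    vx = vel[0] + dt * fx / mass
    vy = vel[1] + dt * fy / mass
    return (pos[0] + dt * vx, pos[1] + dt * vy), (vx, vy)
```

    Iterating `step` over many skiers with staggered arrivals is the basic loop from which density-speed-flow relationships, as studied in the paper, can be extracted.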

  6. Pile Design Based on Cone Penetration Test Results

    OpenAIRE

    Salgado, Rodrigo; Lee, Junhwan

    1999-01-01

    The bearing capacity of piles consists of both base resistance and side resistance. The side resistance of piles is in most cases fully mobilized well before the maximum base resistance is reached. As the side resistance is mobilized early in the loading process, the determination of pile base resistance is a key element of pile design. Static cone penetration is well related to the pile loading process, since it is performed quasi-statically and resembles a scaled-down pile load test. In ord...

  7. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  8. Spinal cord stimulation: modeling results and clinical data

    NARCIS (Netherlands)

    Struijk, Johannes J.; Struijk, J.J.; Holsheimer, J.; Barolat, Giancarlo; He, Jiping

    1992-01-01

    The potential distribution in volume conductor models of the spinal cord at cervical, mid-thoracic and low-thoracic levels, due to epidural stimulation, was calculated. Threshold stimuli of modeled myelinated dorsal column and dorsal root fibers were calculated and compared with perception

  9. How to: understanding SWAT model uncertainty relative to measured results

    Science.gov (United States)

    Watershed models are being relied upon to contribute to most policy-making decisions of watershed management, and the demand for an accurate accounting of complete model uncertainty is rising. Generalized likelihood uncertainty estimation (GLUE) is a widely used method for quantifying uncertainty i...

  10. Noise and dose modeling for pediatric CT optimization: preliminary results

    International Nuclear Information System (INIS)

    Miller Clemente, Rafael A.; Perez Diaz, Marlen; Mora Reyes, Yudel; Rodriguez Garlobo, Maikel; Castillo Salazar, Rafael

    2008-01-01

    Full text: A multiple linear regression model was developed to predict noise and dose in pediatric computed tomography imaging for head and abdominal examinations. Relative values of noise and of the volumetric computed tomography dose index were used to fit the noise and dose models, respectively. 54 images of physical phantoms were acquired. Independent variables considered included: phantom diameter, tube current and kilovoltage, x-ray beam collimation, reconstruction diameter and the equipment's post-processing filters. Predicted values show good agreement with measurements, with the noise model (adjusted R² = 0.953) performing better than the dose model (adjusted R² = 0.744). Tube current, object diameter, beam collimation and reconstruction filter were identified as the most influential factors in the models. (author)
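    The fitting procedure described in the abstract can be sketched generically as ordinary least squares with an adjusted-R² goodness-of-fit measure. The predictors below loosely mirror those named in the abstract, but the data, coefficient values, and units are synthetic illustrations, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 54  # the abstract reports 54 phantom images

    # Hypothetical predictors: phantom diameter (cm), tube current (mA),
    # kilovoltage (kV), beam collimation (mm).
    X = np.column_stack([
        rng.uniform(10, 30, n),
        rng.uniform(50, 300, n),
        rng.uniform(80, 140, n),
        rng.uniform(5, 40, n),
    ])
    true_beta = np.array([0.8, -0.004, -0.01, -0.02])  # invented ground truth
    noise_index = 5.0 + X @ true_beta + rng.normal(0, 0.1, n)

    # Fit y = b0 + X b by ordinary least squares.
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, noise_index, rcond=None)

    # Adjusted R^2, the figure of merit quoted in the abstract.
    resid = noise_index - A @ beta
    ss_res = resid @ resid
    ss_tot = ((noise_index - noise_index.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - A.shape[1])
    print(round(r2_adj, 3))
    ```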

  11. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  12. Family-based hip-hop to health: outcome results.

    Science.gov (United States)

    Fitzgibbon, Marian L; Stolley, Melinda R; Schiffer, Linda; Kong, Angela; Braunschweig, Carol L; Gomez-Perez, Sandra L; Odoms-Young, Angela; Van Horn, Linda; Christoffel, Katherine Kaufer; Dyer, Alan R

    2013-02-01

    This pilot study tested the feasibility of Family-Based Hip-Hop to Health, a school-based obesity prevention intervention for 3-5-year-old Latino children and their parents, and estimated its effectiveness in producing smaller average changes in BMI at 1-year follow-up. Four Head Start preschools administered through the Chicago Public Schools were randomly assigned to receive a Family-Based Intervention (FBI) or a General Health Intervention (GHI). Parents signed consent forms for 147 of the 157 children enrolled. Both the school-based and family-based components of the intervention were feasible, but attendance for the parent intervention sessions was low. Contrary to expectations, a downtrend in BMI Z-score was observed in both the intervention and control groups. While the data reflect a downward trend in obesity among these young Hispanic children, obesity rates remained higher at 1-year follow-up (15%) than those reported by the National Health and Nutrition Examination Survey (2009-2010) for 2-5-year-old children (12.1%). Developing evidence-based strategies for obesity prevention among Hispanic families remains a challenge. Copyright © 2012 The Obesity Society.

  13. Exoplanets -New Results from Space and Ground-based Surveys

    Science.gov (United States)

    Udry, Stephane

    The exploration of the outer solar system, and in particular of the giant planets and their environments, is an on-going process, with the Cassini spacecraft currently around Saturn, the Juno mission to Jupiter preparing to depart, and two large future space missions planned to launch in the 2020-2025 time frame: one for the Jupiter system and its satellites (Europa and Ganymede), and one for the Saturnian system and Titan [1,2]. Titan, Saturn's largest satellite, is the only other object in our Solar system to possess an extensive nitrogen atmosphere, host to an active organic chemistry based on the interaction of N2 with methane (CH4). Following the Voyager flyby in 1980, Titan has been intensely studied for the past three decades from large ground-based telescopes (such as Keck or the VLT) and by artificial satellites (such as the Infrared Space Observatory and the Hubble Space Telescope). Prior to Cassini-Huygens, Titan's atmospheric composition was thus known to us from the Voyager missions and through the observations of ISO. Our perception of Titan had been greatly enhanced accordingly, but many questions remained as to the nature of the haze surrounding the satellite and the composition of the surface. The recent revelations by the Cassini-Huygens mission have surprised us with many discoveries [3-8] and have yet to reveal more of the interesting aspects of the satellite. The Cassini-Huygens mission to the Saturnian system has been an extraordinary success for the planetary community since the Saturn Orbit Insertion (SOI) in July 2004 and the very successful probe descent and landing of Huygens on January 14, 2005. One of its main targets was Titan. 
Titan was revealed to be a complex world more like the Earth than any other: it has a dense mostly nitrogen atmosphere and active climate and meteorological cycles where the working fluid, methane, behaves under Titan conditions the way that water does on

  14. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case-study location (Lat. 30°51′N and Long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical error measures are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
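    The seventeen new models are not given in the abstract, but the family of temperature-based models they extend can be illustrated with the classic Hargreaves-Samani form, Rs = k_rs · sqrt(Tmax − Tmin) · Ra, where Ra is the extraterrestrial radiation computed from latitude and day of year (FAO-56 formulas). The example temperatures and the k_rs value are illustrative assumptions.

    ```python
    import math

    def extraterrestrial_radiation(lat_deg, day_of_year):
        """Daily extraterrestrial radiation Ra (MJ m^-2 day^-1), FAO-56."""
        gsc = 0.0820  # solar constant, MJ m^-2 min^-1
        phi = math.radians(lat_deg)
        dr = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365)
        delta = 0.409 * math.sin(2 * math.pi * day_of_year / 365 - 1.39)
        ws = math.acos(-math.tan(phi) * math.tan(delta))  # sunset hour angle
        return (24 * 60 / math.pi) * gsc * dr * (
            ws * math.sin(phi) * math.sin(delta)
            + math.cos(phi) * math.cos(delta) * math.sin(ws)
        )

    def hargreaves_rs(tmax, tmin, lat_deg, day_of_year, k_rs=0.16):
        """Hargreaves-Samani estimate Rs = k_rs * sqrt(Tmax - Tmin) * Ra."""
        ra = extraterrestrial_radiation(lat_deg, day_of_year)
        return k_rs * math.sqrt(tmax - tmin) * ra

    # Example: a mid-June day near the study latitude (~30.85° N).
    print(round(hargreaves_rs(33.0, 21.0, 30.85, 170), 1))
    ```

    Locally calibrating coefficients such as k_rs against measured data, as the abstract describes for the coastal sites, is what distinguishes the "local" from the "general" formulae.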

  15. Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network

    Directory of Open Access Journals (Sweden)

    Viswanathan Arunachalam

    2013-01-01

    Full Text Available Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
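    The binding-neuron mechanism described above can be sketched with a small Monte Carlo simulation: Poisson input impulses are each remembered for a fixed time tau, and the neuron fires once the number of remembered impulses reaches a threshold. The parameter values are illustrative, not taken from the paper.

    ```python
    import random

    def binding_neuron_firing_time(rate, n_threshold, tau, rng):
        """Time of first firing for a binding neuron: each input impulse
        leaves a trace lasting tau; the neuron fires when n_threshold
        traces coexist. Inputs arrive as a Poisson stream of given rate."""
        t = 0.0
        stored = []  # arrival times of impulses still remembered
        while True:
            t += rng.expovariate(rate)                    # next Poisson arrival
            stored = [s for s in stored if t - s < tau]   # forget expired traces
            stored.append(t)
            if len(stored) >= n_threshold:
                return t

    rng = random.Random(1)
    samples = [binding_neuron_firing_time(rate=1.0, n_threshold=2, tau=0.5, rng=rng)
               for _ in range(10_000)]
    mean_t = sum(samples) / len(samples)
    print(round(mean_t, 2))
    ```

    For n_threshold = 2 the firing time has a simple closed form (the first inter-arrival gap shorter than tau), which makes this a convenient check against analytical firing time distributions of the kind the paper derives.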

  16. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses the assessment of uncertainties of atmospheric dispersion model predictions, as well as their optimum presentation to decision makers. Previously, it was not possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario....

  17. Verification of Simulation Results Using Scale Model Flight Test Trajectories

    National Research Council Canada - National Science Library

    Obermark, Jeff

    2004-01-01

    .... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...

  18. Box photosynthesis modeling results for WRF/CMAQ LSM

    Data.gov (United States)

    U.S. Environmental Protection Agency — Box Photosynthesis model simulations for latent heat and ozone at 6 different FLUXNET sites. This dataset is associated with the following publication: Ran, L., J....

  19. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

    The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take ρ > 1 so...
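    The defining equation can be simulated directly; the sketch below uses illustrative parameter values (ρ = 1.05, p = 0.9, unit Gaussian errors), not ones from the paper. Although ρ > 1, the occasional collapse when s(t) = 0 resets the process to pure noise, which is why bubble paths grow and burst rather than explode.

    ```python
    import random

    def simulate_bubble(n, rho, p, sigma, seed=0):
        """Simulate y_t = s_t * rho * y_{t-1} + e_t, with s_t ~ Bernoulli(p)
        and e_t ~ N(0, sigma^2). The bubble grows at rate rho while s_t = 1
        and collapses to pure noise when s_t = 0."""
        rng = random.Random(seed)
        y, path = 0.0, []
        for _ in range(n):
            s = 1 if rng.random() < p else 0
            y = s * rho * y + rng.gauss(0.0, sigma)
            path.append(y)
        return path

    path = simulate_bubble(n=500, rho=1.05, p=0.9, sigma=1.0)
    # Collapses keep the realized path bounded in practice despite rho > 1.
    print(max(abs(v) for v in path))
    ```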

  20. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach in software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling gives a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  1. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  2. The animal model determines the results of Aeromonas virulence factors

    Directory of Open Access Journals (Sweden)

    Alejandro Romero

    2016-10-01

    Full Text Available The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, a bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of A. hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The implication of T3SS in the virulence of A. hydrophila was analysed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics only observed using invertebrate models. Infections using the mutant AH-1∆vapA, lacking the gene coding for the surface S-layer, suggested that this protein was not totally necessary to the bacteria once inside the host, but it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild type. Variations between models were evidenced using the AH-1∆rmlB, which lacks the O-antigen lipopolysaccharide (LPS), and the AH-1∆wahD, which lacks the O-antigen LPS and part of the LPS outer core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using the AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and the AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study

  3. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  4. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    Science.gov (United States)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.

  5. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A Version Management System (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities, such as version control, on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for a model-based VMS which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)

  6. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
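    The metamodeling step the abstract describes, regressing simulated outcomes on standardized input parameters, can be sketched on a toy decision model. The model, parameter distributions, and willingness-to-pay value below are illustrative assumptions, not the paper's cancer cure model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000  # PSA draws, matching the 10,000 cohorts in the abstract

    # Hypothetical PSA inputs for a toy treatment decision.
    p_cure = rng.beta(20, 80, n)              # probability of cure
    cost_tx = rng.normal(50_000, 5_000, n)    # treatment cost ($)
    qaly_gain = rng.normal(2.0, 0.3, n)       # QALYs gained if cured
    wtp = 150_000                             # willingness to pay per QALY ($)

    # Model outcome for each PSA draw: incremental net monetary benefit.
    nmb = wtp * p_cure * qaly_gain - cost_tx

    # The metamodel: regress the outcome on standardized inputs.
    X = np.column_stack([p_cure, cost_tx, qaly_gain])
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    A = np.column_stack([np.ones(n), Z])
    coef, *_ = np.linalg.lstsq(A, nmb, rcond=None)

    # coef[0] estimates the base-case outcome; coef[1:] give the change in
    # NMB per one-standard-deviation change in each input, ranking the
    # parameters' importance as a one-shot sensitivity analysis.
    print([round(float(c)) for c in coef])
    ```

    Standardizing the inputs is what makes the slopes comparable across parameters with different units, which is how the regression reproduces tornado-diagram-style rankings from a single PSA run.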

  7. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane-wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  8. Mars 2020 Model Based Systems Engineering Pilot

    Science.gov (United States)

    Dukes, Alexandra Marie

    2017-01-01

    The pilot study is led by the Integration Engineering group in NASA's Launch Services Program (LSP). The Integration Engineering (IE) group is responsible for managing the interfaces between the spacecraft and launch vehicle. This pilot investigates the utility of Model-Based Systems Engineering (MBSE) with respect to managing and verifying interface requirements. The main objectives of the pilot are to model several key aspects of the Mars 2020 integrated operations and interface requirements based on the design and verification artifacts from the Mars Science Laboratory (MSL) and to demonstrate how MBSE could be used by LSP to gain further insight on the interface between the spacecraft and launch vehicle as well as to enhance how LSP manages the launch service. The method used to accomplish this pilot started with familiarization with SysML, MagicDraw, and the Mars 2020 and MSL systems through books, tutorials, and NASA documentation. MSL was chosen as the focus of the model since its processes and verifications translate easily to the Mars 2020 mission. The study was further focused by modeling specialized systems and processes within MSL in order to demonstrate the utility of MBSE for the rest of the mission. The systems chosen were the In-Flight Disconnect (IFD) system and the Mass Properties process. The IFD was chosen as a system of focus since it is an interface between the spacecraft and launch vehicle which can demonstrate the usefulness of MBSE from a system perspective. The Mass Properties process was chosen as a process of focus since the verifications for mass properties occur throughout the lifecycle and can demonstrate the usefulness of MBSE from a multi-discipline perspective. Several iterations of both perspectives have been modeled and evaluated. While the pilot study will continue for another 2 weeks, pros and cons of using MBSE for LSP IE have been identified. 
A pro of using MBSE includes an integrated view of the disciplines, requirements, and

  9. Model-Based Learning Environment Based on The Concept IPS School-Based Management

    Directory of Open Access Journals (Sweden)

    Hamid Darmadi

    2017-03-01

    Full Text Available The results showed: (1) the environment-based IPS learning model can foster a love of local cultural values as a basis for the development of national culture; (2) community participation and the role of government in implementing the environment-based IPS learning model have a positive impact on the management of school resources; (3) the environment-based IPS learning model is effective in creating a way of living together peacefully, increasing the intensity of togetherness and mutual respect; (4) the environment-based IPS learning model can improve student learning outcomes; (5) there are differences in expressed attitudes and learning outcomes between students located in the conflict area and students outside the conflict area; (6) attitude-scale analysis shows that junior and senior high school students value unity and nationhood, respect for diversity, and peaceful coexistence. It is recommended that the provincial Department of Education, as the institution responsible for fostering and developing social and cultural values, apply the environment-based IPS learning model.

  10. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments, performed at the Rostov NPP, as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95&WWER program package. Parameters of delayed neutrons were acquired from the ENDF/B-VI and BNAB-78 evaluated data files. It was demonstrated that using delayed neutron fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of the numerically modeled experiments, the delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)

  11. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.

  12. Opinion dynamics model based on quantum formalism

    Energy Technology Data Exchange (ETDEWEB)

    Artawan, I. Nengah, E-mail: nengahartawan@gmail.com [Theoretical Physics Division, Department of Physics, Udayana University (Indonesia); Trisnawati, N. L. P., E-mail: nlptrisnawati@gmail.com [Biophysics, Department of Physics, Udayana University (Indonesia)

    2016-03-11

    An opinion dynamics model based on quantum formalism is proposed. The core of the quantum formalism is the half-spin dynamics system. In this research the implicit time evolution operators are derived. The analogy between the model and the Deffuant and Sznajd models is discussed.

  13. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causality relationships and feedback loops from different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  14. A MYSQL-BASED DATA ARCHIVER: PRELIMINARY RESULTS

    International Nuclear Information System (INIS)

    Matthew Bickley; Christopher Slominski

    2008-01-01

    Following an evaluation of the archival requirements of the Jefferson Laboratory accelerator's user community, a prototyping effort was executed to determine if an archiver based on MySQL had sufficient functionality to meet those requirements. This approach was chosen because an archiver based on a relational database enables the development effort to focus on data acquisition and management, letting the database take care of storage, indexing and data consistency. It was clear from the prototype effort that there were no performance impediments to successful implementation of a final system. With our performance concerns addressed, the lab undertook the design and development of an operational system. The system is in its operational testing phase now. This paper discusses the archiver system requirements, some of the design choices and their rationale, and presents the acquisition, storage and retrieval performance.

  15. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  16. Modeling the radiation transfer of discontinuous canopies: results for gap probability and single-scattering contribution

    Science.gov (United States)

    Zhao, Feng; Zou, Kai; Shang, Hong; Ji, Zheng; Zhao, Huijie; Huang, Wenjiang; Li, Cunjun

    2010-10-01

    In this paper we present an analytical model for the computation of radiation transfer of discontinuous vegetation canopies. Some initial results for the gap probability and bidirectional gap probability of discontinuous vegetation canopies, which are important parameters determining the radiative environment of the canopies, are given and compared with a 3-D computer simulation model. In the model, negative exponential attenuation of light within individual plant canopies is assumed. The computation of gap probability is then resolved by determining the entry and exit points of the ray with the individual plants via their equations in space. For the bidirectional gap probability, which determines the single-scattering contribution of the canopy, a gap-statistics-based model was adopted to correct for the dependence of gap probabilities between the solar and viewing directions. The model incorporates structural characteristics such as plant size, leaf size, row spacing, foliage density, planting density, and leaf inclination distribution. Available experimental data are inadequate for a complete validation of the model, so it was evaluated with a three-dimensional computer simulation model for 3-D vegetative scenes, which shows good agreement between the two models' results. This model should be useful for the quantification of light interception and the modeling of bidirectional reflectance distributions of discontinuous canopies.
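    The negative exponential attenuation assumed in the abstract is the Beer-Lambert law, under which the gap probability along a path is exp(-G·LAI/cosθ). The sketch below illustrates that law and a simple stand-in for the directional-correlation correction; the correlation interpolation is an illustrative assumption, not the paper's gap-statistics model.

    ```python
    import math

    def gap_probability(lai, g=0.5, zenith_deg=30.0):
        """Beer-Lambert gap probability through a homogeneous canopy layer:
        P_gap = exp(-G * LAI / cos(theta)), G being the leaf projection
        function (0.5 for a spherical leaf angle distribution)."""
        theta = math.radians(zenith_deg)
        return math.exp(-g * lai / math.cos(theta))

    def bidirectional_gap_probability(lai, g=0.5, sun_deg=30.0, view_deg=10.0,
                                      correlation=0.0):
        """Joint gap probability for the solar and viewing directions.
        correlation = 0 treats the two paths as independent; correlation = 1
        makes them fully shared (joint probability = min of the two), a
        crude stand-in for the hotspot correction near coincident angles."""
        p_sun = gap_probability(lai, g, sun_deg)
        p_view = gap_probability(lai, g, view_deg)
        independent = p_sun * p_view
        return independent + correlation * (min(p_sun, p_view) - independent)

    print(round(gap_probability(2.0), 3))
    ```

    The need for a correction is visible in the limiting case: when the sun and view directions coincide, the true joint gap probability equals the single-direction value, not its square.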

  17. Performance Results of CMMI-Based Process Improvement

    National Research Council Canada - National Science Library

    Gibson, Diane L; Goldenson, Dennis R; Kost, Keith

    2006-01-01

    .... There is now evidence that process improvement using the CMMI Product Suite can result in improvements in schedule and cost performance, product quality, return on investment, and other measures of performance outcome...

  18. Core damage frequency (reactor design) perspectives based on IPE results

    International Nuclear Information System (INIS)

    Camp, A.L.; Dingman, S.E.; Forester, J.A.

    1996-01-01

    This paper provides perspectives gained from reviewing 75 Individual Plant Examination (IPE) submittals covering 108 nuclear power plant units. Variability both within and among reactor types is examined to provide perspectives regarding the plant-specific design and operational features, and modeling assumptions, that play a significant role in the estimates of core damage frequencies in the IPEs. Human actions found to be important in boiling water reactors (BWRs) and in pressurized water reactors (PWRs) are presented, and the events most frequently found important are discussed.

  19. Route constraints model based on polychromatic sets

    Science.gov (United States)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, its fields of application are constantly expanding. Mission planning is especially important, as the planning result directly determines whether the UAV can accomplish its task. To make the results of mission planning more realistic, it is necessary to consider not only the physical properties of the aircraft but also the constraints among the various equipment on the UAV. However, these constraints are complex, and the equipment is strongly diverse and variable, which makes the constraints difficult to describe. To solve this problem, this paper draws on polychromatic sets theory, used in the advanced manufacturing field to describe complex systems, and presents a mission constraint model for UAVs based on polychromatic sets.

  20. Analytical results for the Sznajd model of opinion formation

    Czech Academy of Sciences Publication Activity Database

    Slanina, František; Lavička, H.

    2003-01-01

    Roč. 35, - (2003), s. 279-288 ISSN 1434-6028 R&D Projects: GA ČR GA202/01/1091 Institutional research plan: CEZ:AV0Z1010914 Keywords : agent models * sociophysics Subject RIV: BE - Theoretical Physics Impact factor: 1.457, year: 2003

  1. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely’ di...

  2. Recent numerical results on the two dimensional Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E. (SISSA, Trieste (Italy))

    1989-12-01

    A new method for simulating strongly correlated fermionic systems has been applied to the study of the ground-state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations on 4 x 4 lattices, where very good agreement has been verified in all the correlation functions studied: charge, magnetization and momentum distribution. (orig.).
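
For reference, the 2D Hubbard Hamiltonian referred to here has the standard form (t the nearest-neighbour hopping amplitude, U the on-site repulsion; this is the conventional notation, not reproduced from the paper itself):

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here the first sum runs over nearest-neighbour site pairs and spin, and n_{iσ} = c†_{iσ} c_{iσ} is the number operator.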

  3. Recent numerical results on the two dimensional Hubbard model

    International Nuclear Information System (INIS)

    Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E.

    1989-01-01

    This paper reports a new method for simulating strongly correlated fermionic systems, applied to the study of the ground-state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations on 4 x 4 lattices, where very good agreement has been verified in all the correlation functions studied: charge, magnetization and momentum distribution

  4. Some rigorous results on the Hopfield neural network model

    International Nuclear Information System (INIS)

    Koch, H.; Piasko, J.

    1989-01-01

    The authors analyze the thermal equilibrium distribution of 2 p mean field variables for the Hopfield model with p stored patterns, in the case where 2 p is small compared to the number of spins. In particular, they give a full description of the free energy density in the thermodynamic limit, and of the so-called symmetric solutions for the mean field equations

  5. A computer model to forecast wetland vegetation changes resulting from restoration and protection in coastal Louisiana

    Science.gov (United States)

    Visser, Jenneke M.; Duke-Sylvester, Scott M.; Carter, Jacoby; Broussard, Whitney P.

    2013-01-01

    The coastal wetlands of Louisiana are a unique ecosystem that supports a diversity of wildlife as well as a diverse community of commercial interests of both local and national importance. The state of Louisiana has established a 5-year cycle of scientific investigation to provide up-to-date information to guide future legislation and regulation aimed at preserving this critical ecosystem. Here we report on a model that projects changes in plant community distribution and composition in response to environmental conditions. This model is linked to a suite of other models and requires input from those that simulate the hydrology and morphology of coastal Louisiana. Collectively, these models are used to assess how alternative management plans may affect the wetland ecosystem through explicit spatial modeling of the physical and biological processes affected by proposed modifications to the ecosystem. We have also taken the opportunity to advance the state of the art in wetland plant community modeling by using a model that is more species-based in its description of plant communities, instead of one based on aggregated community types such as brackish marsh and saline marsh. The resulting model provides an increased level of ecological detail about how wetland communities are expected to respond. In addition, the output from this model provides critical inputs for estimating the effects of management on higher trophic level species through a more complete description of the shifts in habitat.

  6. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance and network errors, and troubleshooting, is very important. Meaningful test results allow operators to evaluate network performance, identify any shortcomings, and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need for a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of the fuzzy logic theory whose concepts will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
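
As an illustration of the kind of bottom-up fuzzy processing described here (the triangular membership function, the weights, and the weighted-average defuzzification below are assumptions for this sketch, not STAM's actual definitions):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to c (a common textbook choice, assumed here)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def aggregate(degrees, weights):
    """One bottom-up step: combine per-test membership degrees into a
    single score by weighted-average defuzzification."""
    return sum(d * w for d, w in zip(degrees, weights)) / sum(weights)
```

Repeating `aggregate` level by level up a test hierarchy yields a single quantitative score, e.g. for overall network stability.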

  7. Characteristic-Based, Task-Based, and Results-Based: Three Value Systems for Assessing Professionally Produced Technical Communication Products.

    Science.gov (United States)

    Carliner, Saul

    2003-01-01

    Notes that technical communicators have developed different methodologies for evaluating the effectiveness of their work, such as editing, usability testing, and determining the value added. Explains that at least three broad value systems underlie the assessment practices: characteristic-based, task-based, and results-based. Concludes that the…

  8. Corium phase equilibria based on MASCA, METCOR and CORPHAD results

    Energy Technology Data Exchange (ETDEWEB)

    Bechta, S.V.; Granovsky, V.S.; Khabensky, V.B. [Alexandrov Research Institute of Technologies (NITI), Sosnovy Bor (Russian Federation); Gusarov, V.V.; Almiashev, V.I.; Mezentseva, L.P. [Grebenshikov Institute of Silicate Chemistry, Russian Academy of Sciences (ISCh RAS), St. Petersburg (Russian Federation); Krushinov, E.V.; Kotova, S.Yu.; Kosarevsky, R.A. [Alexandrov Research Institute of Technologies (NITI), Sosnovy Bor (Russian Federation); Barrachin, M. [Institut de Radioprotection et Surete Nucleaire IRSN/DPAM, St Paul lez Durance (France); Bottomley, D. [EUROPAISCHE KOMMISSION, Joint Research Centre Institut fuer Transurane (ITU), Karlsruhe (Germany); Fichot, F. [Institut de Radioprotection et Surete Nucleaire IRSN/DPAM, St Paul lez Durance (France); Fischer, M. [AREVA NP GmbH, Erlangen (Germany)], E-mail: Manfred.Fischer@areva.com

    2008-10-15

    Experimental data on component partitioning between suboxidized corium melt and steel under in-vessel melt retention (IVR) conditions are compared. The data were produced within the OECD MASCA program and the ISTC CORPHAD project under close-to-isothermal conditions, and in the ISTC METCOR project under thermal-gradient conditions. Chemical equilibrium in the U-Zr-Fe(Cr,Ni,...)-O system is reached in all experiments. In the MASCA tests, the molten pool formed under an inert atmosphere contains two immiscible liquids, oxygen-enriched (oxidic) and oxygen-depleted (metallic), resulting from the miscibility gap of this system. Sub-system data of the U-Zr-Fe(Cr,Ni,...)-O phase diagram investigated within the ISTC CORPHAD project are interpreted in relation to the MASCA results. In the METCOR tests, equilibrium is established between the oxidic liquid and the mushy metallic part of the system. The results of the comparison are discussed and the implications for IVR noted.

  9. Models of Automation Surprise: Results of a Field Survey in Aviation

    Directory of Open Access Journals (Sweden)

    Robert De Boer

    2017-09-01

    Full Text Available Automation surprises in aviation continue to be a significant safety concern, and the community's search for effective strategies to mitigate them is ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration with automation. In this paper, we report the results of a field study that empirically compared and contrasted two models of automation surprise: a normative individual-cognition model and a sensemaking model based on distributed cognition. Our data prove a good fit for the sensemaking model. This finding is relevant for aviation safety, since our understanding of the cognitive processes that govern human interaction with automation drives what we need to do to reduce the frequency of automation-induced events.

  10. Circulation-based Modeling of Gravity Currents

    Science.gov (United States)

    Meiburg, E. H.; Borden, Z.

    2013-05-01

    Atmospheric and oceanic flows driven by predominantly horizontal density differences, such as sea breezes, thunderstorm outflows, powder snow avalanches, and turbidity currents, are frequently modeled as gravity currents. Efforts to develop simplified models of such currents date back to von Karman (1940), who considered a two-dimensional gravity current in an inviscid, irrotational and infinitely deep ambient. Benjamin (1968) presented an alternative model, focusing on the inviscid, irrotational flow past a gravity current in a finite-depth channel. More recently, Shin et al. (2004) proposed a model for gravity currents generated by partial-depth lock releases, considering a control volume that encompasses both fronts. All of the above models, in addition to conserving mass and horizontal momentum, invoke Bernoulli's law along some specific streamline in the flow field in order to obtain a closed system of equations that can be solved for the front velocity as a function of the current height. More recent computational investigations based on the Navier-Stokes equations, on the other hand, reproduce the dynamics of gravity currents from the conservation of mass and momentum alone. We propose that it should therefore be possible to formulate a fundamental gravity current model without invoking Bernoulli's law. The talk will show that the front velocity of gravity currents can indeed be predicted as a function of their height from mass and momentum considerations alone, by considering the evolution of interfacial vorticity. This approach does not require information on the pressure field and therefore avoids the need for an energy closure argument such as those invoked by the earlier models. Predictions by the new theory are shown to be in close agreement with direct numerical simulation results. References: Von Karman, T. 1940 The engineer grapples with nonlinear problems, Bull. Am. Math. Soc. 46, 615-683. Benjamin, T.B. 1968 Gravity currents and related
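
Benjamin's (1968) front condition mentioned above can be written down directly. The sketch below encodes the classical result Fr = sqrt((2 - a)(1 - a)/(1 + a)) with fractional depth a = h/H (a standard formula from the literature, not code from the talk itself):

```python
import math

def benjamin_froude(a):
    """Front Froude number Fr = U / sqrt(g' h) of a gravity current of
    fractional depth a = h/H in a channel of depth H (Benjamin 1968).
    The deep-ambient limit a -> 0 recovers von Karman's Fr = sqrt(2)."""
    if not 0.0 < a < 1.0:
        raise ValueError("fractional depth a must lie in (0, 1)")
    return math.sqrt((2.0 - a) * (1.0 - a) / (1.0 + a))
```

The energy-conserving half-depth current (a = 1/2) gives Fr = 1/sqrt(2) ≈ 0.707, i.e. a front speed of half of sqrt(g'H).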

  11. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany); Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2000-07-01

    A dynamical downscaling for the North Sea is presented. The numerical model used for the study is the coupled ice-ocean model OPYC. In a hindcast of the years 1979 to 1993 it was forced with atmospheric forcing from the ECMWF reanalysis. The model's capability in simulating the observed mean state and variability in the North Sea is demonstrated by the hindcast. Two time-scale ranges are investigated: weekly to seasonal, and longer than seasonal. Shorter time scales, relevant for storm surges, are not captured by the model formulation. The main modes of variability of sea level, sea-surface circulation, sea-surface temperature, and sea-surface salinity are described, and connections to atmospheric phenomena, like the NAO, are discussed. T106 "time-slice" simulations with a "2 x CO2" horizon are used to estimate the effects of a changing climate on the North Sea shelf sea. The "2 x CO2" changes in the surface forcing are accompanied by changes in the lateral oceanic boundary conditions taken from a global coupled climate model. For "2 x CO2" the time-mean sea level increases by up to 25 cm in the German Bight in winter, of which 15 cm are due to the surface forcing and 10 cm to thermal expansion. This change is compared to the "natural" variability as simulated in the ECMWF integration and found not to lie outside the range spanned by it. The variability of sea level on the weekly-to-seasonal time scales is significantly reduced in the scenario integration. The variability on the longer-than-seasonal time scales in the control and scenario runs is much smaller than in the ECMWF integration. This is traced back to the use of "time-slice" experiments. Discriminating between locally forced changes and changes induced at the lateral oceanic boundaries of the model in the circulation and

  12. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    Directory of Open Access Journals (Sweden)

    Joachimiak Damian

    2015-06-01

    Full Text Available This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the computational domain, the size of the mesh, defined by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from measurement and calculated numerically for a model seal segment at different levels of wear.

  13. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant-infusion phase for each individual subject. In order to fit the bolus-injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%, similar to the WRE for just the constant-infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics.
The major advantage of a
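
The lipid-partition assumption described in the Methods can be illustrated with a minimal calculation (the function name, parameter names, and the two-phase simplification are hypothetical for this sketch; the actual PKQuest implementation is more detailed):

```python
def tissue_blood_partition(kow, f_lipid_tissue, f_lipid_blood):
    """Tissue:blood partition coefficient under the simple assumption
    that the drug distributes between a lipid phase (with oil/water
    partition coefficient `kow`) and an aqueous phase in each
    compartment, each at equilibrium with the same free concentration."""
    conc_tissue = f_lipid_tissue * kow + (1.0 - f_lipid_tissue)
    conc_blood = f_lipid_blood * kow + (1.0 - f_lipid_blood)
    return conc_tissue / conc_blood
```

Because propofol's oil/water partition coefficient is large, fat-rich tissues end up with partition coefficients far above one under this assumption, which is what drives the unusual kinetics.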

  14. Guiding center model to interpret neutral particle analyzer results

    Science.gov (United States)

    Englert, G. W.; Reinmann, J. J.; Lauver, M. R.

    1974-01-01

    The theoretical model, which accounts for the drift and cyclotron components of ion motion in a partially ionized plasma, is discussed. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge-exchange neutrals in phase space that are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about the mean azimuthal drift yielded energy distributions that compared well with experiment. Data obtained with a bending magnet on the NPA showed that the separation between the energy distribution curves of the various mass species correlates well with the theory's ratio of drift energy to mean cyclotron energy. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating the variation of ion density and E field with plasma radius.

  15. 1-g model loading tests: methods and results

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav

    1999-01-01

    Roč. 2, č. 4 (1999), s. 371-381 ISSN 1436-6517. [Int.Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundation * model tests * sandy subsoil * bearing capacity * subsoil failure * volume deformation Subject RIV: JM - Building Engineering

  16. Considerations on Modeling Strategies of the Financial Result

    Directory of Open Access Journals (Sweden)

    Lucian Cernuşca

    2012-12-01

    Full Text Available This study's objective is to highlight some of the strategies used to maximize or minimize the accounting result under the impulse of bad accounting. Although we witness manipulation of the accounting result, the procedure is carried out according to the law, being exploited by some entities that are well aware of the shortcomings of accounting regulations.

  17. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some...... more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...
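
Two of the example features quoted above (the last punctuation mark, and capitalization at the start of an email) are easy to sketch. The function below is an illustrative reconstruction with hypothetical names, not the CEAI code:

```python
def email_style_features(text):
    """Extract two illustrative stylometric cues: the final punctuation
    mark (if any) and whether the first non-blank line starts with a
    capital letter. Feature names and definitions are assumptions."""
    stripped = text.rstrip()
    last_punct = stripped[-1] if stripped and stripped[-1] in ".!?;:" else ""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    starts_capitalized = bool(lines) and lines[0].lstrip()[0].isupper()
    return {"last_punctuation": last_punct,
            "starts_capitalized": starts_capitalized}
```

In a full pipeline such per-email feature vectors would feed the cluster-based classifier alongside the traditional stylometric features.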

  18. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    Science.gov (United States)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to the results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRFs) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to the use of the interface FRFs for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming measurement noise, which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped-mass perturbations to bring the analytical responses into agreement with test data. It is shown that good agreement between analytical and experimental FRFs is the key to obtaining good agreement of the residual flexibility values.

  19. DISCRETE DEFORMATION WAVE DYNAMICS IN SHEAR ZONES: PHYSICAL MODELLING RESULTS

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2016-01-01

    Full Text Available Observations of earthquake migration along active fault zones [Richter, 1958; Mogi, 1968] and related theoretical concepts [Elsasser, 1969] laid the foundation for studying the problem of slow deformation waves in the lithosphere. Despite the fact that this problem has been under study for several decades and discussed in numerous publications, convincing evidence for the existence of deformation waves is still lacking. One of the causes is that comprehensive field studies to register such waves with special tools and equipment, which require sufficient organizational and technical resources, have not yet been conducted. The authors attempted to find a solution to this problem by physical simulation of a major shear zone in an elastic-viscous-plastic model of the lithosphere. The experiment setup is shown in Figure 1 (A). The model material and boundary conditions were specified in accordance with the similarity criteria (described in detail in [Sherman, 1984; Sherman et al., 1991; Bornyakov et al., 2014]). The montmorillonite clay-and-water paste was placed evenly on the two stamps of the installation and subjected to deformation as the active stamp (1) moved relative to the passive stamp (2) at a constant speed. The upper model surface was covered with fine sand in order to obtain high-contrast photos. Photos of the emerging shear zone were taken every second by a Basler acA2000-50gm digital camera. Figure 1 (B) shows an optical image of a fragment of the shear zone. The photos were processed by the digital image correlation method described in [Sutton et al., 2009]. This method estimates the distribution of the components of the displacement vectors and strain tensors on the model surface and their evolution over time [Panteleev et al., 2014, 2015]. Strain fields and displacements recorded in the optical images of the model surface were estimated in a rectangular box (220.00 x 72.17 mm) shown by a dot-and-dash line in Fig. 1 (A). To ensure a sufficient level of

  20. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts: foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundations part is the network-based agent-based models, in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  1. Human action perspectives based on individual plant examination results

    International Nuclear Information System (INIS)

    Forester, J.; Thompson, C.; Drouin, M.; Lois, E.

    1996-01-01

    This paper provides perspectives on human actions gained from reviewing 76 individual plant examination (IPE) submittals. Human actions found to be important in boiling water reactors (BWRs) and in pressurized water reactors (PWRs) are presented, and the events most frequently found important are discussed. Since numerous factors can influence the quantification of human error probabilities (HEPs) and introduce significant variability in the resulting HEPs (which in turn can influence which events are found to be important), the variability in HEPs for similar events across IPEs is examined to assess the extent to which variability in results is due to real versus artifactual differences. Finally, similarities and differences in human action observations across BWRs and PWRs are examined.

  2. Severe accident progression perspectives based on IPE results

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Drouin, M.

    1996-01-01

    Accident progression perspectives were gathered from the level 2 PRA analyses described in the IPE submittals (the analysis of the accident after core damage has occurred, involving containment performance and the radionuclide release from the containment). Insights related to the containment failure modes, the releases associated with those failure modes, and the factors responsible for the types of containment failures and release sizes reported were obtained. Complete results are discussed in NUREG-1560 and summarized here.

  3. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been proven to help avoid misunderstanding of the message, save attention and improve communication among different native speakers. However, despite the benefits that emoticons can provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall culture model, the Trompenaars and Hampden-Turner culture model, and the Hofstede culture model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  4. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronic systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, and different interconnection and execution disciplines for event-based and time-based controllers, in order to meet the demand for more functionality at lower cost and under conflicting constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based framework, developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model, via wrapper files, back into Simulink S-functions and use Simulink's extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  5. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model assumes single scattering. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The backscattered luminance obtained with our analytical model is compared to simulations made using the Monte Carlo method, which accounts for multiple scattering. An excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If the geometry of the optical device is not taken into account, the model-estimated backscattered luminance differs from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results, since the mean difference between the calculations and experimental measurements is smaller than the experimental uncertainty
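The single-scattering picture described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' model: it assumes the Koschmieder relation to convert meteorological visibility into an extinction coefficient, a crude isotropic phase function, and full overlap (G = 1), whereas the paper uses Mie theory and an explicit overlap function.

```python
import math

def extinction_coefficient(visibility_m, contrast_threshold=0.05):
    # Koschmieder relation: beta = -ln(C_t) / V  (about 3/V for C_t = 5%)
    return -math.log(contrast_threshold) / visibility_m

def backscattered_luminance(visibility_m, r_min=1.0, r_max=200.0, n=2000):
    """Single-scattering integral of the range-resolved backscattered
    return (arbitrary units), assuming an isotropic phase function and
    full emitter/receiver overlap (G = 1)."""
    beta = extinction_coefficient(visibility_m)
    beta_back = beta / (4.0 * math.pi)  # crude isotropic backscatter coefficient
    dr = (r_max - r_min) / n
    total = 0.0
    for i in range(n):
        r = r_min + (i + 0.5) * dr
        # two-way attenuation exp(-2*beta*r), inverse-square geometry
        total += beta_back * math.exp(-2.0 * beta * r) / r**2 * dr
    return total
```

Denser fog (shorter visibility) increases the near-field backscattered return, which is the effect the paper quantifies against Monte Carlo simulation.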

  6. MCNP Modeling Results for Location of Buried TRU Waste Drums

    International Nuclear Information System (INIS)

    Steinman, D K; Schweitzer, J S

    2006-01-01

    In the 1960s, fifty-five-gallon drums of TRU waste were buried in shallow pits on remote U.S. Government facilities such as the Idaho National Engineering Laboratory (now split into the Idaho National Laboratory and the Idaho Completion Project [ICP]). Subsequently, it was decided to remove the drums and the material in them from the burial pits and send the material to the Waste Isolation Pilot Plant in New Mexico. Several technologies have been tried to locate the drums non-intrusively, with enough precision to minimize the chance of material being spread into the environment. One of these technologies is the placement of steel probe holes in the pits, into which wireline logging probes can be lowered to measure properties and concentrations of material surrounding the probe holes for evidence of TRU material. There is also a concern that large quantities of volatile organic compounds (VOC) are present that would contaminate the environment during removal. In 2001, the Idaho National Engineering and Environmental Laboratory (INEEL) built two pulsed neutron wireline logging tools to measure TRU and VOC around the probe holes: the Prompt Fission Neutron (PFN) and the Pulsed Neutron Gamma (PNG) tools, respectively. They were tested experimentally in surrogate test holes in 2003. The work reported here estimates the performance of the tools using Monte Carlo modelling prior to field deployment. An MCNP model was constructed by INEEL personnel. It was modified by the authors to assess the ability of the tools to predict quantitatively the position and concentration of TRU and VOC materials disposed around the probe holes. The model was used to simulate the tools scanning the probe holes vertically in five centimetre increments. A drum was included in the model that could be placed near the probe hole and at other locations out to forty-five centimetres from the probe hole, in five centimetre increments. Scans were performed with no chlorine in the

  7. Solar activity variations of ionosonde measurements and modeling results

    Czech Academy of Sciences Publication Activity Database

    Altadill, D.; Arrazola, D.; Blanch, E.; Burešová, Dalia

    2008-01-01

    Roč. 42, č. 4 (2008), s. 610-616 ISSN 0273-1177 R&D Projects: GA AV ČR 1QS300120506 Grant - others:MCYT(ES) REN2003-08376-C02-02; CSIC(XE) 2004CZ0002; AGAUR(XE) 2006BE00112; AF Research Laboratory(XE) FA8718-L-0072 Institutional research plan: CEZ:AV0Z30420517 Keywords : mid-latitude ionosphere * bottomside modeling * ionospheric variability Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.860, year: 2008 http://www.sciencedirect.com/science/journal/02731177

  8. The calculation of exchange forces: General results and specific models

    International Nuclear Information System (INIS)

    Scott, T.C.; Babb, J.F.; Dalgarno, A.; Morgan, J.D. III

    1993-01-01

    In order to clarify questions about the calculation of the exchange energy of a homonuclear molecular ion, an analysis is carried out of a model problem consisting of the one-dimensional limit of H 2 + . It is demonstrated that the use of the infinite polarization expansion for the localized wave function in the Holstein--Herring formula yields an approximate exchange energy which at large internuclear distances R has the correct leading behavior to O(e -R ) and is close to but not equal to the exact exchange energy. The extension to the n-dimensional double-well problem is presented

  9. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345: Physics-Based Modeling of Compressible Turbulence. Parviz Moin, Leland Stanford Junior University, CA. Final report (09/13/2016) on the AFOSR project (FA9550-11-1-0111) entitled "Physics based modeling of compressible turbulence." The period of performance was June 15, 2011...

  10. Student Entrepreneurship in Hungary: Selected Results Based on GUESSS Survey

    Directory of Open Access Journals (Sweden)

    Andrea S. Gubik

    2016-12-01

    Full Text Available Objective: This study investigates students’ entrepreneurial activities and aims to answer questions regarding the extent to which students utilize the knowledge gained during their studies and the personal connections acquired at universities, as well as the role a family business background plays in the development of students’ business start-ups. Research Design & Methods: This paper is based on the database of the GUESSS project and investigates 658 student entrepreneurs (so-called ‘active entrepreneurs’) who have already established businesses of their own. Findings: The rate of self-employment among Hungarian students who study in tertiary education and consider themselves entrepreneurs is high. Their motivations and entrepreneurial efforts differ from those of owners of larger companies; they do not necessarily intend to pursue an entrepreneurial path as a long-run career option. A family business background and family support play a determining role in entrepreneurship and business start-ups, while entrepreneurial training and courses offered at higher-education institutions are not reflected in students’ entrepreneurial activities. Implications & Recommendations: Universities should offer not only conventional business courses (for example, business planning) but also new forms of education, so that students meet various entrepreneurial tasks and problems, make decisions in different situations, and explore and acquaint themselves with entrepreneurship. Contribution & Value Added: The study provides a literature overview of youth entrepreneurship, describes the main characteristics of students’ enterprises, and contributes to understanding the factors of youth entrepreneurship.

  11. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
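A single-variable cost model of the kind mentioned above is typically a power law in aperture diameter, cost ≈ a·D^b, fitted by least squares in log-log space. The sketch below is illustrative only: the data and coefficients are synthetic, not those of the paper's model.

```python
import math

def fit_power_law(diameters, costs):
    """Fit cost = a * D**b by ordinary least squares on (log D, log cost)."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    # slope of the log-log regression is the cost-scaling exponent b
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b
```

With synthetic data generated from cost = 2·D^2.7, the fit recovers the exponent exactly, which is the basic sanity check for this kind of parametric model.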

  12. Probabilistic Modeling of Updating Epistemic Uncertainty In Pile Capacity Prediction With a Single Failure Test Result

    Directory of Open Access Journals (Sweden)

    Indra Djati Sidi

    2017-12-01

    Full Text Available The model error N has been introduced to denote the discrepancy between the measured and predicted capacity of a pile foundation. This model error is recognized as epistemic uncertainty in pile capacity prediction. The statistics of N have been evaluated based on data gathered from various sites and may be considered only as a general error trend in capacity prediction, providing crude estimates of the model error in the absence of more specific data from the site. The result of even a single load test to failure should provide direct evidence of the pile capacity at a given site. Bayes' theorem has been used as a rational basis for combining new data with previous data to revise the assessment of uncertainty and reliability. This study is devoted to the development of procedures for updating the model error N, and subsequently the predicted pile capacity, with the result of a single failure test.
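The Bayesian updating idea can be sketched with the simplest conjugate case: treat the site-specific mean of the model error N as normally distributed a priori, and update it with a single observed ratio of measured to predicted capacity from a load test. This is a hedged illustration of the general approach, not the paper's actual formulation (which may use a lognormal N and different site statistics).

```python
def update_model_error(mu_prior, sd_prior, n_observed, sd_test):
    """Conjugate normal update of the mean model error N.

    mu_prior, sd_prior : prior mean and std of N (general error trend)
    n_observed         : measured/predicted capacity ratio from one failure test
    sd_test            : assumed std of a single test observation
    """
    w_prior = 1.0 / sd_prior ** 2   # precision of the prior
    w_test = 1.0 / sd_test ** 2     # precision of the single observation
    mu_post = (w_prior * mu_prior + w_test * n_observed) / (w_prior + w_test)
    sd_post = (1.0 / (w_prior + w_test)) ** 0.5
    return mu_post, sd_post
```

With equal prior and test precisions, the posterior mean falls midway between the general trend and the site-specific observation, and the posterior uncertainty shrinks, which is the qualitative effect the abstract describes.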

  13. Guiding center model to interpret neutral particle analyzer results

    International Nuclear Information System (INIS)

    Englert, G.W.; Reinmann, J.J.; Lauver, M.R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge-exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about a mean azimuthal drift yields energy distributions which compare well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with the theory's ratio of drift to mean cyclotron energy. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating ion density and E-field variation with plasma radius. (U.S.)

  14. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha; Kalogerakis, Evangelos; Guibas, Leonidas; Koltun, Vladlen

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling

  15. Use of Physiologically Based Pharmacokinetic (PBPK) Models ...

    Science.gov (United States)

    EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk: Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment, and it addresses the quantification, extrapolation and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer-reviewed scientific literature. Topics covered include: data quality objectives; the experimental framework; required data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability, to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.

  16. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing has a very important significance for fulfilling intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it is efficient in handling simple emotions.

  17. New experimental results in atlas-based brain morphometry

    Science.gov (United States)

    Gee, James C.; Fabella, Brian A.; Fernandes, Siddharth E.; Turetsky, Bruce I.; Gur, Ruben C.; Gur, Raquel E.

    1999-05-01

    In a previous meeting, we described a computational approach to MRI morphometry, in which a spatial warp mapping a reference or atlas image into anatomic alignment with the subject is first inferred. Shape differences with respect to the atlas are then studied by calculating the pointwise Jacobian determinant for the warp, which provides a measure of the change in differential volume about a point in the reference as it transforms to its corresponding position in the subject. In this paper, the method is used to analyze sex differences in the shape and size of the corpus callosum in an ongoing study of a large population of normal controls. The preliminary results of the current analysis support findings in the literature that have observed the splenium to be larger in females than in males.

  18. An interactive web-based extranet system model for managing ...

    African Journals Online (AJOL)

    ... objectives for students, lecturers and parents to access and compute results ... The database will serve as repository of students' academic records over a ... Keywords: Extranet-Model, Interactive, Web-Based, Students, Academic, Records ...

  19. First Results of Modeling Radiation Belt Electron Dynamics with the SAMI3 Plasmasphere Model

    Science.gov (United States)

    Komar, C. M.; Glocer, A.; Huba, J.; Fok, M. C. H.; Kang, S. B.; Buzulukova, N.

    2017-12-01

    The radiation belts were one of the first discoveries of the Space Age some sixty years ago, and radiation belt models have been improving ever since. The plasmasphere is one region that has been critically important to determining the dynamics of radiation belt populations. This region of space plays a critical role in describing the distribution of chorus and magnetospheric hiss waves throughout the inner magnetosphere. Both of these waves have been shown to interact with energetic electrons in the radiation belts and can result in the energization or loss of radiation belt electrons. However, radiation belt models have been historically limited in describing the distribution of cold plasmaspheric plasma and have relied on empirically determined plasmasphere models. Some plasmasphere models use an azimuthally symmetric distribution of the plasmasphere, which can fail to capture important plasmaspheric dynamics such as the development of plasmaspheric drainage plumes. Previous work has coupled the kinetic bounce-averaged Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model, used to model ring current and radiation belt populations, with the Block-adaptive Tree Solar wind Roe-type Upwind Scheme (BATSRUS) global magnetohydrodynamic model to self-consistently obtain the magnetospheric magnetic field and ionospheric potential. The present work utilizes this previous coupling and additionally couples the SAMI3 plasmasphere model to better represent the dynamics of the plasmasphere and its role in determining the distribution of waves throughout the inner magnetosphere. First results on the relevance of chorus, hiss, and ultralow frequency waves to radiation belt electron dynamics will be discussed in the context of the June 1st, 2013 storm-time dropout event.

  20. Model-Based Requirements Management in Gear Systems Design Based On Graph-Based Design Languages

    Directory of Open Access Journals (Sweden)

    Kevin Holder

    2017-10-01

    specification list and were analyzed in detail. As a second basis, the research method uses a conscious expansion of graph-based design languages towards their applicability for requirements management. This expansion allows the handling of requirements through a graph-based design language model. The first two results of the presented research consist of a model of the gear system and a detailed model of requirements, both modelled in a graph-based design language. Further results are generated by a combination of the two models into one holistic model.

  1. Experimental results and modeling of a dynamic hohlraum on SATURN

    International Nuclear Information System (INIS)

    Derzon, M.S.; Allshouse, G.O.; Deeney, C.; Leeper, R.J.; Nash, T.J.; Matuska, W.; Peterson, D.L.; MacFarlane, J.J.; Ryutov, D.D.

    1998-06-01

    Experiments were performed at SATURN, a high current z-pinch, to explore the feasibility of creating a hohlraum by imploding a tungsten wire array onto a low-density foam. Emission measurements in the 200--280 eV energy band were consistent with a 110--135 eV Planckian before the target shock heated, or stagnated, on-axis. Peak pinch radiation temperatures of nominally 160 eV were obtained. Measured early time x-ray emission histories and temperature estimates agree well with modeled performance in the 200--280 eV band using a 2D radiation magneto-hydrodynamics code. However, significant differences are observed in comparisons of the x-ray images and 2D simulations
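The temperature estimates quoted above come from comparing band-limited emission against a Planckian. As a rough illustration (not the paper's diagnostic analysis), one can integrate a blackbody photon-energy spectrum over the 200--280 eV band and note how strongly the band emission rises between a 110 eV and a 160 eV Planckian:

```python
import math

def band_emission(temp_ev, e_lo=200.0, e_hi=280.0, n=400):
    """Planckian spectral intensity ~ E^3 / (exp(E/T) - 1), integrated over
    a photon-energy band (arbitrary units; energies and T in eV)."""
    de = (e_hi - e_lo) / n
    total = 0.0
    for i in range(n):
        e = e_lo + (i + 0.5) * de
        total += e ** 3 / math.expm1(e / temp_ev) * de
    return total
```

Because the band lies above the thermal energy, the integrand is steeply temperature-sensitive, which is why this band is useful for constraining the fitted Planckian temperature.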

  2. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential-equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, which is obtained from statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological insight.
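The model class described above can be sketched as a Boltzmann-weighted promoter occupancy driving a linear mRNA ODE. The snippet below is a generic thermodynamics-plus-ODE illustration under assumed units (ΔG in kcal/mol, RT ≈ 0.593 kcal/mol at 25 °C, single binding site); the paper's actual sequence-based affinity derivation is more involved.

```python
import math

def p_bound(tf_conc, delta_g, rt=0.593):
    """Boltzmann-weighted occupancy of a single TF binding site:
    w / (1 + w) with statistical weight w = [TF] * exp(-dG / RT)."""
    w = tf_conc * math.exp(-delta_g / rt)
    return w / (1.0 + w)

def simulate_mrna(tf_conc, delta_g, k_max=1.0, gamma=0.1, dt=0.01, t_end=200.0):
    """Forward-Euler integration of dm/dt = k_max * P_bound - gamma * m,
    starting from m = 0; returns m at t_end."""
    m = 0.0
    t = 0.0
    while t < t_end:
        m += dt * (k_max * p_bound(tf_conc, delta_g) - gamma * m)
        t += dt
    return m
```

The steady state m* = k_max·P_bound/γ makes the occupancy-to-expression mapping explicit: stronger binding (lower ΔG) or higher TF concentration raises occupancy and hence expression.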

  3. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  4. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model is beneficial both in terms of reduced total modelling effort and confidence that the verification results are valid also for the implementation model. In this paper we introduce the concept of a descriptive specification model and an approach based on refining a descriptive model to target both verification and implementation.

  5. Results of modeling advanced BWR fuel designs using CASMO-4

    International Nuclear Information System (INIS)

    Knott, D.; Edenius, M.

    1996-01-01

    Advanced BWR fuel designs from General Electric, Siemens and ABB-Atom have been analyzed using CASMO-4 and compared against fission rate distributions and control rod worths from MCNP. Included in the analysis were fuel storage rack configurations and proposed mixed oxide (MOX) designs. Results are also presented from several cycles of SIMULATE-3 core follow analysis, using nodal data generated by CASMO-4, for cycles in transition from 8x8 designs to advanced fuel designs. (author)

  6. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding 32 negative datasets from MiRTif to our testing set (MiRecords) ...

  7. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data are multivariate normally distributed and that there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distributions. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated, and the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths of non-linear research.

  8. Assessing flood risk at the global scale: model setup, results, and sensitivity

    International Nuclear Information System (INIS)

    Ward, Philip J; Jongman, Brenden; Weiland, Frederiek Sperna; Winsemius, Hessel C; Bouwman, Arno; Ligtvoet, Willem; Van Beek, Rens; Bierkens, Marc F P

    2013-01-01

    Globally, economic losses from flooding exceeded $19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return-periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million); and annual expected exposed GDP ($1383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined, impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures. (letter)
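The "annual expected impact" step in such a model cascade is commonly computed by integrating impact over annual exceedance probability (the reciprocal of the return period). The sketch below uses trapezoidal integration and hypothetical numbers; it is not the authors' implementation.

```python
def expected_annual_impact(return_periods, impacts):
    """Expected annual impact via trapezoidal integration of impact
    over annual exceedance probability p = 1 / return_period."""
    pairs = sorted(zip(return_periods, impacts))
    probs = [1.0 / rp for rp, _ in pairs]   # decreasing with return period
    vals = [imp for _, imp in pairs]
    ead = 0.0
    for i in range(len(pairs) - 1):
        dp = probs[i] - probs[i + 1]        # probability increment
        ead += 0.5 * (vals[i] + vals[i + 1]) * dp
    return ead
```

For example, with hypothetical impacts of 0, 100, and 1000 at 2-, 10-, and 100-year return periods, the two trapezoids contribute 20 and 49.5, giving an expected annual impact of 69.5 in the same units.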

  9.  Functional Results-Oriented Healthcare Leadership: A Novel Leadership Model

    Directory of Open Access Journals (Sweden)

    Salem Said Al-Touby

    2012-03-01

    Full Text Available This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership, based on two premises. First, the article argues that ideal healthcare leadership should emphasize the outcomes of patient care more than the processes and structures used to deliver such care; and second, that leadership must strive to attain effectiveness of care provision, not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended model places the results element on top of the other three elements, so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  10. Testing R&D-Based Endogenous Growth Models

    DEFF Research Database (Denmark)

    Kruse-Andersen, Peter Kjær

    2017-01-01

    R&D-based growth models are tested using US data for the period 1953-2014. A general growth model is developed which nests the model varieties of interest. The model implies a cointegrating relationship between multifactor productivity, research intensity, and employment. This relationship is estimated using cointegrated VAR models. The results provide evidence against the widely used fully endogenous variety and in favor of the semi-endogenous variety. Forecasts based on the empirical estimates suggest that the slowdown in US productivity growth will continue. Particularly, the annual long

  11. Exact results for the O(N) model with quenched disorder

    Science.gov (United States)

    Delfino, Gesualdo; Lamsen, Noel

    2018-04-01

    We use scale invariant scattering theory to exactly determine the lines of renormalization group fixed points for O(N)-symmetric models with quenched disorder in two dimensions. Random fixed points are characterized by two disorder parameters: a modulus that vanishes when approaching the pure case, and a phase angle. The critical lines fall into three classes depending on the values of the disorder modulus. Besides the class corresponding to the pure case, a second class has maximal value of the disorder modulus and includes Nishimori-like multicritical points as well as zero temperature fixed points. The third class contains critical lines that interpolate, as N varies, between the first two classes. For positive N, it contains a single line of infrared fixed points spanning the values of N from √2 − 1 to 1. The symmetry sector of the energy density operator is superuniversal (i.e. N-independent) along this line. For N = 2 a line of fixed points exists only in the pure case, but accounts also for the Berezinskii-Kosterlitz-Thouless phase observed in presence of disorder.

  12. Modeling Framework and Results to Inform Charging Infrastructure Investments

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, Marc W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    The plug-in electric vehicle (PEV) market is experiencing rapid growth with dozens of battery electric (BEV) and plug-in hybrid electric (PHEV) models already available and billions of dollars being invested by automotive manufacturers in the PEV space. Electric range is increasing thanks to larger and more advanced batteries and significant infrastructure investments are being made to enable higher power fast charging. Costs are falling and PEVs are becoming more competitive with conventional vehicles. Moreover, new technologies such as connectivity and automation hold the promise of enhancing the value proposition of PEVs. This presentation outlines a suite of projects funded by the U.S. Department of Energy's Vehicle Technology Office to conduct assessments of the economic value and charging infrastructure requirements of the evolving PEV market. Individual assessments include national evaluations of PEV economic value (assuming 73M PEVs on the road in 2035), national analysis of charging infrastructure requirements (with community and corridor level resolution), and case studies of PEV ownership in Columbus, OH and Massachusetts.

  13. The Multipole Plasma Trap-PIC Modeling Results

    Science.gov (United States)

    Hicks, Nathaniel; Bowman, Amanda; Godden, Katarina

    2017-10-01

    A radio-frequency (RF) multipole structure is studied via particle-in-cell computer modeling, to assess the response of quasi-neutral plasma to the imposed RF fields. Several regimes, such as pair plasma, antimatter plasma, and conventional (ion-electron) plasma are considered. In the case of equal charge-to-mass ratio of plasma species, the effects of the multipole field are symmetric between positive and negative particles. In the case of a charge-to-mass disparity, the multipole RF parameters (frequency, voltage, structure size) may be chosen such that the light species (e.g. electrons) is strongly confined, while the heavy species (e.g. positive ions) does not respond to the RF field. In this case, the trapped negative space charge creates a potential well that then traps the positive species. 2D and 3D particle-in-cell simulations of this concept are presented, to assess plasma response and trapping dependences on multipole order, consequences of the formation of an RF plasma sheath, and the effects of an axial magnetic field. The scalings of trapped plasma parameters are explored in each of the mentioned regimes, to guide the design of prospective experiments investigating each. Supported by U.S. NSF/DOE Partnership in Basic Plasma Science and Engineering Grant PHY-1619615.
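The charge-to-mass selectivity described above follows from the time-averaged ponderomotive potential of an oscillating field, Φ_p = q²E²/(4mω²): for the same field and frequency, the light species sees a potential larger by the inverse mass ratio. A one-line sketch with illustrative SI values (not parameters from the study):

```python
def ponderomotive_potential(charge, mass, e_field, omega):
    """Time-averaged ponderomotive potential (J) of a charged particle in an
    oscillating electric field of amplitude e_field (V/m) at angular
    frequency omega (rad/s): q^2 E^2 / (4 m omega^2)."""
    return charge ** 2 * e_field ** 2 / (4.0 * mass * omega ** 2)
```

For an electron versus a proton, the potential ratio is the proton-to-electron mass ratio (about 1836), which is why RF parameters can be chosen so that electrons are strongly confined while heavy positive ions barely respond.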

  14. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  15. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  16. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. 
Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in

  17. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn-Beckers, Petronella; Doldersum, Tom; Useya, Juliana; Augustijn, Dionysius C.M.

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. The objective of the research is to test whether runoff from open refuse

  18. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  19. Predictor-Based Model Reference Adaptive Control

    Science.gov (United States)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2010-01-01

    This paper is devoted to the design and analysis of a predictor-based model reference adaptive control. Stable adaptive laws are derived using Lyapunov framework. The proposed architecture is compared with the now classical model reference adaptive control. A simulation example is presented in which numerical evidence indicates that the proposed controller yields improved transient characteristics.
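
    To make the baseline concrete, the classical (non-predictor) model reference adaptive control scheme that the paper compares against can be sketched for a scalar plant; the plant coefficients, reference model, and adaptation gain below are illustrative assumptions, not values from the paper.

    ```python
    # Minimal scalar MRAC sketch (classical, not the predictor-based variant):
    # plant xdot = a*x + b*u with unknown a, b; reference model xmdot = am*xm + bm*r.
    # Control u = kx*x + kr*r with Lyapunov-based adaptive laws (sign(b) = +1).

    def simulate(T=20.0, dt=0.001, gamma=2.0):
        a, b = 1.0, 3.0            # unknown, open-loop unstable plant
        am, bm = -4.0, 4.0         # stable reference model to be tracked
        x = xm = 0.0
        kx = kr = 0.0              # adaptive feedback / feedforward gains
        t = 0.0
        while t < T:
            r = 1.0                # step reference command
            u = kx * x + kr * r
            e = x - xm             # tracking error drives adaptation
            x += dt * (a * x + b * u)
            xm += dt * (am * xm + bm * r)
            kx -= dt * gamma * e * x
            kr -= dt * gamma * e * r
            t += dt
        return x, xm, e

    x, xm, e = simulate()
    print(round(x, 3), round(xm, 3), round(abs(e), 4))
    ```

    The reference state settles at bm/(-am) = 1.0 for a unit step, and the tracking error decays as the gains adapt, which is the transient behavior the predictor-based architecture aims to improve.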

  20. Vaporization inside a mini microfin tube: experimental results and modeling

    Science.gov (United States)

    Diani, A.; Rossetto, L.

    2015-11-01

    This paper proposes a comparison between the common R134a and the extremely low-GWP refrigerant R1234yf during vaporization inside a mini microfin tube. The microfin tube has an internal diameter of 2.4 mm and 40 fins with a fin height of 0.12 mm. Owing to the high heat transfer coefficients shown by this tube, this technology can lead to a reduction of the refrigerant charge. Tests were run in the Heat Transfer in Micro Geometries Lab of the Dipartimento di Ingegneria Industriale of the Università di Padova. Mass velocities range between 375 and 940 kg m⁻² s⁻¹, heat fluxes from 10 to 50 kW m⁻², and vapour qualities from 0.10 to 0.99, at a saturation temperature of 30°C. The comparison between the two fluids is made at the same operating conditions, in order to highlight the heat transfer and pressure drop differences between the two refrigerants. In addition, two correlations are proposed to estimate the heat transfer coefficient and the frictional pressure drop during refrigerant flow boiling inside mini microfin tubes. These correlations predict the experimental values well, and can thus be used as a tool to design evaporators based on such mini microfin tubes.

  1. Agent-Based Modeling in Systems Pharmacology.

    Science.gov (United States)

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.
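
    To give a flavor of what "describing individual components rather than homogenous populations" means in practice, here is a toy agent-based sketch (not from the tutorial): each cell carries its own state, and stochastic contact rules replace a population-level rate equation. All rates and the scenario are invented for illustration.

    ```python
    import random

    # Toy ABM: resting immune cells may become activated through random
    # contacts with already-activated cells. Each agent is tracked
    # individually instead of averaging over the population.
    random.seed(1)

    class Cell:
        def __init__(self):
            self.activated = False

    def step(cells, contact_rate=0.3, activation_prob=0.5):
        """One time step: random contacts may activate resting cells."""
        active = [c for c in cells if c.activated]
        for cell in cells:
            if not cell.activated and active and random.random() < contact_rate:
                if random.random() < activation_prob:
                    cell.activated = True

    cells = [Cell() for _ in range(200)]
    cells[0].activated = True          # seed a single activated agent
    for _ in range(50):
        step(cells)

    activated = sum(c.activated for c in cells)
    print(activated)   # activation spreads stochastically through the population
    ```

    Because every agent is explicit, per-cell heterogeneity (different thresholds, positions, or histories) can be added without changing the modeling framework, which is the ABM strength the tutorial highlights.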

  2. Stirling cryocooler test results and design model verification

    International Nuclear Information System (INIS)

    Shimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1990-01-01

    This paper reports on progress in developing a long-life Stirling cycle cryocooler for space borne applications. It presents the results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the regenerator design code used in its development. This machine achieved a cold-end temperature of 65 K while carrying a 1/2 Watt cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for both sweeping and sealing the machine working volumes. In addition, the double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. A PC compatible design code was developed for this design approach that calculates regenerator loss including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls

  3. The similia principle: results obtained in a cellular model system.

    Science.gov (United States)

    Wiegant, Fred; Van Wijk, Roeland

    2010-01-01

    This paper describes the results of a research program focused on the beneficial effect of low dose stress conditions that were applied according to the similia principle to cells previously disturbed by more severe stress conditions. In the first instance, we discuss criteria for research on the similia principle at the cellular level. Then, the homologous ('isopathic') approach is reviewed, in which the initial (high dose) stress used to disturb cellular physiology and the subsequent (low dose) stress are identical. Beneficial effects of low dose stress are described in terms of increased cellular survival capacity and at the molecular level as an increase in the synthesis of heat shock proteins (hsps). Both phenomena reflect a stimulation of the endogenous cellular self-recovery capacity. Low dose stress conditions applied in a homologous approach stimulate the synthesis of hsps and enhance survival in comparison with stressed cells that were incubated in the absence of low dose stress conditions. Thirdly, the specificity of the low dose stress condition is described where the initial (high dose) stress is different in nature from the subsequently applied (low dose) stress; the heterologous or 'heteropathic' approach. The results support the similia principle at the cellular level and add to our understanding of how low dose stress conditions influence the regulatory processes underlying self-recovery. In addition, the phenomenon of 'symptom aggravation', which is also observed at the cellular level, is discussed in the context of self-recovery. Finally, the difference in efficiency between the homologous and the heterologous approach is discussed; a perspective is indicated for further research; and the relationship between studies on the similia principle and the recently introduced concept of 'postconditioning hormesis' is emphasized. Copyright 2009 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  4. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multilevel structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going far as individuals,…
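
    A quick way to see why the hierarchy matters is the intraclass correlation (ICC): the share of score variance lying between groups rather than within them, computed here from a balanced one-way ANOVA. The school-level scores below are invented for illustration, not PISA data.

    ```python
    import numpy as np

    # ICC from a balanced one-way ANOVA:
    # ICC = (MSB - MSW) / (MSB + (n - 1) * MSW), n = students per school.
    schools = [
        [10.0, 12.0, 11.0, 13.0],   # school A scores (invented)
        [20.0, 22.0, 21.0, 19.0],   # school B
        [30.0, 31.0, 29.0, 32.0],   # school C
    ]
    data = np.array(schools)
    k, n = data.shape
    grand = data.mean()
    msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    icc = (msb - msw) / (msb + (n - 1) * msw)
    print(round(icc, 3))   # near 1: most variance is between schools
    ```

    A high ICC like this one signals that single-level regression would badly understate standard errors, which is exactly when a two-level hierarchical linear model pays off.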

  5. Modeling and knowledge acquisition processes using case-based inference

    Directory of Open Access Journals (Sweden)

    Ameneh Khadivar

    2017-03-01

    Full Text Available The method of acquisition and presentation of the organizational Process Knowledge has considered by many KM researches. In this research a model for process knowledge acquisition and presentation has been presented by using the approach of Case Base Reasoning. The validation of the presented model was evaluated by conducting an expert panel. Then a software has been developed based on the presented model and implemented in Eghtesad Novin Bank of Iran. In this company, based on the stages of the presented model, first the knowledge intensive processes has been identified, then the Process Knowledge was stored in a knowledge base in the format of problem/solution/consequent .The retrieval of the knowledge was done based on the similarity of the nearest neighbor algorithm. For validating of the implemented system, results of the system has compared by the results of the decision making of the expert of the process.

  6. Gradient based filtering of digital elevation models

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Andersen, Rune Carbuhn

    We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention with the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based...
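
    The core operation can be sketched as a grey-scale morphological opening (erosion followed by dilation) on a height grid, which removes narrow spikes while preserving broad terrain. This is a minimal illustration only: the paper's gradient-domain restriction is not reproduced, and the grid and spike height are invented.

    ```python
    import numpy as np

    def erode(z):
        """Grey-scale erosion: minimum over a 3x3 window (edge-padded)."""
        p = np.pad(z, 1, mode="edge")
        win = [p[i:i + z.shape[0], j:j + z.shape[1]]
               for i in range(3) for j in range(3)]
        return np.min(win, axis=0)

    def dilate(z):
        """Grey-scale dilation: maximum over a 3x3 window (edge-padded)."""
        p = np.pad(z, 1, mode="edge")
        win = [p[i:i + z.shape[0], j:j + z.shape[1]]
               for i in range(3) for j in range(3)]
        return np.max(win, axis=0)

    dtm = np.zeros((7, 7))          # flat terrain at 0 m
    dtm[3, 3] = 12.0                # one-cell spike (e.g. a tree crown)
    opened = dilate(erode(dtm))     # morphological opening
    print(opened[3, 3])             # 0.0: the isolated spike is removed
    ```

    An opening removes features narrower than the structuring element; restricting where it is applied by slope, as the paper does, keeps genuine steep terrain from being smoothed away.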

  7. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results showed improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need the whole problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the whole problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time

  8. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva’s model, a radial basis function neural network (RBFNN) based model and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.
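
    For context, the conventional analytical model that such data-driven estimators are benchmarked against is the single-diode equation I = IL - Io*(exp(V/Vt) - 1), built from the same three parameters (photocurrent IL, saturation current Io, ideality factor n) named in the records above. The parameter values below are illustrative assumptions, not fitted data for any real module.

    ```python
    import math

    # Single-diode PV model sketch (series/shunt resistance neglected).
    K, Q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge

    def thermal_voltage(n=1.3, ns=36, t=298.15):
        """Modified thermal voltage of a module with ns series cells."""
        return n * ns * K * t / Q

    def pv_current(v, il=8.0, io=1e-9):
        """Terminal current at voltage v: I = IL - Io*(exp(V/Vt) - 1)."""
        return il - io * math.expm1(v / thermal_voltage())

    isc = pv_current(0.0)                               # short-circuit current = IL
    voc = thermal_voltage() * math.log(8.0 / 1e-9 + 1)  # open-circuit voltage
    print(round(isc, 2), round(voc, 2))
    ```

    The exponential makes the I-V curve strongly nonlinear near open circuit, which is why identifying IL, Io and n from measurements is nontrivial and why fuzzy/neural surrogates are attractive.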

  9. Comparison of the 1981 INEL dispersion data with results from a number of different models

    Energy Technology Data Exchange (ETDEWEB)

    Lewellen, W S; Sykes, R I; Parker, S F

    1985-05-01

    The results from simulations by 12 different dispersion models are compared with observations from an extensive field experiment conducted by the Nuclear Regulatory Commission at the Idaho National Engineering Laboratory in July 1981. Comparisons were made on the basis of hourly SF6 samples taken at the surface, out to approximately 10 km from the 46 m release tower, both during and following 7 different 8-hour releases. Comparisons are also made for total integrated doses collected out to approximately 40 km. Three classes of models are used. Within the limited range appropriate for Class A models, this data comparison shows that neither the puff models nor the transport and diffusion models agree with the data any better than the simple Gaussian plume models. The puff and transport and diffusion models do show a slight edge in performance in comparison with the total dose over the extended range appropriate for Class B models. The best model results for the hourly samples show approximately 40% calculated within a factor of two when a 15° uncertainty in plume position is permitted and it is assumed that higher data samples may occur at stations between the actual sample sites. This increases to 60% for the 12-hour integrated dose and 70% for the total integrated dose when the same performance measure is used. None of the models reproduce the observed patchy dose patterns. This patchiness is consistent with the discussion of the inherent uncertainty associated with time-averaged plume observations contained in our companion reports on the scientific critique of available models.

  10. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modelin

  11. Modeling results for a linear simulator of a divertor

    International Nuclear Information System (INIS)

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-01-01

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ∼1 GW/m² along the magnetic fieldlines and >10 MW/m² on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report

  12. Modelling combustion reactions for gas flaring and its resulting emissions

    Directory of Open Access Journals (Sweden)

    O. Saheed Ismail

    2016-07-01

    Full Text Available Flaring of associated petroleum gas is a long-standing environmental concern that remains unabated. Gas flaring may be a very efficient combustion process, especially steam/air-assisted flaring, and in some oil fields more economical than utilization; it nevertheless has serious implications for the environment. This study considered different reaction types and operating conditions for gas flaring. Six combustion equations were generated using the mass-balance concept with varying air supply and combustion efficiency. These equations were coded in a computer program and applied to 12 natural gas samples of different chemical composition and origin to predict the pattern of emission species from gas flaring. The effect of key parameters on the emission output is also shown. CO2, CO, NO, NO2 and SO2 are the anticipated non-hydrocarbon emissions of environmental concern. Results show that the quantity and pattern of these chemical species depend on the percentage excess/deficiency of stoichiometric air, natural gas type, reaction type, carbon mass content, impurities, combustion efficiency of the flare system, etc. Because these emissions degrade the environment and human life, knowing the emission types, patterns and flaring conditions that this study predicts is of paramount importance to governments, environmental agencies and the oil and gas industry.
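
    The mass-balance idea behind such combustion equations can be sketched for the simplest case: complete combustion of methane with excess air. The paper's six equations (covering CO, NOx and fuel impurities) are not reproduced here, and the 10% excess-air figure is just an example.

    ```python
    # Complete combustion of methane with excess air e:
    # CH4 + 2(1+e)(O2 + 3.76 N2) -> CO2 + 2 H2O + 2e O2 + 7.52(1+e) N2

    def flue_gas(excess=0.10):
        """Mole fractions of flue gas per mole of CH4 burned."""
        o2_stoich = 2.0                       # stoichiometric O2 demand of CH4
        o2_in = o2_stoich * (1.0 + excess)
        n2_in = o2_in * 3.76                  # air taken as 21% O2 / 79% N2
        products = {
            "CO2": 1.0,
            "H2O": 2.0,
            "O2": o2_stoich * excess,         # unreacted excess oxygen
            "N2": n2_in,
        }
        total = sum(products.values())
        return {k: v / total for k, v in products.items()}

    print({k: round(v, 4) for k, v in flue_gas(0.10).items()})
    ```

    Extending the balance to sub-stoichiometric air (producing CO) or to fuels containing sulfur and nitrogen is what yields the additional emission species the study tracks.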

  13. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power plant pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, while the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model with the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
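
    The grey half of such a combined model is typically the GM(1,1) model. A minimal sketch follows, without the paper's unbiased optimization or the Markov residual correction; the corrosion-rate series is invented for illustration.

    ```python
    import numpy as np

    def gm11(x0, horizon=1):
        """Fit GM(1,1) to series x0 and forecast `horizon` extra points."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                    # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])         # mean sequence of x1
        # Least-squares fit of x0(k) = -a*z1(k) + b:
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat, prepend=0.0)   # back to original scale + forecast

    rates = [0.82, 0.86, 0.91, 0.95, 1.00]    # assumed corrosion rates, mm/yr
    fit = gm11(rates, horizon=1)
    print(fit.round(3))                       # last entry is the one-step forecast
    ```

    The Markov step of the combined model would then classify the residuals x0 - fit into states and correct the grey forecast with the state-transition probabilities.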

  14. Improving the natural gas transporting based on the steady state simulation results

    International Nuclear Information System (INIS)

    Szoplik, Jolanta

    2016-01-01

    The work presents an example of the practical application of gas flow modeling results, obtained for an existing gas network using real data on network load as a function of time of day and air temperature. The gas network load at network connections was estimated from real data on gas consumption by customers and weather data for 2010, using a two-parameter model based on the number of degree-days of heating. The aim of this study was to elaborate a relationship between the pressure and the gas stream introduced into the gas network. It was demonstrated that practical application of the elaborated relationship in a gas reduction station allows automatic adjustment of the gas pressure in the network to the network load, keeping the gas pressure in the whole network at the lowest possible level. It was concluded from the results that such an approach reduces the amount of gas supplied to the network by 0.4% of the annual network load. - Highlights: • Determination of the hourly nodal demand for gas by the consumers. • Analysis of the results of gas flow simulation in a pipeline network. • Elaboration of the relationship between gas pressure and the gas stream feeding the network. • Automatic gas pressure steering in the network depending on the network load. • Comparison of input gas pressure in the system without and with pressure steering.
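
    A two-parameter degree-day load model of the kind mentioned can be sketched as a linear fit Q = a·DD + b, where DD is the number of heating degree-days and b the temperature-independent base load. The base temperature, daily temperatures and loads below are invented for illustration, not the 2010 data of the study.

    ```python
    import numpy as np

    base = 18.0                                            # heating base temp, C
    temps = np.array([2.0, 5.0, 9.0, 14.0, 20.0])          # daily mean temps, C
    dd = np.maximum(base - temps, 0.0)                     # degree-days per day
    loads = np.array([350.0, 290.0, 210.0, 110.0, 45.0])   # observed load, m3/day

    # Two-parameter model Q = a*DD + b fitted by least squares:
    a, b = np.polyfit(dd, loads, 1)

    # Predict nodal demand for a day with mean temperature 7 C:
    forecast = a * np.maximum(base - 7.0, 0.0) + b
    print(round(a, 2), round(b, 2), round(forecast, 1))
    ```

    Estimating such a fit per network node is what turns weather forecasts into the hourly nodal demands that the flow simulation, and ultimately the pressure-steering rule, consume.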

  15. Environment modelling in near Earth space: Preliminary LDEF results

    Science.gov (United States)

    Coombs, C. R.; Atkinson, D. R.; Wagner, J. D.; Crowell, L. B.; Allbrooks, M.; Watts, A. J.

    1992-01-01

    Hypervelocity impacts by space debris cause not only local cratering or penetrations, but also large areas of damage in coated, painted or laminated surfaces. Features examined in these analyses display interesting morphological characteristics, commonly exhibiting a concentric ringed appearance. Virtually all features greater than 0.2 mm in diameter possess a spall zone in which all of the paint was removed from the aluminum surface. These spall zones vary in size from approximately 2 - 5 crater diameters. The actual craters in the aluminum substrate vary from central pits without raised rims, to morphologies more typical of craters formed in aluminum under hypervelocity laboratory conditions for the larger features. Most features also possess what is referred to as a 'shock zone'. These zones vary in size from approximately 1 - 20 crater diameters. In most cases, only the outermost layer of paint was affected by this impact-related phenomenon. Several impacts possess ridge-like structures encircling the area in which this outermost paint layer was removed. In many ways, such features resemble the lunar impact basins, but on an extremely reduced scale. Overall, there were no noticeable penetrations, bulges or spallation features on the backside of the tray. On Row 12, approximately 85 degrees from the leading edge (RAM direction), there was approximately one impact per 15 cm². On the trailing edge, there was approximately one impact per 72 cm². Currently, craters on four aluminum experiment trays from Bay E09, directly on the leading edge, are being measured and analyzed. Preliminary results have produced more than 2200 craters on approximately 1500 cm², or approximately 1 impact per 0.7 cm².

  16. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  17. Exact results for survival probability in the multistate Landau-Zener model

    International Nuclear Information System (INIS)

    Volkov, M V; Ostrovsky, V N

    2004-01-01

    An exact formula is derived for survival probability in the multistate Landau-Zener model in the special case where the initially populated state corresponds to the extremal (maximum or minimum) slope of a linear diabatic potential curve. The formula was originally guessed by S Brundobler and V Elser (1993 J. Phys. A: Math. Gen. 26 1211) based on numerical calculations. It is a simple generalization of the expression for the probability of diabatic passage in the famous two-state Landau-Zener model. Our result is obtained via analysis and summation of the entire perturbation theory series
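
    In the notation below (a sketch with ħ = 1, diabatic slopes βj, and constant couplings V0j of the extremal-slope state |0⟩ to the others; the labelling is ours, not necessarily the paper's), the result generalizes the familiar two-state expression:

    ```latex
    % Two-state Landau-Zener probability of diabatic passage:
    P_{2} = \exp\!\left(-\frac{2\pi V^{2}}{|\beta_{1}-\beta_{2}|}\right)

    % Brundobler-Elser survival probability of the extremal-slope state |0>:
    P_{0\to 0} = \exp\!\left(-2\pi \sum_{j\neq 0}
                 \frac{|V_{0j}|^{2}}{|\beta_{0}-\beta_{j}|}\right)
    ```

    That is, each crossing with another diabatic level contributes an independent two-state Landau-Zener exponent, and the exponents simply add.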

  18. Identification of Anisotropic Criteria for Stratified Soil Based on Triaxial Tests Results

    Science.gov (United States)

    Tankiewicz, Matylda; Kawa, Marek

    2017-09-01

    The paper presents the identification methodology of anisotropic criteria based on triaxial test results. The considered material is varved clay - a sedimentary soil occurring in central Poland which is characterized by a so-called “layered microstructure”. The strength characteristics were determined by standard triaxial tests, with the peak strength estimated for a wide range of orientations and confining pressures. Two models were chosen as potentially adequate for the description of the tested material, namely the Pariseau criterion and its conjunction with the Jaeger weakness plane. Material constants were obtained by fitting the models to the experimental results. The identification procedure is based on the least squares method: the optimal values of the parameters are searched for between specified bounds by sequentially decreasing the distance between points and reducing the length of the searched range. For both considered models the optimal parameters have been obtained. A comparison of theoretical and experimental results, as well as an assessment of the suitability of the selected criteria for the specified range of confining pressures, is presented.

  19. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

    Full Text Available There is a need to match the accuracy of emission estimates with the outputs of transport models. The overall error rate in long-term traffic forecasts resulting from strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emission estimation approaches. Emission models for predicting pollutants other than CO2 are proposed. A genetic algorithm approach is adopted to select the predicting variables for the black-box model; the approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness.

  20. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    Science.gov (United States)

    Brewe, Eric; Traxler, Adrienne; de la Garza, Jorge; Kramer, Laird H.

    2013-12-01

    We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26-0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
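
    Cohen's d, the effect-size measure quoted above, is the pre/post mean difference expressed in pooled standard-deviation units. The scores below are invented for illustration, not CLASS data.

    ```python
    import math

    pre  = [52.0, 60.0, 48.0, 55.0, 62.0, 50.0, 58.0, 53.0]   # invented pre scores
    post = [58.0, 66.0, 55.0, 60.0, 70.0, 54.0, 63.0, 61.0]   # invented post scores

    def mean(xs):
        return sum(xs) / len(xs)

    def sd(xs):
        """Sample standard deviation (n - 1 denominator)."""
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    # Cohen's d with a pooled standard deviation (equal group sizes):
    pooled = math.sqrt((sd(pre) ** 2 + sd(post) ** 2) / 2)
    d = (mean(post) - mean(pre)) / pooled
    print(round(d, 2))
    ```

    On the usual rough scale, d around 0.2 is a small effect and 0.8 a large one, which is why the reported per-instructor range of 0.08 to 0.95 spans negligible to large shifts.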

  1. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    Directory of Open Access Journals (Sweden)

    Eric Brewe

    2013-10-01

    Full Text Available We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26–0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.

  2. Multi-Domain Modeling Based on Modelica

    Directory of Open Access Journals (Sweden)

    Liu Jun

    2016-01-01

    Full Text Available With the application of simulation technology to large-scale, multi-field problems, multi-domain unified modeling has become an effective way to solve them. This paper introduces several basic methods and advantages of multi-domain modeling, and focuses on simulation based on the Modelica language. Modelica/Mworks is a newly developed simulation environment featuring an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article demonstrates a single-degree-of-freedom mechanical vibration system built in Mworks using Modelica's connection mechanism. This multi-domain modeling approach is simple and feasible, offers high reusability, stays close to the physical system, and has many other advantages.

  3. SEP modeling based on global heliospheric models at the CCMC

    Science.gov (United States)

    Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Bain, H. M.; Schwadron, N.; Gorby, M.; Li, Y.; Lee, K.; Zeitlin, C.; Jian, L. K.; Lee, C. O.; Mewaldt, R. A.; Galvin, A. B.

    2017-12-01

    Heliospheric models provide contextual information of conditions in the heliosphere, including the background solar wind conditions and shock structures, and are used as input to SEP models, providing an essential tool for understanding SEP properties. The global 3D MHD WSA-ENLIL+Cone model provides a time-dependent background heliospheric description, into which a spherically shaped hydrodynamic CME can be inserted. ENLIL simulates solar wind parameters; additionally, one can extract the magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. An accurate representation of the background solar wind is necessary for simulating transients. ENLIL simulations also drive SEP models such as the Solar Energetic Particle Model (SEPMOD) (Luhmann et al. 2007, 2010) and the Energetic Particle Radiation Environment Module (EPREM) (Schwadron et al. 2010). The Community Coordinated Modeling Center (CCMC) is in the process of making these SEP models available to the community and offering a system to run SEP models driven by a variety of heliospheric models available at CCMC. SEPMOD injects protons onto a sequence of observer field lines at intensities dependent on the connected shock source strength, which are then integrated at the observer to approximate the proton flux. EPREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. The coupled ENLIL and SEP models allow us to derive the longitudinal distribution of SEP profiles for different types of events throughout the heliosphere. In this presentation we demonstrate several case studies of SEP event modeling at different observers based on WSA-ENLIL+Cone simulations.

  4. Lessons from wet gas flow metering systems using differential measurements devices: Testing and flow modelling results

    Energy Technology Data Exchange (ETDEWEB)

    Cazin, J.; Couput, J.P.; Dudezert, C. et al

    2005-07-01

    A significant number of wet gas meters used for high and very high GVF are based on differential pressure (ΔP) measurements. Recent high-pressure tests performed on a variety of ΔP devices on different flow loops are presented. Application of existing correlations is discussed for several ΔP devices, including Venturi meters. For Venturi meters, deviations vary from 9% when using the Murdock correlation to less than 3% with physically based models. The use of ΔP systems over a large domain of conditions (Water Liquid Ratio), especially for liquid estimation, will require information on the WLR. This obviously raises the question of gas and liquid flow metering accuracy in wet gas meters and highlights the need to understand ΔP system behaviour in wet gas flows (annular / mist / annular-mist). As an example, experimental results obtained on the influence of liquid film characteristics on a Venturi meter are presented. Visualizations of the film upstream of and inside the Venturi meter are shown, complemented by film characterization. The ΔP measurements indicate that, for the same Lockhart-Martinelli parameter, the characteristics of the two-phase flow have a major influence on the correlation coefficient. A 1D model is defined and its results are compared with the experiments. These results indicate that the flow regime influences the ΔP measurements and that better modelling of the flow phenomena is needed, even for allocation purposes. Based on that, lessons learned and the way forward in improving wet gas metering systems for allocation and well metering are discussed and proposed. (author) (tk)
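
    For orientation, the classic Murdock correction mentioned above amounts to dividing the over-read (apparent) gas flow by a factor that grows with the Lockhart-Martinelli parameter. The fluid properties and flow rates below are invented example values, not loop data.

```python
import math

def lockhart_martinelli(m_liquid, m_gas, rho_gas, rho_liquid):
    # X = (liquid/gas mass-flow ratio) * sqrt(gas density / liquid density)
    return (m_liquid / m_gas) * math.sqrt(rho_gas / rho_liquid)

def murdock_corrected_gas_flow(m_apparent, x_lm, c=1.26):
    # apparent (over-read) gas mass flow divided by the Murdock factor
    return m_apparent / (1.0 + c * x_lm)

x = lockhart_martinelli(m_liquid=2.0, m_gas=10.0, rho_gas=50.0, rho_liquid=800.0)
m_gas = murdock_corrected_gas_flow(12.0, x)
print(round(x, 3), round(m_gas, 2))
```

    The abstract's point is precisely that a single coefficient such as 1.26 is not universal: film structure and flow regime change it even at a fixed Lockhart-Martinelli parameter.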

  5. Comparison of inverse modeling results with measured and interpolated hydraulic head data

    International Nuclear Information System (INIS)

    Jacobson, E.A.

    1986-12-01

    Inverse modeling of aquifers involves identification of effective parameters, such as transmissivities, based on hydraulic head data. The result of inverse modeling is a calibrated ground water flow model that reproduces the measured hydraulic head data as closely as is statistically possible. An inverse method that includes prior information about the parameters (i.e., kriged log transmissivity) was applied to the Avra Valley aquifer of southern Arizona using hydraulic heads obtained in three ways: measured at well locations, estimated at nodes by hand contouring, and estimated at nodes by kriging. Hand contouring yields only estimates of hydraulic head at node points, whereas kriging yields hydraulic head estimates at node points and their corresponding estimation errors. A comparison of the three inverse applications indicates the variations in the ground water flow model caused by the different treatments of the hydraulic head data. Estimates of hydraulic head computed by all three inverse models were more representative of the measured or interpolated hydraulic heads than those computed using the kriged estimates of log transmissivity. The large-scale trends in the estimates of log transmissivity determined by the three inverse models were generally similar except in the southern portion of the study area. The hydraulic head values and gradients produced by the three inverse models were similar in the interior of the study area, while the major differences between the inverse models occurred along the boundaries. 17 refs., 18 figs., 1 tab
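
    The kriging route described above yields both a head estimate at a node and its estimation variance. The sketch below solves a tiny ordinary-kriging system with an assumed exponential covariance model; well locations, heads and covariance parameters are invented.

```python
import math

def cov(h, sill=1.0, rng=300.0):
    # assumed exponential covariance of hydraulic head
    return sill * math.exp(-abs(h) / rng)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 100.0, 250.0]        # well locations (m), invented
heads = [50.0, 48.0, 45.0]      # measured heads (m), invented
x0 = 150.0                      # grid node to estimate

n = len(xs)
# ordinary kriging system: covariances plus an unbiasedness constraint
A = [[cov(xi - xj) for xj in xs] + [1.0] for xi in xs] + [[1.0] * n + [0.0]]
b = [cov(xi - x0) for xi in xs] + [1.0]
sol = solve(A, b)
w, mu = sol[:n], sol[n]

estimate = sum(wi * hi for wi, hi in zip(w, heads))
variance = cov(0.0) - sum(wi * cov(xi - x0) for wi, xi in zip(w, xs)) - mu
print(round(estimate, 2), round(variance, 4))
```

    Unlike hand contouring, the same solve also delivers the estimation error (variance) that the abstract contrasts between the two interpolation approaches.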

  6. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
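
    Analogy-based estimation of the kind evaluated here can be sketched as a k-nearest-neighbor lookup over completed projects; the project table and features below are invented, not the NASA data set.

```python
import math

# (size in KSLOC, team experience 1-5, actual effort in person-months)
projects = [
    (10.0, 3, 24.0),
    (12.0, 4, 26.0),
    (50.0, 2, 140.0),
    (48.0, 3, 130.0),
    (100.0, 4, 300.0),
]

def estimate_effort(size, experience, k=2):
    # effort of a new project = mean effort of its k nearest analogues
    dist = lambda p: math.hypot(p[0] - size, p[1] - experience)
    nearest = sorted(projects, key=dist)[:k]
    return sum(p[2] for p in nearest) / k

print(estimate_effort(11.0, 3))   # analogous to the two small projects
```

    In practice the features would be normalized first so that project size does not dominate the distance.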

  7. Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun

    2017-03-01

    A nonlinear model for a turbofan engine above idle state, based on NARX, is studied. First, data sets for the JT9D engine are obtained via simulation from an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are carried out to verify the accuracy and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
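
    The NARX structure itself is easy to state: the next output is a nonlinear function of lagged outputs and lagged inputs, e.g. y(k) = f(y(k-1), y(k-2), u(k-1)). The toy coefficients and nonlinearity below are invented for illustration and are not the identified JT9D model.

```python
import math

def narx_step(y_hist, u_hist):
    # y(k) = f(y(k-1), y(k-2), u(k-1)) with a hand-picked nonlinearity
    return 0.6 * y_hist[-1] + 0.2 * y_hist[-2] + 0.5 * math.tanh(u_hist[-1])

y, u = [0.0, 0.0], [0.0]
for _ in range(50):
    u.append(1.0)                  # step in a throttle-like input
    y.append(narx_step(y, u))

# steady state solves y* = 0.8*y* + 0.5*tanh(1), i.e. y* = 2.5*tanh(1)
print(round(y[-1], 3))
```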

  8. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    the conceptual model on which it is based. In this study, a number of model structural shortcomings were identified, such as a lack of dissolved phosphorus transport via infiltration excess overland flow, potential discrepancies in the particulate phosphorus simulation and a lack of spatial granularity. (4) Conceptual challenges, as conceptual models on which predictive models are built are often outdated, having not kept up with new insights from monitoring and experiments. For example, soil solution dissolved phosphorus concentration in INCA-P is determined by the Freundlich adsorption isotherm, which could potentially be replaced using more recently-developed adsorption models that take additional soil properties into account. This checklist could be used to assist in identifying why model performance may be poor or unreliable. By providing a model evaluation framework, it could help prioritise which areas should be targeted to improve model performance or model credibility, whether that be through using alternative calibration techniques and statistics, improved data collection, improving or simplifying the model structure or updating the model to better represent current understanding of catchment processes.
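
    The Freundlich isotherm mentioned here is a one-line relation between sorbed phosphorus q and solution concentration c, q = Kf * c^(1/n); the parameter values below are illustrative, not INCA-P's calibrated ones.

```python
def freundlich_sorbed(c, kf=25.0, n=2.0):
    # q = Kf * c**(1/n): sorption grows sub-linearly with concentration
    return kf * c ** (1.0 / n)

q1 = freundlich_sorbed(0.04)
q2 = freundlich_sorbed(0.08)
print(round(q1, 2), round(q2 / q1, 3))   # doubling c scales q by 2**(1/n)
```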

  9. Event-based Simulation Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    De Raedt, H.; Michielsen, K.; Jaeger, G; Khrennikov, A; Schlosshauer, M; Weihs, G

    2011-01-01

    We present a corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one. The event-based corpuscular model gives a unified

  10. Structuring Qualitative Data for Agent-Based Modelling

    NARCIS (Netherlands)

    Ghorbani, Amineh; Dijkema, Gerard P.J.; Schrauwen, Noortje

    2015-01-01

    Using ethnography to build agent-based models may result in more empirically grounded simulations. Our study on innovation practice and culture in the Westland horticulture sector served to explore what information and data from ethnographic analysis could be used in models and how. MAIA, a

  11. Event-Based Corpuscular Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    Michielsen, K.; Jin, F.; Raedt, H. De

    A corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one is presented. The event-based corpuscular model is shown to give a

  12. Higher plant modelling for life support applications: first results of a simple mechanistic model

    Science.gov (United States)

    Hezard, Pauline; Dussap, Claude-Gilles; Sasidharan L, Swathy

    2012-07-01

    In the case of closed ecological life support systems, the air and water regeneration and food production are performed using microorganisms and higher plants. Wheat, rice, soybean, lettuce, tomato or other types of eatable annual plants produce fresh food while recycling CO2 into breathable oxygen. Additionally, they evaporate a large quantity of water, which can be condensed and used as potable water. This shows that recycling functions of air revitalization and food production are completely linked. Consequently, the control of a growth chamber for higher plant production has to be performed with efficient mechanistic models, in order to ensure a realistic prediction of plant behaviour, water and gas recycling whatever the environmental conditions. Purely mechanistic models of plant production in controlled environments are not available yet. This is the reason why new models must be developed and validated. This work concerns the design and test of a simplified version of a mathematical model coupling plant architecture and mass balance purposes in order to compare its results with available data of lettuce grown in closed and controlled chambers. The carbon exchange rate, water absorption and evaporation rate, biomass fresh weight as well as leaf surface are modelled and compared with available data. The model consists of four modules. The first one evaluates plant architecture, like total leaf surface, leaf area index and stem length data. The second one calculates the rate of matter and energy exchange depending on architectural and environmental data: light absorption in the canopy, CO2 uptake or release, water uptake and evapotranspiration. The third module evaluates which of the previous rates is limiting overall biomass growth; and the last one calculates biomass growth rate depending on matter exchange rates, using a global stoichiometric equation. All these rates are a set of differential equations, which are integrated with time in order to provide

  13. Ecosystem health pattern analysis of urban clusters based on emergy synthesis: Results and implication for management

    International Nuclear Information System (INIS)

    Su, Meirong; Fath, Brian D.; Yang, Zhifeng; Chen, Bin; Liu, Gengyuan

    2013-01-01

    The evaluation of ecosystem health in urban clusters will help establish effective management that promotes sustainable regional development. To standardize the application of emergy synthesis and set pair analysis (EM–SPA) in ecosystem health assessment, a procedure for using EM–SPA models was established in this paper by combining the ability of emergy synthesis to reflect health status from a biophysical perspective with the ability of set pair analysis to describe extensive relationships among different variables. Based on the EM–SPA model, the relative health levels of selected urban clusters and their related ecosystem health patterns were characterized. The health states of three typical Chinese urban clusters – Jing-Jin-Tang, Yangtze River Delta, and Pearl River Delta – were investigated using the model. The results showed that the health status of the Pearl River Delta was relatively good; the health for the Yangtze River Delta was poor. As for the specific health characteristics, the Pearl River Delta and Yangtze River Delta urban clusters were relatively strong in Vigor, Resilience, and Urban ecosystem service function maintenance, while the Jing-Jin-Tang was relatively strong in organizational structure and environmental impact. Guidelines for managing these different urban clusters were put forward based on the analysis of the results of this study. - Highlights: • The use of integrated emergy synthesis and set pair analysis model was standardized. • The integrated model was applied on the scale of an urban cluster. • Health patterns of different urban clusters were compared. • Policy suggestions were provided based on the health pattern analysis
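
    The set pair analysis half of an EM-SPA model can be sketched with the usual connection degree mu = a + b*i + c*j, where a, b and c are the fractions of indicators that are identical, discrepant and contrary to a reference healthy state. Indicator values, thresholds and the coefficients i and j below are invented for illustration.

```python
def connection_degree(indicators, healthy, tolerance=0.15, i=0.5, j=-1.0):
    # classify each indicator relative to the healthy reference level
    a = b = c = 0
    for name, value in indicators.items():
        rel = abs(value - healthy[name]) / healthy[name]
        if rel <= tolerance:
            a += 1          # identical: within tolerance of healthy level
        elif rel <= 2 * tolerance:
            b += 1          # discrepant: borderline
        else:
            c += 1          # contrary: far from the healthy level
    n = len(indicators)
    return a / n + (b / n) * i + (c / n) * j

healthy = {"vigor": 1.0, "resilience": 1.0, "service": 1.0}
cluster = {"vigor": 0.95, "resilience": 0.75, "service": 0.5}
print(round(connection_degree(cluster, healthy), 3))
```

    In the paper the indicator levels themselves come from emergy synthesis; here they are placeholders.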

  14. Elastoplastic cup model for cement-based materials

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2010-03-01

    Full Text Available Based on experimental data obtained from triaxial tests and a hydrostatic test, a cup model was formulated. Two plastic mechanisms, respectively a deviatoric shearing and a pore collapse, are taken into account. This model also considers the influence of confining pressure. In this paper, the calibration of the model is detailed and numerical simulations of the main mechanical behavior of cement paste over a large range of stress are described, showing good agreement with experimental results. The case study shows that this cup model has extensive applicability for cement-based materials and other quasi-brittle and high-porosity materials in a complex stress state.

  15. A comprehensive gaze stabilization controller based on cerebellar internal models

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Falotico, Egidio; Tolu, Silvia

    2017-01-01

    . The VOR works in conjunction with the opto-kinetic reflex (OKR), which is a visual feedback mechanism that allows to move the eye at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we implement on a humanoid robot a model of gaze stabilization...... based on the coordination of VCR and VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot...

  16. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In other to facilitate the automation of camera calibration process, a mathematical model using straight lines was developed, which is based on the equivalent planes mathematical model. Parameter estimation of the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data will be presented and the results with both methods of camera calibration, with straight lines and with points, will be compared.

  17. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  18. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

    Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), is applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...
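
    The Takagi-Sugeno idea underlying such controllers is a fuzzy blend of local linear models: x(k+1) = sum_i h_i(z) * (a_i*x(k) + b_i*u(k)) with memberships h_i summing to one. The membership shapes and local gains below are invented, not the paper's EV model.

```python
def memberships(z, z_min=0.0, z_max=1.0):
    # two triangular memberships over the scheduling variable z
    h1 = (z_max - z) / (z_max - z_min)
    return h1, 1.0 - h1              # h1 + h2 == 1 by construction

def ts_step(x, u, z):
    local_models = [(0.9, 0.1), (0.7, 0.3)]   # (a_i, b_i), invented
    h = memberships(z)
    a = sum(hi * ai for hi, (ai, _) in zip(h, local_models))
    b = sum(hi * bi for hi, (_, bi) in zip(h, local_models))
    return a * x + b * u

# at z=0 the blend equals local model 1; at z=1, local model 2
print(ts_step(2.0, 1.0, 0.0), ts_step(2.0, 1.0, 1.0))
```

    Because each rule is linear, stability and predictive-control design can be posed as linear matrix inequalities over the local models, which is what the paper exploits.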

  19. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  20. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and

  1. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    . Results from our work indicate that virtual worlds have the potential for serving as a proxy in allocating and populating behaviors that would be used within further agent-based modeling studies.

  2. Automated extraction of knowledge for model-based diagnostics

    Science.gov (United States)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  3. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  4. Power-Based Setpoint Control : Experimental Results on a Planar Manipulator

    NARCIS (Netherlands)

    Dirksz, D. A.; Scherpen, J. M. A.

    In the last years the power-based modeling framework, developed in the sixties to model nonlinear electrical RLC networks, has been extended for modeling and control of a larger class of physical systems. In this brief we apply power-based integral control to a planar manipulator experimental setup.

  5. Business Models for NFC based mobile payments

    OpenAIRE

    Johannes Sang Un Chae; Jonas Hedman

    2015-01-01

    Purpose: The purpose of the paper is to develop a business model framework for NFC based mobile payment solutions consisting of four mutually interdepended components: the value service, value network, value architecture, and value finance. Design: Using a comparative case study method, the paper investigates Google Wallet and ISIS Mobile Wallet and their underlying business models. Findings: Google Wallet and ISIS Mobile Wallet are focusing on providing an enhanced customer experienc...

  6. Quality Model Based on Cots Quality Attributes

    OpenAIRE

    Jawad Alkhateeb; Khaled Musa

    2013-01-01

    The quality of software is essential to corporations in making their commercial software. Good or poor quality of software plays an important role in some systems, such as embedded systems, real-time systems, and control systems, that play an important aspect in human life. Software products or commercial off-the-shelf software are usually programmed based on a software quality model. In the software engineering field, each quality model contains a set of attributes or characteristics that drives i...

  7. A Multiagent Based Model for Tactical Planning

    Science.gov (United States)

    2002-10-01

    Pub. Co. 1985. [10] Castillo, J.M. Aproximación mediante procedimientos de Inteligencia Artificial al planeamiento táctico [An approach to tactical planning using Artificial Intelligence procedures]. Doctoral Thesis...been developed under the same conceptual model and using similar Artificial Intelligence tools. We use four different stimulus/response agents in...The conceptual model is built on the basis of agent theory. To implement the different agents we have used Artificial Intelligence techniques such

  8. An analytical model for nanoparticles concentration resulting from infusion into poroelastic brain tissue.

    Science.gov (United States)

    Pizzichelli, G; Di Michele, F; Sinibaldi, E

    2016-02-01

    We consider the infusion of a diluted suspension of nanoparticles (NPs) into poroelastic brain tissue, in view of relevant biomedical applications such as intratumoral thermotherapy. Indeed, the high impact of the related pathologies motivates the development of advanced therapeutic approaches, whose design also benefits from theoretical models. This study provides an analytical expression for the time-dependent NP concentration during infusion into poroelastic brain tissue, which also accounts for particle binding onto cells (by recalling relevant results from colloid filtration theory). Our model is computationally inexpensive and, compared to fully numerical approaches, makes it possible to explicitly elucidate the role of the physical aspects involved (tissue poroelasticity, infusion parameters, NP physico-chemical properties, and the NP-tissue interactions underlying binding). We also present illustrative results based on parameters taken from the literature, considering clinically relevant ranges for the infusion parameters. Moreover, we thoroughly assess the model's working assumptions and discuss its limitations. While making no claim of generality, our model can be used to support the development of more ambitious numerical approaches, towards the preliminary design of novel therapies based on NP infusion into brain tissue. Copyright © 2015 Elsevier Inc. All rights reserved.
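
    A drastically reduced version of the binding idea (borrowed from colloid filtration theory) treats attachment to cells as a first-order loss, so the mobile fraction of infused NPs decays exponentially with travel time. The attachment rate below is an invented placeholder, not a value from the paper.

```python
import math

def mobile_fraction(t_travel, k_att):
    # fraction of NPs still mobile after first-order attachment at rate k_att
    return math.exp(-k_att * t_travel)

k_att = 1.0e-3                 # 1/s, hypothetical attachment rate
for t in (600.0, 1800.0, 3600.0):
    print(t, round(mobile_fraction(t, k_att), 3))
```

    The paper's analytical solution additionally couples this loss term to the poroelastic flow field; the exponential decay is only the binding kernel.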

  9. Daily air quality forecast (gases and aerosols) over Switzerland. Modeling tool description and first results analysis.

    Science.gov (United States)

    Couach, O.; Kirchner, F.; Porchet, P.; Balin, I.; Parlange, M.; Balin, D.

    2009-04-01

    Map3D, the acronym for "Mesoscale Air Pollution 3D modelling", was developed at the EFLUM laboratory (EPFL) and received an INNOGRANTS award in Summer 2007 in order to move from a research phase to a professional product giving daily air quality forecasts. It is intended to give an objective basis for political decisions addressing the improvement of regional air quality. This tool is a permanent modelling system which provides daily forecasts of the local meteorology and air pollutant (gas and particle) concentrations. Map3D has been successfully developed and calculates each day at the EPFL site a three-day air quality forecast over Europe and the Alps at 50 km and 15 km resolution, respectively (see http://map3d.epfl.ch). The Map3D user interface is a web-based application with a PostgreSQL database. It is written in object-oriented PHP5 on an MVC (Model-View-Controller) architecture. Our prediction system has been operational since August 2008. A first validation of the calculations for Switzerland was performed for the period August 2008 - January 2009, comparing the model results for O3, NO2 and particulates with the results of the NABEL measurement stations. The subject of air pollution regimes (NOx/VOC) and the application of specific indicators to the forecast will also be addressed.

  10. Application of the IPCC model to a Brazilian landfill: First results

    International Nuclear Information System (INIS)

    Penteado, Roger; Cavalli, Massimo; Magnano, Enrico; Chiampo, Fulvia

    2012-01-01

    The Intergovernmental Panel on Climate Change provides a methodology to estimate methane emissions from Municipal Solid Waste landfills, based on a First Order Decay (FOD) model that assumes biodegradation kinetics depending on the type of waste. This model can be used to estimate both the national greenhouse gas emissions of the industrialized countries and the reductions of these emissions in developing ones when the Clean Development Mechanism, as defined by the Kyoto Protocol, is implemented. In this paper, the FOD model has been used to evaluate the biogas flow rates emitted by a Brazilian landfill, and the results have been compared with the extracted flow rates: these first results help to show the weight of the key parameters and to guide correct use of the model. - Highlights: ► Landfill biogas is a greenhouse gas and a fuel at the same time. ► In developing countries its collection can implement Kyoto Protocol mechanisms. ► Biogas collection and exploitation become part of energy policy. ► The project's economic balance is based on reliable estimates of generated quantities.
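    A minimal sketch of the first-order decay calculation underlying the IPCC waste model may help make the role of the key parameters concrete. The decay constant `k` and methane generation potential `L0` below are illustrative placeholders, not values from the paper:

```python
import math

def fod_methane(waste_by_year, k=0.05, L0=100.0):
    """First-order decay (FOD) methane generation, in the spirit of the
    IPCC waste model: waste deposited in year x contributes
    k * L0 * W_x * exp(-k * (t - x)) to the generation in year t.
    waste_by_year maps deposition year -> tonnes of waste; k (1/yr) and
    L0 (m3 CH4 per tonne) are illustrative placeholder values.
    Returns a dict: year -> methane generated that year (m3/yr)."""
    first, last = min(waste_by_year), max(waste_by_year)
    out = {}
    for t in range(first, last + 21):  # follow decay 20 years past last deposit
        out[t] = sum(k * L0 * w * math.exp(-k * (t - x))
                     for x, w in waste_by_year.items() if t >= x)
    return out

gen = fod_methane({2000: 1000.0, 2001: 1200.0})
```

    Summing the exponentially decaying contribution of each deposition year in this way is what lets the model attribute current emissions to historical waste streams, which is exactly where the sensitivity to k and L0 enters.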

  11. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in historical data can be identified and extracted automatically by computer. On this basis, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of statistics and dynamics to a certain extent.

  12. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  13. Model Based Control of Reefer Container Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær

    This thesis is concerned with the development of model based control for the Star Cool refrigerated container (reefer) with the objective of reducing energy consumption. This project has been carried out under the Danish Industrial PhD programme and has been financed by Lodam together with the Da...

  14. CEAI: CCM-based email authorship identification model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2013-11-01

    Full Text Available In this paper we present a model for email authorship identification (EAI) employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some more interesting and effective features for email authorship identification (e.g., the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included content features selected by Info Gain feature selection. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other baseline models. Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. (2010, 2013) [1,2]. The proposed model attains an accuracy of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors on the Enron dataset, while 89.5% accuracy has been achieved on a real email dataset constructed by the authors. The results on the Enron dataset have been achieved for a considerably larger number of authors than in the models proposed by Iqbal et al. [1,2].

  15. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model of visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with Viterbi's algorithm. Unfortunately, no closed-form solution can be found in two dimensions; the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach
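    For readers unfamiliar with the error-diffusion baseline discussed above, classic Floyd-Steinberg error diffusion (a standard textbook algorithm, not the authors' least-squares method) shows how the quantization error at each pixel is propagated to unprocessed neighbours:

```python
import numpy as np

def floyd_steinberg(img):
    """Classic Floyd-Steinberg error diffusion on a grayscale image in [0, 1].
    Shown only as background for the halftoning discussion; the paper's
    least-squares method replaces this with model-based optimization."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # binary quantization
            out[y, x] = new
            err = old - new
            # diffuse the quantization error to unprocessed neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# a flat 25%-gray patch halftones to a binary pattern with ~25% ink coverage
halftone = floyd_steinberg(np.full((32, 32), 0.25))
```

    Because the error is conserved (apart from losses at the image border), the average ink density of the binary output tracks the input gray level, which is the property the model-based methods above retain while also accounting for printer distortions.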

  16. Designing Network-based Business Model Ontology

    DEFF Research Database (Denmark)

    Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz

    2015-01-01

    Survival in a dynamic environment is not achieved without a map. Scanning and monitoring of the market show business models to be a fruitful tool. But scholars argue that old-fashioned business models are dead, as they do not include the effects of the internet and networks. This paper proposes an e-business model ontology from the network point of view and its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and the kinds of resources they own, as well as their connections and pre-conceptions of connections, such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network to show how it works.

  17. Knowledge-Based Environmental Context Modeling

    Science.gov (United States)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. 
Techniques from the world of computer science will be necessary to promote efficient

  18. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model which interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and which is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa, with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare them to available observations. We focus on stratospheric dynamics and chemistry relevant to describing the stratospheric ozone layer. The current model version, ECHAM4.L39(DLR)/CHEM, can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement over formerly applied model versions. However, apparent shortcomings in the Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  19. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  20. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e., the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data from a Terfenol-D actuator. It is clear that the proposed model is much closer to the experimental data than the modified PI model: the relative error is 12% with the modified PI model and less than 1% with the proposed model.
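    The singularity issue and its pseudoinverse remedy can be illustrated generically. The rank-deficient matrix below is invented for illustration and has no connection to the paper's operator data:

```python
import numpy as np

# Fit weights w of superposed operators to a measured output y when the
# operator-response matrix A is rank-deficient (singular), as can happen
# with unconstrained saturation operators. All values here are made up.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # linearly dependent row -> A is singular
              [1.0, 0.0, 1.0]])
y = np.array([6.0, 12.0, 2.0])

# np.linalg.inv(A) would raise LinAlgError here; the Moore-Penrose
# pseudoinverse returns the minimum-norm least-squares solution instead.
w = np.linalg.pinv(A) @ y
residual = np.linalg.norm(A @ w - y)
```

    Among all weight vectors that fit the data equally well, the pseudoinverse picks the one of smallest norm, which is a common way to regularize an ill-posed weight-identification step.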

  1. Implementing a continuum of care model for older people - results from a Swedish case study

    Directory of Open Access Journals (Sweden)

    Anna Duner

    2011-11-01

    Full Text Available Introduction: There is a need for integrated care and smooth collaboration between care-providing organisations and professions to create a continuum of care for frail older people. However, collaboration between organisations and professions is often problematic. The aim of this study was to examine the process of implementing a new continuum of care model in a complex organisational context, and illuminate some of the challenges involved. The introduced model strived to connect three organisations responsible for delivering health and social care to older people: the regional hospital, primary health care and municipal eldercare. Methods: The actions of the actors involved in the process of implementing the model were understood to be shaped by the actors' understanding, commitment and ability. This article is based on 44 qualitative interviews performed on four occasions with 26 key actors at three organisational levels within these three organisations. Results and conclusions: The results point to the importance of paying regard to the different cultures of the organisations when implementing a new model. The role of upper management emerged as very important. Furthermore, to be accepted, the model has to be experienced as effectively dealing with real problems in the everyday practice of the actors in the organisations, from the bottom to the top.

  3. A New Method for a Virtue-Based Responsible Conduct of Research Curriculum: Pilot Test Results.

    Science.gov (United States)

    Berling, Eric; McLeskey, Chet; O'Rourke, Michael; Pennock, Robert T

    2018-02-03

    Drawing on Pennock's theory of scientific virtues, we are developing an alternative curriculum for training scientists in the responsible conduct of research (RCR) that emphasizes internal values rather than externally imposed rules. This approach focuses on the virtuous characteristics of scientists that lead to responsible and exemplary behavior. We have been pilot-testing one element of such a virtue-based approach to RCR training by conducting dialogue sessions, modeled upon the approach developed by the Toolbox Dialogue Initiative, that each focus on a specific virtue, e.g., curiosity or objectivity. During these structured discussions, small groups of scientists explore the roles they think the focal virtue plays and should play in the practice of science. Preliminary results have shown that participants strongly prefer this virtue-based model over traditional methods of RCR training. While we cannot yet definitively say that participation in these RCR sessions contributes to responsible conduct, these pilot results are encouraging and warrant continued development of this virtue-based approach to RCR training.

  4. Turbulence modeling with fractional derivatives: Derivation from first principles and initial results

    Science.gov (United States)

    Epps, Brenden; Cushman-Roisin, Benoit

    2017-11-01

    Fluid turbulence is an outstanding unsolved problem in classical physics, despite 120+ years of sustained effort. Given this history, we assert that a new mathematical framework is needed to make a transformative breakthrough. This talk offers one such framework, based upon kinetic theory tied to the statistics of turbulent transport. Starting from the Boltzmann equation and Lévy α-stable distributions, we derive a turbulence model that expresses the turbulent stresses in the form of a fractional derivative, where the fractional order is tied to the transport behavior of the flow. Initial results are presented for the cases of Couette-Poiseuille flow and 2D boundary layers. Among other results, our model is able to reproduce the logarithmic Law of the Wall in shear turbulence.
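    As a purely numerical illustration of what a fractional derivative is, a generic Grünwald-Letnikov sketch (a standard discretization, not the authors' kinetic-theory derivation) interpolates between familiar integer orders:

```python
def gl_fractional_deriv(f_vals, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha fractional
    derivative at the last point of a uniformly sampled signal f_vals
    with grid spacing h. For alpha = 1 it reduces to a backward
    difference; for alpha = 0 it returns the function value itself."""
    acc, c = 0.0, 1.0  # c holds (-1)^k * binom(alpha, k), built recursively
    n = len(f_vals)
    for k in range(n):
        acc += c * f_vals[n - 1 - k]
        c *= -(alpha - k) / (k + 1)
    return acc / h ** alpha

# half-derivative of f(x) = x at the end of a short grid (illustrative)
d_half = gl_fractional_deriv([0.1 * i for i in range(10)], 0.5, 0.1)
```

    The weighted sum over the whole history of the signal is what makes fractional operators nonlocal, which is the feature tied above to the statistics of turbulent transport.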

  5. Ligand based pharmacophore modelling of anticancer histone ...

    African Journals Online (AJOL)

    USER

    2010-06-21

    ... are useful in predicting the biological activity of the compound or compound library by screening it ... with high affinity of binding toward a given protein ... High-throughput structure-based pharmacophore modelling as a basis for successful parallel virtual screening. J. Comp. Aided Mol. Design, 20.

  6. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  7. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  8. Modelling Web-Based Instructional Systems

    NARCIS (Netherlands)

    Retalis, Symeon; Avgeriou, Paris

    2002-01-01

    The size and complexity of modern instructional systems, which are based on the World Wide Web, bring about great intricacy in their crafting, as there is not enough knowledge or experience in this field. This imposes the use of new instructional design models in order to achieve risk-mitigation,

  9. Prototype-based models in machine learning

    NARCIS (Netherlands)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of

  10. Model-based auditing using REA

    NARCIS (Netherlands)

    Weigand, H.; Elsas, P.

    2012-01-01

    The recent financial crisis has renewed interest in the value of the owner-ordered auditing tradition that starts from society's long-term interest rather than management interest. This tradition uses a model-based auditing approach in which control requirements are derived in a principled way. A

  11. Stochastic Wake Modelling Based on POD Analysis

    Directory of Open Access Journals (Sweden)

    David Bastine

    2018-03-01

    Full Text Available In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES) model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD), three different stochastic models for the weighting coefficients of the POD modes are deduced, resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding a homogeneous turbulent field to our model. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD) calculations or elaborate experimental investigations. These numerically efficient models provide the added value of enabling long-term studies. Depending on the aspects of interest, different minimal models may be obtained.
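    The POD step itself can be sketched with a plain SVD. The synthetic two-structure "flow field" below is invented purely to show how spatial modes and their time-dependent weighting coefficients are separated:

```python
import numpy as np

# Minimal POD sketch: snapshots (columns) of a field are decomposed via SVD;
# the leading modes and their weighting coefficients could then be modelled
# stochastically, as in the wake study above. The data here is synthetic.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)          # 200 snapshots in time
x = np.linspace(0, 1, 50)            # 50 spatial points
# two coherent structures plus weak noise
snapshots = (np.outer(np.sin(np.pi * x), np.sin(t))
             + 0.3 * np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))
             + 0.01 * rng.standard_normal((50, 200)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :2]                      # leading spatial POD modes
coeffs = np.diag(s[:2]) @ Vt[:2, :]   # their weighting coefficients over time
energy = (s[:2] ** 2).sum() / (s ** 2).sum()  # variance captured by 2 modes
```

    Replacing `coeffs` with fitted stochastic processes, as the paper does for its six leading modes, is what turns this truncation into a numerically cheap surrogate wake model.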

  12. Whole body acid-base modeling revisited.

    Science.gov (United States)

    Ring, Troels; Nielsen, Søren

    2017-04-01

    The textbook account of whole body acid-base balance in terms of endogenous acid production, renal net acid excretion, and gastrointestinal alkali absorption, which is the only comprehensive model around, has never been applied in clinical practice or been formally validated. To improve understanding of acid-base modeling, we reformulated this conventional model as an expression solely in terms of urine chemistry. Renal net acid excretion and endogenous acid production were already formulated in terms of urine chemistry, and from the literature we could also express gastrointestinal alkali absorption in terms of urinary excretions. With a few assumptions, this expression of net acid balance turned out to be arithmetically identical to minus the urine charge, whereby, as acidosis develops, urine is predicted to acquire a net negative charge. The literature already mentions unexplained negative urine charges, so we scrutinized a series of seminal papers and confirmed empirically the theoretical prediction that observed urine charge did become negative as acidosis developed. Hence, we conclude that the conventional model is problematic, since it predicts what is physiologically impossible. We therefore need a new model of whole body acid-base balance without impossible implications. Furthermore, new experimental studies are needed to account for the charge imbalance in urine during the development of acidosis. Copyright © 2017 the American Physiological Society.

  13. Solar Deployment System (SolarDS) Model: Documentation and Sample Results

    Energy Technology Data Exchange (ETDEWEB)

    Denholm, P.; Drury, E.; Margolis, R.

    2009-09-01

    The Solar Deployment System (SolarDS) model is a bottom-up market penetration model that simulates the potential adoption of photovoltaics (PV) on residential and commercial rooftops in the continental United States through 2030. NREL developed SolarDS to examine the market competitiveness of PV based on regional solar resources, capital costs, electricity prices, utility rate structures, and federal and local incentives. The model uses the projected financial performance of PV systems to simulate PV adoption by building type and region, then aggregates adoption to state and national levels. The main components of SolarDS are a PV performance simulator, a PV annual revenue calculator, a PV financial performance calculator, a PV market share calculator, and a regional aggregator. The model simulates installed PV capacity for a range of user-specified input parameters; PV market penetration levels from 15 to 193 GW by 2030 were obtained in preliminary model runs. SolarDS results are primarily driven by three model assumptions: (1) future PV cost reductions, (2) the maximum PV market share assumed for systems with a given financial performance, and (3) PV financing parameters and policy-driven assumptions, such as a possible future cost of carbon emissions.

  14. Recommendation based on trust diffusion model.

    Science.gov (United States)

    Yuan, Jinfeng; Li, Li

    2014-01-01

    Recommender systems are emerging as a powerful and popular tool for delivering online information relevant to a given user. Traditional recommendation systems suffer from the cold-start problem and the data-sparsity problem. Many methods have been proposed to solve these problems, but few achieve satisfactory efficiency. In this paper, we present a method which combines the trust diffusion (DiffTrust) algorithm and probabilistic matrix factorization (PMF). DiffTrust is first used to study the possible diffusions of trust between various users. It is able to make use of the implicit relationships of the trust network, thus alleviating the data-sparsity problem. Probabilistic matrix factorization (PMF) is then employed to combine users' tastes with their trusted friends' interests. We evaluate the algorithm on the Flixster, Moviedata, and Epinions datasets, respectively. The experimental results show that recommendation based on our proposed DiffTrust + PMF model achieves high performance in terms of root mean square error (RMSE), Recall, and F-measure.
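    A bare-bones sketch of the PMF half of the method may clarify what is being combined: latent user and item factors fitted by stochastic gradient descent on a regularized squared error (the MAP view of PMF). The ratings, dimensions and hyperparameters below are toy values, and the trust-diffusion part of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy observed ratings: (user, item) -> rating
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (2, 1): 1.0, (2, 2): 4.0}
n_users, n_items, k = 3, 3, 2          # k latent factors per user/item
U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))
lr, lam = 0.05, 0.02                    # learning rate, regularization

for _ in range(500):                    # SGD over the observed entries
    for (u, i), r in ratings.items():
        e = r - U[u] @ V[i]             # prediction error for this rating
        U[u] += lr * (e * V[i] - lam * U[u])
        V[i] += lr * (e * U[u] - lam * V[i])

rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2
                        for (u, i), r in ratings.items()]))
```

    In the paper's scheme, trust links inferred by DiffTrust additionally constrain a user's factors toward those of trusted friends, which helps exactly where the ratings matrix above is sparsest.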

  15. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements to keep track of thousands to millions of agents, a lack of methods and strategies for model validation, and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics, and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications.
A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in
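    A toy example of the "bottom-up" style described above: agents following one simple rule (cut demand when the shared resource grows scarce) produce system-level resource dynamics. All rules and numbers here are invented purely for illustration, not taken from any specific geoscience study:

```python
import random

random.seed(42)  # reproducible toy run

class Agent:
    def __init__(self):
        # each agent starts with its own demand for the shared resource
        self.demand = random.uniform(0.5, 1.5)

    def step(self, scarcity):
        # simple rule: agents cut demand by 10% when scarcity is high
        if scarcity > 0.5:
            self.demand *= 0.9

def simulate(n_agents=100, resource=1000.0, recharge=80.0, steps=50):
    agents = [Agent() for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        total = sum(a.demand for a in agents)
        resource = max(resource - total + recharge, 0.0)
        scarcity = 1.0 - min(resource / 1000.0, 1.0)
        for a in agents:
            a.step(scarcity)
        history.append(resource)
    return history

levels = simulate()
```

    Even this minimal setup shows the characteristic ABM pattern: no equation for the aggregate resource level is written down anywhere, yet drawdown-and-recovery dynamics emerge from the individual rules.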

  16. Artificial Neural Network Based Model of Photovoltaic Cell

    Directory of Open Access Journals (Sweden)

    Messaouda Azzouzi

    2017-03-01

    Full Text Available This work concerns the modeling of a photovoltaic system and the prediction of the sensitivity of electrical parameters (current, power) of six types of photovoltaic cells, based on the voltage applied between terminals, using one of the best-known artificial intelligence techniques, Artificial Neural Networks. The modeling and prediction results are presented as a function of the number of iterations, using different learning algorithms to obtain the best results.

  17. Agent Based Modeling as an Educational Tool

    Science.gov (United States)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. [Figures: simulation using California GIS data; simulation of high school student lunch popularity using an aerial photograph on top of a terrain value map.]

  18. Mesoscopic model of actin-based propulsion.

    Directory of Open Access Journals (Sweden)

    Jie Zhu

    Full Text Available Two theoretical models dominate current understanding of actin-based propulsion: the microscopic polymerization ratchet model predicts that growing and writhing actin filaments generate forces and movements, while the macroscopic elastic propulsion model suggests that deformation and stress of the growing actin gel are responsible for the propulsion. We examine both experimentally and computationally the 2D movement of ellipsoidal beads propelled by actin tails and show that neither of the two models can explain the observed bistability of the orientation of the beads. To explain the data, we develop a 2D hybrid mesoscopic model by reconciling these two models, such that individual actin filaments undergoing nucleation, elongation, attachment, detachment and capping are embedded into the boundary of a node-spring viscoelastic network representing the macroscopic actin gel. Stochastic simulations of this 'in silico' actin network show that the combined effects of the macroscopic elastic deformation and microscopic ratchets can explain the observed bistable orientation of the actin-propelled ellipsoidal beads. To test the theory further, we analyze the observed distribution of trajectory curvatures and show that the hybrid model's predictions fit the data. Finally, we demonstrate that the model can explain both concave-up and concave-down force-velocity relations for growing actin networks, depending on the characteristic time scale and network recoil. To summarize, we propose that both microscopic polymerization ratchets and macroscopic stresses of the deformable actin network are responsible for force and movement generation.
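The microscopic polymerization ratchet mentioned above has a well-known textbook force-velocity form, v(F) = δ·(k_on·C·exp(−Fδ/k_BT) − k_off). The snippet below evaluates it with typical actin parameters (illustrative values, not the parameters fitted in this paper):

```python
import math

# Brownian ratchet force-velocity relation for a single growing filament
# (textbook form, not the paper's hybrid mesoscopic model).
delta = 2.7e-9       # length added per monomer, m
k_on = 11.6e6        # monomer on-rate constant, 1/(M*s)
C = 10e-6            # free monomer concentration, M
k_off = 1.4          # off-rate, 1/s
kT = 4.1e-21         # thermal energy at room temperature, J

def velocity(F):
    """Tip growth velocity (m/s) under compressive load F (N)."""
    return delta * (k_on * C * math.exp(-F * delta / kT) - k_off)

# Stall force: load at which the Boltzmann-suppressed on-rate balances
# the off-rate, so net growth stops.
F_stall = kT / delta * math.log(k_on * C / k_off)
for F in (0.0, 1e-12, F_stall):
    print(f"F = {F:.2e} N  ->  v = {velocity(F):.3e} m/s")
```

With these numbers the unloaded velocity is of order hundreds of nm/s and the stall force a few pN, which is the force scale the macroscopic gel stresses must compete with in the hybrid picture.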

  19. Physiologically Based Pharmacokinetic Modeling of Therapeutic Proteins.

    Science.gov (United States)

    Wong, Harvey; Chow, Timothy W

    2017-09-01

    Biologics or therapeutic proteins are becoming increasingly important as treatments for disease. The most common class of biologics are monoclonal antibodies (mAbs). Recently, there has been an increase in the use of physiologically based pharmacokinetic (PBPK) modeling in drug development in the pharmaceutical industry. We review PBPK models for therapeutic proteins with an emphasis on mAbs. Due to their size and similarity to endogenous antibodies, there are distinct differences between PBPK models for small molecules and mAbs. The high-level organization of a typical mAb PBPK model consists of a whole-body PBPK model with organ compartments interconnected by both blood and lymph flows. The whole-body PBPK model is coupled with tissue-level submodels used to describe key mechanisms governing mAb disposition, including tissue efflux via the lymphatic system, elimination by catabolism, protection from catabolism through binding to the neonatal Fc receptor (FcRn), and nonlinear binding to specific pharmacological targets of interest. The use of PBPK modeling in the development of therapeutic proteins is still in its infancy. Further application of PBPK modeling for therapeutic proteins will help to define its developing role in drug discovery and development. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
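The compartment structure described, with convective uptake into tissue, lymphatic return, and catabolic elimination, can be illustrated by a deliberately minimal one-tissue sketch (parameter values are assumed for illustration, and FcRn protection and target binding are omitted):

```python
# Minimal plasma + one-tissue PBPK-style mass balance for a mAb.
# All parameter values are illustrative, not taken from the review.
V_p, V_t = 3.0, 0.5           # plasma / tissue interstitial volume, L
L = 0.02                      # lymph flow, L/h
sigma_v, sigma_L = 0.95, 0.2  # vascular / lymphatic reflection coefficients
k_cat = 0.05                  # first-order catabolism in tissue, 1/h

def simulate(dose=1.0, t_end=240.0, dt=0.01):
    """Forward-Euler integration of the two-compartment mass balance."""
    C_p, C_t = dose / V_p, 0.0
    t = 0.0
    while t < t_end:
        filt = L * (1.0 - sigma_v) * C_p   # plasma -> tissue (convection)
        lymph = L * (1.0 - sigma_L) * C_t  # tissue -> plasma (lymph return)
        C_p += dt * (lymph - filt) / V_p
        C_t += dt * (filt - lymph) / V_t - dt * k_cat * C_t
        t += dt
    return C_p, C_t

C_p, C_t = simulate()
print(f"after 240 h: C_plasma = {C_p:.4f}, C_tissue = {C_t:.6f} (dose/L units)")
```

Even this toy version reproduces the qualitative mAb picture the abstract sketches: uptake into tissue is slow because the vascular reflection coefficient is high, and total drug mass declines only through tissue catabolism. A full mAb PBPK model repeats this balance per organ and adds FcRn and target-binding submodels.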

  20. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles toward developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
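The hidden-state Markov semantics mentioned above makes belief-state update the core operation of such an executive: from a probabilistic model of component modes, infer which mode the hardware is in given noisy observations. A generic discrete Bayes-filter sketch (not the PHCA encoding itself; the component, probabilities, and observations are invented for illustration):

```python
# Belief-state update over hidden component modes of a hypothetical valve.
modes = ["nominal", "failed"]

# P(next mode | current mode): small spontaneous failure rate, no self-repair.
trans = {"nominal": {"nominal": 0.99, "failed": 0.01},
         "failed":  {"nominal": 0.00, "failed": 1.00}}

# P(observation | mode): a "no-flow" reading is likely only when failed.
obs_model = {"nominal": {"flow": 0.95, "no-flow": 0.05},
             "failed":  {"flow": 0.10, "no-flow": 0.90}}

def update(belief, observation):
    """One predict-then-correct step of the hidden-state Markov filter."""
    predicted = {m: sum(belief[s] * trans[s][m] for s in modes) for m in modes}
    unnorm = {m: predicted[m] * obs_model[m][observation] for m in modes}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

belief = {"nominal": 0.99, "failed": 0.01}
for obs in ["flow", "no-flow", "no-flow"]:
    belief = update(belief, obs)
    print(obs, belief)
```

After two consecutive "no-flow" readings the belief mass shifts to the failed mode, which is the information a model-based executive then uses for control sequence generation. PHCA make exactly this computation tractable for hierarchical, concurrent models rather than a single flat component.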