WorldWideScience

Sample records for model results based

  1. Dynamic model of the electrorheological fluid based on measurement results

    International Nuclear Information System (INIS)

    Krivenkov, K; Ulrich, S; Bruns, R

    2013-01-01

    To develop modern applications for vibration decoupling based on electrorheological fluids (ERFs) with suitable control strategies, an appropriate mathematical model of the ERF is necessary. The devices mostly used have annular-shaped electrorheological valves. This requires the use of flow channels to measure the static and dynamic properties of electrorheological fluids under similar flow conditions. Particularly for the identification of the dynamic behavior of the fluids, the influences of the non-electrorheological properties on the overall system must be taken into account. In this contribution, three types of parameters with several nonlinear dependencies are considered for mapping the static and dynamic properties of the ERF: electrorheological, hydraulic and electrical. The mathematical model introduced can precisely reproduce the static and dynamic behavior of the electrorheological fluid and can be used for the future design of real systems for vibration decoupling or other systems with high dynamic requirements.

  2. Stress Resultant Based Elasto-Viscoplastic Thick Shell Model

    Directory of Open Access Journals (Sweden)

    Pawel Woelke

    2012-01-01

    The current paper presents enhancements introduced to the elasto-viscoplastic shell formulation, which serves as a theoretical base for the finite element code EPSA (Elasto-Plastic Shell Analysis) [1–3]. The shell equations used in EPSA are modified to account for transverse shear deformation, which is important in the analysis of thick plates and shells, as well as composite laminates. Transverse shear forces calculated from transverse shear strains are introduced into a rate-dependent yield function, which is similar to Iliushin's yield surface expressed in terms of stress resultants and stress couples [12]. The hardening rule defined by Bieniek and Funaro [4], which allows for representation of the Bauschinger effect on a moment-curvature plane, was previously adopted in EPSA and is used here in the same form. Viscoplastic strain rates are calculated taking into account the transverse shears. Only non-layered shells are considered in this work.

  3. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    Science.gov (United States)

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions. PMID:27035575
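
    As a reading aid, the single-diode relation behind the three quoted parameters (IL, Io, n) can be sketched in a few lines. The simplified form below neglects series and shunt resistance, and all numeric values are assumptions for illustration, not the paper's data or code:

    ```python
    import numpy as np

    Q = 1.602176634e-19   # elementary charge [C]
    K = 1.380649e-23      # Boltzmann constant [J/K]

    def pv_current(v, il=8.0, io=1e-9, n=1.3, t_cell=298.15, n_series=60):
        """Module current for terminal voltage v; series/shunt resistance neglected."""
        vt = n_series * K * t_cell / Q          # thermal voltage of the cell string
        return il - io * (np.exp(v / (n * vt)) - 1.0)

    v = np.linspace(0.0, 46.0, 400)
    i = np.clip(pv_current(v), 0.0, None)        # keep the physically meaningful branch
    p = v * i
    print(f"max power approx {p.max():.1f} W at {v[p.argmax()]:.1f} V")
    ```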

  4. An outcome-based learning model to identify emerging threats : experimental and simulation results.

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Moyano, I. J.; Conrad, S. H.; Andersen, D. F.; Decision and Information Sciences; SNL; Univ. at Albany

    2007-01-01

    The authors present experimental and simulation results of an outcome-based learning model as it applies to the identification of emerging threats. This model integrates judgment, decision making, and learning theories to provide a unified framework for the behavioral study of emerging threats.

  5. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  6. A Tower Model for Lightning Overvoltage Studies Based on the Result of an FDTD Simulation

    Science.gov (United States)

    Noda, Taku

    This paper describes a method for deriving a transmission tower model for EMTP lightning overvoltage studies from a numerical electromagnetic simulation result obtained by the FDTD (Finite Difference Time Domain) method. The FDTD simulation carried out in this paper takes into account the following items, which have been ignored or over-simplified in previously-presented simulations: (i) resistivity of the ground soil; (ii) arms, major slant elements, and foundations of the tower; (iii) development speed of the lightning return stroke. For validation purposes a pulse test of a 500-kV transmission tower is simulated, and a comparison with the measured result shows that the present FDTD simulation gives a sufficiently accurate result. Using this validated FDTD-based simulation method, the insulator-string voltages of a tower for a lightning stroke are calculated, and based on the simulation result the parameter values of the proposed tower model for EMTP studies are determined in a systematic way. Since previously-presented models involve a trial-and-error process in the parameter determination, the proposed model is more general in this regard. As an illustrative example, the 500-kV transmission tower mentioned above is modeled, and it is shown that the derived model closely reproduces the FDTD simulation result.

  7. Global Monthly CO2 Flux Inversion Based on Results of Terrestrial Ecosystem Modeling

    Science.gov (United States)

    Deng, F.; Chen, J.; Peters, W.; Krol, M.

    2008-12-01

    Most of our understanding of the sources and sinks of atmospheric CO2 has come from inverse studies of atmospheric CO2 concentration measurements. However, the number of currently available observation stations and our ability to simulate the diurnal planetary boundary layer evolution over continental regions essentially limit the number of regions that can be reliably inverted globally, especially over continental areas. In order to overcome these restrictions, a nested inverse modeling system was developed based on the Bayesian principle for estimating carbon fluxes of 30 regions in North America and 20 regions for the rest of the globe. Inverse modeling was conducted in monthly steps using CO2 concentration measurements of 5 years (2000-2005) with the following two models: (a) an atmospheric transport model (TM5) is used to generate the transport matrix, where the diurnal variation of atmospheric CO2 concentration is considered to enhance the use of the afternoon-hour average CO2 concentration measurements over the continental sites; (b) a process-based terrestrial ecosystem model (BEPS) is used to produce hourly carbon fluxes as the background of our inversion, which could minimize the limitation due to our inability to solve the inverse problem at high resolution. We will present our recent results achieved through a combination of the bottom-up modeling with BEPS and the top-down modeling based on TM5 driven by offline meteorological fields generated by the European Centre for Medium-Range Weather Forecasts (ECMWF).
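
    The Bayesian synthesis step described here has a compact closed form. The toy sketch below assumes Gaussian errors with invented dimensions and covariances; the actual system uses TM5-derived transport and BEPS prior fluxes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions, n_obs = 50, 200                   # 30 N. American + 20 global regions

    H = rng.normal(size=(n_obs, n_regions))      # transport matrix (from TM5 in the paper)
    x_prior = rng.normal(size=n_regions)         # prior monthly fluxes (from BEPS)
    B = np.eye(n_regions) * 0.5                  # prior flux error covariance (assumed)
    R = np.eye(n_obs) * 0.25                     # observation error covariance (assumed)

    y = H @ (x_prior + rng.normal(scale=0.3, size=n_regions))  # synthetic observations

    # Posterior mean and covariance of the Bayesian update
    gain = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + gain @ (y - H @ x_prior)
    A_post = B - gain @ H @ B
    print(x_post[:5], np.sqrt(np.diag(A_post))[:5])
    ```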

  8. Equations of Nonlinear Soil Damage Based on Results of Testing of Laterally Loaded Pile Models

    Directory of Open Access Journals (Sweden)

    Buslov Anatoliy Semenovich

    2012-12-01

    Results of testing of laterally loaded pile models demonstrate that the "load-to-displacement" dependency has a nonlinear character. This dependency may be regarded as linear only within the interval of (0.2…0.3)Pul. Tests were performed in a box equipped with displacement indicators and power equipment. The pile model length was 200 mm, and its diameter was 40 mm. A hollow steel tube was used as the material for the tested piles. Based on the analysis of the testing results, a pattern of the nonlinear damage of the base was formulated. According to the pattern, an increase of the load intensity (damage factor m=Ph/Pul) involves an increase in the damage of the continuity, or the rebuff ability, of the soil foundation.

  9. Exploring the uncertainties of early detection results: model-based interpretation of the Mayo Lung Project

    Directory of Open Access Journals (Sweden)

    Berman Barbara

    2011-03-01

    Abstract. Background: The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival since the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Methods: Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results: Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions: The hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. The model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.

  10. Dynamic analysis of ITER tokamak. Based on results of vibration test using scaled model

    International Nuclear Information System (INIS)

    Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka

    2005-01-01

    The vibration experiments of the support structures with flexible plates for the ITER major components, such as the toroidal field coil (TF coil) and vacuum vessel (VV), were performed using small-sized flexible plates, aiming to obtain basic mechanical characteristics such as the dependence of the stiffness on the loading angle. The experimental results were compared with the analytical ones in order to establish an adequate analytical model for the ITER support structure with flexible plates. As a result, the bolt connection of the flexible plates on the base plate strongly affected the stiffness of the flexible plates. After studies of modeling the connection of the bolts, it was found that analytical results modeling the bolts with finite stiffness only in the axial direction and infinite stiffness in the other directions agree well with the experimental ones. Based on this, numerical analysis of the actual support structures of the ITER VV and TF coil was performed. The support structure composed of flexible plates and connection bolts was modeled as a spring composed of only two spring elements simulating the in-plane and out-of-plane stiffness of the support structure with flexible plates, including the effect of connection bolts. The stiffness of both spring models for the VV and TF coil agrees well with that of shell models simulating the actual structures, such as flexible plates and connection bolts, based on the experimental results. It is therefore found that the spring model with only two values of stiffness makes it possible to simplify the complicated support structure with flexible plates for the dynamic analysis of the VV and TF coil. Using the proposed spring model, dynamic analyses of the VV and TF coil of ITER were performed to estimate their integrity under the design earthquake. As a result, it is found that the maximum relative displacement of 8.6 mm between the VV and TF coil is much less than 100 mm, so that the integrity of the VV and TF coil of the ITER is ensured.

  11. Financial analysis and forecasting of the results of small businesses performance based on regression model

    Directory of Open Access Journals (Sweden)

    Svetlana O. Musienko

    2017-03-01

    Objective: to develop an economic-mathematical model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies. Methods: using comparative analysis, the article studies the existing approaches to the construction of company management models. Applying regression analysis and the least squares method, which is widely used for financial management of enterprises in Russia and abroad, the author builds a model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies, which can be used in the financial analysis and prediction of small enterprises' performance. Results: the article states the need to identify factors affecting financial management efficiency. The author analyzed scientific research and revealed the lack of comprehensive studies on the methodology for assessing small enterprises' management, while the methods used for large companies are not always suitable for the task. The systematized approaches of various authors to the formation of regression models describe the influence of certain factors on company activity. It is revealed that the resulting indicators in the studies were revenue, profit, or the company's relative profitability. The main drawback of most models is the mathematical, not economic, approach to the definition of the dependent and independent variables. Based on the analysis, it was determined that the most correct model is that of the dependence between revenue and total assets of the company using the decimal logarithm. The model was built using data on the activities of 507 small businesses operating in three spheres of economic activity. Using the presented model, it was proved that there is a direct dependence between the sales proceeds and the main items of the asset balance, as well as differences in the degree of this effect depending on the economic activity of small enterprises.
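
    The reported model form, a least-squares fit between the decimal logarithms of revenue and total assets, can be illustrated with synthetic numbers (the study itself fit 507 real small enterprises; the coefficients below are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    assets = 10 ** rng.uniform(5, 8, size=507)          # total assets (synthetic)
    revenue = 10 ** (0.4 + 0.9 * np.log10(assets)
                     + rng.normal(0, 0.1, 507))         # synthetic revenue surface

    # Fit log10(revenue) = a + b * log10(assets) by ordinary least squares
    b, a = np.polyfit(np.log10(assets), np.log10(revenue), 1)
    print(f"log10(revenue) = {a:.2f} + {b:.2f} * log10(assets)")
    ```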

  12. Deep Learning Based Solar Flare Forecasting Model. I. Results for Line-of-sight Magnetograms

    Science.gov (United States)

    Huang, Xin; Wang, Huaning; Xu, Long; Liu, Jinfu; Li, Rong; Dai, Xinghua

    2018-03-01

    Solar flares originate from the release of the energy stored in the magnetic field of solar active regions; the triggering mechanism for these flares, however, remains unknown. For this reason, the conventional solar flare forecast is essentially based on the statistical relationship between solar flares and measures extracted from observational data. In the current work, the deep learning method is applied to set up the solar flare forecasting model, in which forecasting patterns can be learned from line-of-sight magnetograms of solar active regions. In order to obtain a large amount of observational data to train the forecasting model and test its performance, a data set is created from line-of-sight magnetograms of active regions observed by SOHO/MDI and SDO/HMI from 1996 April to 2015 October and corresponding soft X-ray solar flares observed by GOES. The testing results of the forecasting model indicate that (1) the forecasting patterns can be automatically reached with the MDI data and they can also be applied to the HMI data; furthermore, these forecasting patterns are robust to the noise in the observational data; (2) the performance of the deep learning forecasting model is not sensitive to the given forecasting periods (6, 12, 24, or 48 hr); (3) the performance of the proposed forecasting model is comparable to that of the state-of-the-art flare forecasting models, even though the total magnetogram record continuously spans 19.5 years. Case analyses demonstrate that the deep learning based solar flare forecasting model pays attention to areas with the magnetic polarity-inversion line or the strong magnetic field in magnetograms of active regions.
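
    As a hedged illustration of this class of model, the sketch below builds a minimal convolutional classifier over magnetogram-like arrays. The architecture, input size and binary flare/no-flare labels are assumptions for illustration and not the authors' network:

    ```python
    import torch
    import torch.nn as nn

    class FlareNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(32 * 4 * 4, 2)   # flare / no-flare in period

        def forward(self, x):                             # x: (batch, 1, H, W) magnetogram
            return self.classifier(self.features(x).flatten(1))

    model = FlareNet()
    logits = model(torch.randn(8, 1, 128, 128))           # batch of 8 synthetic maps
    print(logits.shape)                                    # torch.Size([8, 2])
    ```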

  13. Fugacity based modeling for pollutant fate and transport during floods. Preliminary results

    Science.gov (United States)

    Deda, M.; Fiorini, M.; Massabo, M.; Rudari, R.

    2010-09-01

    One of the concerns that arises during floods is whether wide-spreading chemical contamination is associated with the flooding. Many potential sources of toxic releases during floods exist in cities or rural areas; hydrocarbon fuel storage systems, distribution facilities, commercial chemical storage, and sewerage systems are only a few examples. When inundated, homes and vehicles can also be sources of toxic contaminants such as gasoline/diesel, detergents and sewage. Hazardous substances released into the environment are transported and dispersed in complex environmental systems that include air, plants, soil, water and sediment. Effective environmental models demand holistic modelling of the transport and transformation of the materials in the multimedia arena. Among these models, fugacity-based models are distribution-based models incorporating all environmental compartments and are based on steady-state fluxes of pollutants across compartment interfaces (Mackay, "Multimedia Environmental Models", 2001). They satisfy the primary objective of environmental chemistry, which is to forecast the concentrations of pollutants in the environment with respect to space and time variables. Multimedia fugacity-based models have been used to assess contaminant distribution at very different spatial and temporal scales. The applications range from contaminant leaching to groundwater, runoff to surface water, partitioning in lakes and streams, and distribution at regional and even global scale. We developed a two-dimensional fugacity-based model for the fate and transport of chemicals during floods. The model has three modules: the first module estimates toxin emission rates during floods; the second module is the hydrodynamic model that simulates the water flood; and the third module simulates the dynamic distribution of chemicals in the environmental compartments.
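
    For readers unfamiliar with the fugacity approach cited above (Mackay, 2001), a Level I equilibrium calculation takes only a few lines: all compartments share one fugacity f, and mass partitions by volume and Z-value. The paper's model is dynamic and two-dimensional; this static sketch, with invented volumes and Z-values, only illustrates the Z-value bookkeeping:

    ```python
    M = 1000.0                                  # total moles of chemical released

    compartments = {                            # name: (volume m3, Z mol/(m3*Pa))
        "air":      (1e9, 4e-4),
        "water":    (1e6, 1e-2),
        "soil":     (1e4, 1.0),
        "sediment": (1e3, 2.0),
    }

    f = M / sum(v * z for v, z in compartments.values())   # common fugacity [Pa]
    for name, (v, z) in compartments.items():
        c = z * f                                          # concentration [mol/m3]
        print(f"{name:8s} C = {c:.3e} mol/m3, amount = {c * v:.1f} mol")
    ```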

  14. Encouraging Sustainable Transport Choices in American Households: Results from an Empirically Grounded Agent-Based Model

    Directory of Open Access Journals (Sweden)

    Davide Natalini

    2013-12-01

    The transport sector needs to go through an extended process of decarbonisation to counter the threat of climate change. Unfortunately, the International Energy Agency forecasts an enormous growth in the number of cars and greenhouse gas emissions by 2050. Two issues can thus be identified: (1) the need for a new methodology that could evaluate policy performance ex-ante, and (2) the need for more effective policies. To help address these issues, we developed an Agent-Based Model called Mobility USA aimed at: (1) testing whether this could be an effective approach in analysing ex-ante policy implementation in the transport sector; and (2) evaluating the effects of alternative policy scenarios on commuting behaviours in the USA. In particular, we tested the effects of two sets of policies, namely market-based and preference-change ones. The model results suggest that this type of agent-based approach will provide a useful tool for testing policy interventions and their effectiveness.

  15. An Outcrop-based Detailed Geological Model to Test Automated Interpretation of Seismic Inversion Results

    NARCIS (Netherlands)

    Feng, R.; Sharma, S.; Luthi, S.M.; Gisolf, A.

    2015-01-01

    Previously, Tetyukhina et al. (2014) developed a geological and petrophysical model based on the Book Cliffs outcrops that contained eight lithotypes. For reservoir modelling purposes, this model is judged to be too coarse because the same lithotype contains both reservoir and non-reservoir...

  16. Effects of naloxone distribution to likely bystanders: Results of an agent-based model.

    Science.gov (United States)

    Keane, Christopher; Egan, James E; Hawk, Mary

    2018-03-07

    Opioid overdose deaths in the US rose dramatically in the past 16 years, creating an urgent national health crisis with no signs of immediate relief. In 2017, the President of the US officially declared the opioid epidemic to be a national emergency and called for additional resources to respond to the crisis. Distributing naloxone to community laypersons and people at high risk for opioid overdose can prevent overdose death, but optimal distribution methods have not yet been pinpointed. We conducted a sequential exploratory mixed-methods design using qualitative data to inform an agent-based model to improve understanding of effective community-based naloxone distribution to laypersons to reverse opioid overdose. The individuals in the model were endowed with cognitive and behavioral variables and accessed naloxone via community sites such as pharmacies, hospitals, and urgent-care centers. We compared overdose deaths over a simulated 6-month period while varying the number of distribution sites (0, 1, and 10) and the number of kits given to individuals per visit (1 versus 10). Specifically, we ran thirty simulations for each of thirteen distribution models and report average overdose deaths for each. The baseline comparator was no naloxone distribution. Our simulations explored the effects of distribution through syringe exchange sites with and without secondary distribution, which refers to the distribution of naloxone kits by laypersons within their social networks and enables ten additional laypersons to administer naloxone to reverse opioid overdose. Our baseline model with no naloxone distribution predicted there would be 167.9 deaths in a six-month period. A single distribution site, even with 10 kits picked up per visit, decreased overdose deaths by only 8.3% relative to baseline. However, adding secondary distribution through social networks to a single site resulted in 42.5% fewer overdose deaths relative to baseline. That is slightly higher than the 39...
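
    A drastically simplified Monte Carlo analogue can convey the mechanism being varied here. All rates, population sizes and the fatality probability below are invented and much cruder than the paper's agent-based model; "coverage" loosely stands in for sites plus secondary network distribution:

    ```python
    import random

    def simulate(n_people=10000, p_overdose=0.02, witness_rate=0.5,
                 kit_coverage=0.0, months=6, seed=42):
        """Expected overdose deaths when `kit_coverage` of bystanders carry naloxone."""
        rng = random.Random(seed)
        deaths = 0
        for _ in range(months):
            for _ in range(n_people):
                if rng.random() < p_overdose / months:           # an overdose event
                    witnessed = rng.random() < witness_rate
                    reversed_od = witnessed and rng.random() < kit_coverage
                    if not reversed_od and rng.random() < 0.08:  # assumed fatality rate
                        deaths += 1
        return deaths

    for cov in (0.0, 0.1, 0.5):
        print(f"coverage {cov:.0%}: {simulate(kit_coverage=cov)} deaths")
    ```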

  17. Channel Verification Results for the SCME models in a Multi-Probe Based MIMO OTA Setup

    DEFF Research Database (Denmark)

    Fan, Wei; Carreño, Xavier; S. Ashta, Jagjit

    2013-01-01

    The focus is on comparing results from various proposed methods. Channel model verification is necessary to ensure that the target channel models are correctly implemented inside the test area. This paper shows that all the key parameters of the SCME models, i.e., power delay profile, temporal...

  18. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    In this paper, a proposed car-following driver model, taking into account features of both the compensatory and anticipatory models representing human pedal operation, has been verified by driving simulator experiments with several real drivers. The comparison of computer simulations performed with the determined model parameters against the experimental results confirms the correctness of this mathematical driver model and the identified model parameters. The driver model is then joined to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamic response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car following, Driving simulator, Hybrid electric vehicle

  19. Integrating social model principles into broad-based treatment: results of a program evaluation.

    Science.gov (United States)

    Polcin, Douglas L; Prindle, Suzi D; Bostrom, Alan

    2002-11-01

    Although traditional social model recovery programs appear to be decreasing, some aspects of social model recovery continue to exert a strong influence in broad-based, integrated programs. This article describes a four-week program that integrates licensed therapists, certified counselors, psychiatric consultation, and social model recovery principles into a broad-based treatment approach. The Social Model Philosophy Scale revealed a low overall rating on social model philosophy. However, social model principles that were heavily stressed included practicing 12-step recovery, the importance of getting a 12-step sponsor, staff-client interactions outside a formal office, employing staff who are in recovery, and emphasizing a goal of abstinence. Three- and six-month follow-up revealed significant improvement in alcohol and drug use, heavy alcohol use, satisfaction with family relationships, 12-step involvement, illegal behaviors, arrests, unsafe sex, self-esteem, use of medical resources, and health status. Findings provide a rationale for larger, multi-site studies that assess the effectiveness of social model characteristics using multivariate techniques.

  20. Atmospheric greenhouse gases retrieved from SCIAMACHY: comparison to ground-based FTS measurements and model results

    Directory of Open Access Journals (Sweden)

    O. Schneising

    2012-02-01

    SCIAMACHY onboard ENVISAT (launched in 2002) enables the retrieval of global long-term column-averaged dry air mole fractions of the two most important anthropogenic greenhouse gases, carbon dioxide and methane (denoted XCO2 and XCH4). In order to assess the quality of the greenhouse gas data obtained with the recently introduced version 2 of the scientific retrieval algorithm WFM-DOAS, we present validations with ground-based Fourier Transform Spectrometer (FTS) measurements and comparisons with model results at eight Total Carbon Column Observing Network (TCCON) sites, providing realistic error estimates of the satellite data. Such validation is a prerequisite to assess the suitability of data sets for their use in inverse modelling.

    It is shown that there are generally no significant differences between the carbon dioxide annual increases of SCIAMACHY and the assimilation system CarbonTracker (2.00 ± 0.16 ppm yr−1 compared to 1.94 ± 0.03 ppm yr−1 on global average). The XCO2 seasonal cycle amplitudes derived from SCIAMACHY are typically larger than those from TCCON, which are in turn larger than those from CarbonTracker. The absolute values of the northern hemispheric TCCON seasonal cycle amplitudes are closer to SCIAMACHY than to CarbonTracker, and the corresponding differences are not significant when compared with SCIAMACHY, whereas they can be significant for a subset of the analysed TCCON sites when compared with CarbonTracker. At Darwin we find discrepancies of the seasonal cycle derived from SCIAMACHY compared to the other data sets, which can probably be ascribed to occurrences of undetected thin clouds. Based on the comparison with the reference data, we conclude that the carbon dioxide data set can be characterised by a regional relative precision (mean standard deviation of the differences) of about 2.2 ppm and a relative accuracy (standard deviation of the mean differences)...

  1. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results

    International Nuclear Information System (INIS)

    Demichev, A.; Kryukov, A.; Rodionov, A.

    2002-01-01

    We propose an XML-based standard for the formulation of field theoretical models. The goal of creating such a standard is to provide a way for unambiguous exchange and cross-checking of results of computer calculations in high energy physics. At the moment, the suggested standard implies that models under consideration are of the SM or MSSM type (i.e., they are just the SM or MSSM, their submodels, smooth modifications, or straightforward generalizations). (author)

  2. Comparative Results on 3D Navigation of Quadrotor using two Nonlinear Model based Controllers

    Science.gov (United States)

    Bouzid, Y.; Siguerdidjane, H.; Bestaoui, Y.

    2017-01-01

    Recently, quadrotors have been increasingly employed in both military and civilian areas, where a broad range of nonlinear flight control techniques are successfully implemented. With this advancement, it has become necessary to investigate the efficiency of these flight controllers by studying their features and comparing their performance. In this paper, the control of an Unmanned Aerial Vehicle (UAV) quadrotor using two different approaches is presented. The first controller is a Nonlinear PID (NLPID), whilst the second one is a Nonlinear Internal Model Control (NLIMC); both are used for stabilization as well as for 3D trajectory tracking. The numerical simulations have shown satisfactory results for both of them using either the nominal system model or a disturbed model. The obtained results are analyzed with respect to several criteria for the sake of comparison.

  3. Modelling Inter-Particle Forces and Resulting Agglomerate Sizes in Cement-Based Materials

    DEFF Research Database (Denmark)

    Kjeldsen, Ane Mette; Geiker, Mette Rica

    2005-01-01

    The theory of inter-particle forces versus external shear in cement-based materials is reviewed. On this basis, calculations of the maximum agglomerate size present after the combined action of superplasticizers and shear are carried out. Qualitative experimental results indicate that external shear...

  4. Energy and exergy analysis of an indirect solar cabinet dryer based on mathematical modeling results

    International Nuclear Information System (INIS)

    Sami, Samaneh; Etesami, Nasrin; Rahimi, Amir

    2011-01-01

    In the present study, using a previously developed dynamic mathematical model for performance analysis of an indirect cabinet solar dryer, a microscopic energy and exergy analysis for an indirect solar cabinet dryer is carried out. To this end, appropriate energy and exergy models are developed, and using the predicted values for the temperature and enthalpy of the gas stream and the temperature, enthalpy and moisture content of the drying solid, the energy and exergy efficiencies are estimated. The validity of the model for predicting variations in gas and solid characteristics over time and along the length of the solar collector and/or dryer was examined against existing experimental data. The results show that, in spite of high energy efficiency, the indirect solar cabinet dryer has relatively low exergy efficiency. Results show that the maximum exergy losses occur at midday. Also, the minima of total exergy efficiency are 32.3% and 47.2% on the first and second days, respectively. Furthermore, the effect of some operating parameters, including the length of the collector, its surface, and the air flow rate, was investigated on the exergy destruction and efficiency. -- Highlights: → In the literature, there are few studies on the energy and exergy analysis of solar cabinet dryers. → In the present study a microscopic energy and exergy analysis for an indirect solar cabinet dryer is carried out. → The effect of operating parameters, including collector length and air flow rate, was investigated on the exergy destruction and efficiency. → For the collector section, the maximum values for outlet air temperature, outlet exergy and energy are 69 °C, 2.5 kW and 1.12 kW, respectively. → Increasing the air flow rate decreases the exergy efficiency of the solar collector.
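
    The flow-exergy bookkeeping behind such efficiencies follows the standard ideal-gas expression Ex = m·cp·[(T − T0) − T0·ln(T/T0)]. The sketch below uses illustrative flow rate and temperatures, not the paper's data:

    ```python
    import math

    def air_flow_exergy(m_dot, t, t0=298.15, cp=1005.0):
        """Exergy rate [W] of an air stream at temperature t [K], ambient t0 [K]."""
        return m_dot * cp * ((t - t0) - t0 * math.log(t / t0))

    m_dot = 0.05                                  # air mass flow rate [kg/s] (assumed)
    t_out = 342.15                                # collector outlet, approx 69 C
    ex = air_flow_exergy(m_dot, t_out)
    en = m_dot * 1005.0 * (t_out - 298.15)        # corresponding energy rate [W]
    print(f"energy {en:.0f} W, exergy {ex:.0f} W, ratio {ex / en:.2f}")
    ```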

  5. Combustion synthesis of TiB2-based cermets: modeling and experimental results

    International Nuclear Information System (INIS)

    Martinez Pacheco, M.; Bouma, R.H.B.; Katgerman, L.

    2008-01-01

    TiB2-based cermets are prepared by combustion synthesis followed by a pressing stage in a granulate medium. Products obtained by combustion synthesis are characterized by a large remaining porosity (typically 50%). To produce dense cermets, a subsequent densification step is performed after the combustion process, while the reacted material is still hot. To design the process, numerical simulations are carried out and compared to experimental results. In addition, physical and electrical properties of the products related to electrical contact applications are evaluated. (orig.)

  6. Modelling an exploited marine fish community with 15 parameters - results from a simple size-based model

    NARCIS (Netherlands)

    Pope, J.G.; Rice, J.C.; Daan, N.; Jennings, S.; Gislason, H.

    2006-01-01

    To measure and predict the response of fish communities to exploitation, it is necessary to understand how the direct and indirect effects of fishing interact. Because fishing and predation are size-selective processes, the potential response can be explored with size-based models. We use a simple size-based model...

  7. AN ANIMAL MODEL OF SCHIZOPHRENIA BASED ON CHRONIC LSD ADMINISTRATION: OLD IDEA, NEW RESULTS

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D.; Nichols, David E.

    2011-01-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as “clearly a paranoid state.” We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT2A receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. PMID:21352832

  8. Spreading of intolerance under economic stress: Results from a reputation-based model

    Science.gov (United States)

    Martinez-Vaquero, Luis A.; Cuesta, José A.

    2014-08-01

    When a population is engaged in successive prisoner's dilemmas, indirect reciprocity through reputation fosters cooperation through the emergence of moral and action rules. A simplified model has recently been proposed where individuals choose between helping others or not and are judged good or bad for it by the rest of the population. The reputation so acquired will condition future actions. In this model, eight strategies (referred to as "leading eight") enforce a high level of cooperation, generate high payoffs, and are therefore resistant to invasions by other strategies. Here we show that, by assigning each individual one of two labels that peers can distinguish (e.g., political ideas, religion, and skin color) and allowing moral and action rules to depend on the label, intolerant behaviors can emerge within minorities under sufficient economic stress. We analyze the sets of conditions where this can happen and also discuss the circumstances under which tolerance can be restored. Our results agree with empirical observations that correlate intolerance and economic stress and predict a correlation between the degree of tolerance of a population and its composition and ethical stance.

  9. Differential hardening in IF steel - Experimental results and a crystal plasticity based model

    NARCIS (Netherlands)

    Mulder, J.; Eyckens, P.; van den Boogaard, Antonius H.; Hora, P.

    2015-01-01

    Work hardening in metals is commonly described by isotropic hardening, especially for monotonically increasing proportional loading. The relation between different stress states in this case is determined by equivalent stress and strain definitions, based on equal plastic dissipation. However...

  10. Experimental checking results of mathematical modeling of the radiation environment sensor based on diamond detectors

    International Nuclear Information System (INIS)

    Gladchenkov, E V; Kolyubin, V A; Nedosekin, P G; Zaharchenko, K V; Ibragimov, R F; Kadilin, V V; Tyurin, E M

    2017-01-01

    A series of experiments was conducted to verify the mathematical model of the radiation environment sensor. The theoretical count rate of beta particles from a 90Sr-90Y source registered by the radiation environment sensor was compared with the experimental one. The theoretical (calculated) count rate of beta particles was found using the developed mathematical model of the radiation environment sensor. The calculated values of the beta particle count rate deviate from the experimental ones by no more than 10%. (paper)

  11. Experimental and modelling results of a parallel-plate based active magnetic regenerator

    DEFF Research Database (Denmark)

    Tura, A.; Nielsen, Kaspar Kirstein; Rowe, A.

    2012-01-01

    ...regenerator (AMR). In particular, the effect of geometric demagnetization in the regenerator is included in a simplified manner. The model and experimental data are in good agreement, while the effect of demagnetization is seen to degrade the performance. It is concluded from the experiments that both thinner...

  12. Model based monitoring of urban traffic noise : Field test results for road side and shielded sides

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Lutgendorf, D.; Wessels, P.W.; Basten, T.G.H.

    2012-01-01

    Urban traffic noise can be a major issue for people and (local) governments. On a local scale the use of measurements is increasing, especially when measures or changes to the local infrastructure are proposed. However, measuring (only) urban traffic noise is a challenging task. By using a model...

  13. An Investigation Of The Influence Of Leadership And Processes On Basic Performance Results Using A Decision Model Based On Efqm

    Directory of Open Access Journals (Sweden)

    Ahmet Talat İnan

    2013-06-01

    The EFQM Excellence Model is a quality approach from which companies benefit in achieving success. It is an assessment tool that helps to determine competences and missing aspects on the way to excellence. In this study, based on the EFQM Excellence Model, the influence of the leadership and processes variables on the basic performance results was investigated for a firm providing maintenance and repair services to a large-scale company. A survey covering the company's employees and managers was conducted. The data obtained from this survey were analyzed with SPSS 16.0 statistical software in terms of factor analysis, reliability analysis, and correlation and regression analysis. The relations between the variables were evaluated taking into account the results of the analysis.

  14. Preliminary Modelling Results for an Otto Cycle/Stirling Cycle Hybrid-engine-based Power Generation System

    OpenAIRE

    Cullen, Barry; McGovern, Jim; Feidt, Michel; Petrescu, Stoian

    2009-01-01

    This paper presents preliminary data and results for a system mathematical model for a proposed Otto Cycle / Stirling Cycle hybrid-engine-based power generation system. The system is a combined cycle system with the Stirling cycle machine operating as a bottoming cycle on the Otto cycle exhaust. The application considered is that of a stationary power generation scenario wherein the Stirling cycle engine operates as a waste heat recovery device on the exhaust stream of the Otto cycle engine. ...

  15. RETRACTED — Simple and efficient ANN model proposed for the temperature dependence of EDFA gain based on experimental results

    Science.gov (United States)

    Yucel, Murat; Celebi, Fatih V.; Haldun Goktas, H.

    2013-02-01

    This study deals with an Artificial Neural Network (ANN) model of erbium-doped fiber amplifier (EDFA) gain in the C band, based on our experimental results over the temperature range of 0-60 °C. An ANN with three inputs and one output is considered, where the inputs are signal power, wavelength and temperature, and the output is the EDFA gain. The network parameters are optimized by monitoring the mean square error (MSE) at the output. The proposed dynamic model reduces the computation time to the order of milliseconds when computing the EDFA gain at different operating conditions, and is in very good agreement with our experimental findings.
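
    A three-input, one-output network of the kind described can be prototyped in a few lines. The training surface below is synthetic, standing in for the 0-60 °C measurements, and the layer size is an assumption, not the paper's architecture:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n = 500
    X = np.column_stack([
        rng.uniform(-30, 0, n),        # input signal power [dBm]
        rng.uniform(1530, 1565, n),    # wavelength [nm], C band
        rng.uniform(0, 60, n),         # case temperature [C]
    ])
    # Invented smooth gain surface standing in for the measurements
    y = 30 - 0.3 * X[:, 0] - 0.005 * (X[:, 1] - 1550) ** 2 - 0.02 * X[:, 2]

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,),
                                       max_iter=5000, random_state=0))
    model.fit(X, y)
    print("gain at (-10 dBm, 1550 nm, 25 C):",
          model.predict([[-10.0, 1550.0, 25.0]])[0])
    ```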

  16. SAT-MAP-CLIMATE project results [SATellite based bio-geophysical parameter MAPping and aggregation modelling for CLIMATE models]

    Energy Technology Data Exchange (ETDEWEB)

    Bay Hasager, C.; Woetmann Nielsen, N.; Soegaard, H.; Boegh, E.; Hesselbjerg Christensen, J.; Jensen, N.O.; Schultz Rasmussen, M.; Astrup, P.; Dellwik, E.

    2002-08-01

    Earth Observation (EO) data from imaging satellites are analysed with respect to albedo, land and sea surface temperatures, land cover types and vegetation parameters such as the Normalized Difference Vegetation Index (NDVI) and the leaf area index (LAI). The observed parameters are used in the DMI-HIRLAM-D05 weather prediction model in order to improve the forecasting. The effect of introducing actual sea surface temperatures from NOAA AVHRR, compared to climatological mean values, is a more pronounced land-sea breeze effect, which is also observable in field observations. The albedo maps from NOAA AVHRR are rather similar to the climatological mean values, so for the HIRLAM model this is insignificant, yet most likely of some importance in the HIRHAM regional climate model. Land cover type maps are assigned local roughness values determined from meteorological field observations. Only maps with a spatial resolution around 25 m can adequately map the roughness variations of the typical patch size distribution in Denmark. A roughness map covering Denmark is aggregated (i.e. area-averaged non-linearly) by a microscale aggregation model that takes the non-linear turbulent response of each roughness step change between patches in an arbitrary pattern into account. The effective roughnesses are calculated on a 15 km by 15 km grid for the HIRLAM model. The effect of hedgerows is included as an added roughness effect as a function of hedge density mapped from a digital vector map. Introducing the new effective roughness maps into the HIRLAM model appears to remedy the seasonal wind speed bias over land and sea in spring. A new parameterisation of the effective roughness for scalar surface fluxes is developed and tested on synthetic data. Further, a method for estimating evapotranspiration from albedo, surface temperatures and NDVI is successfully compared to field observations. The HIRLAM predictions of water vapour at 12 GMT are used for atmospheric correction of...

  17. Uncertainty assessment and sensitivity analysis of soil moisture based on model parameter errors - Results from four regions in China

    Science.gov (United States)

    Sun, Guodong; Peng, Fei; Mu, Mu

    2017-12-01

    Model parameter errors are an important cause of uncertainty in soil moisture simulation. In this study, a conditional nonlinear optimal perturbation related to parameters (CNOP-P) approach and a sophisticated land surface model (the Common Land Model, CoLM) are employed in four regions in China to explore the extent of uncertainty in soil moisture simulations due to model parameter errors. The CNOP-P approach facilitates calculation of the upper bounds of uncertainty due to parameter errors and investigation of the nonlinear effects of parameter combinations on uncertainties in simulation and prediction. The range of uncertainty for simulated soil moisture was found to be from 0.04 to 0.58 m3 m-3. Based on the CNOP-P approach, a new approach is applied to identify a relatively sensitive and important parameter combination for soil moisture simulations and predictions. It is found that the relatively sensitive parameter combination is region- and season-dependent. Furthermore, the results show that the simulation of soil moisture can be improved if the errors in these important parameter combinations are reduced. In the four study regions, the average improvement (61.6%) in simulating soil moisture using the new CNOP-P-based approach is larger than that (53.4%) obtained using the one-at-a-time (OAT) approach. These results indicate that simulation and prediction of soil moisture are improved by considering the nonlinear effects of important physical parameter combinations. In addition, the new CNOP-P-based approach is found to be an effective method to discern the nonlinear effects of important physical parameter combinations on numerical simulation and prediction.

  18. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) has produced public-domain datasets of such indices for data from a variety of sources, including output from global climate models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and can be biased to a greater or lesser degree by the adopted algorithm. Such biases are in turn propagated to the final values of the indices. CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, some problems remain in the CLIMDEX datasets, namely overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases, and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices, obtained through a semiparametric quantile regression approach, for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addresses the aforementioned issues and produces consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP8.5) scenarios.
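
    The exceedance-rate indices mentioned here reduce to counting days beyond a calendar-day base-period percentile. The sketch below uses plain empirical quantiles on synthetic data; the paper's contribution is precisely to replace this estimator with semiparametric quantile regression:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    years, doys = 30, 365
    season = 8 * np.sin(2 * np.pi * np.arange(doys) / 365)
    tn = 10 + season + rng.normal(0, 3, size=(years, doys))   # base-period daily Tmin

    thresh = np.percentile(tn, 10, axis=0)                    # 10th pct per calendar day

    tn_new = 10.5 + season + rng.normal(0, 3, size=doys)      # one target year
    tn10p = 100.0 * np.mean(tn_new < thresh)                  # TN10p exceedance rate
    print(f"TN10p = {tn10p:.1f} % of days")
    ```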

  19. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery depends on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are considerations for medical centers. Hence, we are developing a model-based method that can be a complementary technology for addressing brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation 'atlas' containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combination of 'atlas' solutions that best matches the measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, the subsurface shift of targets between preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shifts above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the reported results are encouraging.
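
    The inverse 'atlas' step amounts to fitting a combination of precomputed deformation modes to the measured surface shift and reusing the weights subsurface. The unconstrained least-squares sketch below, on random stand-in data, is a simplification of the paper's constrained optimization:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_surface_pts, n_atlas = 60, 12
    E_surf = rng.normal(size=(3 * n_surface_pts, n_atlas))   # atlas surface displacements
    E_sub = rng.normal(size=(3, n_atlas))                    # same modes at one target

    w_true = np.abs(rng.normal(size=n_atlas)); w_true /= w_true.sum()
    u_meas = E_surf @ w_true + rng.normal(scale=0.05, size=3 * n_surface_pts)

    w, *_ = np.linalg.lstsq(E_surf, u_meas, rcond=None)      # fit atlas weights
    target_shift = E_sub @ w                                 # predicted subsurface shift
    print("predicted target shift:", np.round(target_shift, 2))
    ```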

  20. Amazon Forest Response to Changes in Rainfall Regime: Results from an Individual-Based Dynamic Vegetation Model

    Science.gov (United States)

    Longo, Marcos

    The Amazon is the largest tropical rainforest in the world, and thus plays a major role in global water, energy, and carbon cycles. However, it is still unknown how the Amazon forest will respond to the ongoing changes in climate, especially droughts, which are expected to become more frequent. To help answer this question, in this thesis I developed and improved the representation of biophysical processes and photosynthesis in the Ecosystem Demography model (ED-2.2), an individual-based land ecosystem model. I also evaluated the model biophysics against multiple data sets for multiple forest and savannah sites in tropical South America. Results of this comparison showed that ED-2.2 is able to represent the radiation and water cycles, but exaggerates heterotrophic respiration seasonality. Also, the model generally predicted the correct distribution of biomass across different areas, although it overestimated biomass in subtropical savannahs. To evaluate the forest resilience to droughts, I used ED-2.2 to simulate the plant community dynamics at two sites in Eastern Amazonia, and developed scenarios by resampling observed annual rainfall but increasing the probability of selecting dry years. While the model predicted little response at French Guiana, results at the mid-Eastern Amazonia site indicated substantial biomass loss at modest rainfall reductions. Also, the response to drier climate varied within the plant community, with evergreen, early-successional, and larger trees being the most susceptible. The model also suggests that competition for water during prolonged periods of drought caused the largest impact on larger trees, when insufficient wet season rainfall did not recharge deeper soil layers. Finally, results suggested that a decrease in the return period of long-lasting droughts could prevent ecosystem recovery. Using different rainfall datasets, I defined vulnerability based on the change in climate needed to reduce the return period of long droughts...

  1. Using Evidence Based Practice in LIS Education: Results of a Test of a Communities of Practice Model

    Directory of Open Access Journals (Sweden)

    Joyce Yukawa

    2010-03-01

    Objective ‐ This study investigated the use of a communities of practice (CoP) model for blended learning in library and information science (LIS) graduate courses. The purposes were to: (1) test the model's efficacy in supporting student growth related to core LIS concepts, practices, professional identity, and leadership skills, and (2) develop methods for formative and summative assessment using the model. Methods ‐ Using design-based research principles to guide the formative and summative assessments, pre-, mid-, and post-course questionnaires were constructed to test the model and administered to students in three LIS courses taught by the author. Participation was voluntary and anonymous. A total of 34 students completed the three courses; the response rate for the questionnaires ranged from 47% to 95%. The pre-course questionnaire addressed attitudes toward technology and the use of technology for learning. The mid-course questionnaire addressed strengths and weaknesses of the course and suggestions for improvement. The post-course questionnaire addressed what students valued about their learning and any changes in attitude toward technology for learning. Data were analyzed on three levels. Micro-level analysis addressed technological factors related to usability and participant skills and attitudes. Meso-level analysis addressed social and pedagogical factors influencing community learning. Macro-level analysis addressed CoP learning outcomes, namely, knowledge of core concepts and practices, and the development of professional identity and leadership skills. Results ‐ The students can be characterized as adult learners who were neither early nor late adopters of technology. At the micro level, responses indicate that the online tools met high standards of usability and effectively supported online communication and learning. Moreover, the increase in positive attitudes toward the use of technology for learning at...

  2. Model-theoretic Optimization Approach to Triathlon Performance Under Comparative Static Conditions – Results Based on The Olympic Games 2012

    Directory of Open Access Journals (Sweden)

    Michael Fröhlich

    2013-10-01

    In Olympic-distance triathlon, time minimization is the goal in all three disciplines and the two transitions. Running is the key to winning, whereas swimming and cycling performance are less significantly associated with overall competition time. A comparative static simulation calculation based on the individual times of each discipline was performed. Furthermore, analysis of each discipline's share of the total time showed that increasing the volume of run training yields additional performance gains. Looking at the current development in triathlon and taking the Olympic Games in London 2012 as an initial basis for model-theoretic simulations of performance development, the first fact that attracts attention is that running is becoming more and more the crucial variable in terms of winning a triathlon. Run times below 29:00 minutes in Olympic-distance triathlon will be decisive for winning. Currently, cycle training time is definitely overrepresented. The share of swimming is considered optimal.
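
    The comparative static logic, holding two splits fixed and varying the third, is simple arithmetic. The split times below are illustrative Olympic-distance figures, not data from London 2012:

    ```python
    # Fixed swim and bike splits plus transitions, in seconds (assumed values)
    swim, bike, transitions = 17 * 60, 52 * 60, 90

    for run_min in (31, 30, 29):                 # progressively faster run scenarios
        total = swim + bike + transitions + run_min * 60
        share = run_min * 60 / total             # run's share of total time
        print(f"run {run_min}:00 -> total {total // 60}:{total % 60:02d}, "
              f"run share {share:.1%}")
    ```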

  3. Results from a model of course-based undergraduate research in the first- and second-year chemistry curriculum

    Science.gov (United States)

    Weaver, Gabriela

    2014-03-01

    The Center for Authentic Science Practice in Education (CASPiE) is a project funded by the URC program of the NSF Chemistry Division. The purpose of CASPiE was to provide students in first- and second-year laboratory courses with authentic research experiences as a gateway to more traditional forms of undergraduate research. Each research experience is a 6- to 8-week laboratory project based on, and contributing to, the research work of the experiment's author through data or preparation of samples. The CASPiE program has resulted in a model for engaging students in undergraduate research early in their college careers. To date, CASPiE has provided that experience to over 6000 students at 17 different institutions. Evaluation data collected have included student surveys, interviews and longitudinal analysis of performance. We have found that students' perceptions of their understanding of the material and the discipline increase over the course of the semester, whereas they are seen to decrease in the control courses. Students demonstrate a greater ability to explain the meaning and purpose of their experimental procedures and results and to provide extensions to the experimental design, compared not only to control courses but also to inquiry-based courses. Longitudinal analysis of grades indicates a possible benefit to performance in courses related to the discipline two and three years later. A similar implementation in biology courses has demonstrated an increase in critical thinking scores. Work supported by the National Science Foundation, Division of Chemistry.

  4. Applying 3-PG, a simple process-based model designed to produce practical results, to data from loblolly pine experiments

    Science.gov (United States)

    Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand

    2001-01-01

    3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...

  5. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less studied application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  6. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitude variability (ranging from 400 m a.s.l. of the Dora Baltea river floodplain to 4810 m a.s.l. of Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa) measured, and the median ks value (10E-6 m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input maps of parameters for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and in large areas using parallel computational techniques. The software

  7. A new type of climate network based on probabilistic graphical models: Results of boreal winter versus summer

    Science.gov (United States)

    Ebert-Uphoff, Imme; Deng, Yi

    2012-10-01

    In this paper we introduce a new type of climate network based on temporal probabilistic graphical models. This new method is able to distinguish between direct and indirect connections and thus can eliminate indirect connections in the network. Furthermore, while correlation-based climate networks focus on similarity between nodes, this new method provides an alternative viewpoint by focusing on information flow within the network over time. We build a prototype of this new network utilizing daily values of 500 mb geopotential height over the entire globe during the period 1948 to 2011. The basic network features are presented and compared between boreal winter and summer in terms of intra-location properties that measure local memory at a grid point and inter-location properties that quantify the remote impact of a grid point. Results suggest that synoptic-scale, sub-weekly disturbances act as the main information carrier in this network and that their intrinsic timescale limits the extent to which a grid point can influence its nearby locations. The frequent passage of these disturbances over storm track regions also uniquely determines the timescale of height fluctuations and thus the local memory at a grid point. The poleward retreat of synoptic-scale disturbances in boreal summer is largely responsible for a corresponding poleward shift of the local maxima in local memory and remote impact, which is most evident in the North Pacific sector. For the NH as a whole, both local memory and remote impact strengthen from winter to summer, leading to intensified information flow and more tightly coupled network nodes during the latter period.
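    The key property claimed above, that a graphical-model network can discard indirect connections which a plain correlation network would keep, can be sketched with a toy causal chain a -> b -> c. This is an illustrative stand-in using partial correlation, not the temporal graphical-model algorithm of the record; all series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
a = rng.normal(size=2000)
b = 0.8 * a + 0.3 * rng.normal(size=2000)   # direct link a -> b
c = 0.8 * b + 0.3 * rng.normal(size=2000)   # direct link b -> c; a -> c is only indirect

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the conditioning series z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print("corr(a, c):        %.2f" % np.corrcoef(a, c)[0, 1])  # large: correlation network keeps the edge
print("parcorr(a, c | b): %.2f" % partial_corr(a, c, b))    # near zero: indirect edge eliminated
```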

  8. Simulated crop yield in response to changes in climate and agricultural practices: results from a simple process based model

    Science.gov (United States)

    Caldararu, S.; Smith, M. J.; Purves, D.; Emmott, S.

    2013-12-01

    Global agriculture will, in the future, be faced with two main challenges: climate change and an increase in global food demand driven by an increase in population and changes in consumption habits. To be able to predict both the impacts of changes in climate on crop yields and the changes in agricultural practices necessary to respond to such impacts, we currently need to improve our understanding of crop responses to climate and the predictive capability of our models. Ideally, what we would have at our disposal is a modelling tool which, given certain climatic conditions and agricultural practices, can predict the growth pattern and final yield of any of the major crops across the globe. We present a simple, process-based crop growth model based on the assumptions that plants allocate above- and below-ground biomass to maintain overall carbon optimality and that, to maintain this optimality, the reproductive stage begins at peak nitrogen uptake. The model includes responses to available light, water, temperature and carbon dioxide concentration, as well as to nitrogen fertilisation and irrigation. The model is constrained by data at two sites, the Yaqui Valley, Mexico for wheat and the Southern Great Plains flux site for maize and soybean, using a robust combination of space-based vegetation data (including data from the MODIS and Landsat TM and ETM+ instruments) and ground-based biomass and yield measurements. We show a number of climate response scenarios, including increases in temperature and carbon dioxide concentrations, as well as responses to irrigation and fertiliser application.

  9. Determination of High-Frequency Current Distribution Using EMTP-Based Transmission Line Models with Resulting Radiated Electromagnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mork, B; Nelson, R; Kirkendall, B; Stenvig, N

    2009-11-30

    Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.

  10. On-line monitoring and modelling based process control of high rate nitrification - lab scale experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Pirsing, A. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Wiesmann, U. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Kelterbach, G. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Schaffranietz, U. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Roeck, H. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Eichner, B. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Szukal, S. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Schulze, G. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie

    1996-09-01

    This paper presents a new concept for the control of nitrification in highly polluted waste waters. The approach is based on mathematical modelling. To determine the substrate degradation rates of the microorganisms involved, a mathematical model using gas measurement is used. A fuzzy controller maximises the capacity utilisation efficiencies. The experiments carried out in a lab-scale reactor demonstrate that even with highly varying ammonia concentrations in the influent, the nitrogen concentrations in the effluent can be kept within legal limits.

  11. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    International Nuclear Information System (INIS)

    Bordogna, Clelia Maria; Albano, Ezequiel V

    2007-01-01

    The aim of this paper is twofold. On the one hand, we present a brief overview of the application of statistical physics methods to the modelling of social phenomena, focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, revealing a rich scenario of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.
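    A toy Monte Carlo sketch of the competition described above, one strong leader versus uniform mass media acting on a lattice of group members, is given below. The lattice size, influence strengths and update rule are illustrative assumptions only, not Latané's full social impact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 20                                        # 20 x 20 lattice of group members (assumed size)
opinion = rng.choice([-1, 1], size=(L, L))    # two competing opinion states
leader_strength, media_strength = 8.0, 2.0    # assumed influence parameters
ci, cj = L // 2, L // 2                       # strong leader (state +1) sits at the centre

def impact(op, i, j):
    """Neighbour influence plus distance-decaying leader pull and uniform media pull."""
    neigh = op[(i - 1) % L, j] + op[(i + 1) % L, j] + op[i, (j - 1) % L] + op[i, (j + 1) % L]
    dist = max(1.0, np.hypot(i - ci, j - cj))
    return neigh + leader_strength / dist - media_strength   # media supports state -1

for _ in range(20000):                        # asynchronous Monte Carlo updates
    i, j = rng.integers(L), rng.integers(L)
    opinion[i, j] = 1 if impact(opinion, i, j) > 0 else -1

print("fraction holding the leader's opinion:", (opinion == 1).mean())
```

    Shrinking L in such a sketch illustrates the finite-size effect mentioned in the record: in small groups the leader's distance-decaying influence reaches a larger fraction of members and can outweigh the media.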

  12. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. A review of, and comparison with, other models in the literature regarding (i) are also given.

  13. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Science.gov (United States)

    Gutiérrez, Alvaro G; Armesto, Juan J; Díaz, M Francisca; Huth, Andreas

    2014-01-01

    Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the proposed model, focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years old) and an old-growth forest (OG, >500 years old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than in YS under the current climate. The increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. A drier climate will alter forest structure, leading to decreases in above-ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests, such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  14. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Directory of Open Access Journals (Sweden)

    Alvaro G Gutiérrez

    Full Text Available Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the proposed model, focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years old) and an old-growth forest (OG, >500 years old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than in YS under the current climate. The increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. A drier climate will alter forest structure, leading to decreases in above-ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests, such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  15. Spatial organization of mesenchymal stem cells in vitro--results from a new individual cell-based model with podia.

    Directory of Open Access Journals (Sweden)

    Martin Hoffmann

    Full Text Available Therapeutic application of mesenchymal stem cells (MSC) requires their extensive in vitro expansion. MSC in culture typically grow to confluence within a few weeks. They show spindle-shaped fibroblastoid morphology and align to each other in characteristic spatial patterns at high cell density. We present an individual cell-based model (IBM) that is able to quantitatively describe the spatio-temporal organization of MSC in culture. Our model substantially improves on previous models by explicitly representing cell podia and their dynamics. It employs podia-generated forces for cell movement and adjusts cell behavior in response to cell density. At the same time, it is simple enough to simulate thousands of cells with reasonable computational effort. Experimental sheep MSC cultures were monitored under standard conditions. Automated image analysis was used to determine the location and orientation of individual cells. Our simulations quantitatively reproduced the observed growth dynamics and cell-cell alignment assuming cell density-dependent proliferation, migration, and morphology. In addition to cell growth on plain substrates, our model captured cell alignment on micro-structured surfaces. We propose a specific surface micro-structure that, according to our simulations, can substantially enlarge cell culture harvest. The 'tool box' of cell migratory behavior newly introduced in this study significantly enhances the bandwidth of IBM. Our approach is capable of accommodating individual cell behavior and collective cell dynamics of a variety of cell types and tissues in computational systems biology.

  16. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network by switching a linear, series network element, as for example a transmission line; the resulting harmonic increments measured at the two nodes are used for calculation of the transfer harmonic impedance between them. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one: it allows switching a series element that contains a shunt branch. Both methods require that harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.
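    The core of the first method, dividing the synchronized harmonic increments measured before and after the switching operation, can be sketched as follows. All phasor values are invented for illustration and the single-harmonic scalar form is a simplification of the measurement procedure described in the record.

```python
import numpy as np

# Synchronized voltage/current phasors at one harmonic order, before and after
# switching the series element (all values invented for illustration, in p.u.).
U_before = 1.02 * np.exp(1j * 0.10)
U_after = 0.98 * np.exp(1j * 0.12)
I_before = 0.40 * np.exp(-1j * 0.50)
I_after = 0.55 * np.exp(-1j * 0.45)

# Transfer harmonic impedance from the ratio of the harmonic increments.
Z_transfer = (U_after - U_before) / (I_after - I_before)
print(f"|Z| = {abs(Z_transfer):.3f} p.u., angle = {np.angle(Z_transfer, deg=True):.1f} deg")
```

    Comparing such a measured Z_transfer with the same quantity computed from the network's computer model is what constitutes the validation step.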

  17. Three-dimensional model of plate geometry and velocity model for Nankai Trough seismogenic zone based on results from structural studies

    Science.gov (United States)

    Nakanishi, A.; Shimomura, N.; Kodaira, S.; Obana, K.; Takahashi, T.; Yamamoto, Y.; Yamashita, M.; Takahashi, N.; Kaneda, Y.

    2012-12-01

    In the Nankai Trough subduction seismogenic zone, the Nankai and Tonankai earthquakes have often occurred simultaneously, causing great events. In order to reduce the damage to coastal areas from both strong ground motion and tsunami generation, it is necessary to understand the rupture synchronization and segmentation of the Nankai megathrust earthquake. For a precise estimate of the rupture zone of the Nankai megathrust event, based on knowledge of the realistic earthquake cycle and the variation of magnitude, it is important to know the geometry and properties of the plate boundary of the subduction seismogenic zone. To improve the physical model of the Nankai Trough seismogenic zone, a large-scale high-resolution wide-angle and reflection (MCS) seismic study and long-term observation have been conducted since 2008. Marine active-source seismic data have been acquired along grid two-dimensional profiles with a total length of ~800 km every year. A three-dimensional seismic tomography using active and passive seismic data observed at both land and ocean-bottom stations has also been performed. From those data, we found several strong lateral variations of the subducting Philippine Sea plate and the overriding plate corresponding to the margins of the coseismic rupture zones of historical large events that occurred along the Nankai Trough. In particular, a possible prominent reflector for the forearc Moho has recently been imaged on the offshore side of the Kii channel at a depth of ~18 km, which is shallower than in other areas along the Nankai Trough. Such a drastic variation of the overriding plate might be related to the existence of the segmentation of the Nankai megathrust earthquake. Based on the results derived from our seismic studies, we have tried to construct a geometrical model of the Philippine Sea plate and a three-dimensional velocity structure model of the Nankai Trough seismogenic zone. In this presentation, we will summarize the major results of our seismic studies, and

  18. Result-Based Public Governance

    DEFF Research Database (Denmark)

    Boll, Karen

    Within the public sector, many institutions are either steered by governance by targets or result-based governance. The former sets up quantitative internal production targets, while the latter advocates that production is planned according to outcomes which are defined as institution-produced effects… the performance measure that guides the inspectors' inspection (or nudging) of the businesses. The analysis shows that although a result-based governance system is advocated on a strategic level, performance measures which are not 'result-based' are developed and used in the daily coordination of work. The paper explores how and why this state of affairs appears and problematizes the widespread use of result-based governance and nudging-techniques by public sector institutions.

  19. Benefits of using customized instrumentation in total knee arthroplasty: results from an activity-based costing model.

    Science.gov (United States)

    Tibesku, Carsten O; Hofer, Pamela; Portegies, Wesley; Ruys, C J M; Fennema, Peter

    2013-03-01

    The growing demand for total knee arthroplasty (TKA), combined with efforts to contain healthcare expenditure in advanced economies, necessitates the use of economically effective technologies in TKA. The present analysis, based on an activity-based costing (ABC) model, was carried out to estimate the economic value of patient-matched instrumentation (PMI) compared to standard surgical instrumentation in TKA. The costs of the two approaches, PMI and standard instrumentation in TKA, were determined by the use of ABC, which measures the cost of a particular procedure by determining the activities involved and adding the cost of each activity. The improvement in productivity due to improved operating room (OR) turn-around times was determined, and the potential additional revenue to the hospital from efficient utilization of the gained OR time was estimated. Increased efficiency in the usage of the OR and in the utilization of surgical trays was noted with the patient-specific approach. Potential revenues to the hospital from efficient utilization of the time saved in the OR were estimated for the use of PMI. Additional revenues of 78,240 per year were estimated, considering utilization of the gained OR time to perform surgeries other than TKA. The analysis suggests that the use of PMI in TKA is economically effective when compared to standard instrumentation.
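    The ABC logic, costing a procedure as the sum of its constituent activities and valuing saved OR minutes as extra capacity, can be sketched as below. Every figure is a made-up placeholder, not a number from the study.

```python
# Activity-based costing sketch for one TKA case; all costs are invented placeholders (EUR).
standard = {"instrument sterilisation": 320, "OR setup": 180, "surgery": 2400, "turn-around": 250}
pmi = {"instrument sterilisation": 180, "OR setup": 140, "surgery": 2280, "turn-around": 180}

cost_std, cost_pmi = sum(standard.values()), sum(pmi.values())
saved_minutes_per_case = 25      # assumed OR time gained with patient-matched instruments
cases_per_year = 250             # assumed annual TKA volume

print(f"cost saving per case: {cost_std - cost_pmi} EUR")
print(f"OR time freed per year: {saved_minutes_per_case * cases_per_year / 60:.0f} h")
```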

  20. DESIGN OF LOW CYTOTOXICITY DIARYLANILINE DERIVATIVES BASED ON QSAR RESULTS: AN APPLICATION OF ARTIFICIAL NEURAL NETWORK MODELLING

    Directory of Open Access Journals (Sweden)

    Ihsanul Arief

    2016-11-01

    Full Text Available A study on the cytotoxicity of diarylaniline derivatives using the quantitative structure-activity relationship (QSAR) approach has been carried out. The structures and cytotoxicities of diarylaniline derivatives were obtained from the literature. Calculation of molecular and electronic parameters was conducted using the Austin Model 1 (AM1), Parameterized Model 3 (PM3), Hartree-Fock (HF), and density functional theory (DFT) methods. Artificial neural network (ANN) analysis was used to produce the best equation, with a configuration of input data-hidden nodes-output data = 5-8-1, r2 = 0.913 and PRESS = 0.069. The best equation was then used to design and predict new diarylaniline derivatives. The results show that compound N1-(4′-Cyanophenyl)-5-(4″-cyanovinyl-2″,6″-dimethyl-phenoxy)-4-dimethylether benzene-1,2-diamine is the best proposed compound, with a cytotoxicity value (CC50) of 93.037 μM.
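    The 5-8-1 network configuration reported above can be reproduced in outline with a standard multilayer perceptron: five descriptor inputs, one hidden layer of eight nodes, one activity output. The descriptors and targets below are synthetic stand-ins, not the published QSAR data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 5))                             # 5 molecular/electronic descriptors (synthetic)
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=60)   # synthetic cytotoxicity target

# input-hidden-output = 5-8-1: one hidden layer with 8 nodes.
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, y)
print("training r2:", round(ann.score(X, y), 3))
```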

  1. Blast-cooling of beef-in-sauce catering meals: numerical results based on a dynamic zero-order model

    Directory of Open Access Journals (Sweden)

    Jose A. Rabi

    2014-10-01

    Full Text Available Beef-in-sauce catering meals under blast-cooling have been investigated in a research project which aims at quantitative HACCP (hazard analysis critical control point). In view of its prospective coupling to a predictive microbiology model proposed in the project, zero-order spatial dependence has proved to suitably predict meal temperatures in response to temperature variations in the cooling air. This approach has modelled heat transfer rates via the a priori unknown convective coefficient hc, which is allowed to vary due to uncertainty and variability in the actual modus operandi of the chosen case-study hospital kitchen. Implemented in MS Excel®, the numerical procedure has successfully combined the 4th-order Runge-Kutta method, to solve the governing equation, with non-linear optimization, via the built-in Solver, to determine the coefficient hc. In this work, the coefficient hc was assessed for 119 distinct recently cooked meal samples whose temperature-time profiles were recorded in situ after 17 technical visits to the hospital kitchen over a year. The average value and standard deviation results were hc = 12.0 ± 4.1 W m-2 K-1, whilst the lowest values (associated with the worst cooling scenarios) were about hc ≈ 6.0 W m-2 K-1.
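    The procedure described, integrating a lumped (zero-order) cooling law and tuning hc so that predicted temperatures match the recorded profile, can be sketched outside Excel as follows. The observation values are hypothetical, A/(m·c) is an assumed lumped constant, and SciPy's adaptive Runge-Kutta (RK45) stands in for the article's fixed-step 4th-order scheme and built-in Solver.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# Hypothetical temperature-time record of one meal during blast-cooling.
t_obs = np.array([0, 10, 20, 40, 60, 90]) * 60.0         # s
T_obs = np.array([80.0, 62.0, 49.0, 32.0, 22.0, 12.0])   # degC
T_air, T0 = 3.0, 80.0        # cooling-air and initial meal temperatures, degC
A_over_mc = 3.3e-5           # assumed area / (mass * heat capacity), m2*K/J

def sse(hc):
    """Sum of squared errors between modelled and observed meal temperatures."""
    sol = solve_ivp(lambda t, T: -hc * A_over_mc * (T - T_air),
                    (0.0, t_obs[-1]), [T0], t_eval=t_obs, method="RK45")
    return float(np.sum((sol.y[0] - T_obs) ** 2))

res = minimize_scalar(sse, bounds=(1.0, 40.0), method="bounded")
print(f"fitted hc ~ {res.x:.1f} W m-2 K-1")
```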

  2. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  3. Tundra shrubification and tree-line advance amplify arctic climate warming: results from an individual-based dynamic vegetation model

    International Nuclear Information System (INIS)

    Zhang Wenxin; Miller, Paul A; Smith, Benjamin; Wania, Rita; Koenigk, Torben; Döscher, Ralf

    2013-01-01

    One major challenge to the improvement of regional climate scenarios for the northern high latitudes is to understand land surface feedbacks associated with vegetation shifts and ecosystem biogeochemical cycling. We employed a customized, Arctic version of the individual-based dynamic vegetation model LPJ-GUESS to simulate the dynamics of upland and wetland ecosystems under a regional climate model-downscaled future climate projection for the Arctic and Subarctic. The simulated vegetation distribution (1961-1990) agreed well with a composite map of actual arctic vegetation. In the future (2051-2080), a poleward advance of the forest-tundra boundary, an expansion of tall shrub tundra, and a dominance shift from deciduous to evergreen boreal conifer forest over northern Eurasia were simulated. Ecosystems continued to sink carbon for the next few decades, although the size of these sinks diminished by the late 21st century. Hot spots of increased CH4 emission were identified in the peatlands near Hudson Bay and western Siberia. In terms of their net impact on regional climate forcing, positive feedbacks associated with the negative effects of tree-line, shrub cover and forest phenology changes on snow-season albedo, as well as the larger sources of CH4, may potentially dominate over negative feedbacks due to increased carbon sequestration and increased latent heat flux. (letter)

  4. Budgeting Based on Results: A Return-on-Investment Model Contributes to More Effective Annual Spending Choices

    Science.gov (United States)

    Cooper, Kelt L.

    2011-01-01

    One major problem in developing school district budgets immune to cuts is the model administrators traditionally use--an expenditure model. The simplicity of this model is seductive: What were the revenues and expenditures last year? What are the expected revenues and expenditures this year? A few adjustments here and there and one has a budget…

  5. Results of Dose-adapted Salvage Radiotherapy After Radical Prostatectomy Based on an Endorectal MRI Target Definition Model.

    Science.gov (United States)

    Zilli, Thomas; Jorcano, Sandra; Peguret, Nicolas; Caparrotti, Francesca; Hidalgo, Alberto; Khan, Haleem G; Vees, Hansjörg; Miralbell, Raymond

    2017-04-01

    To assess the outcome of patients treated with a dose-adapted salvage radiotherapy (SRT) protocol based on an endorectal magnetic resonance imaging (erMRI) failure definition model after radical prostatectomy (RP), we report on 171 relapsing patients who had undergone an erMRI before SRT. A dose of 64 Gy was prescribed to the prostatic bed with, in addition, a boost of 10 Gy to the suspected local relapse as detected on erMRI in 131 patients (76.6%). The 3-year biochemical relapse-free survival (bRFS), local relapse-free survival, distant metastasis-free survival, cancer-specific survival, and overall survival were 64.2±4.3%, 100%, 85.2±3.2%, 100%, and 99.1±0.9%, respectively. A PSA value >1 ng/mL before salvage (P=0.006) and an absence of biochemical progression during RT (P=0.001) were both independently correlated with bRFS on multivariate analysis. No significant difference in 3-year bRFS was observed between the boost and no-boost groups (68.4±4.6% vs. 49.7±10%, P=0.251). A PSA value >1 ng/mL before salvage and biochemical progression during RT were both independently correlated with worse bRFS after SRT. By using erMRI to select the patients most likely to benefit from dose-escalated SRT protocols, this dose-adapted SRT approach was associated with good biochemical control and outcome, serving as a hypothesis-generating basis for further prospective trials aimed at improving the therapeutic ratio in the salvage setting.

  6. Hyaluronan-based scaffold for in vivo regeneration of the rat vena cava: Preliminary results in an animal model.

    Science.gov (United States)

    Pandis, Laura; Zavan, Barbara; Abatangelo, Giovanni; Lepidi, Sandro; Cortivo, Roberta; Vindigni, Vincenzo

    2010-06-15

    The aim of this study was to develop a prosthetic graft that could perform as a small-diameter vascular conduit for vein regeneration. The difficulty of obtaining significant long-term patency and good wall mechanical strength in vivo has been a significant obstacle to achieving small-diameter vein prostheses. Fifteen male Wistar rats weighing 250-350 g were used. Tubular structures of hyaluronan (HYAFF-11 tubules, 2 mm in diameter and 1.5 cm in length) were implanted in the vena cava of rats as temporary absorbable guides to promote regeneration of veins. Performance was assessed at 30, 60, and 90 days after surgery by histology (hematoxylin-eosin and Weigert solution) and immunohistochemistry (antibodies to von Willebrand factor and to Myosin Light-Chain Kinase). These experiments resulted in two novel findings: (1) sequential regeneration of vascular components led to complete vein wall regeneration 30 days after surgery; (2) the biomaterial used created the ideal environment for the delicate regeneration process during the critical initial phases, yet its biodegradability allowed for complete degradation of the construct 4 months after implantation, at which time a new vein remained to connect the vein stumps. This work demonstrates complete vena cava regeneration inside the hyaluronic acid-based prosthesis, opening new perspectives for microsurgical applications, such as replantation of the upper limb, elongation of the vascular pedicle of free flaps, cardiovascular surgery, and pediatric microvascular surgery.

  7. Determination of empirical models of NOx and SO2 removal efficiency for two steps of combustion gas irradiation system basing on results obtained at EPS Kaweczyn pilot plant

    International Nuclear Information System (INIS)

    Chmielewski, A.G.; Tyminski, B.; Dobrowolski, A.; Licki, J.

    1998-01-01

    A multidimensional regression method has been applied to construct empirical model equations of NOx and SO2 removal efficiency in the e-b process for a two-stage irradiation system, based on results achieved at the EPS Kaweczyn pilot plant. The model equations describe the experimental results with satisfactory accuracy; therefore, they can be used for prediction of NOx and SO2 removal efficiency in the e-b process during two-stage irradiation of flue gases, particularly in case of scale-up. (author)

  8. Assessment of offshore wind power potential in the Aegean and Ionian Seas based on high-resolution hindcast model results

    Directory of Open Access Journals (Sweden)

    Takvor Soukissian

    2017-03-01

    Full Text Available In this study, long-term wind data obtained from high-resolution hindcast simulations are used to analytically assess the offshore wind power potential in the Aegean and Ionian Seas and to provide wind climate and wind power potential characteristics at selected locations where offshore wind farms are at the concept/planning phase. After ensuring good model performance through detailed validation against buoy measurements, offshore wind speed and wind direction at 10 m above sea level are statistically analyzed on the annual and seasonal time scales. The spatial distributions of the mean wind speed and wind direction are provided on the appropriate time scales, along with the mean annual and the inter-annual variability; these statistical quantities are useful in the offshore wind energy sector as regards the preliminary identification of favorable sites for the exploitation of offshore wind energy. Moreover, the offshore wind power potential and its variability are also estimated at 80 m height above sea level. The obtained results reveal that there are specific areas in the central and eastern Aegean Sea that combine intense annual winds with low variability; the annual offshore wind power potential in these areas reaches values close to 900 W/m2, suggesting that a detailed assessment of offshore wind energy would be worthwhile and could lead to attractive investments. Furthermore, as a rough estimate of the availability factor, the equiprobable contours of the event [4 m/s ≤ wind speed ≤ 25 m/s] are also estimated and presented. The selected lower and upper bounds of wind speed correspond to typical cut-in and cut-out wind speed thresholds, respectively, for commercial offshore wind turbines. Finally, for seven offshore wind farms that are at the concept/planning phase, the main wind climate and wind power density characteristics are also provided.
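    The two quantities estimated above, mean wind power density at hub height and the availability fraction between cut-in and cut-out speeds, follow directly from a wind speed time series, as the sketch below shows on synthetic Weibull-distributed speeds. The shape and scale parameters are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 1.225                                   # air density, kg/m3
v = 9.0 * rng.weibull(2.0, size=100_000)      # synthetic 80 m wind speeds, m/s

wpd = 0.5 * rho * np.mean(v ** 3)                    # mean wind power density, W/m2
availability = np.mean((v >= 4.0) & (v <= 25.0))     # fraction within cut-in/cut-out limits
print(f"wind power density: {wpd:.0f} W/m2, availability: {availability:.2f}")
```

    Note that the power density uses the mean of the cubed speed, not the cube of the mean speed; the cubic dependence is why areas combining intense winds with low variability are attractive.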

  9. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver

    2007-01-01

    and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprises intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic

  10. Increased Drought Impacts on Temperate Rainforests from Southern South America: Results of a Process-Based, Dynamic Forest Model

    OpenAIRE

    Gutiérrez, Alvaro G.; Armesto, Juan J.; Díaz, M. Francisca; Huth, Andreas

    2014-01-01

    Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incor...

  11. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver

    2007-01-01

    of two parts; the first part includes motivational counselling (7 codes) and the second part comprises intervention, rehabilitation and after-treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic

  12. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  13. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamson, S. [Wisconsin Univ., Madison, WI (United States); Bender, M.A. [Brookhaven National Lab., Upton, NY (United States); Boecker, B.B.; Scott, B.R. [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States). Inhalation Toxicology Research Inst.; Gilbert, E.S. [Pacific Northwest Lab., Richland, WA (United States)

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985 and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation as well as acute and chronic exposure to low-LET beta and gamma radiations is a reasonable extension of the health effects model.

  14. Overview of fuel behaviour and core degradation, based on modelling analyses. Overview of fuel behaviour and core degradation, on the basis of modelling results

    International Nuclear Information System (INIS)

    Massara, Simone

    2013-01-01

    Since the very first hours after the accident at Fukushima-Daiichi, numerical simulations by means of severe accident codes have been carried out, aiming at highlighting the key physical phenomena that allow a correct understanding of the sequence of events and, on a long enough timeline, at improving models and methods in order to reduce the discrepancy between calculated and measured data. A final long-term objective is to support the future decommissioning phase. The presentation summarises some of the available elements on the role of the fuel/cladding-water interaction, which became available only through modelling because of the absence of measured data directly related to the cladding-steam interaction. This presentation also aims at drawing some conclusions on the status of the modelling capabilities of current tools, particularly for the purpose of the foreseen application to ATF fuels: - analyses with MELCOR, MAAP, THALES2 and RELAP5 are presented; - input data are taken from BWR Mark-I Fukushima-Daiichi Units 1, 2 and 3, completed with operational data published by TEPCO. In the case of missing or incomplete data or hypotheses, these are adjusted to reduce the calculation/measurement discrepancy. The behaviour of the accident is well understood on a qualitative level (major trends in RPV pressure and water level, drywell/wetwell and PCV pressure are well represented), allowing a certain level of confidence in the results of the analysis of the zirconium-steam reaction, which is accessible only through numerical simulations. These show an extremely fast sequence of events (here for Unit 1): - the top of the fuel is uncovered in 3 hours (after the tsunami); - the steam line breaks at 6.5 hours. The vessel dries out at 10 hours, with a heat-up rate initially driven by the decay heat only (∼7 K/min) and afterwards by the chemical heat from Zr oxidation (over 30 K/min), associated with massive hydrogen production. It appears that the level of uncertainty increases with

  15. Effects of earthquakes on the deep repository for spent fuel in Sweden based on case studies and preliminary model results

    International Nuclear Information System (INIS)

    Baeckblom, Goeran; Munier, Raymond

    2002-06-01

    their original values within a few months. The density of the buffer around the canister is high enough to prevent liquefaction due to shaking. The predominant brittle deformation of a rock mass will be reactivation of pre-existing fractures. The data emanating from faults intersecting tunnels show that the creation of new fractures is confined to the immediate vicinity of the reactivated faults and that deformation in the host rock decreases rapidly with distance from the fault. By selection of appropriate respect distances, the probability of canister damage due to faulting is further lowered. Data from deep South African mines show that, in an environment with non-existent faults, low fracture densities and high stresses, faults might be generated in a previously unfractured rock mass. The Swedish repository will be located in fractured bedrock, at intermediate depth, 400 - 700 m, where stresses are moderate. The conditions needed to create these peculiar mining-induced features will not prevail in the repository environment. Should these faults be created anyhow, the canister is designed to withstand a shear deformation of at least 0.1 m. This corresponds to a magnitude 6 earthquake along a fault with a length of at least 1 km, which is highly unlikely. The respect distance has to be site- and fault-specific. Field evidence gathered in this study indicates that respect distances may be considerably smaller (tens to hundreds of m) than predicted by numerical modelling (thousands of m). However, the accumulated deformation during repeated future seismic events has to be accounted for

  16. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  17. First Results of using the Process-based Model PROMAB-GIS for Runoff and Bedload Transport Estimation in the Lainbach Torrent Catchment Area (Benediktbeuern, Germany)

    Science.gov (United States)

    Rinderer, M.; Jenewein, S.; Ploner, A.; Sönser, T.

    2003-04-01

    As growing damage potential makes society more and more vulnerable to natural hazards, the pressure on the official authorities responsible for guaranteeing public safety is increasing rapidly. Modern computer technology, e.g. Geographical Information Systems (GIS), can provide remarkable help in assessing the risks resulting from natural hazards. The modelling in PROMAB-GIS, a user-friendly software package based on ESRI ArcView for assessing runoff and bedload transport in torrent catchments, is strongly based on interdisciplinary, process-orientated field investigations. This paper presents results of the application of PROMAB-GIS to estimate the runoff and bedload transport potential of the Lainbach catchment area, which has repeatedly been affected by heavy rain storms triggering remarkable events. The operational steps needed to gain process-orientated, reproducible results for assessing design events in watersheds are highlighted. A key issue in this context is the need for detailed field investigation of the geo-, bio- and hydro-inventory of a catchment area. The second part of the paper presents the model results for design events. The data from the event which caused severe damage in June 1990 provide a perfect basis for the evaluation of the model. The results show the potential of PROMAB-GIS for assessing runoff and bedload transport in alpine torrent systems.

  18. EFFECTS OF COOPERATIVE LEARNING MODEL TYPE STAD JUST-IN TIME BASED ON THE RESULTS OF LEARNING TEACHING PHYSICS COURSE IN PHYSICS SCHOOL IN PHYSICS PROGRAM FACULTY UNIMED

    Directory of Open Access Journals (Sweden)

    Teguh Febri Sudarma

    2013-06-01

    Full Text Available The research aimed to determine: (1) the learning outcomes of students taught with the just-in-time-teaching-based STAD cooperative learning method compared with the STAD cooperative learning method alone, and (2) the physics learning outcomes of students with high learning activity compared with those with low learning activity. The research sample was selected randomly by raffling four classes to obtain two classes. The first class was taught with the just-in-time-teaching-based STAD cooperative learning method, while the second class was taught with the STAD cooperative learning method. The instrument used was a conceptual understanding test of 7 essay questions that had been validated. The average gain value of the learning results of students taught with the just-in-time-teaching-based STAD cooperative learning method was 0.47, higher than the average gain value of students taught with the STAD cooperative learning method. High learning activity and low learning activity gave different learning results; in this case, the average gain value of the learning results with the just-in-time-teaching-based STAD cooperative learning method was 0.48, higher than the average gain value with the STAD cooperative learning method. There was an interaction between the learning model and learning activity on the physics learning results of the students.

  19. Estimating the predictive ability of genetic risk models in simulated data based on published results from genome-wide association studies.

    Science.gov (United States)

    Kundu, Suman; Mihaescu, Raluca; Meijer, Catherina M C; Bakker, Rachel; Janssens, A Cecile J W

    2014-01-01

    There is increasing interest in investigating genetic risk models in empirical studies, but such studies are premature when the expected predictive ability of the risk model is low. We assessed how accurately the predictive ability of genetic risk models can be estimated in simulated data that are created based on the odds ratios (ORs) and frequencies of single-nucleotide polymorphisms (SNPs) obtained from genome-wide association studies (GWASs). We aimed to replicate published prediction studies that reported the area under the receiver operating characteristic curve (AUC) as a measure of predictive ability. We searched GWAS articles for all SNPs included in these models and extracted ORs and risk allele frequencies to construct genotypes and disease status for a hypothetical population. Using these hypothetical data, we reconstructed the published genetic risk models and compared their AUC values to those reported in the original articles. The accuracy of the AUC values varied with the method used for the construction of the risk models. When logistic regression analysis was used to construct the genetic risk model, AUC values estimated by the simulation method were similar to the published values with a median absolute difference of 0.02 [range: 0.00, 0.04]. This difference was 0.03 [range: 0.01, 0.06] and 0.05 [range: 0.01, 0.08] for unweighted and weighted risk scores. The predictive ability of genetic risk models can be estimated using simulated data based on results from GWASs. Simulation methods can be useful to estimate the predictive ability in the absence of empirical data and to decide whether empirical investigation of genetic risk models is warranted.
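    A minimal version of this simulation approach, drawing Hardy-Weinberg genotypes from published allele frequencies, assigning disease through an additive logistic model with the published ORs, and scoring the AUC of the reconstructed risk model, might look as follows. The four SNP frequencies, the ORs and the 10% baseline risk are invented placeholders, not values from any GWAS.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
freqs = np.array([0.30, 0.15, 0.45, 0.22])   # risk-allele frequencies (placeholders)
ors = np.array([1.25, 1.40, 1.15, 1.30])     # per-allele odds ratios (placeholders)
n = 100_000

# Genotypes (0/1/2 risk alleles) under Hardy-Weinberg equilibrium.
G = rng.binomial(2, freqs, size=(n, len(freqs)))

# Disease status from an additive logistic model; intercept set for ~10% baseline risk.
logit = np.log(0.1 / 0.9) + G @ np.log(ors)
disease = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Weighted risk score (log-OR weights), evaluated with the AUC.
print("simulated AUC:", round(roc_auc_score(disease, G @ np.log(ors)), 3))
```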

  20. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    Full Text Available In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to offer detailed information to a numerical fog model. As part of additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operated at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently from the model-related aims of the campaign, the MWRP observations were used to study their capability to work in operational meteorological networks. Over the past decade a growing number of MWRP have been introduced and a user community (MWRnet) was established to encourage activities directed at the setup of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any applications in climatology and numerical weather forecasting. In practice, however, systematic temperature and humidity differences (bias) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by applying a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at any site, even if no radiosondes are available. The motivation of this work is directed to a verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD) for the calculation

  1. Complementing data-driven and physically-based approaches for predictive morphologic modeling: Results and implication from the Red River Basin, Vietnam

    Science.gov (United States)

    Schmitt, R. J.; Bernardi, D.; Bizzi, S.; Castelletti, A.; Soncini-Sessa, R.

    2013-12-01

    During the last 30 years, the delta of the Red River (Song Hong) in northern Vietnam experienced grave morphologic degradation processes which severely impact economic activities and endanger region-wide livelihoods. Rapidly progressing river bed incision, for example, threatens the irrigation of the delta's paddy rice crops, which constitute 20% of Vietnam's annual rice production. Morphologic alteration is related to a drastically changed sediment balance due to major upstream impoundments, sediment mining and land use changes, further aggravated by changing hydro-meteorological conditions. Despite the severe impacts, river morphology has so far not been included in the current efforts to optimize basin-wide water resource planning, for lack of suitable, not overly resource-demanding modeling strategies. This paper assesses the suitability of data-driven models to provide insights into complex hydromorphologic processes and to complement and enrich physically-based modeling strategies; hence, to identify key drivers of morphological change while evaluating the impacts of future socio-economic, management and climate scenarios on river morphology and the resulting effects on key social needs (e.g. water supply, energy production and flood mitigation). The most relevant drivers and time-scales for the considered processes (e.g. incision) - from days to decades - were identified from hydrologic and sedimentologic time-series using a feature-ranking algorithm based on random trees. The feature ranking pointed out bimodal response characteristics, with important contributions of long-to-medium (5 - 15 yrs.) and rather short (10 d - 6 months) timescales. An artificial neural network (ANN), built from the identified variables, subsequently quantified in detail how these temporal components control long-term trends, inter-seasonal fluctuations and day-to-day variations in morphologic processes. Whereas the general trajectory of incision relates, for example, to the overall regional
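    The two-stage data-driven workflow sketched above, ranking candidate drivers with a tree ensemble and then fitting an ANN on the retained variables, can be outlined as below. ExtraTreesRegressor is a stand-in for the random-tree ranking used in the study, and all series are synthetic.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))     # candidate hydrologic/sediment predictors at several lags (synthetic)
y = 2.0 * X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=500)   # synthetic morphologic response (e.g. incision rate)

# Stage 1: rank drivers by impurity-based importance (stand-in for the random-tree ranking).
ranker = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(ranker.feature_importances_)[::-1][:3]

# Stage 2: train an ANN on the top-ranked drivers only.
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X[:, top], y)
print("selected drivers:", top.tolist(), "R2:", round(ann.score(X[:, top], y), 3))
```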

  2. Ocean EcoSystem Modelling Based on Observations from Satellite and In-Situ Data: First Results from the OSMOSIS Project

    Science.gov (United States)

    Rio, M.-H.; Buongiorno-Nardelli, B.; Calmettes, B.; Conchon, A.; Droghei, R.; Guinehut, S.; Larnicol, G.; Lehodey, P.; Matthieu, P. P.; Mulet, S.; Santoleri, R.; Senina, I.; Stum, J.; Verbrugge, N.

    2015-12-01

    Micronekton organisms are both the prey of large ocean predators and themselves predators of the eggs and larvae of many species, including most fishes. The micronekton biomass concentration is therefore a key explanatory variable, usually missing in fish population and ecosystem models, for understanding the individual behaviour and population dynamics of large oceanic predators. In that context, the OSMOSIS (Ocean ecoSystem Modelling based on Observations from Satellite and In-Situ data) ESA project aims to demonstrate the feasibility of, and to prototype, an integrated system going from the synergetic use of many different variables measured from space to the modelling of the distribution of micronektonic organisms. In this paper, we present how data from CRYOSAT, GOCE, SMOS and ENVISAT, together with other non-ESA satellites and in-situ data, can be merged to provide the key variables needed as input to the micronekton model. First results from the optimization of the micronekton model are also presented and discussed.

  3. Agent-Based Modelling of Agricultural Water Abstraction in Response to Climate, Policy, and Demand Changes: Results from East Anglia, UK

    Science.gov (United States)

    Swinscoe, T. H. A.; Knoeri, C.; Fleskens, L.; Barrett, J.

    2014-12-01

    Freshwater is a vital natural resource for multiple needs, such as drinking water for the public, industrial processes, hydropower for energy companies, and irrigation for agriculture. In the UK, crop production is largest in East Anglia, which is at the same time the driest region, with average annual rainfall between 560 and 720 mm (1971 to 2000). Many water catchments of East Anglia are reported as over-licensed or over-abstracted. Freshwater available for agricultural irrigation abstraction in this region is therefore becoming both increasingly scarce, due to competing demands, and increasingly variable and uncertain, due to climate and policy changes. It is vital for water users and policy makers to understand how these factors will affect individual abstractors and water resource management at the system level. We present first results of an Agent-based Model that captures the complexity of this system as individual abstractors interact, learn and adapt to these internal and external changes. The purpose of this model is to simulate which patterns of water resource management emerge at the system level from local interactions, adaptations and behaviours, and which policies lead to a sustainable water resource management system. The model is based on an irrigation abstractor typology derived from a survey in the study area, to capture individual behavioural intentions under a range of water availability scenarios, in addition to farm attributes and demographics. Regional climate change scenarios, current and new abstraction licence reforms by the UK regulator, such as water trading and water shares, and estimated demand increases from other sectors were used as additional input data. Findings from the integrated model provide new understanding of the patterns of water resource management likely to emerge at the system level.
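
    The agent-based structure described here can be illustrated with a very small simulation loop. The agent behaviour, thresholds and availability series below are illustrative assumptions, not the survey-derived typology or scenarios used in the study.

    ```python
    # Minimal agent-based sketch of irrigation abstraction under a licence cap.
    # Agents scale their demand down as water availability falls; all values
    # are illustrative placeholders.
    import random

    random.seed(1)

    class Abstractor:
        def __init__(self, licence, risk_aversion):
            self.licence = licence              # licensed annual volume
            self.risk_aversion = risk_aversion  # higher -> cuts back sooner

        def demand(self, availability):
            factor = min(1.0, availability / self.risk_aversion)
            return self.licence * factor

    agents = [Abstractor(licence=random.uniform(50, 150),
                         risk_aversion=random.uniform(0.5, 1.5))
              for _ in range(100)]

    for year, availability in enumerate([1.0, 0.8, 0.6, 0.9], start=1):
        total = sum(a.demand(availability) for a in agents)
        print(f"year {year}: availability={availability:.1f} "
              f"total abstraction={total:.0f}")
    ```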

  4. Assessing knowledge ambiguity in the creation of a model based on expert knowledge and comparison with the results of a landscape succession model in central Labrador. Chapter 10.

    Science.gov (United States)

    Frederik Doyon; Brian Sturtevant; Michael J. Papaik; Andrew Fall; Brian Miranda; Daniel D. Kneeshaw; Christian Messier; Marie-Josee. Fortin; Patrick M.A. James

    2012-01-01

    Sustainable forest management (SFM) recognizes that the spatial and temporal patterns generated at different scales by natural landscape and stand dynamics processes should serve as a guide for managing the forest within its range of natural variability. Landscape simulation modeling is a powerful tool that can help encompass such complexity and support SFM planning....

  5. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    Full Text Available The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that was preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-Earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in many respects a more advanced model than CTIM, it misses these localized enhancements. Unlike the empirical input models of CTIPe, OpenGGCM-CTIM more faithfully reproduces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset.

  6. Anthropogenic driven modern recharge and solute flux to arid basin aquifers: Results and implications for sustainability based on field observations and computational modeling

    Science.gov (United States)

    Robertson, W. M.; Sharp, J. M.

    2013-12-01

    Two conclusions can be drawn regarding groundwater resources in this system based upon the trends in groundwater NO3- concentrations, vadose zone core data, and results of the net infiltration models: 1) there may be more recharge to the basins than previously estimated, and 2) there is a potential long-term concern for water quality. Due to the thick unsaturated zone in much of the system, long travel times are expected between the base of the root zone and the water table. It is unclear whether the flux of NO3- and Cl- to the groundwater has peaked or whether effects from the alteration of the natural vegetation regime will continue for years to come.

  7. Revisiting Runoff Model Calibration: Airborne Snow Observatory Results Allow Improved Modeling Results

    Science.gov (United States)

    McGurk, B. J.; Painter, T. H.

    2014-12-01

    Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of mountain snowpacks have been available to compare with modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration that was based on 30 years of inflow to Hetch Hetchy produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons of observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.
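
    A common way to formalize the constraint described above is to calibrate against a joint objective that scores both streamflow and snow state. A minimal sketch, assuming a Nash-Sutcliffe-style efficiency for each term and an illustrative 50/50 weighting; this is not the actual PRMS calibration procedure.

    ```python
    # Sketch of a joint calibration objective that rewards both runoff fit
    # and snow-water-equivalent (SWE) fit; weights and data are illustrative.
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def joint_objective(q_obs, q_sim, swe_obs, swe_sim, w_snow=0.5):
        # Maximize a weighted sum of runoff NSE and SWE NSE.
        return (1 - w_snow) * nse(q_obs, q_sim) + w_snow * nse(swe_obs, swe_sim)

    q_obs, q_sim = [10, 20, 35, 15], [12, 18, 30, 17]           # flows
    swe_obs, swe_sim = [300, 250, 120, 20], [280, 260, 150, 40]  # SWE (mm)
    print(f"joint score = {joint_objective(q_obs, q_sim, swe_obs, swe_sim):.3f}")
    ```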

  8. Development of a patient and institutional-based model for estimation of operative times for robot-assisted radical cystectomy: results from the International Robotic Cystectomy Consortium.

    Science.gov (United States)

    Hussein, Ahmed A; May, Paul R; Ahmed, Youssef E; Saar, Matthias; Wijburg, Carl J; Richstone, Lee; Wagner, Andrew; Wilson, Timothy; Yuh, Bertram; Redorta, Joan P; Dasgupta, Prokar; Kawa, Omar; Khan, Mohammad S; Menon, Mani; Peabody, James O; Hosseini, Abolfazl; Gaboardi, Franco; Pini, Giovannalberto; Schanne, Francis; Mottrie, Alexandre; Rha, Koon-Ho; Hemal, Ashok; Stockle, Michael; Kelly, John; Tan, Wei S; Maatman, Thomas J; Poulakis, Vassilis; Kaouk, Jihad; Canda, Abdullah E; Balbay, Mevlana D; Wiklund, Peter; Guru, Khurshid A

    2017-11-01

    To design a methodology to predict operative times for robot-assisted radical cystectomy (RARC) based on variation in institutional, patient, and disease characteristics to help in operating room scheduling and quality control. The model included preoperative variables and therefore can be used for prediction of surgical times: institutional volume, age, gender, body mass index, American Society of Anesthesiologists score, history of prior surgery and radiation, clinical stage, neoadjuvant chemotherapy, type and technique of diversion, and the extent of lymph node dissection. A conditional inference tree method was used to fit a binary decision tree predicting operative time. Permutation tests were performed to determine the variables having the strongest association with surgical time. The data were split at the value of this variable resulting in the largest difference in means for the surgical time across the split. This process was repeated recursively on the resultant data sets until the permutation tests showed no significant association with operative time. In all, 2,134 procedures were included. The variable most strongly associated with surgical time was type of diversion, with ileal conduits being 70 min shorter (P < 0.001). Institutional volume (>66 RARCs) was also important, with higher-volume institutions being 55 min shorter (P < 0.001). The regression tree output was in the form of box plots that show the median and ranges of surgical times according to the patient, disease, and institutional characteristics. We developed a method to estimate operative times for RARC based on patient, disease, and institutional metrics that can help operating room scheduling for RARC. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
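
    The recursive splitting procedure is essentially a regression tree. The sketch below reproduces the idea with scikit-learn's CART implementation as a stand-in (conditional inference trees with permutation tests are typically fitted with R's partykit::ctree); the data and effect sizes are synthetic, loosely mimicking the reported splits.

    ```python
    # Analogous sketch of a binary regression tree for operative time.
    # Synthetic data; variable names and effect sizes are illustrative only.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(0)
    n = 400
    neobladder = rng.integers(0, 2, n)       # 1 = neobladder, 0 = ileal conduit
    high_volume = rng.integers(0, 2, n)      # 1 = institution with >66 RARCs
    op_time = (360 + 70 * neobladder - 55 * high_volume
               + rng.normal(0, 20, n))       # minutes

    X = np.column_stack([neobladder, high_volume])
    tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=40).fit(X, op_time)
    print(export_text(tree, feature_names=["neobladder", "high_volume"]))
    ```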

  9. Population Physiologically-Based Pharmacokinetic Modeling for the Human Lactational Transfer of PCB 153 with Consideration of Worldwide Human Biomonitoring Results

    Energy Technology Data Exchange (ETDEWEB)

    Redding, Laurel E.; Sohn, Michael D.; McKone, Thomas E.; Wang, Shu-Li; Hsieh, Dennis P. H.; Yang, Raymond S. H.

    2008-03-01

    We developed a physiologically based pharmacokinetic model of PCB 153 in women and predicted its transfer via lactation to infants. The model is the first human, population-scale lactational model for PCB 153. Data in the literature provided estimates for model development and for performance assessment. Physiological parameters were taken from a cohort in Taiwan and from reference values in the literature. We estimated partition coefficients based on chemical structure and the lipid content of various body tissues. Using exposure data from Japan, we predicted the acquired body burden of PCB 153 at an average childbearing age of 25 years and compared the predictions to measurements from studies in multiple countries. Forward-model predictions agree well with human biomonitoring measurements, as represented by summary statistics and uncertainty estimates. The model successfully describes the range of possible PCB 153 dispositions in maternal milk, suggesting a promising option for back-estimating doses for various populations. One example of reverse dosimetry modeling was attempted using our PBPK model for possible exposure scenarios in Canadian Inuits, who had the highest level of PCB 153 in their milk in the world.
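
    The core bookkeeping of such a model is a mass balance integrated over decades of exposure. A greatly simplified sketch, collapsing the physiologically based compartments into a single pool with a milk-excretion pathway; all rate constants are hypothetical placeholders, not parameters of the published model.

    ```python
    # Greatly simplified sketch of lactational transfer: a one-compartment
    # body burden with metabolic elimination plus, during lactation, an
    # excretion-to-milk pathway. All rate constants are hypothetical.
    from scipy.integrate import solve_ivp

    k_in = 1.0       # dietary intake (ng/day), hypothetical
    k_elim = 0.001   # metabolic elimination rate (1/day), hypothetical
    k_milk = 0.01    # excretion-to-milk rate while lactating (1/day)

    def body_burden(t, y, lactating):
        burden = y[0]
        loss = k_elim * burden + (k_milk * burden if lactating else 0.0)
        return [k_in - loss]

    # Accumulate to childbearing age (25 y), then 6 months of lactation.
    pre = solve_ivp(body_burden, (0, 25 * 365), [0.0], args=(False,))
    post = solve_ivp(body_burden, (0, 180), [pre.y[0, -1]], args=(True,))
    print(f"burden at delivery: {pre.y[0, -1]:.0f} ng")
    print(f"burden after 6 months of lactation: {post.y[0, -1]:.0f} ng")
    ```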

  10. Testing a hydraulic trait based model of stomatal control: results from a controlled drought experiment on aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas)

    Science.gov (United States)

    Love, D. M.; Venturas, M.; Sperry, J.; Wang, Y.; Anderegg, W.

    2017-12-01

    Modeling approaches for tree stomatal control often rely on empirical fitting to provide accurate estimates of whole-tree transpiration (E) and assimilation (A), which are limited in their predictive power by the data envelope used to calibrate model parameters. Optimization-based models hold promise as a means to predict stomatal behavior under novel climate conditions. We designed an experiment to test a hydraulic-trait-based optimization model, which predicts stomatal conductance from a gain/risk approach. Optimal stomatal conductance is expected to maximize the potential carbon gain by photosynthesis and minimize the risk to hydraulic transport imposed by cavitation. The modeled risk to the hydraulic network is assessed from cavitation vulnerability curves, a commonly measured physiological trait in woody plant species. Over a growing season, garden-grown plots of aspen (Populus tremuloides, Michx.) and ponderosa pine (Pinus ponderosa, Douglas) were subjected to three distinct drought treatments (moderate, severe, severe with rehydration) relative to a control plot to test model predictions. Model outputs of predicted E, A, and xylem pressure can be directly compared to both continuous data (whole-tree sap flux, soil moisture) and point measurements (leaf-level E, A, xylem pressure). The model also predicts levels of whole-tree hydraulic impairment expected to increase mortality risk. This threshold is used to estimate survivorship in the drought treatment plots. The model can be run at two scales, either entirely from climate (meteorological inputs, irrigation) or using the physiological measurements as a starting point. These data will be used to study model performance and utility, and to aid in developing the model for larger-scale applications.
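
    The gain/risk logic can be written down in a few lines: pick the conductance that maximizes normalized carbon gain minus normalized hydraulic risk, with the risk term read off a vulnerability curve. The functional forms and parameters below are illustrative assumptions, not the published model.

    ```python
    # Schematic gain/risk stomatal optimization: choose conductance g that
    # maximizes (normalized photosynthetic gain) - (normalized hydraulic risk).
    import numpy as np

    g = np.linspace(0.01, 0.4, 400)     # stomatal conductance (mol m-2 s-1)

    gain = g / (g + 0.1)                # saturating A(g), normalized to 0-1
    pressure = -0.5 - 6.0 * g           # xylem pressure falls as E rises (MPa)
    p50, slope = -2.5, 2.0              # vulnerability-curve parameters
    # Fractional loss of hydraulic conductance from the vulnerability curve:
    risk = 1.0 / (1.0 + np.exp(slope * (pressure - p50)))

    profit = gain - risk
    g_opt = g[np.argmax(profit)]
    print(f"optimal g = {g_opt:.3f} mol m-2 s-1, profit = {profit.max():.2f}")
    ```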

  11. A points-based algorithm for prognosticating clinical outcome of Chiari malformation Type I with syringomyelia: results from a predictive model analysis of 82 surgically managed adult patients.

    Science.gov (United States)

    Thakar, Sumit; Sivaraju, Laxminadh; Jacob, Kuruthukulangara S; Arun, Aditya Atal; Aryan, Saritha; Mohan, Dilip; Sai Kiran, Narayanam Anantha; Hegde, Alangar S

    2018-01-01

    OBJECTIVE Although various predictors of postoperative outcome have been previously identified in patients with Chiari malformation Type I (CMI) with syringomyelia, there is no known algorithm for predicting a multifactorial outcome measure in this widely studied disorder. Using one of the largest preoperative variable arrays used so far in CMI research, the authors attempted to generate a formula for predicting postoperative outcome. METHODS Data from the clinical records of 82 symptomatic adult patients with CMI and altered hindbrain CSF flow who were managed with foramen magnum decompression, C-1 laminectomy, and duraplasty over an 8-year period were collected and analyzed. Various preoperative clinical and radiological variables in the 57 patients who formed the study cohort were assessed in a bivariate analysis to determine their ability to predict clinical outcome (as measured on the Chicago Chiari Outcome Scale [CCOS]) and the resolution of syrinx at the last follow-up. The variables that were significant in the bivariate analysis were further analyzed in a multiple linear regression analysis. Different regression models were tested, and the model with the best prediction of CCOS was identified and internally validated in a subcohort of 25 patients. RESULTS There was no correlation between CCOS score and syrinx resolution (p = 0.24) at a mean ± SD follow-up of 40.29 ± 10.36 months. Multiple linear regression analysis revealed that the presence of gait instability, obex position, and the M-line-fourth ventricle vertex (FVV) distance correlated with CCOS score, while the presence of motor deficits was associated with poor syrinx resolution (p ≤ 0.05). The algorithm generated from the regression model demonstrated good diagnostic accuracy (area under curve 0.81), with a score of more than 128 points demonstrating 100% specificity for clinical improvement (CCOS score of 11 or greater). The model had excellent reliability (κ = 0.85) and was validated with

  12. Burden and outcomes of pressure ulcers in cancer patients receiving the Kerala model of home based palliative care in India: Results from a prospective observational study

    Directory of Open Access Journals (Sweden)

    Biji M Sankaran

    2015-01-01

    Full Text Available Aim: To report the prevalence and outcomes of pressure ulcers (PUs) seen in a cohort of cancer patients requiring home-based palliative care. Materials and Methods: All patients referred for home care were eligible for this prospective observational study, provided they were living within a distance of 35 km from the institute and gave informed consent. During each visit, caregivers were trained and educated in providing nursing care for the patient. Dressing material for PU care was provided to all patients free of cost and care methods were demonstrated. Factors influencing the occurrence and healing of PUs were analyzed using logistic regression. The duration for healing of PUs was calculated using the Kaplan-Meier method. P values < 0.05 were taken as significant. Results: Twenty-one of 108 (19.4%) enrolled patients had PUs at the start of home care services. None of the patients developed new PUs during the course of home care. Complete healing of PUs was seen in 9 (42.9%) patients. The median duration for healing of PUs was found to be 56 days. The median expenditure incurred by patients with PUs was Rs. 2323.40, with a median daily expenditure of Rs. 77.56. Conclusions: The present model of home care service delivery was found to be effective in the prevention and management of PUs. The high prevalence of PUs in this cohort indicates a need for greater awareness of this complication. Clinical Trial Registry Number: CTRI/2014/03/004477
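
    The time-to-healing estimate above is a standard Kaplan-Meier calculation. A minimal sketch with the lifelines package, using synthetic durations and censoring flags rather than the study data.

    ```python
    # Kaplan-Meier estimate of time-to-healing for pressure ulcers.
    # Durations (days) and healing/censoring flags below are synthetic.
    from lifelines import KaplanMeierFitter

    durations = [30, 45, 56, 60, 70, 90, 40, 55, 80, 100]  # days of follow-up
    healed = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]                # 1 = healed, 0 = censored

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=healed)
    print(f"median time to healing: {kmf.median_survival_time_} days")
    print(kmf.survival_function_.head())
    ```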

  13. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures.

  14. Evaluation of a comprehensive EHR based on the DeLone and McLean model for IS success: approach, results, and success factors.

    Science.gov (United States)

    Bossen, Claus; Jensen, Lotte Groth; Udsen, Flemming Witt

    2013-10-01

    difficult, but was required because a key role of the evaluation was to inform decision-making upon enrollment at other hospitals and to systematically identify barriers in this respect. The strength of the evaluation is the mixed-methods approach. Further, the evaluation was based on assessments from staff in two departments that comprise around 50% of hospital staff. A weakness may be that staff assessment plays a major role in the interviews and the survey; these are, however, supplemented by performance data and observation. Also, the evaluation primarily reports upon the dimension 'user satisfaction', since use of the EHR is mandatory. Finally, generalizability may be low, since the evaluation was not based on a validated survey. All in all, however, the evaluation proposes an evaluation design for constrained circumstances. Despite inherent limitations, evaluation of a comprehensive EHR shortly after implementation may be necessary, can be conducted, and may inform political decision making. The updated DeLone and McLean framework was constructive in the overall design of the evaluation of the EHR implementation and allowed the model to be adapted to the health care domain by being methodologically flexible. The mixed-methods case study produced valid and reliable results, and was accepted by staff, system providers, and political decision makers. The successful implementation may be attributed to the configurability of the EHR and to factors such as an experienced, competent implementation organization at the hospital, upgraded soft- and hardware, and a high degree of user involvement. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Linkage of PRA models. Phase 1, Results

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining "linking" analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute.

  16. Incorporating spatial dose metrics in machine learning-based normal tissue complication probability (NTCP) models of severe acute dysphagia resulting from head and neck radiotherapy.

    Science.gov (United States)

    Dean, Jamie; Wong, Kee; Gay, Hiram; Welsh, Liam; Jones, Ann-Britt; Schick, Ulricke; Oh, Jung Hun; Apte, Aditya; Newbold, Kate; Bhide, Shreerang; Harrington, Kevin; Deasy, Joseph; Nutting, Christopher; Gulliford, Sarah

    2018-01-01

    Severe acute dysphagia commonly results from head and neck radiotherapy (RT). A model enabling prediction of severity of acute dysphagia for individual patients could guide clinical decision-making. Statistical associations between RT dose distributions and dysphagia could inform RT planning protocols aiming to reduce the incidence of severe dysphagia. We aimed to establish such a model and associations incorporating spatial dose metrics. Models of severe acute dysphagia were developed using pharyngeal mucosa (PM) RT dose (dose-volume and spatial dose metrics) and clinical data. Penalized logistic regression (PLR), support vector classification and random forest classification (RFC) models were generated and internally (173 patients) and externally (90 patients) validated. These were compared using area under the receiver operating characteristic curve (AUC) to assess performance. Associations between treatment features and dysphagia were explored using RFC models. The PLR model using dose-volume metrics (PLRstandard) performed as well as the more complex models and had very good discrimination (AUC = 0.82) on external validation. The features with the highest RFC importance values were the volume, length and circumference of PM receiving 1 Gy/fraction and higher. The volumes of PM receiving 1 Gy/fraction or higher should be minimized to reduce the incidence of severe acute dysphagia.
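
    A penalized logistic regression NTCP model of this kind is compact to express. The sketch below uses scikit-learn's L2-penalized LogisticRegression on synthetic dose-volume metrics; the feature names, data and penalty strength are illustrative assumptions, not the published model.

    ```python
    # Sketch of a penalized logistic regression NTCP model: dose-volume
    # metrics in, probability of severe acute dysphagia out. Synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 263
    v1gy = rng.uniform(0, 100, n)    # % PM volume receiving >= 1 Gy/fraction
    length = rng.uniform(2, 12, n)   # irradiated PM length (cm)
    X = np.column_stack([v1gy, length])

    # Synthetic outcome: risk rises with both metrics
    p = 1 / (1 + np.exp(-(0.04 * v1gy + 0.2 * length - 4)))
    y = rng.binomial(1, p)

    clf = LogisticRegression(penalty="l2", C=0.5).fit(X, y)
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
    print(f"training AUC: {auc:.2f}")
    ```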

  17. Incorporating spatial dose metrics in machine learning-based normal tissue complication probability (NTCP models of severe acute dysphagia resulting from head and neck radiotherapy

    Directory of Open Access Journals (Sweden)

    Jamie Dean

    2018-01-01

    Full Text Available Severe acute dysphagia commonly results from head and neck radiotherapy (RT). A model enabling prediction of severity of acute dysphagia for individual patients could guide clinical decision-making. Statistical associations between RT dose distributions and dysphagia could inform RT planning protocols aiming to reduce the incidence of severe dysphagia. We aimed to establish such a model and associations incorporating spatial dose metrics. Models of severe acute dysphagia were developed using pharyngeal mucosa (PM) RT dose (dose-volume and spatial dose metrics) and clinical data. Penalized logistic regression (PLR), support vector classification and random forest classification (RFC) models were generated and internally (173 patients) and externally (90 patients) validated. These were compared using area under the receiver operating characteristic curve (AUC) to assess performance. Associations between treatment features and dysphagia were explored using RFC models. The PLR model using dose-volume metrics (PLRstandard) performed as well as the more complex models and had very good discrimination (AUC = 0.82) on external validation. The features with the highest RFC importance values were the volume, length and circumference of PM receiving 1 Gy/fraction and higher. The volumes of PM receiving 1 Gy/fraction or higher should be minimized to reduce the incidence of severe acute dysphagia.

  18. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  19. Air-Sea Exchange of Legacy POPs in the North Sea Based on Results of Fate and Transport, and Shelf-Sea Hydrodynamic Ocean Models

    Directory of Open Access Journals (Sweden)

    Kieran O'Driscoll

    2014-04-01

    Full Text Available The air-sea exchange of two legacy persistent organic pollutants (POPs), γ-HCH and PCB 153, in the North Sea is presented and discussed using results of regional fate and transport and shelf-sea hydrodynamic ocean models for the period 1996–2005. Air-sea exchange occurs through gas exchange (deposition and volatilization), wet deposition and dry deposition. Atmospheric concentrations are interpolated into the model domain from results of the EMEP MSC-East multi-compartmental model (Gusev et al., 2009). The North Sea is net depositional for γ-HCH and is dominated by gas deposition, with notable seasonal variability and a downward trend over the 10-year period. Volatilization rates of γ-HCH are generally a factor of 2–3 less than gas deposition in winter, spring and summer, but greater in autumn, when the North Sea is net volatilizational. A downward trend in fugacity ratios is found, since gas deposition is decreasing faster than volatilization. The North Sea is net volatilizational for PCB 153, with the highest ratios of volatilization to deposition found in the areas surrounding polluted British and continental river sources. Large quantities of PCB 153 entering through rivers lead to very high local rates of volatilization.

  20. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  1. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases accuracy: the classifier is trained on each cluster, which has reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset.
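
    The scheme is straightforward to prototype: cluster the training documents, then fit one classifier per cluster and route each new document through its nearest cluster. A toy sketch with scikit-learn; the corpus, labels and cluster count are illustrative, not the Reuters-21578 / 20 Newsgroups experiments.

    ```python
    # Toy sketch of cluster-based text classification: KMeans partitions the
    # training documents, and a simple classifier is trained per cluster on
    # that cluster's (fewer, effectively lower-dimensional) examples.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["transfer money account", "wire funds bank", "football match score",
            "league goal win", "bank loan rate", "team plays final"]
    labels = [0, 0, 1, 1, 0, 1]       # 0 = finance, 1 = sport (toy labels)

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    per_cluster = {}                  # one classifier (or constant) per cluster
    for c in set(km.labels_):
        idx = [i for i, lab in enumerate(km.labels_) if lab == c]
        ys = [labels[i] for i in idx]
        per_cluster[c] = (LogisticRegression().fit(X[idx], ys)
                          if len(set(ys)) > 1 else ys[0])

    test = vec.transform(["bank account rate"])
    c = int(km.predict(test)[0])
    model = per_cluster[c]
    pred = model if isinstance(model, int) else int(model.predict(test)[0])
    print(f"routed to cluster {c}, predicted class {pred}")
    ```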

  2. Results-based Rewards - Leveraging Wage Increases?

    DEFF Research Database (Denmark)

    Bregn, Kirsten

    2005-01-01

    A good seven years ago, as part of a large-scale pay reform, the Danish public sector introduced results-based rewards (RBR), i.e. a pay component awarded for achieving or exceeding targets set in advance. RBR represent a possibility for combining wage-earners' interests in higher wages with a

  3. Some results regarding the comparison of the Earth's atmospheric models

    Directory of Open Access Journals (Sweden)

    Šegan S.

    2005-01-01

    Full Text Available In this paper we examine air densities derived from our realization of aeronomic atmosphere models based on accelerometer measurements from satellites in low Earth orbit (LEO). Using the adapted algorithms we derive comparison parameters. The first results concerning the adjustment of the aeronomic models to the total-density model are given.

  4. Blood gas sample spiking with total parenteral nutrition, lipid emulsion, and concentrated dextrose solutions as a model for predicting sample contamination based on glucose result.

    Science.gov (United States)

    Jara-Aguirre, Jose C; Smeets, Steven W; Wockenfus, Amy M; Karon, Brad S

    2018-03-16

    Evaluate the effects of blood gas sample contamination with total parenteral nutrition (TPN)/lipid emulsion and dextrose 50% (D50) solutions on blood gas and electrolyte measurement; and determine whether glucose concentration can predict blood gas sample contamination with TPN/lipid emulsion or D50. Residual lithium heparin arterial blood gas samples were spiked with TPN/lipid emulsion (0 to 15%) and D50 solutions (0 to 2.5%). Blood gases (pH, pCO2, pO2), electrolytes (Na+, K+, ionized calcium) and hemoglobin were measured with a Radiometer ABL90. Glucose concentration was measured in separated plasma by a Roche Cobas c501. Chart review of neonatal blood gas results with glucose >300 mg/dL (>16.65 mmol/L) over a seven-month period was performed to determine whether repeat (within 4 h) blood gas results suggested pre-analytical errors in blood gas results. Results were used to determine whether a glucose threshold could predict contamination resulting in blood gas and electrolyte results with greater than laboratory-defined allowable error. Samples spiked with 5% or more TPN/lipid emulsion solution or 1% D50 showed glucose concentrations >500 mg/dL (>27.75 mmol/L) and produced blood gas (pH, pO2, pCO2) results with greater than laboratory-defined allowable error. TPN/lipid emulsion, but not D50, produced greater than allowable error in electrolyte (Na+, K+, Ca++, Hb) results at these concentrations. Based on chart review of 144 neonatal blood gas results with glucose >250 mg/dL received over seven months, four of ten neonatal intensive care unit (NICU) patients with glucose results >500 mg/dL and repeat blood gas results within 4 h had results highly suggestive of pre-analytical error. Only 3 of 36 NICU patients with glucose results of 300-500 mg/dL and repeat blood gas results within 4 h had clear pre-analytical errors in blood gas results. Glucose concentration can be used as an indicator of significant blood sample contamination with either TPN

  5. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
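
    In Python, the recommended workflow (fit, then inspect predicted probabilities and marginal effects rather than raw coefficients) might look as follows. The sketch assumes statsmodels' MNLogit; the data and entry-mode coding are synthetic illustrations, not the article's application.

    ```python
    # Sketch of MLM interpretation via predicted probabilities and
    # average marginal effects. Entry modes 0/1/2 are illustrative
    # stand-ins (e.g. export, joint venture, subsidiary).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300
    size = rng.normal(size=n)               # firm size (standardized)
    experience = rng.normal(size=n)         # international experience
    util = np.column_stack([np.zeros(n),    # base category
                            0.8 * size,
                            0.5 * size + 0.9 * experience])
    probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
    y = np.array([rng.choice(3, p=p) for p in probs])

    X = sm.add_constant(np.column_stack([size, experience]))
    res = sm.MNLogit(y, X).fit(disp=False)
    print(res.predict(X)[:3])               # predicted probabilities per mode
    print(res.get_margeff().summary())      # average marginal effects
    ```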

  6. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  7. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  8. Model for the accumulation of solar wind radiation damage effects in lunar dust grains, based on recent results concerning implantation and erosion effects

    Energy Technology Data Exchange (ETDEWEB)

    Borg, J.; Bibring, J.P.; Cowsik, G.; Langevin, Y.; Maurette, M.

    1983-02-15

    In this paper we present our most recent results on ion implantation and erosion effects, intended to reproduce the superficial amorphous layers of radiation damage observed with a high-voltage electron microscope on μm-sized grains extracted from the lunar regolith, which result from the exposure of the grains to the solar wind. We next outline theoretical computations which yield the thickness distribution of such amorphous layers as a function of the exposure time of the grains at the surface of the moon, the He/H ratio, and the speed distribution in the solar wind. According to this model, the position of the peak in the solar wind speed distribution is the major parameter controlling the thickness of the amorphous layer.

  9. Evaluation of a comprehensive EHR based on the DeLone and McLean model for IS Success: Approach, Results, and Success Factors

    DEFF Research Database (Denmark)

    Bossen, Claus; Jensen, Lotte Groth; Udsen, Flemming Witt

    2013-01-01

    Objective: The article describes the methodological approach to, and results of, an evaluation of a comprehensive electronic health record (EHR) in the shakedown phase, shortly after its implementation at a regional hospital in Denmark. Design: A formative evaluation based on a mixed-methods case study, designed to be interactive and concurrent, was conducted at two hospital departments based on the updated DeLone and McLean framework for evaluating information systems success. Methods: To ascertain user assessments of the EHR, we distributed a questionnaire two months after implementation. Results: Overall, staff had positive experiences with the EHR and its operational reliability, response time, login and support. Performance was acceptable. Medical secretaries found the use of the patient administration module cumbersome, and physicians found the establishment of the overview of professionally

  10. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several indoor positioning technologies. Focusing on RFID-based positioning, an RFID-specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations
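
    To illustrate the role of the reader deployment graph, the sketch below cleans a raw RFID reading sequence by keeping only moves that are consistent with reader adjacency. The graph, reading sequence and filtering rule are illustrative placeholders, not the paper's algorithms.

    ```python
    # Toy refinement of raw RFID readings against a reader deployment graph:
    # drop detections that are not topologically reachable from the previous
    # accepted reading.
    deployment_graph = {          # adjacency between reader locations
        "R1": {"R2"},
        "R2": {"R1", "R3"},
        "R3": {"R2", "R4"},
        "R4": {"R3"},
    }

    raw_readings = ["R1", "R1", "R2", "R4", "R3", "R4"]  # "R4" after "R2" is noise

    def refine(readings, graph):
        track = [readings[0]]
        for r in readings[1:]:
            if r == track[-1]:
                continue                       # duplicate detection
            if r in graph[track[-1]]:
                track.append(r)                # topologically consistent move
            # else: drop reading inconsistent with the deployment graph
        return track

    print(refine(raw_readings, deployment_graph))  # ['R1', 'R2', 'R3', 'R4']
    ```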

  11. Three dimensional model-based analysis of the lenticulostriate arteries and identification of the vessels correlated to the infarct area: preliminary results.

    Science.gov (United States)

    Kang, Chang-Ki; Wörz, Stefan; Liao, Wei; Park, Chan-A; Kim, Young-Bo; Park, Cheol-Wan; Lee, Young-Bae; Rohr, Karl; Cho, Zang-Hee

    2012-10-01

    Small vessel diseases have been studied noninvasively with magnetic resonance imaging. Direct observation or visualization of the microvessel connected to an infarct, however, was not possible due to the limited resolution. Hence, one could not determine whether vessel occlusion or abnormal narrowing is the cause of an infarct. In this report, we demonstrate that the small vessel related to an infarct can be detected using ultra-high-field (7 T) magnetic resonance imaging and a three-dimensional image analysis and modeling technique for microvessels, which thereby enables us to quantify the vessel morphology directly, that is, to visualize the vessel that is related to the infarct. We compared vessels of selected stroke patients, who had recovered from stroke, with vessels from typical stroke patients, who had after-effects such as motor weakness, and from age-matched healthy subjects to demonstrate the potential of the technique. The experimental results show that typical stroke patients had overall degradation or loss of small vessels, compared with the selected patients as well as healthy subjects. The selected patients, however, had only minimal loss of vessels, except for one vessel located close to the infarct area. These preliminary results demonstrated that 7 T magnetic resonance imaging together with a three-dimensional image analysis and modeling technique could provide information for detection of the vessel related to an infarct. In addition, three-dimensional image analysis and modeling of vessels could further provide quantitative information on microvessel structures, comprising diameter, length and tortuosity. © 2011 The Authors. International Journal of Stroke © 2011 World Stroke Organization.

  12. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for the creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used

  13. [The PROPRESE trial: results of a new health care organizational model in primary care for patients with chronic coronary heart disease based on a multifactorial intervention].

    Science.gov (United States)

    Ruescas-Escolano, Esther; Orozco-Beltran, Domingo; Gaubert-Tortosa, María; Navarro-Palazón, Ana; Cordero-Fort, Alberto; Navarro-Pérez, Jorge; Carratalá-Munuera, Concepción; Pertusa-Martínez, Salvador; Soler-Bahilo, Enrique; Brotons-Muntó, Francisco; Bort-Cubero, Jose; Núñez-Martínez, Miguel A; Bertomeu-Martínez, Vicente; López-Pineda, Adriana; Gil-Guillén, Vicente F

    2014-06-01

    Comparison of the results from EUROASPIRE I to EUROASPIRE III, in patients with coronary heart disease, shows that the prevalence of uncontrolled risk factors remains high. The aim of the study was to evaluate the effectiveness of a new multifactorial intervention in order to improve health care for chronic coronary heart disease patients in primary care. In this randomized clinical trial with a 1-year follow-up period, we recruited patients with a diagnosis of coronary heart disease (145 for the intervention group and 1461 for the control group). An organizational intervention on the patient-professional relationship (centered on the Chronic Care Model, the Stanford Expert Patient Programme and the Kaiser Permanente model) and a formative strategy for professionals were carried out. The main outcomes were smoking control, low-density lipoprotein cholesterol (LDL-C), systolic blood pressure (SBP) and diastolic blood pressure (DBP). A multivariate analysis was performed. The characteristics of patients were: age (68.4±11.8 years), male (71.6%), having diabetes mellitus (51.3%), dyslipidemia (68.5%), arterial hypertension (76.7%), non-smokers (76.1%); LDL-C … A new health care organizational model for chronic patients, focused on primary care and involving patients in medical decision making, improves cardiovascular risk factor control (smoking, LDL-C and SBP). Chronic care strategies may be an effective tool to help clinicians involve patients with a diagnosis of CHD to reach better outcomes. Copyright © 2014 Elsevier España, S.L. All rights reserved.

  14. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  15. A Proficiency-Based Progression Training Curriculum Coupled With a Model Simulator Results in the Acquisition of a Superior Arthroscopic Bankart Skill Set.

    Science.gov (United States)

    Angelo, Richard L; Ryu, Richard K N; Pedowitz, Robert A; Beach, William; Burns, Joseph; Dodds, Julie; Field, Larry; Getelman, Mark; Hobgood, Rhett; McIntyre, Louis; Gallagher, Anthony G

    2015-10-01

    To determine the effectiveness of proficiency-based progression (PBP) training using simulation, both compared with the same training without proficiency requirements and compared with a traditional resident course, for learning to perform an arthroscopic Bankart repair (ABR). In a prospective, randomized, blinded study, 44 postgraduate year 4 or 5 orthopaedic residents from 21 Accreditation Council for Graduate Medical Education-approved US orthopaedic residency programs were randomly assigned to 1 of 3 skills training protocols for learning to perform an ABR: group A, traditional (routine Arthroscopy Association of North America Resident Course) (control, n = 14); group B, simulator (modified curriculum adding a shoulder model simulator) (n = 14); or group C, PBP (PBP plus the simulator) (n = 16). At the completion of training, all subjects performed a 3-suture-anchor ABR on a cadaveric shoulder, which was videotaped and scored in blinded fashion with the use of previously validated metrics. The PBP-trained group (group C) made 56% fewer objectively assessed errors than the traditionally trained group (group A) (P = .011) and 41% fewer than group B (P = .049) (both comparisons were statistically significant). The proficiency benchmark was achieved on the final repair by 68.7% of participants in group C compared with 36.7% in group B and 28.6% in group A. When compared with group A, group B participants were 1.4 times, group C participants were 5.5 times, and group C (PBP) participants who met all intermediate proficiency benchmarks were 7.5 times as likely to achieve the final proficiency benchmark. A PBP training curriculum and protocol coupled with the use of a shoulder model simulator and previously validated metrics produces a superior arthroscopic Bankart skill set when compared with traditional and simulator-enhanced training methods. Surgical training combining PBP and a simulator is efficient and effective. Patient safety could be improved if

  16. Employment Effects of Renewable Energy Expansion on a Regional Level—First Results of a Model-Based Approach for Germany

    Directory of Open Access Journals (Sweden)

    Ulrike Lehr

    2012-02-01

    Full Text Available National studies have shown that both gross and net effects of the expansion of energy from renewable sources on employment are positive for Germany. These modeling approaches also revealed that this holds true for both present and future perspectives under certain assumptions on the development of exports, fossil fuel prices and national politics. Yet how are employment effects distributed within Germany? What components contribute to growth impacts on a regional level? To answer these questions, new methods of regionalization were explored and developed for the example “wind energy onshore” for Germany’s federal states. The main goal was to develop a methodology which is applicable to all renewable energy technologies in future research. For the quantification and projection, it was necessary to distinguish between jobs generated by domestic investments and exports on the one hand, and jobs for the operation and maintenance of existing plants on the other hand. Further, direct and indirect employment is analyzed. The results show that gross employment is particularly high in the northwestern regions of Germany. However, especially the indirect effects are spread out over the whole country. Regions in the south not only profit from the delivery of specific components, but also from other industry and service inputs.

  17. Analysis of Current and Future SPEI Droughts in the La Plata Basin Based on Results from the Regional Eta Climate Model

    Directory of Open Access Journals (Sweden)

    Alvaro Sordo-Ward

    2017-11-01

    Full Text Available We identified and analysed droughts in the La Plata Basin (divided into seven sub-basins) for the current period (1961–2005) and estimated their expected evolution under future climate projections for the periods 2011–2040, 2041–2070, and 2071–2099. Future climate projections were analysed from results of the Eta Regional Climate Model (grid resolution of approximately 10 km) forced by the global climate model HadGEM2-ES over the La Plata basin, considering an RCP4.5 emission scenario. Within each sub-basin, we particularly focused our drought analyses on croplands and grasslands, due to their economic relevance. The three-month Standardized Precipitation Evapotranspiration Index (SPEI3) was used for drought identification and characterization. Droughts were evaluated in terms of time (percentage of the total length of each climate scenario), space (percentage of total area), and severity (SPEI3 values of cells characterized by cropland and grassland) for each sub-basin and climate scenario. Drought-severity-area-frequency curves were developed to quantitatively relate the frequency distribution of drought occurrence to drought severity and area. For the period 2011–2040, droughts dominate the northern sub-basins, whereas alternating wet and short dry periods dominate the southern sub-basins. Wet climate spreads from south to north within the La Plata Basin as more distant future scenarios are analysed, due to both a greater number of wet periods and fewer droughts. The area of each sub-basin affected by drought in all climate scenarios was highly varied temporally and spatially. The likelihood of the occurrence of droughts differed significantly between the studied cover types in the Lower Paraguay sub-basin, being higher for cropland than for grassland. Mainly in the Upper Paraguay and Upper Paraná basins, the climate projections for all scenarios showed an increase of moderate and severe droughts over large regions
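
    A simplified version of the SPEI3 calculation is easy to sketch: accumulate the monthly climatic water balance (precipitation minus potential evapotranspiration) over 3-month windows and standardize. The published index fits a log-logistic distribution per calendar month before transforming to z-scores; the plain z-score and synthetic data below are simplifying assumptions.

    ```python
    # Simplified SPEI3 sketch: 3-month sums of (P - PET), then standardized.
    # Real SPEI fits a log-logistic distribution month by month; a plain
    # z-score is used here to keep the sketch short. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    months = 45 * 12
    precip = rng.gamma(2.0, 40.0, months)     # mm/month, synthetic
    pet = rng.normal(90.0, 15.0, months)      # mm/month, synthetic

    balance = precip - pet
    win3 = np.convolve(balance, np.ones(3), mode="valid")   # 3-month sums
    spei3 = (win3 - win3.mean()) / win3.std()

    drought = spei3 < -1.0                    # moderate-or-worse drought
    print(f"fraction of time in drought: {drought.mean():.2%}")
    ```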

  18. Modeling dry and wet deposition of sulfate, nitrate, and ammonium ions in Jiuzhaigou National Nature Reserve, China using a source-oriented CMAQ model: Part I. Base case model results.

    Science.gov (United States)

    Qiao, Xue; Tang, Ya; Hu, Jianlin; Zhang, Shuai; Li, Jingyi; Kota, Sri Harsha; Wu, Li; Gao, Huilin; Zhang, Hongliang; Ying, Qi

    2015-11-01

    A source-oriented Community Multiscale Air Quality (CMAQ) model driven by the meteorological fields generated by the Weather Research and Forecasting (WRF) model was used to study the dry and wet deposition of nitrate (NO3(-)), sulfate (SO4(2-)), and ammonium (NH4(+)) ions in the Jiuzhaigou National Nature Reserve (JNNR), China from June to August 2010 and to identify the contributions of different emission sectors and source regions that were responsible for the deposition fluxes. The model performance is evaluated in this paper and the source contribution analyses are presented in a companion paper. The results show that WRF is capable of reproducing the observed precipitation rates with a Mean Normalized Gross Error (MNGE) of 8.1%. Predicted wet deposition fluxes of SO4(2-) and NO3(-) at the Long Lake (LL) site (3100 m a.s.l.) during the three-month episode are 2.75 and 0.34 kg S(N) ha(-1), which agree well with the observed wet deposition fluxes of 2.42 and 0.39 kg S(N) ha(-1), respectively. Temporal variations in the weekly deposition fluxes at LL are also well predicted. The wet deposition flux of NH4(+) at LL is over-predicted by approximately a factor of 3 (1.60 kg N ha(-1) vs. 0.56 kg N ha(-1)), likely due to missing alkaline earth cations such as Ca(2+) in the current CMAQ simulations. Predicted wet deposition fluxes are also in general agreement with observations at four Acid Deposition Monitoring Network in East Asia (EANET) sites in western China. Predicted dry deposition fluxes of SO4(2-) (including gas deposition of SO2) and NO3(-) (including gas deposition of HNO3) are 0.12 and 0.12 kg S(N) ha(-1) at LL and 0.07 and 0.08 kg S(N) ha(-1) at the Jiuzhaigou Bureau (JB) in JNNR, respectively, which are much lower than the corresponding wet deposition fluxes. The dry deposition flux of NH4(+) (including gas deposition of NH3) is 0.21 kg N ha(-1) at LL, and is also much lower than the predicted wet deposition flux. For both dry and wet deposition fluxes, predictions

  19. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    Firstly, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular the choice of functional form and cost-damping. Specifically, we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline function is compared with more traditional function types, and it is indicated that the spline function provides a better description of the data. Results are also provided in the form of a back-casting exercise where the model is tested in a back-casting scenario to 2002.

  20. Slice-based supine to standing postured deformation for chinese anatomical models and the dosimetric results by wide band frequency electromagnetic field exposure: Morphing

    International Nuclear Information System (INIS)

    Wu, T.; Tan, L.; Shao, Q.; Li, Y.; Yang, L.; Zhao, C.; Xie, Y.; Zhang, S.

    2013-01-01

    Digital human models are frequently obtained from supine-postured medical images or cadaver slices, but many applications require standing models. This paper presents the work of reconstructing standing Chinese adult anatomical models from supine-postured slices. Unlike previous studies, the deformation is performed on the 2-D segmented slices. The surface profile of the standing posture is adjusted using population measurement data. A non-uniform texture amplification approach is applied on the 2-D slices to recover the skin contour and to redistribute the internal tissues. Internal organ shift due to posture is taken into account. The feet are modified by matrix rotation. Then, the supine and standing models are utilised for the evaluation of electromagnetic field exposure over wide band frequency and different incident directions. (authors)

  1. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model.

    Science.gov (United States)

    Everaars, Jeroen; Settele, Josef; Dormann, Carsten F

    2018-01-01

    Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio
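
    To make the individual-based setup concrete, here is a deliberately minimal central-place-foraging loop. This is not the actual SOLBEE implementation: all rates and thresholds are invented, but it illustrates how body size can trade faster flight and shorter handling against a larger pollen requirement per brood cell, as in key result 1) above.

```python
import random

def simulate_bee(body_size, day_minutes=480, seed=1):
    """Toy central-place forager (not the SOLBEE model): a bee
    alternates nest->flower trips; larger bees fly faster and handle
    flowers quicker but need more pollen per brood cell.
    """
    rng = random.Random(seed)
    speed = 1.0 + body_size           # distance units per minute
    handling = 0.5 / (1 + body_size)  # minutes per flower
    pollen_per_cell = 50 * body_size  # pollen units per brood cell
    time_left, pollen, brood_cells, flowers = day_minutes, 0.0, 0, 0
    while time_left > 0:
        trip = rng.uniform(1, 10) / speed  # travel time to a patch
        cost = trip + handling
        if cost > time_left:
            break
        time_left -= cost
        pollen += 1.0                      # pollen load per flower visit
        flowers += 1
        if pollen >= pollen_per_cell:
            pollen, brood_cells = 0.0, brood_cells + 1
    return brood_cells, flowers

for size in (0.5, 1.0, 2.0):
    print(size, simulate_bee(size))
```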

  2. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model

    Science.gov (United States)

    Settele, Josef; Dormann, Carsten F.

    2018-01-01

    Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio

  3. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model.

    Directory of Open Access Journals (Sweden)

    Jeroen Everaars

    Full Text Available Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response

  4. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated (including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability) was 88% (100% being the best possible

  5. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be composed of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 Gas Hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single thruster and 4-thruster system are discussed and compared.

  6. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  7. Calculation of limits for significant unidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2015-01-01

    BACKGROUND: Reference change values (RCVs) were introduced more than 30 years ago and provide objective tools for assessment of the significance of differences in two consecutive results from an individual. However, in practice, more results are usually available for monitoring, and using the RCV ... Using ...,000 simulated data from healthy individuals, a series of up to 20 results from an individual was generated using different values for the within-subject biological variation plus the analytical variation. Each new result in this series was compared to the initial measurement result. These successive serial results ... the presented factors. The first result is multiplied by the appropriate factor for increase or decrease, which gives the limits for a significant difference.
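
    The abstract builds on the classical two-result RCV, which is well defined: RCV = sqrt(2) * z * sqrt(CV_A^2 + CV_I^2), with analytical and within-subject CVs. The sketch below shows that baseline calculation with illustrative CV values and a simple linear up/down factor; the paper's factors for longer series are not reproduced here.

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Classical RCV (%) for two consecutive results:
    RCV = sqrt(2) * z * sqrt(CV_A**2 + CV_I**2).
    For a unidirectional change, a one-sided z (e.g. 1.645 for 5%)
    would be used instead.
    """
    return math.sqrt(2.0) * z * math.hypot(cv_analytical, cv_within_subject)

cv_a, cv_i = 3.0, 6.0   # illustrative CVs in percent
rcv = reference_change_value(cv_a, cv_i)
print(f"RCV = {rcv:.1f}% -> factors {1 + rcv/100:.3f} (up), "
      f"{1 - rcv/100:.3f} (down)")
```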

  8. DNA computing based on splicing: universality results.

    Science.gov (United States)

    Csuhaj-Varjú, E; Freund, R; Kari, L; Păun, G

    1996-01-01

    The paper extends some of the most recently obtained results on the computational universality of specific variants of H systems (e.g. with regular sets of rules) and proves that we can construct universal computers based on various types of H systems with a finite set of splicing rules as well as a finite set of axioms, i.e. we show the theoretical possibility to design programmable universal DNA computers based on the splicing operation. For H systems working in the multiset style (where the numbers of copies of all available strings are counted) we elaborate how a Turing machine computing a partial recursive function can be simulated by an equivalent H system computing the same function; in that way, from a universal Turing machine we obtain a universal H system. Considering H systems as language generating devices we have to add various simple control mechanisms (checking the presence/absence of certain symbols in the spliced strings) to systems with a finite set of splicing rules as well as with a finite set of axioms in order to obtain the full computational power, i.e. to get a characterization of the family of recursively enumerable languages. We also introduce test tube systems, where several H systems work in parallel in their tubes and from time to time the contents of each tube are redistributed to all tubes according to certain separation conditions. By the construction of universal test tube systems we show that also such systems could serve as the theoretical basis for the development of biological (DNA) computers.
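
    For concreteness, the splicing operation underlying H systems can be sketched directly on strings: with a rule (u1, u2; u3, u4), strings x = x1u1u2x2 and y = y1u3u4y2 yield x1u1u4y2. The minimal implementation below (illustrative, not from the paper) enumerates the products of one splicing step.

```python
def splice(x, y, rule):
    """One application of the splicing operation from H systems:
    with rule (u1, u2; u3, u4), strings x = x1 u1 u2 x2 and
    y = y1 u3 u4 y2 yield x1 u1 u4 y2 (first-site cut of x joined
    to second-site cut of y). Returns all such products.
    """
    u1, u2, u3, u4 = rule
    products = set()
    for i in range(len(x) + 1):
        if not (x[:i].endswith(u1) and x[i:].startswith(u2)):
            continue
        for j in range(len(y) + 1):
            if y[:j].endswith(u3) and y[j:].startswith(u4):
                products.add(x[:i] + y[j:])
    return products

# Toy example over the alphabet {a, c, g, t}:
print(splice("aacgtt", "ggcgaa", ("ac", "gt", "gc", "ga")))
# -> {'aacgaa'}
```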

  9. Implementation of Problem Based Learning Model in Concept Learning Mushroom as a Result of Student Learning Improvement Efforts Guidelines for Teachers

    Science.gov (United States)

    Rubiah, Musriadi

    2016-01-01

    Problem-based learning is a training strategy in which students work together in groups and take responsibility for solving problems in a professional manner. Instructional materials such as textbooks are the main reference for students in the study of mushrooms; in particular, the material is considered less effective in responding to the information needs of…

  10. Slice-based supine-to-standing posture deformation for chinese anatomical models and the dosimetric results with wide band frequency electromagnetic field exposure: Simulation

    International Nuclear Information System (INIS)

    Wu, T.; Tan, L.; Shao, Q.; Li, Y.; Yang, L.; Zhao, C.; Xie, Y.; Zhang, S.

    2013-01-01

    Standing Chinese adult anatomical models are obtained from supine-postured cadaver slices. This paper presents the dosimetric differences between the supine and the standing postures over wide band frequencies and various incident configurations. Both the body level and the tissue/organ level differences are reported for plane wave and the 3T magnetic resonance imaging radiofrequency electromagnetic field exposure. The influence of posture on the whole body specific absorption rate and tissue specified specific absorption rate values is discussed. (authors)

  11. Teaching-based research: Models of and experiences with students doing research and inquiry – results from a university-wide initiative in a research-intensive environment

    DEFF Research Database (Denmark)

    Rump, Camilla Østerberg; Damsholt, Tine; Sandberg, Marie

    Overall abstract: The purpose of this symposium is to explore and compare a multitude of different approaches to implementing research-based teaching in a specific institutional setting. The four case studies are characterized by including teaching-based research, see e.g. Wilcoxon et al., 2011 ... ,000 students. Following a 4-year strategy with a strong emphasis on research, pressure from scientific staff led to a 2012-2016 strategy with teaching as the main focus of enhancement at the University. In a process of application by individual or groups of teachers, 8 thematic projects were initiated ... meet two to three times a year. The over-arching purpose of the project was to integrate research and teaching in order to qualify the students and their academic skills by organizing lessons in ways which introduce the students to the key research methods and processes of the subject. Several ...

  12. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  13. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)

  14. Experimental Results and Model Calculations of a Hybrid Adsorption-Compression Heat Pump Based on a Roots Compressor and Silica Gel-Water Sorption

    Energy Technology Data Exchange (ETDEWEB)

    Van der Pal, M.; De Boer, R.; Wemmers, A.K.; Smeding, S.F.; Veldhuis, J.B.J.; Lycklama a Nijeholt, J.A.

    2013-10-15

    Thermally driven sorption systems can provide significant energy savings, especially in industrial applications. The driving temperature for operation of such systems limits the operating window and can be a barrier for market introduction. By adding a compressor, the sorption cycle can be run using lower waste heat temperatures. ECN has recently started the development of such a hybrid heat pump. The final goal is to develop a hybrid heat pump for upgrading lower-temperature (<100 °C) industrial waste heat to above pinch temperatures. The paper presents the first measurements and model calculations of a hybrid heat pump system using a water-silica gel system combined with a Roots type compressor. The measurements show that the effect of the compressor depends on where in the cycle it is placed. When placed between the evaporator and the sorption reactor, it has a considerably larger effect than a compressor placed between the sorption reactor and the condenser; the latter hardly improves the performance compared to purely heat-driven operation. This shows the importance of studying the interaction between all components of the system. The model, which shows reasonable correlation with the measurements, could prove to be a valuable tool for determining the optimal hybrid heat pump configuration.

  15. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
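
    In miniature, the model-based recipe is: write down a generative model, then hand it to a generic inference routine. The sketch below (plain Python, not Infer.NET) infers the mean of Gaussian data by grid-based posterior evaluation; a production system would use message passing or sampling instead, and all numbers are illustrative.

```python
import numpy as np

# Model: x_i ~ Normal(mu, 1), prior mu ~ Normal(0, 10).
def log_prior(mu):
    return -0.5 * (mu / 10.0) ** 2

def log_likelihood(mu, data):
    return -0.5 * np.sum((data - mu) ** 2)

def grid_posterior(data, grid):
    """Generic inference: evaluate prior x likelihood on a grid and
    normalize. The model functions above are the only problem-specific
    part; the inference code never changes."""
    logp = np.array([log_prior(m) + log_likelihood(m, data) for m in grid])
    p = np.exp(logp - logp.max())
    return p / np.trapz(p, grid)

data = np.array([1.8, 2.4, 2.1, 1.6])
grid = np.linspace(-5, 10, 2001)
post = grid_posterior(data, grid)
print("posterior mean of mu:", np.trapz(grid * post, grid))
```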

  16. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  17. Modeling and Field Results from Seismic Stimulation

    International Nuclear Information System (INIS)

    Majer, E.; Pride, S.; Lo, W.; Daley, T.; Nakagawa, Seiji; Sposito, Garrison; Roberts, P.

    2006-01-01

    Modeling the effect of seismic stimulation employing Maxwell-Boltzmann theory shows that the important component of stimulation is mechanical rather than fluid pressure effects. Modeling using Biot theory (two phases) shows that the pressure effects diffuse too quickly to be of practical significance. Field data from actual stimulation will be shown to compare to theory

  18. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including fixed grid models that generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.
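
    A core ingredient of every scheme compared above is deciding where the ice is grounded. The sketch below locates the grounding line on a 1-D flowline from the standard flotation criterion; the arrays and densities are illustrative and none of the benchmark's actual parameters are used.

```python
import numpy as np

def grounding_line_index(thickness, bed, rho_i=917.0, rho_w=1028.0):
    """Flotation criterion: ice is grounded where rho_i * H exceeds
    rho_w * water depth (bed is elevation, negative below sea level).
    Returns the index of the last grounded cell, assuming grounded ice
    lies upstream of floating ice.
    """
    grounded = rho_i * thickness > rho_w * np.maximum(-bed, 0.0)
    idx = np.nonzero(grounded)[0]
    return int(idx[-1]) if idx.size else None

# Toy downward-sloping bed and thinning ice:
x = np.linspace(0, 500e3, 51)          # metres along the flowline
bed = -100.0 - 1.0e-3 * x              # bed elevation (m)
H = 1500.0 * np.exp(-x / 300e3)        # ice thickness (m)
print("grounding line at cell", grounding_line_index(H, bed))
```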

  19. Izbor optimalnog puta za kretanje organizovanog kolonskog saobraćajnog toka na osnovu rezultata modeliranja / Choosing an optimal route for organized vehicle movement based on modeling results

    Directory of Open Access Journals (Sweden)

    Radomir S. Gordić

    2006-04-01

    Full Text Available During the planning and practical realization of the tasks of Serbia and Montenegro Army units, the problem of choosing an optimal route between two places (nodes) on the road network often arises. Optimization criteria can vary. This project should enable quick and easy determination of the optimal route by applying dynamic programming (DP) with Bellman's algorithm, depending on the chosen criterion (parameter). The optimization criterion is the minimum movement (travel) time, obtained by imitation modeling of a column traffic flow. The developed algorithm enables choosing an optimal route between any two nodes on the network.
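
    The record names Bellman's dynamic-programming recursion as the route-selection engine. A compact sketch follows, on a hypothetical five-node network whose travel-time weights stand in for the values the queue-flow model would supply.

```python
def bellman_ford(nodes, edges, source):
    """Bellman's shortest-path recursion on a road network whose edge
    weights are (modelled) travel times in minutes.
    edges: iterable of (u, v, travel_time)."""
    dist = {n: float("inf") for n in nodes}
    pred = {n: None for n in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):      # relax all edges |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
    return dist, pred

# Hypothetical network; weights would come from the traffic-flow model:
nodes = "ABCDE"
edges = [("A", "B", 35), ("A", "C", 50), ("B", "C", 10),
         ("B", "D", 40), ("C", "D", 25), ("D", "E", 30), ("C", "E", 70)]
dist, pred = bellman_ford(nodes, edges, "A")
path, n = [], "E"
while n:
    path.append(n)
    n = pred[n]
print("A->E:", "->".join(reversed(path)), f"({dist['E']:.0f} min)")
```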

  20. KIR alloreactivity based on the receptor-ligand model is associated with improved clinical outcomes of allogeneic hematopoietic stem cell transplantation: Result of single center prospective study.

    Science.gov (United States)

    Park, Silvia; Kim, Kihyun; Jang, Jun Ho; Kim, Seok Jin; Kim, Won Seog; Kang, Eun-Suk; Jung, Chul Won

    2015-09-01

    Receptors on natural killer (NK) cells, named killer immunoglobulin-like receptors (KIRs), recognize HLA class I alleles. Patients (n=59) who received allogeneic hematopoietic stem cell transplantation (HSCT) from either a related (n=17) or unrelated donor (n=42) in Samsung Medical Center (Seoul, South Korea) were included. KIR mismatch was defined as incompatibility between the donor KIR and recipient KIR ligand (receptor-ligand model), and all cases were classified into the two broad haplotypes of KIR A and B. Patients with acute leukemia (n=51, 86.4%) or myelodysplastic syndrome (n=8, 13.6%) were included. Peripheral blood was used as the source of stem cells in all patients. Kaplan-Meier survival curves for overall survival (OS), disease-free survival (DFS), and cumulative incidence of relapse (CIR) favored recipients with a KIR-mismatched donor, although the differences were not statistically significant. In multivariate analysis, KIR mismatch was an independent prognostic indicator of a better OS (P=0.010, HR=0.148, 95% CI 0.034-0.639), DFS (P=0.022, HR=0.237, 95% CI 0.069-0.815), and CIR (P=0.031, HR=0.117, 95% CI 0.017-0.823). OS, DFS, and CIR did not differ significantly between the KIR A and B haplotypes. Copyright © 2015. Published by Elsevier Inc.

  1. The 2013 European Seismic Hazard Model: key components and results

    OpenAIRE

    Jochen Woessner; Danciu Laurentiu; Domenico Giardini; Helen Crowley; Fabrice Cotton; G. Grünthal; Gianluca Valensise; Ronald Arvidsson; Roberto Basili; Mine Betül Demircioglu; Stefan Hiemer; Carlo Meletti; Roger W. Musson; Andrea N. Rovida; Karin Sesetyan

    2015-01-01

    The 2013 European Seismic Hazard Model (ESHM13) results from a community-based probabilistic seismic hazard assessment supported by the EU-FP7 project “Seismic Hazard Harmonization in Europe” (SHARE, 2009–2013). The ESHM13 is a consistent seismic hazard model for Europe and Turkey which overcomes the limitation of national borders and includes a thorough quantification of the uncertainties. It is the first completed regional effort contributing to the “Global Earthquake Model” initiative. It m...

  2. LITHOSPHERIC STRUCTURE OF THE CARPATHIAN-PANNONIAN REGION BASED ON THE GRAVITY MODELING BY INTEGRATING THE CELEBRATION2000 SEISMIC EXPERIMENT AND NEW GEOPHYSICAL RESULTS

    Science.gov (United States)

    Bielik, M.; Alasonati Tašárová, Z.; Zeyen, H. J.; Afonso, J.; Goetze, H.; Dérerová, J.

    2009-12-01

    Two different methods for the 3-D interpretation of the gravity field have been applied to the study of the structure and tectonics of the Carpathian-Pannonian lithosphere. The first (second) method provided a set of different stripped gravity maps (the new lithosphere thickness map). The contribution presents the interpretation of the gravity field, which takes into account the CELEBRATION2000 seismic as well as new geophysical results. The sediment-stripped gravity map is characterized by gravity minima in the Eastern Alps and Western Carpathians, and gravity maxima in the Pannonian Back-arc Basin system and the European platform. The gravity low in the Eastern Alps is produced by the thick crust (more than 45 km). The Western Carpathian gravity minimum is a result of the interference of two main gravitational effects. The first one comes from the low-density sediments of the Outer Western Carpathians and Carpathian Foredeep. The second one is due to the thick low-density upper and middle crust, reaching up to 25 km. In the Pannonian Back-arc Basin system, a regional gravity high can be observed, which is a result of the gravity effect of the anomalously shallow Moho. The most dominant feature of the complete 3-D stripped gravity map (crustal gravity effect map) is the abrupt change of the gravity field along the Klippen Belt zone. While the European platform is characterized by positive anomalies, the Western Carpathian orogen and the Pannonian Back-arc Basin system are characterized by a relatively long-wavelength gravity low (several hundred kilometers). The lowest values are associated with the thick low-density upper and middle crust of the Inner Western Carpathians. We therefore suggest that the European platform consists of significantly denser crust than the less dense crust of the microplates ALCAPA and Tisza-Dacia. The contrast in the gravity fields over the European platform and the microplates ALCAPA and Tisza-Dacia also reflects their different crustal

  3. Coupled Michigan MHD - Rice Convection Model Results

    Science.gov (United States)

    de Zeeuw, D.; Sazykin, S.; Wolf, D.; Gombosi, T.; Powell, K.

    2002-12-01

    A new high performance Rice Convection Model (RCM) has been coupled to the adaptive-grid Michigan MHD model (BATSRUS). This fully coupled code allows us to self-consistently simulate the physics in the inner and middle magnetosphere. A study will be presented of the basic characteristics of the inner and middle magnetosphere in the context of a single coupled-code run for idealized storm inputs. The analysis will include region-2 currents, shielding of the inner magnetosphere, partial ring currents, pressure distribution, magnetic field inflation, and distribution of pV^gamma.

  4. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach - Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation - The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings - We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.
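
    One simple way to build such an aggregate map, assuming each individual map is encoded as a weighted adjacency matrix over a shared construct list, is sketched below. The construct names, weights, and the keep-if-mentioned-by-half rule are all illustrative; the paper's aggregation procedure may differ.

```python
import numpy as np

constructs = ["personalization", "quality", "loyalty", "CLV"]

def aggregate_maps(maps, support=0.5):
    """Average the construct-to-construct influence weights across
    respondents and keep a link only if enough respondents drew it."""
    stack = np.array(maps, dtype=float)
    mean_w = stack.mean(axis=0)
    mentioned = (stack != 0).mean(axis=0)
    return np.where(mentioned >= support, mean_w, 0.0)

# Two hypothetical individual maps (rows influence columns):
m1 = [[0, 0, 0.6, 0.2], [0, 0, 0.8, 0], [0, 0, 0, 0.9], [0, 0, 0, 0]]
m2 = [[0, 0, 0.4, 0],   [0, 0, 0.7, 0], [0, 0, 0, 0.8], [0, 0, 0, 0]]
agg = aggregate_maps([m1, m2])
for i, src in enumerate(constructs):
    for j, dst in enumerate(constructs):
        if agg[i, j]:
            print(f"{src} -> {dst}: {agg[i, j]:.2f}")
```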

  5. Graphical interpretation of numerical model results

    International Nuclear Information System (INIS)

    Drewes, D.R.

    1979-01-01

    Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements

  6. Atlas-based functional radiosurgery: Early results

    Energy Technology Data Exchange (ETDEWEB)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Deptartment, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Deptartment, Neuromed IRCCS, Pozzilli, 86077 (Italy)

    2009-02-15

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment steroids for radiation-induced edema and medications for dystonia and neuropathic pain were suppressed. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment edema had disappeared. Thus, this work shows promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.

  7. Atlas-based functional radiosurgery: Early results

    International Nuclear Information System (INIS)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N.

    2009-01-01

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment steroids for radiation-induced edema and medications for dystonia and neuropathic pain were suppressed. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment edema had disappeared. Thus, this work shows promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.

  8. Ignalina NPP Safety Analysis: Models and Results

    International Nuclear Information System (INIS)

    Uspuras, E.

    1999-01-01

    The research directions of the scientific safety analysis group, linked to safety assessment of the Ignalina NPP, are presented: thermal-hydraulic analysis of accidents and operational transients; thermal-hydraulic assessment of the Ignalina NPP Accident Localization System and other compartments; structural analysis of plant components, piping and other parts of the Main Circulation Circuit; assessment of the RBMK-1500 reactor core; and others. The models and the main work carried out last year are described. (author)

  9. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more

  10. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work-hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work-hardening rate). In addition, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of the thermally induced dislocations. (orig.).

  11. Results of a model for premixed combustion oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Janus, M.C.; Richards, G.A.

    1996-09-01

    Combustion oscillations are receiving renewed research interest due to the increasing use of lean premix (LPM) combustion in gas turbines. A simple, nonlinear model for premixed combustion is described in this paper. The model was developed to help explain specific experimental observations and to provide guidance for development of active control schemes based on nonlinear concepts. The model can be used to quickly examine instability trends associated with changes in equivalence ratio, mass flow rate, geometry, ambient conditions, etc. The model represents the relevant processes occurring in a fuel nozzle and combustor which are analogous to current LPM turbine combustors. Conservation equations for the fuel nozzle and combustor are developed from simple control volume analysis, providing a set of ordinary differential equations that can be solved on a personal computer. Combustion is modeled as a stirred reactor, with a bimolecular reaction rate between fuel and air. A variety of numerical results and comparisons to experimental data are presented to demonstrate the utility of the model. Model results are used to understand the fundamental mechanisms which drive combustion oscillations, effects of inlet air temperature and nozzle geometry on instability, and effectiveness of open loop control schemes.
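
    In the same spirit (but not the authors' code), the well-stirred-reactor fragment of such a model reduces to a small ODE system: species and energy balances with a flow residence time and a bimolecular Arrhenius rate. All constants below are invented for illustration; the full model would couple this reactor to the fuel-nozzle and combustor acoustics to reproduce oscillations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants: residence time (s), inlet temperature (K),
# inlet fuel fraction, pre-exponential, activation temperature (K),
# heat-release parameter (K), fixed oxidizer fraction.
TAU, T_IN, YF_IN = 5e-3, 600.0, 0.05
A, T_ACT, Q, YO = 6.25e6, 1.5e4, 4.0e4, 0.2

def rhs(t, state):
    yf, temp = state
    rate = A * np.exp(-T_ACT / temp) * yf * YO   # bimolecular rate
    dyf = (YF_IN - yf) / TAU - rate              # fuel balance
    dtemp = (T_IN - temp) / TAU + Q * rate       # energy balance
    return [dyf, dtemp]

sol = solve_ivp(rhs, (0.0, 0.2), [YF_IN, 1800.0], max_step=1e-4)
print("final T = %.0f K, final Yf = %.4f" % (sol.y[1, -1], sol.y[0, -1]))
```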

  12. Space Launch System Base Heating Test: Experimental Operations & Results

    Science.gov (United States)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment in the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values with gaseous hydrogen/oxygen engines and RSRMV analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified and comparisons of various configuration and conditions are highlighted. Base pressure and radiometer results are also reported.

  13. Insulator-based dielectrophoresis of microorganisms: theoretical and experimental results.

    Science.gov (United States)

    Moncada-Hernandez, Hector; Baylon-Cardiel, Javier L; Pérez-González, Victor H; Lapizco-Encinas, Blanca H

    2011-09-01

    Dielectrophoresis (DEP) is the motion of particles due to polarization effects in nonuniform electric fields. DEP has great potential for handling cells and is a non-destructive phenomenon. It has been utilized for different cell analyses, from viability assessments to concentration enrichment and separation. Insulator-based DEP (iDEP) provides an attractive alternative to conventional electrode-based systems; in iDEP, insulating structures are used to generate nonuniform electric fields, resulting in simpler and more robust devices. Despite the rapid development of iDEP microdevices for applications with cells, the fundamentals behind the dielectrophoretic behavior of cells have not been fully elucidated. Understanding the theory behind iDEP is necessary to continue the progress in this field. This work presents the manipulation and separation of bacterial and yeast cells with iDEP. A computational model in COMSOL Multiphysics was employed to predict the effect of direct current-iDEP on cells suspended in a microchannel containing an array of insulating structures. The model allowed predicting particle behavior, pathlines and the regions where dielectrophoretic immobilization should occur. Experimental work was performed at the same operating conditions employed with the model and results were compared, obtaining good agreement. This is the first report on the mathematical modeling of the dielectrophoretic response of yeast and bacterial cells in a DC-iDEP microdevice. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
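
    The textbook force law behind such predictions is F_DEP = 2*pi*eps_m*r^3*Re[K]*grad(|E|^2), with the Clausius-Mossotti factor K reducing to a ratio of conductivities in the DC limit. A minimal sketch with illustrative values (not parameters from the paper):

```python
import math

def clausius_mossotti_dc(sigma_particle, sigma_medium):
    """Real part of the Clausius-Mossotti factor in the DC limit, where
    conductivities dominate: K = (sp - sm) / (sp + 2*sm). K > 0 means
    positive DEP (motion toward high field); K < 0, negative DEP."""
    return (sigma_particle - sigma_medium) / (sigma_particle + 2.0 * sigma_medium)

def dep_force(radius_m, eps_medium, cm_factor, grad_E2):
    """Time-averaged DEP force on a sphere:
    F = 2*pi*eps_m*r^3*Re[K]*grad(|E|^2)."""
    return 2.0 * math.pi * eps_medium * radius_m ** 3 * cm_factor * grad_E2

EPS0 = 8.854e-12
k = clausius_mossotti_dc(sigma_particle=2e-4, sigma_medium=1e-2)  # S/m
f = dep_force(radius_m=2e-6, eps_medium=78 * EPS0, cm_factor=k,
              grad_E2=1e13)  # V^2/m^3, illustrative near insulating posts
print(f"Re[K] = {k:.2f}, F_DEP = {f:.2e} N")
```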

  14. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  15. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  16. Challenges in validating model results for first year ice

    Science.gov (United States)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first-year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed since ice formation as the sea ice grows, deteriorates, and is advected inside the model domain. Ice that is younger than 365 days is classified as first-year ice, and the fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which includes "ice type", a classification separating regions dominated by first-year ice from those dominated by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and on the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data and on the product's confidence level, both of which have a strong seasonal dependency.
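
    The ice-age bookkeeping described above can be sketched as a per-cell tracer update (illustrative code, not the TOPAZ source): age grows by one day per day where ice persists, starts at zero where ice is newly formed, and is undefined where there is no ice.

```python
import numpy as np

def update_ice_age(age_days, conc, dt_days=1.0, ice_threshold=0.15):
    """Advance the per-cell ice-age tracer by one step. NaN marks
    open-water cells with no defined age."""
    has_ice = conc > ice_threshold
    new_ice = has_ice & np.isnan(age_days)
    age = np.where(has_ice, np.nan_to_num(age_days) + dt_days, np.nan)
    age[new_ice] = 0.0
    return age

def first_year_fraction(age_days):
    """Ice younger than 365 days counts as first-year ice."""
    with_ice = ~np.isnan(age_days)
    fyi = with_ice & (np.nan_to_num(age_days, nan=1e9) < 365.0)
    return fyi.sum() / max(with_ice.sum(), 1)

age = np.array([np.nan, 100.0, 400.0, np.nan])   # days; NaN = open water
conc = np.array([0.5, 0.9, 0.8, 0.05])           # ice concentration
age = update_ice_age(age, conc)
print(age, f"first-year fraction = {first_year_fraction(age):.2f}")
```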

  17. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor”, Task 2.2 “Lightweight structural design”. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade. The present document describes the results of the comparison simulation runs that were performed by the partners involved within ...

  18. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    ... architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number ...

  19. U.S. electric power sector transitions required to achieve 80% reductions in economy-wide greenhouse gas emissions: Results based on a state-level model of the U.S. energy system

    Energy Technology Data Exchange (ETDEWEB)

    Iyer, Gokul C.; Clarke, Leon E.; Edmonds, James A.; Kyle, Gordon P.; Ledna, Catherine M.; McJeon, Haewon C.; Wise, M. A.

    2017-05-01

    The United States has articulated a deep decarbonization strategy for achieving a reduction in economy-wide greenhouse gas (GHG) emissions of 80% below 2005 levels by 2050. Achieving such deep emissions reductions will entail a major transformation of the energy system, and of the electric power sector in particular. This study uses a detailed state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA) to demonstrate pathways for the evolution of the U.S. electric power sector that achieve 80% economy-wide reductions in GHG emissions by 2050. The pathways presented in this report are based on feedback received during a workshop of experts organized by the U.S. Department of Energy's Office of Energy Policy and Systems Analysis. Our analysis demonstrates that achieving deep decarbonization by 2050 will require substantial decarbonization of the electric power sector, resulting in an increase in the deployment of zero-carbon and low-carbon technologies such as renewables and carbon capture, utilization and storage. The present results also show that the degree to which the electric power sector will need to decarbonize, and low-carbon technologies will need to deploy, depends on the nature of technological advances in the energy sector, the ability of end-use sectors to electrify, and the level of electricity demand.
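
    As a back-of-the-envelope check on what the headline target implies: the 80%-below-2005 figure comes from the abstract, but the baseline and 2020 emission levels below are hypothetical placeholders, not numbers from the study.

```python
# Illustrative arithmetic for an 80%-below-2005 target.
baseline_2005 = 6.6   # Gt CO2e, hypothetical economy-wide baseline
target_2050 = 0.20 * baseline_2005
level_2020 = 5.9      # Gt CO2e, hypothetical
annual_cut = (level_2020 - target_2050) / (2050 - 2020)
print(f"2050 target: {target_2050:.2f} Gt CO2e "
      f"(a linear path needs {annual_cut:.3f} Gt/yr of reductions)")
```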

  20. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed; the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  1. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...

  2. Model-based biosignal interpretation.

    Science.gov (United States)

    Andreassen, S

    1994-03-01

    Two relatively new approaches to model-based biosignal interpretation, qualitative simulation and modelling by causal probabilistic networks, are compared to modelling by differential equations. A major problem in applying a model to an individual patient is the estimation of the parameters. The available observations are unlikely to allow a proper estimation of the parameters, and even if they do, the task appears to have exponential computational complexity if the model is non-linear. Causal probabilistic networks have both differential equation models and qualitative simulation as special cases, and they can provide both Bayesian and maximum-likelihood parameter estimates, in most cases in much less than exponential time. In addition, they can calculate the probabilities required for a decision-theoretical approach to medical decision support. The practical applicability of causal probabilistic networks to real medical problems is illustrated by a model of glucose metabolism which is used to adjust insulin therapy in type I diabetic patients.

  3. GIS-based hydrological model upstream

    African Journals Online (AJOL)

    eobe

    Owing to its effectiveness in data representation and the quality of modeling results, hydrological models are usually embedded in a Geographical Information System (GIS) environment to simulate various parameters attributed to a selected catchment; this complex technology is highly suitable for spatial-temporal data analyses and information extraction ...

  4. Model-based Abstraction of Data Provenance

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2014-01-01

    Identifying provenance of data provides insights into the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so.

  5. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  6. Verification of aseismic design model by using experimental results

    International Nuclear Information System (INIS)

    Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.

    1985-01-01

    A lattice model is applied as an analysis model for the aseismic design of the Hamaoka nuclear reactor building. In order to verify the validity of this design model, two reinforced concrete blocks were constructed on the ground and forced vibration tests were carried out. The test results are reproduced well by simulation analyses using the lattice model. The damping value of the ground obtained from the test is more conservative than the design value. (orig.)

  7. Analysis of inelastic neutron scattering results on model compounds ...

    Indian Academy of Sciences (India)

    J Tomkinson

    Heterobicyclic molecules could form a reasonable base of model compounds to understand the eigenvectors of one interesting molecular system: the nitrogenous heterocyclic bases of the nucleotides. Low energy molecular vibrational eigenvectors involve atomic displacements over the molecule as a whole ...

  8. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some ... Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors on the Enron data set, while 89.5% accuracy has been achieved on the authors' constructed real email data set. The results on the Enron data set have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1, 2].

  9. Computer-Based Modeling Environments

    Science.gov (United States)

    1989-01-01

  10. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimote, T.; Hessheimer, M.F.; Luk, V.K.

    1999-03-01

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  11. Didactic Strategy Discussion Based on Artificial Neural Networks Results.

    Science.gov (United States)

    Andina, D.; Bermúdez-Valbuena, R.

    2009-04-01

    Artificial Neural Networks (ANNs) are a mathematical model of the main known characteristics of biological brain dynamics. ANNs inspired by biological reality have been useful for designing machines that show some human-like behaviours. Based on them, many experiments have been successfully developed emulating several characteristics of biological neurons, such as learning how to solve a given problem. Sometimes, experiments on ANNs feed back to biology and allow advances in understanding biological brain behaviour, allowing the proposal of new therapies for medical problems involving neuronal performance. Following this line, the authors present results on artificial learning in ANNs, and interpret them with the aim of reinforcing one of these two didactic strategies for learning how to solve a given difficult task: a) train with clear, simple, representative examples and trust the brain's generalization capabilities to achieve success in more complicated cases; b) teach with a set of difficult cases of the problem, trusting that the brain will efficiently solve the rest of the cases if it is able to solve the difficult ones. The results may contribute to the discussion of how to orient the design of innovative, successful teaching strategies in the education field.

  12. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
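
    The notion of an identifiable function can be made concrete with a computer algebra system. The sketch below, a minimal illustration rather than the paper's method, assumes a hypothetical two-compartment model with one leak and with input and output in the first compartment, and uses sympy to extract the input-output (transfer function) coefficients, which are exactly the parameter combinations recoverable from input-output data.

    ```python
    # Minimal sketch: input-output identifiability of a hypothetical
    # two-compartment model (leak k01; exchange rates k12, k21; input and
    # output in compartment 1). Not a model taken from the paper.
    import sympy as sp

    s = sp.symbols('s')
    k01, k12, k21 = sp.symbols('k01 k12 k21', positive=True)

    # x' = A x + B u,  y = C x
    A = sp.Matrix([[-(k01 + k21), k12],
                   [k21,          -k12]])
    B = sp.Matrix([1, 0])
    C = sp.Matrix([[1, 0]])

    # Transfer function H(s) = C (sI - A)^{-1} B
    H = sp.simplify((C * (s * sp.eye(2) - A).inv() * B)[0])
    num, den = sp.fraction(sp.together(H))

    # Coefficients of s are the identifiable parameter combinations
    print(sp.Poly(num, s).all_coeffs())  # [1, k12]
    print(sp.Poly(den, s).all_coeffs())  # [1, k01 + k12 + k21, k01*k12]
    ```

    In this toy case all three rate constants are globally identifiable: k12 is read off the numerator, after which k01 and k21 follow from the two denominator coefficients.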

  13. Results on the symmetries of integrable fermionic models on chains

    International Nuclear Information System (INIS)

    Dolcini, F.; Montorsi, A.

    2001-01-01

    We investigate integrable fermionic models within the scheme of the graded quantum inverse scattering method, and prove that any symmetry imposed on the solution of the Yang-Baxter equation is reflected in the constants of motion of the model; generalizations with respect to known results are discussed. This theorem is shown to be very effective when combined with the polynomial R-matrix technique (PRT): we apply both of them to the study of the extended Hubbard models, for which we find all the subcases enjoying several kinds of (super)symmetries. In particular, we derive a geometrical construction expressing any gl(2,1)-invariant model as a linear combination of EKS and U-supersymmetric models. Further, we use the PRT to obtain 32 integrable so(4)-invariant models. By joint use of Sutherland's species technique and the η-pairs construction we propose a general method to derive their physical features, and we provide some explicit results

  14. Influence of Hydraulic Design on Stability and on Pressure Pulsations in Francis Turbines at Overload, Part Load and Deep Part Load based on Numerical Simulations and Experimental Model Test Results

    International Nuclear Information System (INIS)

    Magnoli, M V; Maiwald, M

    2014-01-01

    Francis turbines have been running more and more frequently in part load conditions, in order to satisfy the new market requirements for more dynamic and flexible energy generation, ancillary services and grid regulation. The turbines should be able to be operated for longer durations with flows below the optimum point, going from part load to deep part load and even speed-no-load. These operating conditions are characterised by important unsteady flow phenomena taking place at the draft tube cone and in the runner channels, in the respective cases of part load and deep part load. The current expectation is that new Francis turbines present appropriate hydraulic stability and moderate pressure pulsations at overload, part load, deep part load and speed-no-load, with high efficiency levels in the normal operating range. This study presents a series of investigations performed by Voith Hydro with the objective to improve the hydraulic stability of Francis turbines at overload, part load and deep part load, reduce pressure pulsations and enlarge the know-how about the transient fluid flow through the turbine at these challenging conditions. Model test measurements showed that distinct runner designs were able to influence the pressure pulsation level in the machine. Extensive experimental investigations focused on the runner deflector geometry and on runner features, and on how they could reduce the pressure oscillation level. The impact of design variants and machine configurations on the vortex rope at the draft tube cone at overload and part load, and on the runner channel vortex at deep part load, was experimentally observed and evaluated based on the measured pressure pulsation amplitudes. Numerical investigations were employed to improve the understanding of such dynamic fluid flow effects. As an example of the design and experimental investigations, model test observations and pressure pulsation curves are presented for Francis machines in the mid specific speed range, around n_q,opt = 50

  15. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  16. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  17. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    ... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R package SensMixed. We discuss and clarify the bias mechanisms...

  18. A SHARC based ROB Complex : design and measurement results

    CERN Document Server

    Boterenbrood, H; Kieft, G; Scholte, R; Slopsema, R; Vermeulen, J C

    2000-01-01

    ROB hardware, based on and exploiting the properties of the SHARC DSP and of FPGAs, and the associated software are described. Results from performance measurements and an analysis of the results for a single ROBIn as well as for a ROB Complex with up to 4 ROBIns are presented.

  19. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  20. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  1. Convergence models for cylindrical caverns and the resulting ground subsidence

    Energy Technology Data Exchange (ETDEWEB)

    Haupt, W.; Sroka, A.; Schober, F.

    1983-02-01

    The authors studied the effects of different convergence characteristics on surface soil response for the case of narrow, cylindrical caverns. Maximum ground subsidence - a parameter of major importance in this type of cavern - was calculated for different convergence models. The models were established without considering the laws of rock mechanics and rheology. As a result, two limiting convergence models were obtained that describe an interval of expectation into which all other models fit. This means that ground movements over cylindrical caverns can be calculated "on the safe side", correlating the trough resulting on the surface with the convergence characteristics of the cavern. Among other applications, the method thus permits monitoring of caverns.

  2. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced...

  3. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely’ dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble...

  4. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied-structure modal parameters found in the tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with an average of μ = 2.85 Hz and a standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic and external forces, and in simulating different mechanisms of human-structure and human-environment interaction at the same time.
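
    A minimal sketch of how the identified statistics above might be used in practice: sample SDOF walker parameters from the two reported normal distributions and convert them to stiffness and damping coefficients. The body mass and crowd size are assumptions for illustration, not values from the study.

    ```python
    # Sample walking-human SDOF parameters: f ~ N(2.85, 0.34) Hz and
    # zeta ~ N(0.295, 0.047), as identified in the study; mass is assumed.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_walker(m=75.0):           # assumed body mass [kg]
        f = rng.normal(2.85, 0.34)       # natural frequency [Hz]
        zeta = rng.normal(0.295, 0.047)  # damping ratio [-]
        wn = 2 * np.pi * f               # circular frequency [rad/s]
        return {'m': m, 'f': f, 'zeta': zeta,
                'k': m * wn**2,          # spring stiffness [N/m]
                'c': 2 * zeta * m * wn}  # damping coefficient [N s/m]

    # e.g. a small crowd for a multi-pedestrian simulation
    crowd = [sample_walker() for _ in range(10)]
    print(crowd[0])
    ```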

  5. Hydroclimatology of the Nile: results from a regional climate model

    Directory of Open Access Journals (Sweden)

    Y. A. Mohamed

    2005-01-01

    Full Text Available This paper presents the results of the regional coupled climatic and hydrologic model of the Nile Basin. For the first time, the interaction between the climatic processes and the hydrological processes on the land surface has been fully coupled. The hydrological model is driven by the rainfall and the energy available for evaporation generated in the climate model, and the runoff generated in the catchment is then routed over the wetlands of the Nile to supply moisture for atmospheric feedback. The results obtained are quite satisfactory given the extremely low runoff coefficients in the catchment. The paper presents the validation results over the sub-basins: Blue Nile, White Nile, Atbara river, the Sudd swamps, and the Main Nile for the period 1995 to 2000. Observational datasets were used to evaluate the model results, including radiation, precipitation, runoff and evaporation data. The evaporation data were derived from satellite images over a major part of the Upper Nile. Limitations in both the observational data and the model are discussed. It is concluded that the model provides a sound representation of the regional water cycle over the Nile. The sources of atmospheric moisture to the basin, and the location of convergence/divergence fields, could be accurately illustrated. The model is used to describe the regional water cycle in the Nile basin in terms of atmospheric fluxes, land surface fluxes and land surface-climate feedbacks. The monthly moisture recycling ratio (i.e. locally generated/total precipitation) over the Nile varies between 8 and 14%, with an annual mean of 11%, which implies that 89% of the Nile water resources originate from outside the basin's physical boundaries. The monthly precipitation efficiency varies between 12 and 53%, and the annual mean is 28%. The mean annual result of the Nile regional water cycle is compared to that of the Amazon and the Mississippi basins.
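
    The recycling diagnostic quoted above is a simple quotient, illustrated by the sketch below; the monthly fluxes are hypothetical placeholders, not output of the Nile model.

    ```python
    # Moisture recycling ratio = locally generated / total precipitation.
    import numpy as np

    p_total = np.array([60.0, 80.0, 120.0, 150.0, 140.0, 90.0])  # P [mm/month]
    p_local = np.array([6.0, 9.0, 15.0, 20.0, 17.0, 9.0])        # local part of P

    print((p_local / p_total).round(2))                    # monthly ratios
    print(f"annual mean: {p_local.sum() / p_total.sum():.2f}")
    ```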

  6. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is participating as a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere in the post-operating phase.

  7. Results from Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, K.

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  8. Results From Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, Kevin [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  9. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values against which the spatial model obtained from the web-album images could be verified. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  10. Atomic Action Refinement in Model Based Testing

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    2007-01-01

    In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in 1) test cases that are not executable, because their actions are too abstract (the implementation

  11. Dynamic Ligand Based Pharmacophore Modeling and Virtual ...

    Indian Academy of Sciences (India)

    user

    ChEMBL-HIV (https://www.ebi.ac.uk/chembl/index.php/assay/results): the ChEMBL bioassay database was searched with the keyword "Human Immunodeficiency Virus", the display-bioactivity option was chosen, and then "Download All Bioactivity Data" was ... Ligand-based pharmacophore models were generated from the crystal structure and ...

  12. Marginal production in the Gulf of Mexico - II. Model results

    International Nuclear Information System (INIS)

    Kaiser, Mark J.; Yu, Yunke

    2010-01-01

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and the limitations of the analysis. (author)

  13. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and the limitations of the analysis. (author)

  14. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best-fitting distributions were diverse. Given the best hazard-based models for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
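
    For orientation, a minimal sketch of fitting two of the parametric AFT distributions named above with the lifelines Python package; the input file and column names are hypothetical, and the flexible spline-based models of the paper are not reproduced here.

    ```python
    # Fit and compare Weibull and log-logistic AFT duration models.
    import pandas as pd
    from lifelines import WeibullAFTFitter, LogLogisticAFTFitter

    # hypothetical data: one row per incident, 'duration' in minutes,
    # 'cleared' = 1 if the incident end was observed, plus covariates
    df = pd.read_csv("incidents.csv")

    models = {}
    for name, cls in [("weibull", WeibullAFTFitter),
                      ("loglogistic", LogLogisticAFTFitter)]:
        aft = cls()
        aft.fit(df, duration_col="duration", event_col="cleared")
        models[name] = aft
        print(name, aft.AIC_)          # pick the distribution by AIC

    models["weibull"].print_summary()  # covariate effects on duration
    ```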

  15. Wave-current interactions: model development and preliminary results

    Science.gov (United States)

    Mayet, Clement; Lyard, Florent; Ardhuin, Fabrice

    2013-04-01

    The coastal area concentrates many uses that require integrated management based on diagnostic and predictive tools, to understand and anticipate the future of pollution from land or sea, and to learn more about natural hazards at sea or activity on the coast. Realistic modelling of coastal hydrodynamics needs to take into account various interacting processes, including tides, surges, and sea state (Wolf [2008]). These processes act at different spatial scales. Unstructured-grid models have shown the ability to satisfy these needs, given that a good mesh resolution criterion is used. We worked on adding a sea state forcing to a hydrodynamic circulation model. The sea state model is the unstructured version of WAVEWATCH III (Tolman [2008]) (the version developed at IFREMER, Brest (Ardhuin et al. [2010])), and the hydrodynamic model is the 2D barotropic module of the unstructured-grid finite element model T-UGOm (Le Bars et al. [2010]). We chose to use the radiation stress approach (Longuet-Higgins and Stewart [1964]) to represent the effect of surface waves (wind waves and swell) in the barotropic model, as previously done by Mastenbroek et al. [1993] and others. We present here some validation of the model against academic cases: a 2D plane beach (Haas and Warner [2009]) and a simple bathymetric step with an analytic solution for waves (Ardhuin et al. [2008]). In a second part we present a realistic application in the Ushant Sea during an extreme event. References: Ardhuin, F., N. Rascle, and K. Belibassakis, Explicit wave-averaged primitive equations using a generalized Lagrangian mean, Ocean Modelling, 20 (1), 35-60, doi:10.1016/j.ocemod.2007.07.001, 2008. Ardhuin, F., et al., Semiempirical Dissipation Source Functions for Ocean Waves. Part I: Definition, Calibration, and Validation, J. Phys. Oceanogr., 40 (9), 1917-1941, doi:10.1175/2010JPO4324.1, 2010. Haas, K. A., and J. C. Warner, Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and...

  16. Towards a results-based performance management: practices and ...

    African Journals Online (AJOL)

    Towards a results-based performance management: practices and challenges in the Ethiopian public sector. ... Journal of Business and Administrative Studies ... Findings of this study indicate that performance management system is disconnected at the top that weakened accountability of managers in the public sector.

  17. Recent results in mirror based high power laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove; Nielsen, Jakob Skov; Elvang, Mads

    2004-01-01

    In this paper, recent results in high power laser cutting obtained in research and development projects are presented. Two types of mirror-based focussing systems for laser cutting have been developed and applied in laser cutting studies on CO2 lasers up to 12 kW. In a shipyard environment, cutting...

  18. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    ... leaving students. It is a probabilistic model. In the next part of this article, two more models, an 'input/output model' used for production systems or economic studies and a 'discrete event simulation model', are introduced. Aircraft Performance Model.

  19. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  20. Fuel assembly bow: analytical modeling and resulting design improvements

    International Nuclear Information System (INIS)

    Stabel, J.; Huebsch, H.P.

    1995-01-01

    The bowing of fuel assemblies may result in contact between neighbouring fuel assemblies and, in connection with vibration, in wear or even perforation at the corners of the spacer grids of neighbouring assemblies. Such events allowed the reinsertion of a few fuel assemblies in Germany only after spacer repair. In order to identify the most sensitive parameters causing the observed bowing of fuel assemblies, a new computer model was developed which takes into account the highly nonlinear behaviour of the interaction between fuel rods and spacers. As a result of the studies performed with this model, design improvements, such as a more rigid connection between guide thimbles and spacer grids, could be defined. First experiences with this improved design show significantly better fuel behaviour. (author). 5 figs., 1 tab.

  1. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  2. The Integrated Fuzzy AHP and Goal Programing Model Based on LCA Results for Industrial Waste Management by Using the Nearest Weighted Approximation of FN: Aluminum Industry in Arak, Iran

    Directory of Open Access Journals (Sweden)

    Ramin Zare

    2016-01-01

    Full Text Available Worldwide recycled aluminum generation is increasing quickly, owing to environmental considerations and continuously growing demand. Aluminum dross recycling, as the secondary aluminum process, has always been considered a problematic issue worldwide. The aim of this work is to propose a methodical and easy procedure for selecting the proposed system, treated as a multi-criteria decision-making (MCDM) problem. Here, an evaluation method based on an integrated fuzzy AHP is presented to evaluate aluminum waste management systems, and the weights of each pairwise comparison matrix are derived by the use of a goal programming (GP) model. The functional unit, defined as 1000 kilograms, includes aluminum dross and aluminum scrap. The model is confirmed in the case of aluminum waste management in Arak. For the proposed integrated fuzzy AHP model, five alternatives are investigated. The results show that, according to the selected attributes, the best waste management alternative is the one involving 200 kg of primary aluminum ingot (99.5%) and 800 kg of secondary aluminum scrap (98%), in which beneficiation activities are implemented, the aluminum dross is recycled in the plant, and the residue is finally landfilled.
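
    As a rough illustration of the weighting step, the sketch below derives crisp criterion weights from a triangular-fuzzy pairwise comparison matrix using the geometric-mean (Buckley-style) shortcut; the paper instead derives the weights with a goal programming model and the nearest weighted approximation, which is more involved. The judgement values are hypothetical.

    ```python
    # Crisp weights from a triangular fuzzy number (TFN) comparison matrix.
    import numpy as np

    # hypothetical 3-criteria fuzzy judgements; each entry is a TFN (l, m, u)
    M = np.array([
        [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
        [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
        [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
    ])                                        # shape (3, 3, 3)

    g = M.prod(axis=1) ** (1.0 / M.shape[0])  # fuzzy geometric mean per row
    crisp = g.mean(axis=1)                    # centroid defuzzification
    weights = crisp / crisp.sum()
    print(weights.round(3))                   # relative criterion importance
    ```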

  3. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to construct a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time is confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal hardening mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

  4. Modeling vertical loads in pools resulting from fluid injection. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-06-15

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system. The results guided the subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF, with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  5. Modeling vertical loads in pools resulting from fluid injection

    International Nuclear Information System (INIS)

    Lai, W.; McCauley, E.W.

    1978-01-01

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system. The results guided the subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF, with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  6. Some results on the dynamics generated by the Bazykin model

    Directory of Open Access Journals (Sweden)

    Georgescu, R M

    2006-07-01

    Full Text Available A predator-prey model formerly proposed by A. Bazykin et al. [Bifurcation diagrams of planar dynamical systems (1985)] is analyzed in the case when two of the four parameters are kept fixed. Dynamics and bifurcation results are deduced by using the methods developed by D. K. Arrowsmith and C. M. Place [Ordinary differential equations (1982)], S.-N. Chow et al. [Normal forms and bifurcation of planar vector fields (1994)], Y. A. Kuznetsov [Elements of applied bifurcation theory (1998)], and A. Georgescu [Dynamic bifurcation diagrams for some models in economics and biology (2004)]. The global dynamic bifurcation diagram is constructed and graphically represented. The biological interpretation is presented, too.
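
    For readers who want to experiment, a minimal sketch integrating one commonly studied form of the Bazykin system (prey and predator self-limitation with a Holling type II response); the parameter values are illustrative and not those analyzed in the paper.

    ```python
    # Integrate a Bazykin-type predator-prey system:
    #   x' = x - x*y/(1 + a*x) - eps*x**2
    #   y' = -g*y + x*y/(1 + a*x) - delta*y**2
    import numpy as np
    from scipy.integrate import solve_ivp

    a, g, eps, delta = 0.5, 1.0, 0.05, 0.05      # illustrative parameters

    def bazykin(t, z):
        x, y = z
        fxy = x * y / (1 + a * x)                # Holling type II interaction
        return [x - fxy - eps * x**2, -g * y + fxy - delta * y**2]

    sol = solve_ivp(bazykin, (0, 200), [1.5, 0.5], dense_output=True, rtol=1e-8)
    x, y = sol.sol(np.linspace(0, 200, 2000))
    print(f"late-time prey, predator: {x[-1]:.3f}, {y[-1]:.3f}")
    ```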

  7. Results of the eruptive column model inter-comparison study

    Science.gov (United States)

    Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza

    2016-01-01

    This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
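
    The entrainment closure that the 1D models above rely on can be written down in a few lines. The sketch below integrates the classic Morton-Taylor-Turner flux equations to the neutral buoyancy level in a uniformly stratified atmosphere; the entrainment coefficient, stratification and source fluxes are illustrative values, not inputs of the inter-comparison exercises.

    ```python
    # 1D plume sketch: volume flux Q, momentum flux M, buoyancy flux F.
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha = 0.1        # entrainment coefficient [-] (assumed)
    N2 = 1.0e-4        # atmospheric stratification N^2 [1/s^2] (assumed)

    def mtt(z, u):
        Q, M, F = u
        return [2 * alpha * np.sqrt(M),  # entrainment of ambient air
                F * Q / M,               # buoyancy drives momentum
                -N2 * Q]                 # stratification erodes buoyancy

    def neutral_buoyancy(z, u):          # F = 0 at the neutral buoyancy level
        return u[2]
    neutral_buoyancy.terminal = True

    sol = solve_ivp(mtt, (0.0, 5e4), [1e4, 1e6, 1e9],
                    events=neutral_buoyancy, rtol=1e-8)
    print(f"neutral buoyancy level: {sol.t[-1] / 1e3:.1f} km")
    ```

    Such a sketch reproduces the general behaviour of a weak or strong column, but, as the study notes, it cannot capture 3D features such as large vortices or partial column collapse.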

  8. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  9. Risk-based technical specifications program: Site interview results

    International Nuclear Information System (INIS)

    Andre, G.R.; Baker, A.J.; Johnson, R.L.

    1991-08-01

    The Electric Power Research Institute and Pacific Gas and Electric Company are sponsoring a program directed at improving Technical Specifications using risk-based methods. The major objectives of the program are to develop risk-based approaches to improve Technical Specifications and to develop an Interactive Risk Advisor (IRA) prototype. The IRA is envisioned as an interactive system that is available to plant personnel to assist in controlling plant operation. Use of an IRA is viewed as a method to improve plant availability while maintaining or improving plant safety. In support of the program, interviews were conducted at several PWR and BWR plant sites, to elicit opinions and information concerning risk-based approaches to Technical Specifications and IRA requirements. This report presents the results of these interviews, including the functional requirements of an IRA. 2 refs., 6 figs., 2 tabs

  10. Some Results On The Modelling Of TSS Manufacturing Lines

    Directory of Open Access Journals (Sweden)

    Viorel MÎNZU

    2000-12-01

    Full Text Available This paper deals with the modelling of a particular class of manufacturing lines, governed by a decentralised control strategy so that they balance themselves. Such lines are known as "bucket brigades" and also as "TSS lines", after their first implementation at Toyota in the 70s. A first study of their behaviour was based upon modelling them as stochastic dynamic systems, which emphasised, in the frame of the so-called "Normative Model", a sufficient condition for self-balancing, that is, for autonomous functioning at a steady production rate (stationary behaviour). Under some particular conditions, a simulation analysis of TSS lines can be made on non-linear block diagrams, showing that the state trajectories are piecewise continuous between occurrences of certain discrete events, which determine their discontinuity. TSS lines may therefore be modelled as hybrid dynamic systems, more specifically, systems with autonomous switching and autonomous impulses (jumps). A stability analysis of such manufacturing lines is enabled by modelling them as hybrid dynamic systems with discontinuous motions.
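
    The self-balancing behaviour can be demonstrated with a few lines of simulation. The sketch below is a deterministic toy version of the normative model: it ignores blocking and walk-back time and assumes workers are ordered slowest to fastest; the speeds are illustrative.

    ```python
    # Bucket brigade on a normalised work line [0, 1]: when the last worker
    # finishes an item, every worker walks back and takes over the item of
    # their predecessor, and the first worker starts a new item at 0.
    def bucket_brigade(speeds, items=30):
        n = len(speeds)
        x = [i / n for i in range(n)]        # initial item positions
        for _ in range(items):
            dt = (1.0 - x[-1]) / speeds[-1]  # last worker reaches the end
            x = [xi + v * dt for xi, v in zip(x, speeds)]
            x = [0.0] + x[:-1]               # hand-offs after completion
            yield 1.0 / dt                   # instantaneous production rate

    rates = list(bucket_brigade([1.0, 2.0, 3.0]))
    print(f"production rate after 30 items: {rates[-1]:.3f}")
    # converges to sum(speeds) = 6.0: the stationary, self-balanced rate
    ```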

  11. Interaction between subducting plates: results from numerical and analogue modeling

    Science.gov (United States)

    Kiraly, Agnes; Capitanio, Fabio A.; Funiciello, Francesca; Faccenna, Claudio

    2016-04-01

    The tectonic setting of the Alpine-Mediterranean area was achieved during the late Cenozoic subduction, collision and suturing of several oceanic fragments and continental blocks. During this stage, processes such as interactions among subducting slabs, slab migrations and the related mantle flow played a relevant role in the resulting tectonics. Here, we use numerical models to first address the mantle flow characteristics in 3D. During the subduction of a single plate, the strength of the return flow strongly depends on the slab pull force, that is, on the plate's buoyancy; however, the physical properties of the slab, such as density, viscosity or width, do not greatly affect the morphology of the toroidal cell. Instead, dramatic effects on the geometry and the dynamics of the toroidal cell result in models where the thickness of the mantle is varied. The vertical component of the vorticity vector is used to define the characteristic size of the toroidal cell, which is ~1.2-1.3 times the mantle depth. The latter defines the range of viscous stress propagation through the mantle and consequent interactions with other slabs. We thus further investigate a setup in which two separate lithospheric plates subduct in opposite senses, developing opposite polarities and convergent slab retreat, and model different initial sideways distances between the plates. The stress profiles over time illustrate that the plates interact when the slabs are at the characteristic distance and the two slab toroidal cells merge. Increased stress and delayed slab migrations are the results. Analogue models of double-sided subduction show a similar maximum distance and allow testing the additional role of stress propagated through the plates. We use a silicone plate subducting at its two opposite margins, which is either homogeneous or comprises oceanic and continental lithospheres differing in buoyancy. The modeling results show that the double-sided subduction is strongly affected by changes in plate

  12. First experiments results about the engineering model of Rapsodie

    International Nuclear Information System (INIS)

    Chalot, A.; Ginier, R.; Sauvage, M.

    1964-01-01

    This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ The difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...). 2/ The experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design for the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr

  13. Workshop to transfer VELMA watershed model results to ...

    Science.gov (United States)

    An EPA Western Ecology Division (WED) watershed modeling team has been working with the Snoqualmie Tribe Environmental and Natural Resources Department to develop VELMA watershed model simulations of the effects of historical and future restoration and land use practices on streamflow, stream temperature, and other habitat characteristics affecting threatened salmon populations in the 100 square mile Tolt River watershed in Washington state. To date, the WED group has fully calibrated the watershed model to simulate Tolt River flows with a high degree of accuracy under current and historical conditions and practices, and is in the process of simulating long-term responses to specific watershed restoration practices conducted by the Snoqualmie Tribe and partners. On July 20-21 WED Researchers Bob McKane, Allen Brookes and ORISE Fellow Jonathan Halama will be attending a workshop at the Tolt River site in Carnation, WA, to present and discuss modeling results with the Snoqualmie Tribe and other Tolt River watershed stakeholders and land managers, including the Washington Departments of Ecology and Natural Resources, U.S. Forest Service, City of Seattle, King County, and representatives of the Northwest Indian Fisheries Commission. The workshop is being co-organized by the Snoqualmie Tribe, EPA Region 10 and WED. The purpose of this 2-day workshop is two-fold. First, on Day 1, the modeling team will perform its second site visit to the watershed, this time focus

  14. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    International Nuclear Information System (INIS)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.

    2013-08-01

    The MUD project addresses the assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can also be utilised for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from, e.g., limits in the meteorological observations used to initialise meteorological forecast series. By perturbing, e.g., the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced, from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed, from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
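
    The ensemble post-processing described here amounts, per grid cell, to statistics over the dispersion ensemble members. A minimal sketch (the member fields and action level below are synthetic stand-ins, not MUD data):

        import numpy as np

        rng = np.random.default_rng(42)
        # 50 hypothetical ensemble members of a deposition field (Bq/m2)
        members = rng.lognormal(mean=6.0, sigma=1.0, size=(50, 80, 100))

        threshold = 1.0e3                              # illustrative action level
        p_exceed = (members > threshold).mean(axis=0)  # per-cell exceedance probability
        dep_mean = members.mean(axis=0)                # ensemble-mean deposition
        dep_p95 = np.percentile(members, 95, axis=0)   # pessimistic percentile map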

  15. Some results on hyperscaling in the 3D Ising model

    Energy Technology Data Exchange (ETDEWEB)

    Baker, G.A. Jr. [Los Alamos National Lab., NM (United States). Theoretical Div.; Kawashima, Naoki [Univ. of Tokyo (Japan). Dept. of Physics

    1995-09-01

    The authors review exact studies on finite-sized 2-dimensional Ising models and show that the point for an infinite-sized model at the critical temperature is a point of nonuniform approach in the temperature-size plane. They also illuminate some strong effects of finite size on quantities which do not diverge at the critical point. They then review Monte Carlo studies for 3-dimensional Ising models of various sizes (L = 2–100) at various temperatures. From these results they find that the data for the renormalized coupling constant collapse nicely when plotted against the correlation length, determined in a system of edge length L, divided by L. They also find that ζ_L/L ≥ 0.26 is definitely too large for reliable studies of the critical value, g*, of the renormalized coupling constant. They have reasonable evidence that ζ_L/L ≈ 0.1 is adequate for results that are within one percent of those for the infinite system size. On this basis, they have conducted a series of Monte Carlo calculations with this condition imposed. These calculations were made practical by the development of improved estimators for use in the Swendsen-Wang cluster method. The authors found from these results, coupled with a reversed limit computation (size increases with the temperature fixed at the critical temperature), that g* > 0, although there may well be a sharp downward drop in g as the critical temperature is approached, in accord with the predictions of series analysis. The results support the validity of hyperscaling in the 3-dimensional Ising model.
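
    For orientation, renormalized-coupling estimates of this kind are built from magnetization moments and the finite-size correlation length ζ_L. A hedged sketch (normalization conventions vary between papers, and this is not necessarily the authors' exact estimator):

        import numpy as np

        def renormalized_coupling(m_samples, zeta_L, L, d=3):
            """g_L ~ (L/zeta_L)**d * (3 - <m^4>/<m^2>**2).

            m_samples: per-configuration magnetization from a Monte Carlo run.
            The moment ratio vanishes for a Gaussian theory, so g probes the
            non-Gaussianity of the order-parameter distribution.
            """
            m = np.asarray(m_samples, dtype=float)
            m2, m4 = np.mean(m**2), np.mean(m**4)
            return (L / zeta_L)**d * (3.0 - m4 / m2**2)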

  16. Presenting results of software model checker via debugging interface

    OpenAIRE

    Kohan, Tomáš

    2012-01-01

    Title: Presenting results of software model checker via debugging interface Author: Tomáš Kohan Department: Department of Software Engineering Supervisor of the master thesis: RNDr. Ondřej Šerý, Ph.D., Department of Distributed and Dependable Systems Abstract: This thesis is devoted to the design and implementation of a new debugging interface for the Java PathFinder application. The Eclipse development environment was selected as a suitable interface container. The created interface should vis...

  17. NASA Air Force Cost Model (NAFCOM): Capabilities and Results

    Science.gov (United States)

    McAfee, Julie; Culver, George; Naderi, Mahmoud

    2011-01-01

    NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs with mission characteristics, to predict new project costs. It is based on historical NASA and Air Force space projects and is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component level and estimates development and production costs. It is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a government version that is restricted and a contractor-releasable version.
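
    A CER of the kind described is typically a power law, cost = a * x**b, fitted in log-log space to historical data. A minimal sketch (the masses, costs, and resulting coefficients below are invented, not NAFCOM data):

        import numpy as np

        mass = np.array([100.0, 250.0, 400.0, 800.0, 1500.0])  # hypothetical dry mass, kg
        cost = np.array([12.0, 25.0, 36.0, 60.0, 95.0])        # hypothetical cost, $M

        # Fit log(cost) = b * log(mass) + log(a)
        b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
        a = np.exp(log_a)

        def cer(m_kg):
            """Power-law cost estimating relationship fitted above."""
            return a * m_kg**b

        print(f"cost ~ {a:.2f} * mass^{b:.2f}; 600 kg -> {cer(600.0):.1f} $M")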

  18. Review of Current Standard Model Results in ATLAS

    CERN Document Server

    Brandt, Gerhard; The ATLAS collaboration

    2018-01-01

    This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p-p$ collisions at $\sqrt{s}=7,8$~TeV in LHC Run-1, as well as results using data at $\sqrt{s}=13$~TeV in LHC Run-2, are covered. The status of cross section measurements from soft QCD processes, jet production and photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and allow it to be further constrained. In addition, they allow probing for new physics, which would manifest itself through extra gauge couplings or through Standard Model gauge couplings deviating from their predicted values.

  19. Thermal-Chemical Model Of Subduction: Results And Tests

    Science.gov (United States)

    Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.

    2005-12-01

    Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and the structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid-body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence is dependent on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases, and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin to mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations in seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifest in >20% variations in the local Poisson ratio, as compared to variations of ~2% expected as a consequence of temperature variation within the mantle wedge.

  20. Sensor-based interior modeling

    International Nuclear Information System (INIS)

    Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.

    1995-01-01

    Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation
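
    The planar-surface extraction mentioned can be illustrated by a least-squares plane fit via SVD. A minimal sketch on synthetic rangefinder points (the point cloud is made up; the sign of the normal is arbitrary):

        import numpy as np

        def fit_plane(points):
            """Least-squares plane through an (N, 3) point cloud.

            Returns (centroid, unit normal); the normal is the right singular
            vector with the smallest singular value of the centered points.
            """
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            return centroid, vt[-1]

        # Synthetic noisy patch of a wall scanned by a rangefinder
        rng = np.random.default_rng(0)
        xy = rng.uniform(0.0, 2.0, size=(500, 2))
        z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 0.01 * rng.normal(size=500)
        centroid, normal = fit_plane(np.column_stack([xy, z]))
        print(np.round(normal, 3))     # ~ +/-(-0.099, 0.050, 0.994)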

  1. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. In practice, however, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and of group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented: a real-life RCT and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in the estimated trend of around one standard deviation was found when sum scores were used, whereas IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements; the use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
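
    The role of response patterns can be made concrete with a two-parameter IRT model, where patterns sharing a sum score receive different ability estimates. A toy Newton-Raphson sketch (the item parameters are invented for illustration):

        import numpy as np

        def irt_theta(resp, a, b, iters=30):
            """Maximum-likelihood ability estimate under a 2-PL model."""
            theta = 0.0
            for _ in range(iters):
                p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
                grad = np.sum(a * (resp - p))           # d logL / d theta
                hess = -np.sum(a**2 * p * (1.0 - p))    # d2 logL / d theta2
                theta -= grad / hess
            return theta

        a = np.array([0.5, 1.0, 1.5, 1.0, 2.0])         # hypothetical discriminations
        b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])       # hypothetical difficulties
        r1 = np.array([1, 1, 1, 0, 0])                  # both patterns sum to 3 ...
        r2 = np.array([0, 1, 0, 1, 1])
        print(irt_theta(r1, a, b), irt_theta(r2, a, b)) # ... but theta differs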

  2. Test results judgment method based on BIT faults

    Directory of Open Access Journals (Sweden)

    Wang Gang

    2015-12-01

    Full Text Available Built-in test (BIT) is responsible for equipment fault detection, so the correctness of test data directly influences diagnosis results. Equipment suffers all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses, and interference and faults arise, influencing the test process and yielding results that are not credible. It is therefore necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research focuses mainly on safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. Firstly, the stress influences on components are illustrated and their effects on diagnosis results are summarized. Secondly, a composite BIT program with information integration is proposed, and a stress monitoring program is given. Thirdly, based on a detailed analysis of system faults and the forms of BIT results, a test sequence control method is proposed. It assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.

  3. Differential geometry based multiscale models.

    Science.gov (United States)

    Wei, Guo-Wei

    2010-08-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are

  4. Differential Geometry Based Multiscale Models

    Science.gov (United States)

    Wei, Guo-Wei

    2010-01-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that

  5. Loss of spent fuel pool cooling PRA: Model and results

    Energy Technology Data Exchange (ETDEWEB)

    Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.

    1996-09-01

    This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 × 10⁻⁵ and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 × 10⁻³. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either: (a) core damage due to spent fuel pool-associated flooding, or (b) spent fuel damage due to pool dryout, may not be negligible.

  6. Loss of spent fuel pool cooling PRA: Model and results

    International Nuclear Information System (INIS)

    Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.

    1996-09-01

    This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 × 10⁻⁵ and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 × 10⁻³. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either: (a) core damage due to spent fuel pool-associated flooding, or (b) spent fuel damage due to pool dryout, may not be negligible.

  7. CIM5 Phase III base process development results

    International Nuclear Information System (INIS)

    Witt, D.C.

    2000-01-01

    Integrated Demonstration Runs for the Am/Cm vitrification process were initiated in the Coupled 5-inch Cylindrical Induction Melter (CIM5) on 11/30/98 and completed on 12/9/98. Four successful runs at 60 wt% lanthanide loading were completed, which met or exceeded all established criteria. The operating parameters used in these runs established the base conditions for the 5-inch Cylindrical Induction Melter (CIM5) process and were summarized in the 5-inch CIM design basis, SRT-AMC-99-0001 (1). In subsequent tests, a total of fourteen CIM5 runs were performed using various power inputs, ramp rates and target temperatures to define the preferred processing conditions (2). Process stability and process flexibility were the key criteria used in assessing the results for each run. A preferred set of operating parameters was defined for the CIM5 batch process, and these conditions were used to generate a pre-programmed, automatic processing cycle that was used for the last six CIM5 runs (3). These operational tests were successfully completed in the January-February time frame and were summarized in SRT-AMC-99-00584. The recommended set of operating conditions defined in Runs No. 1 through No. 14 was used as the starting point for further pilot system runs to determine the robustness of the process, evaluate a bubbler, and investigate off-normal conditions. CIM5 Phase III Runs No. 15 through No. 60 were conducted utilizing the pre-programmed, automatic processing cycle to investigate system performance. This report summarizes the results of these tests and provides a recommendation for the base process as well as a processing modification for minimizing volume expansions if americium and/or curium are subject to a thermal reduction reaction like cerium. This document summarizes the results of the base process development tests conducted in the Am/Cm Pilot Facility located in Building 672-T

  8. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophysiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual-based model (PLACO - PLAnt COmpetition) that

  9. Interpretation of EQA results and EQA-based trouble shooting.

    Science.gov (United States)

    Kristensen, Gunn Berit Berge; Meijer, Piet

    2017-02-15

    Important objectives of External Quality Assessment (EQA) are to detect analytical errors and to take corrective actions. The aim of this paper is to describe the knowledge required to interpret EQA results and to present a structured approach for handling deviating EQA results. The value of EQA, and how an EQA result should be interpreted, depends on five key points: the control material, the target value, the number of replicates, the acceptance limits, and between-lot variations in the reagents used in measurement procedures. These also affect the process of finding the sources of errors when they appear. The ideal EQA sample has two important properties: it behaves as a native patient sample in all methods (is commutable), and it has a target value established with a reference method. If either of these two criteria is not entirely fulfilled, results not related to the performance of the laboratory may arise. To help and guide laboratories in handling a deviating EQA result, the Norwegian Clinical Chemistry EQA Program (NKK) has developed a flowchart, with additional comments, that can be used by laboratories, e.g. in their quality systems, to document actions against deviations in EQA. This EQA-based trouble-shooting tool has been developed further in cooperation with the External quality Control for Assays and Tests (ECAT) Foundation. The flowchart will become available in a public domain, i.e. the website of the European organisation for External Quality Assurance Providers in Laboratory Medicine (EQALM).

  10. Feature-driven model-based segmentation

    Science.gov (United States)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

    The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data with a high-level probabilistic map that guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of volume overlap. Quantitative results show overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate the ability to segment low-contrast regions, which are otherwise difficult to delineate with deformable models relying on distinct object boundaries in the original image data.
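
    Volume overlap of the kind reported is commonly the Dice coefficient between binary masks. A minimal sketch on two synthetic segmentations:

        import numpy as np

        def dice_overlap(seg_a, seg_b):
            """Dice coefficient 2|A∩B| / (|A| + |B|) for boolean volumes."""
            a, b = seg_a.astype(bool), seg_b.astype(bool)
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum())

        # Two overlapping spheres as stand-ins for automatic vs. manual masks
        zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
        auto = (zz - 32)**2 + (yy - 32)**2 + (xx - 30)**2 < 15**2
        manual = (zz - 32)**2 + (yy - 32)**2 + (xx - 34)**2 < 15**2
        print(round(dice_overlap(auto, manual), 3))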

  11. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  12. Preliminary results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Matsumoto, T.; Komine, K.; Arai, S.

    1997-01-01

    A high pressure test of a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and a preliminary comparison of test data with pretest analysis predictions are presented

  13. Electrochemistry-based Battery Modeling for Prognostics

    Science.gov (United States)

    Daigle, Matthew J.; Kulkarni, Chetan Shrikant

    2013-01-01

    Batteries are used in a wide variety of applications. In recent years, they have become popular as a source of power for electric vehicles such as cars, unmanned aerial vehicles, and commercial passenger aircraft. In such application domains, it becomes crucial both to monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. To implement such technologies, it is crucial to understand how batteries work and to capture that knowledge in the form of models that can be used by monitoring, diagnosis, and prognosis algorithms. In this work, we develop electrochemistry-based models of lithium-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable accuracy for reliable EOD prediction in a variety of usage profiles. This paper reports on the progress of such a model, with results demonstrating the model's validity and accurate EOD predictions.
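
    EOD prediction of the kind described reduces to propagating a battery state model forward until the terminal voltage crosses a cutoff. A deliberately simplified sketch (a Coulomb-counting surrogate with a made-up open-circuit-voltage curve, not the paper's electrochemistry model):

        def predict_eod_s(soc0, current_A, capacity_As, r_int=0.15, v_cutoff=3.0, dt=1.0):
            """Seconds until end of discharge for a crude battery surrogate."""
            soc, t = soc0, 0.0
            while soc > 0.0:
                ocv = 3.2 + 1.0 * soc                 # hypothetical OCV(soc), volts
                v = ocv - current_A * r_int           # ohmic drop only
                if v <= v_cutoff:
                    break
                soc -= current_A * dt / capacity_As   # Coulomb counting
                t += dt
            return t

        # 2.2 Ah cell discharged at a constant 2 A
        print(predict_eod_s(soc0=1.0, current_A=2.0, capacity_As=2.2 * 3600))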

  14. A Visual Attention Model Based Image Fusion

    OpenAIRE

    Rishabh Gupta; M.R.Vimala Devi; M. Devi

    2013-01-01

    To develop an efficient image fusion algorithm based on a visual attention model for images with distinct objects. Image fusion is a process of combining complementary information from multiple images of the same scene into one image, so that the resultant image contains a more accurate description of the scene than any of the individual source images. The two basic fusion techniques are pixel-level and region-level fusion. Pixel-level fusion deals with the operations on each and every pixel sep...
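
    Pixel-level fusion of the kind mentioned can be as simple as a per-pixel choice or average between registered source images. A minimal sketch on synthetic data:

        import numpy as np

        def fuse_max(img_a, img_b):
            """Per-pixel maximum of two registered, same-size images."""
            return np.maximum(img_a, img_b)

        def fuse_mean(img_a, img_b):
            """Per-pixel average; trades contrast for noise suppression."""
            return 0.5 * (img_a + img_b)

        rng = np.random.default_rng(1)
        a = rng.random((64, 64))      # stand-ins for two source images
        b = rng.random((64, 64))
        fused = fuse_max(a, b)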

  15. Guide to APA-Based Models

    Science.gov (United States)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.

  16. PARTICIPATION BASED MODEL OF SHIP CREW MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Toni Bielić

    2014-10-01

    Full Text Available This paper analyses the participation-based model on board ship as a possibly optimal leadership model existing in the shipping industry, with an accent on the decision-making process. The authors have tried to define the master's behaviour model and management style, identifying drawbacks and disadvantages of a vertical, pyramidal organization with the master on top. The paper describes the efficiency of decision making within a team organization and the optimization of a ship's organisation by introducing teamwork on board. Three examples of ship accidents are studied and evaluated through the "leader-participation" model. The model of participation-based management, as a model of teamwork, has been applied in studying the cause-and-effect of the accidents, with a critical review of communication and human resource management on board. The results show that the cause of all three accidents was the autocratic behaviour of the leaders and a lack of communication within the teams.

  17. Impact Flash Physics: Modeling and Comparisons With Experimental Results

    Science.gov (United States)

    Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.

    2015-12-01

    horizontal. High-speed radiometer measurements were made of the time-dependent impact flash at wavelengths of 350-1100 nm. We will present comparisons between these measurements and the output of APL's model. The results of this validation allow us to determine basic relationships between observed optical signatures and impact conditions.

  18. Constraint-Based Model Weaving

    Science.gov (United States)

    White, Jules; Gray, Jeff; Schmidt, Douglas C.

    Aspect-oriented modeling (AOM) is a promising technique for untangling the concerns of complex enterprise software systems. AOM decomposes the crosscutting concerns of a model into separate models that can be woven together to form a composite solution model. In many domains, such as multi-tiered e-commerce web applications, separating concerns is much easier than deducing the proper way to weave the concerns back together into a solution model. For example, modeling the types and sizes of caches that can be leveraged by a Web application is much easier than deducing the optimal way to weave the caches back into the solution architecture to achieve high system throughput.

  19. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  20. Cardiometabolic results from an armband-based weight loss trial

    Directory of Open Access Journals (Sweden)

    Sieverdes JC

    2011-05-01

    Full Text Available John C Sieverdes, Xuemei Sui, Gregory A Hand, Vaughn W Barry, Sara Wilcox, Rebecca A Meriwether, James W Hardin, Amanda C McClain, Steven N Blair; Department of Exercise Science, University of South Carolina, Columbia, SC, USA. Purpose: This report examines the blood chemistry and blood pressure (BP) results from the Lifestyle Education for Activity and Nutrition (LEAN) study, a randomized weight loss trial. A primary purpose of the study was to evaluate the effects of real-time self-monitoring of energy balance (using the SenseWear™ Armband, BodyMedia, Inc., Pittsburgh, PA) on these health factors. Methods: 164 sedentary overweight or obese adults (46.8 ± 10.8 years; BMI 33.3 ± 5.2 kg/m2; 80% women) took part in the 9-month study. Participants were randomized into 4 conditions: a standard care condition with an evidence-based weight loss manual (n = 40), a group-based behavioral weight loss program (n = 44), an armband-alone condition (n = 41), and a group plus armband condition (n = 39). BP, fasting blood lipids and glucose were measured at baseline and 9 months. Results: 99 participants (60%) completed both baseline and follow-up measurements for BP and blood chemistry analysis. Missing data were handled by baseline carried forward. None of the intervention groups had significant changes in blood lipids or BP when compared to standard care after adjustment for covariates, though within-group lowering was found for systolic BP in the group and group + armband conditions, a rise in total cholesterol and LDL was found in the standard care and group conditions, and a lowering of triglycerides was found in the two armband conditions. Compared with the standard care condition, fasting glucose decreased significantly for participants in the group, armband, and group + armband conditions (all P < 0.05). Conclusion: Our results suggest that using an armband program is an effective strategy to decrease fasting blood glucose. This indicates that devices, such as

  1. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory]; Pierrard, Viviane [BELGIUM]; Goldstein, Jerry [SWRI]; André, Nicolas [ESTEC/ESA]; Kotova, Galina A [SRI, RUSSIA]; Lemaire, Joseph F [BELGIUM]; Liemohn, Mike W [U OF MICHIGAN]; Matsui, H [UNIV OF NEW HAMPSHIRE]

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  2. Satellite-based terrestrial production efficiency modeling

    Directory of Open Access Journals (Sweden)

    Obersteiner Michael

    2009-09-01

    Full Text Available Abstract Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE), which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for

  3. Satellite-based terrestrial production efficiency modeling.

    Science.gov (United States)

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-09-18

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE), which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass
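
    The LUE logic these models share can be written as GPP = ε_max × f(T) × f(VPD) × fPAR × PAR. A MOD17-flavoured sketch (the constant and scalar values are schematic assumptions, not any one model's calibration):

        def gpp_lue(par_mj, fpar, f_temp=1.0, f_vpd=1.0, eps_max=1.8):
            """GPP in gC m-2 d-1 from a light-use-efficiency product.

            par_mj : incident PAR, MJ m-2 d-1
            fpar   : fraction of PAR absorbed by the canopy, 0-1
            f_temp, f_vpd : down-regulation scalars in [0, 1]
            eps_max : maximal LUE, gC per MJ APAR (hypothetical value)
            """
            return eps_max * f_temp * f_vpd * fpar * par_mj

        print(gpp_lue(par_mj=10.0, fpar=0.6, f_temp=0.9, f_vpd=0.8))  # ~7.8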

  4. On Process Modelling Using Physical Oriented And Phenomena Based Principles

    Directory of Open Access Journals (Sweden)

    Mihai Culea

    2000-12-01

    Full Text Available This work presents a modelling framework based on a phenomena description of the process. The approach is taken to make process models easy to understand and to construct in heterogeneous, possibly distributed, modelling and simulation environments. A simplified case study of a heat exchanger is considered, and the Modelica modelling language is used to check the proposed concept. The partial results are promising, and the research effort will be extended into a computer-aided modelling environment based on phenomena.
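
    The heat-exchanger case can be reduced, at the phenomena level, to two lumped energy balances coupled by a convective heat-transfer term. A minimal Python sketch of that idea (parameter values are made up; the original work used Modelica):

        def step_hx(t_hot, t_cold, dt, ua=500.0, mcp_hot=4.2e4, mcp_cold=4.2e4):
            """One explicit Euler step of two lumped energy balances.

            Phenomenon: convective transfer q = UA * (T_hot - T_cold); each
            side's temperature responds according to its capacity m*cp.
            """
            q = ua * (t_hot - t_cold)            # W
            t_hot -= q * dt / mcp_hot
            t_cold += q * dt / mcp_cold
            return t_hot, t_cold

        th, tc = 90.0, 20.0
        for _ in range(600):                      # 600 s of simulated time
            th, tc = step_hx(th, tc, dt=1.0)
        print(round(th, 1), round(tc, 1))         # both approach ~55 °C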

  5. Results-Based Organization Design for Technology Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Chris McPhee

    2012-05-01

    Full Text Available Faced with considerable uncertainty, entrepreneurs would benefit from clearly defined objectives, a plan to achieve these objectives (including a reasonable expectation that this plan will work), as well as a means to measure progress and make requisite course corrections. In this article, the author combines the benefits of results-based management with the benefits of organization design to describe a practical approach that technology entrepreneurs can use to design their organizations so that they deliver desired outcomes. This approach links insights from theory and practice, builds logical connections between entrepreneurial activities and desired outcomes, and measures progress toward those outcomes. It also provides a mechanism for entrepreneurs to make continual adjustments and improvements to their design and direction in response to data, customer and stakeholder feedback, and changes in their business environment.

  6. Experimental Results of Rover-Based Coring and Caching

    Science.gov (United States)

    Backes, Paul G.; Younse, Paulo; DiCicco, Matthew; Hudson, Nicolas; Collins, Curtis; Allwood, Abigail; Paolini, Robert; Male, Cason; Ma, Jeremy; Steele, Andrew

    2011-01-01

    Experimental results are presented for experiments performed using a prototype rover-based sample coring and caching system. The system consists of a rotary percussive coring tool on a five degree-of-freedom manipulator arm mounted on a FIDO-class rover and a sample caching subsystem mounted on the rover. Coring and caching experiments were performed in a laboratory setting and in a field test at Mono Lake, California. Rock abrasion experiments using an abrading bit on the coring tool were also performed. The experiments indicate that the sample acquisition and caching architecture is viable for use in a 2018 timeframe Mars caching mission and that rock abrasion using an abrading bit may be feasible in place of a dedicated rock abrasion tool.

  7. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
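
    The Bayes step mentioned, updating a component-availability estimate as new evidence arrives, looks like the following minimal sketch (the prior and likelihoods are invented):

        def bayes_update(prior_avail, p_obs_given_avail, p_obs_given_unavail):
            """Posterior P(available | observation) via Bayes' theorem."""
            num = p_obs_given_avail * prior_avail
            den = num + p_obs_given_unavail * (1.0 - prior_avail)
            return num / den

        # Component assumed 95% available; a passing self-test is observed,
        # which is 90% likely if available and 20% likely if not.
        print(round(bayes_update(0.95, 0.90, 0.20), 4))   # ~0.9884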

  8. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.
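
    The two-signal readout described suggests a simple estimator: total event energy from the sum of the two TES signals and position from their normalized difference. A hedged sketch (real PoST analysis fits full pulse shapes; the linear mapping here is an assumption):

        import numpy as np

        def post_event(pulse_a, pulse_b):
            """(energy, position) from the two TES pulse integrals.

            energy   ~ A + B  (arbitrary units; needs calibration)
            position ~ (A - B) / (A + B), in [-1, 1] along the absorber array
            """
            a = float(np.sum(pulse_a))
            b = float(np.sum(pulse_b))
            return a + b, (a - b) / (a + b)

        # Event nearer the "A" end: larger pulse in TES A than in TES B
        t = np.arange(0, 200)
        pulse_a = 1.0 * np.exp(-t / 30.0)
        pulse_b = 0.6 * np.exp(-t / 30.0)
        print(post_event(pulse_a, pulse_b))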

  9. Comparison of blade-strike modeling results with empirical data

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-03-01

    This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with the two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of differences in geometry and operations between families of turbines on the strike-probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade, and the location at which fish pass along the blade between the hub and blade tip, are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.
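
    A deterministic blade-strike relation of the kind underlying such models compares the time a fish of length L needs to clear the runner plane with the interval between blade passages. A hedged sketch (a simplified Von Raben-style expression, not PNNL's exact model):

        def strike_probability(fish_len_m, n_blades, rpm, v_axial_ms):
            """P(strike) ~ fraction of a blade-passage interval occupied by the fish.

            Time for the fish to clear the runner plane: L / v_axial.
            Time between successive blade passages:      60 / (rpm * n_blades).
            """
            t_fish = fish_len_m / v_axial_ms
            t_blade = 60.0 / (rpm * n_blades)
            return min(1.0, t_fish / t_blade)

        # Hypothetical Kaplan-like numbers: 6 blades, 90 rpm, 6 m/s axial flow
        print(round(strike_probability(0.15, 6, 90.0, 6.0), 3))   # ~0.225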

  10. Process-Based Modeling of Constructed Wetlands

    Science.gov (United States)

    Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.

    2007-12-01

    Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water-quality standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (primarily of nitrogen and phosphorus) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
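
    The Monod-type kinetics and first-order oxygen transfer named above can be written as a small ODE system. A minimal sketch (rate constants are illustrative, not CW2D's or the new simulator's calibrated values):

        def wetland_step(s, x, c_o2, dt,
                         mu_max=4.0 / 86400, ks=10.0, y=0.5,
                         kla=2.0 / 86400, c_sat=9.0):
            """Explicit Euler step for substrate S, biomass X, dissolved O2 (mg/L).

            Monod growth:  mu = mu_max * S / (Ks + S)
            Substrate use: dS/dt = -mu * X / Y
            O2 transfer:   dC/dt = kLa * (C_sat - C)   (consumption omitted)
            """
            mu = mu_max * s / (ks + s)
            s = max(s - mu * x / y * dt, 0.0)
            x = x + mu * x * dt
            c_o2 = c_o2 + kla * (c_sat - c_o2) * dt
            return s, x, c_o2

        s, x, c = 100.0, 50.0, 4.0                 # influent COD, biomass, O2
        for _ in range(86400):                     # one day in 1-s steps
            s, x, c = wetland_step(s, x, c, dt=1.0)
        print(round(s, 2), round(x, 1), round(c, 2))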

  11. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

    Full Text Available The calibration and execution of large hydrological models, such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, hiding the complex control of processes and heterogeneous resources across a grid-based high-performance computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation covers the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the results obtained demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.
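
    Calibration runs like these are usually scored against observed streamflow with an objective such as the Nash-Sutcliffe efficiency. A minimal sketch of that objective (the discharge series are synthetic):

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect."""
            obs = np.asarray(obs, dtype=float)
            sim = np.asarray(sim, dtype=float)
            return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

        obs = np.array([12.0, 18.0, 30.0, 22.0, 15.0])   # observed discharge, m3/s
        sim = np.array([10.0, 20.0, 27.0, 24.0, 14.0])   # simulated discharge
        print(round(nash_sutcliffe(obs, sim), 3))        # ~0.887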

  12. The design, results and future development of the National Energy Strategy Environmental Analysis Model (NESEAM)

    International Nuclear Information System (INIS)

    Fisher, R.E.; Boyd, G.A.; Breed, W.S.

    1991-01-01

    The National Energy Strategy Environmental Analysis Model (NESEAM) has been developed to project emissions for the National Energy Strategy (NES). Two scenarios were evaluated for the NES: a Current Policy Base Case and a NES Action Case. The results from the NES Action Case project much lower emissions than the Current Policy Base Case. Future enhancements to NESEAM will focus on fuel cycle analysis, including future technologies and additional pollutants to model. NESEAM's flexibility will allow it to model other future legislative issues. 7 refs., 4 figs., 2 tabs

  13. Refinement-Based Student Modeling and Automated Bug Library Construction.

    Science.gov (United States)

    Baffes, Paul; Mooney, Raymond

    1996-01-01

    Discussion of student modeling and intelligent tutoring systems focuses on the development of the ASSERT algorithm (Acquiring Stereotypical Student Errors by Refining Theories). Topics include overlay modeling; bug libraries (databases of student misconceptions); dynamic modeling; refinement-based modeling; and experimental results from tests at…

  14. Results of EPRI/ANL DCH investigations and model development

    International Nuclear Information System (INIS)

    Spencer, B.W.; Sienicki, J.J.; Sehgal, B.R.; Merilo, M.

    1988-01-01

    The results of a series of five experiments are described addressing the severity and mitigation of direct containment heating. The tests were performed in a 1:30 linear scale mockup of the Zion PWR containment system using a reactor-material corium melt consisting of 60% UO₂, 16% ZrO₂, 24% SSt at a nominal initial temperature of 2800°C. A "worst-case" type test involving unimpeded corium dispersal through an air atmosphere in a closed vessel produced an atmosphere heatup of 323 K, equivalent to a DCH efficiency of 62%. With the addition of structural features which impeded the corium dispersal, representative of dispersal pathway features at Zion, the DCH efficiency was reduced to 1-5%. (This important result is scale dependent and requires larger scale tests such as the SURTSEY program at SNL plus mechanistic modeling for application to the reactor system.) With the addition of water in the cavity region, there was no measurable heatup of the atmosphere. This was attributable to the vigorous codispersal of water with corium, which prevented the temperature of the atmosphere from significantly exceeding T_sat. In this case the DCH load was replaced by the more benign "steam spike" from corium quench. Significant oxidation of the corium constituents occurred in the tests, adding chemical energy to the system and producing hydrogen. Overall, the results suggest that with consideration of realistic, plant-specific mitigating features, DCH may be no worse and possibly far less severe than the previously examined steam spike. Implications for accident management are addressed. 17 refs., 7 figs., 4 tabs

  15. EPID based in vivo dosimetry system: clinical experience and results.

    Science.gov (United States)

    Celi, Sofia; Costa, Emilie; Wessels, Claas; Mazal, Alejandro; Fourquet, Alain; Francois, Pascal

    2016-05-08

    Mandatory in several countries, in vivo dosimetry has been recognized as one of the next milestones in radiation oncology. Our department has clinically implemented an EPID-based in vivo dosimetry system, EPIgray by DOSIsoft S.A., since 2006. An analysis of the measurements per linac and energy over a two-year period was performed, which included a more detailed examination per technique and treatment site over a six-month period. A comparison of the treatment planning system doses and the doses estimated by EPIgray shows a mean of the differences of 1.9% (± 5.2%) for the two-year period. The 3D conformal treatment plans had a mean dose difference of 2.0% (± 4.9%), while for intensity-modulated radiotherapy and volumetric-modulated arc therapy treatments the mean dose difference was -3.0% (± 5.3%) and -2.5% (± 5.2%), respectively. In addition, root cause analyses were conducted on the in vivo dosimetry measurements of two breast cancer treatment techniques, as well as prostate treatments with intensity-modulated radiotherapy and volumetric-modulated arc therapy. During the breast study, the dose differences of breast treatments in the supine position were correlated to patient setup and EPID positioning errors. Based on these observations, an automatic image shift correction algorithm is being developed by DOSIsoft S.A. The prostate study revealed that beams and arcs with out-of-tolerance in vivo dosimetry results tend to have more complex modulation and a lower exposure of the points of interest. The statistical studies indicate that in vivo dosimetry with EPIgray has been successfully implemented for classical and complex techniques in clinical routine at our institution. The additional breast and prostate studies exhibit the prospects of EPIgray as an easy supplementary quality assurance tool. The validation, the automatization, and the reduction of false-positive results represent an important step toward adaptive radiotherapy with EPIgray.

  16. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Most systems involve parameters and variables, which are random variables due to uncertainties. Probabilistic methods are powerful in modelling such systems. In this second part, we describe probabilistic models and Monte Carlo simulation along with 'classical' matrix methods and differential equations as most real ...

  17. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    A familiar example of a feedback loop is the business model in which part of the output or profit is fed back as input or additional capital - for instance, a company may choose to reinvest 10% of the profit for expansion of the business. Such simple models, like ..... would help scientists, engineers and managers towards better.

  18. A physiologically based model for denitrogenation kinetics

    Directory of Open Access Journals (Sweden)

    Ira Katz

    2017-01-01

    Under normal conditions we continuously breathe 78% nitrogen (N2), such that the body tissues and fluids are saturated with dissolved N2. For normobaric medical gas administration at high concentrations, the N2 concentration must be less than that in the ambient atmosphere; therefore, nitrogen will begin to be released by the body tissues. There is a need to estimate the time needed for denitrogenation in the planning of surgical procedures. In this paper we describe the application of a physiologically based pharmacokinetic model to denitrogenation kinetics. The results are compared to data from experiments in the literature that measured the end-tidal N2 concentration while breathing 100% oxygen, expressed in terms of moderately rapid and slow compartment time constants. It is shown that the model is in general agreement with published experimental data. Correlations for denitrogenation as a function of subject weight are provided.
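
    A minimal sketch of the compartmental washout idea behind such a model, assuming illustrative parameter values (f_fast, tau_fast, tau_slow and fet0 below are placeholders, not the paper's fitted constants): the end-tidal N2 fraction decays as a weighted sum of fast and slow exponentials.

        # Two-compartment N2 washout sketch; all parameters are illustrative.
        import numpy as np

        def end_tidal_n2(t_min, f_fast=0.7, tau_fast=5.0, tau_slow=60.0, fet0=0.78):
            """End-tidal N2 fraction during 100% O2 breathing, modeled as the
            sum of a fast (well-perfused) and a slow (fat) compartment."""
            f_slow = 1.0 - f_fast
            return fet0 * (f_fast * np.exp(-t_min / tau_fast)
                           + f_slow * np.exp(-t_min / tau_slow))

        print(f"End-tidal N2 after 30 min of O2: {end_tidal_n2(30.0):.3f}")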

  19. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.
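
    To make the ABM notion of emergence concrete, here is a minimal, self-contained sketch (all rules and numbers are invented for illustration, not drawn from the review): heterogeneous agents harvest a shared renewable resource, and the macro-level resource trajectory emerges from their individual decisions.

        # Minimal agent-based sketch: heterogeneous harvesters on a commons.
        import random

        random.seed(42)
        resource = 1000.0
        agents = [{"greed": random.uniform(0.01, 0.05)} for _ in range(50)]

        for step in range(20):
            for agent in agents:
                resource -= agent["greed"] * resource / len(agents)  # harvest
            resource *= 1.02                                         # regrowth
        print(f"Resource remaining after 20 steps: {resource:.1f}")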

  20. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps; progress is measured in completed and tested code units. Progress is measured in model based design by completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
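
    The recorded-data replay loop described above can be sketched in a few lines; the sensor trace, legacy analog response and controller gains below are synthetic stand-ins, not the paper's game-controller design.

        # Replay a recorded sensor trace through a candidate digital controller
        # and compare it against a stand-in for the legacy analog loop.
        import numpy as np

        dt = 0.01
        t = np.arange(0.0, 2.0, dt)
        rng = np.random.default_rng(0)
        recorded = 1.0 - np.exp(-t) + 0.02 * rng.standard_normal(t.size)
        setpoint, kp, ki, integ = 1.0, 2.0, 0.5, 0.0

        u = np.zeros_like(t)
        for k in range(t.size):                  # discrete PI controller
            err = setpoint - recorded[k]
            integ += err * dt
            u[k] = kp * err + ki * integ

        legacy = 2.0 * (setpoint - recorded)     # stand-in for the analog output
        print(f"Max deviation from legacy loop: {np.abs(u - legacy).max():.3f}")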

  1. The Culture Based Model: Constructing a Model of Culture

    Science.gov (United States)

    Young, Patricia A.

    2008-01-01

    Recent trends reveal that models of culture aid in mapping the design and analysis of information and communication technologies. Therefore, models of culture are powerful tools to guide the building of instructional products and services. This research examines the construction of the culture based model (CBM), a model of culture that evolved…

  2. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.
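
    For orientation, the surplus process underlying Erlang-type risk models is commonly written in the standard textbook form below; the paper's generalized Erlang structure enters through the claim inter-arrival times (sums of independent, possibly non-identical exponentials), which the abstract does not spell out:

        U(t) = u + ct - \sum_{i=1}^{N(t)} X_i , \qquad T = \inf\{ t \ge 0 : U(t) < 0 \},

    with surplus prior to ruin U(T^-) and deficit at ruin |U(T)|, the two quantities whose discounted joint density is discussed above.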

  3. Bridge Testing With Ground-Based Interferometric Radar: Experimental Results

    Science.gov (United States)

    Chiara, P.; Morelli, A.

    2010-05-01

    The search for innovative non-contact techniques for vibration measurement of civil engineering structures (also for damage detection and structural health monitoring) is continuously directed toward the optimization of measures and methods. Ground-Based Radar Interferometry (GBRI) represents the most recent technique available for static and dynamic control of structures and ground movements. Dynamic testing of bridges and buildings in operational conditions is currently performed: (a) to assess the conformity of the structure to the project design at the end of construction; (b) to identify the modal parameters (i.e. natural frequencies, mode shapes and damping ratios) and to check the variation of any modal parameters over the years; (c) to evaluate the amplitude of the structural response to special load conditions (i.e. strong winds, earthquakes, heavy railway or roadway loads). If such tests are carried out using a non-contact technique (like GBRI), the classical issues of contact sensors (like accelerometers) are easily overcome. This paper presents and discusses the results of various tests carried out on full-scale bridges by using a Stepped Frequency-Continuous Wave radar system.
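
    For context, ground-based radar interferometers of this kind typically recover line-of-sight displacement from the interferometric phase via the standard relation below (a textbook identity, not stated in the abstract; \lambda is the radar wavelength and \Delta\varphi the phase change between acquisitions):

        d_{LOS} = -\frac{\lambda}{4\pi}\,\Delta\varphi .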

  4. Building a Global Groundwater Model from Scratch - Concepts and Results

    Science.gov (United States)

    Reinecke, R.; Song, Q.; Foglia, L.; Mehl, S.; Doll, P. M.

    2016-12-01

    To represent groundwater-surface water interactions as well as the impact of capillary rise on evapotranspiration in global-scale hydrological models, it is necessary to simulate the location and temporal variation of the groundwater table. This requires replacing the simulation of groundwater dynamics by means of groundwater storage variations in individual grid cells (independent of the storage variation in neighboring cells) with hydraulic head gradient-based groundwater modeling. Based on the experience of two research groups who have published different approaches for global-scale groundwater modeling, we present first results of our effort to develop a transient global groundwater model that is to replace the simple storage-based groundwater module of the global hydrological model WaterGAP. The following three technical and conceptual aspects of this endeavour are discussed: (1) a software engineering approach to build a new hydraulic head based global groundwater model from scratch, with the goal of maximizing performance and extensibility; (2) comparison to other model approaches and their inherent problems; (3) global data deficits and how to deal with them. Furthermore, this poster presents and discusses first results and provides an outlook on future developments.
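
    The hydraulic head gradient-based formulation referred to above amounts to solving a form of the transient groundwater flow equation, sketched here in standard notation (the abstract does not give the model's exact discretization; S_s is specific storage, K hydraulic conductivity, W sources and sinks):

        S_s \frac{\partial h}{\partial t} = \nabla \cdot \left( K \, \nabla h \right) + W ,

    so that the storage change in each cell is coupled to its neighbors through head gradients rather than computed independently.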

  5. Waste glass corrosion modeling: Comparison with experimental results

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-01-01

    Models for borosilicate glass dissolution must account for the processes of (1) kinetically-controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models are necessary before the models can be used in repository performance assessment calculations
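
    A typical transition-state-theory rate law of the kind such glass-dissolution models embody is sketched below (schematic, not the specific equation of the cited models; k is a rate constant, S the reactive surface area, E_a the activation energy, and Q/K the ion activity product over the equilibrium constant of the rate-limiting reaction):

        r = k \, S \, \exp\!\left(-\frac{E_a}{RT}\right) \left(1 - \frac{Q}{K}\right),

    which reproduces the slow, affinity-limited long-term rate as Q approaches K in silica-saturated solutions.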

  6. Néron models and base change

    CERN Document Server

    Halle, Lars Halvard

    2016-01-01

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions of abelian varieties. The final chapter contains a list of challenging open questions. This book is a...

  7. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  8. Argonne Fuel Cycle Facility ventilation system -- modeling and results

    International Nuclear Information System (INIS)

    Mohr, D.; Feldman, E.E.; Danielson, W.F.

    1995-01-01

    This paper describes an integrated study of the Argonne-West Fuel Cycle Facility (FCF) interconnected ventilation systems during various operations. Analyses and test results include first a nominal condition reflecting balanced pressures and flows, followed by several infrequent and off-normal scenarios. This effort is the first study of the FCF ventilation systems as an integrated network wherein the hydraulic effects of all major air systems have been analyzed and tested. The FCF building consists of many interconnected regions in which nuclear fuel is handled, transported and reprocessed. The ventilation systems comprise a large number of ducts, fans, dampers, and filters which together must provide clean, properly conditioned air to the worker-occupied spaces of the facility while preventing the spread of airborne radioactive materials to clean areas or the atmosphere. This objective is achieved by keeping the FCF building at a partial vacuum in which the contaminated areas are kept at lower pressures than the other worker-occupied spaces. The ventilation systems of FCF and the EBR-II reactor are analyzed as an integrated totality, as demonstrated. We then developed the network model shown in Fig. 2 for the TORAC code. The scope of this study was to assess the measured results from the acceptance/flow-balancing testing and to predict the effects of power failures, hatch and door openings, single-failure faulted conditions, EBR-II isolation, and other infrequent operations. The studies show that the FCF ventilation systems are very controllable and remain stable following off-normal events. In addition, the FCF ventilation system complex is essentially immune to reverse flows and spread of contamination to clean areas during normal and off-normal operation.

  9. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  10. Model-Based Enterprise Summit Report

    Science.gov (United States)

    2014-02-01

    [Slide-extraction residue; only keyword fragments are recoverable: model-based enterprise models coupled with knowledge via design advisors; CAD fit, machine motion, kanban triggers, tolerance models; geometry, kinematics, control, physics and planning system models.]

  11. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma "Tor Vergata", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma "Tor Vergata", Rome (Italy); Belli, P. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma "La Sapienza", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Universita di Roma "Tor Vergata", Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)

    2013-12-15

    The results obtained with the total exposure of 1.04 ton × yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton × yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature, by using a highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton × yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
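
    The annual modulation signature exploited here is conventionally fitted with a cosine of one-year period; in this notation the abstract's numbers correspond to S_m = (0.0112±0.0012) cpd/kg/keV, t_0 = (144±7) d and T = (0.998±0.002) yr:

        S(t) = S_0 + S_m \cos\!\left( \frac{2\pi \,(t - t_0)}{T} \right).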

  12. Innovation ecosystem model for commercialization of research results

    Directory of Open Access Journals (Sweden)

    Vlăduţ Gabriel

    2017-07-01

    Innovation means creativity and added value recognised by the market. The first step in creating a sustainable mechanism for the commercialization of research results (technology transfer, TT) is, on the one hand, to define the "technology" which will be transferred and, on the other hand, to define the context in which the TT mechanism works: the ecosystem. The focus must be set on technology as an entity, not as a science or a study of the practical industrial arts, and certainly not any specific applied science. The transfer object, the technology, must rely on a subjectively determined but specifiable set of processes and products. Focusing on the product alone is not sufficient for the transfer and diffusion of technology: it is not merely the product that is transferred but also knowledge of its use and application. The innovation ecosystem model brings together new companies, experienced business leaders, researchers, government officials, established technology companies, and investors. This environment provides those new companies with a wealth of technical expertise, business experience, and access to capital that supports innovation in the early stages of growth.

  13. Some important results from the air pollution distribution model STACKS (1988-1992)

    International Nuclear Information System (INIS)

    Erbrink, J.J.

    1993-01-01

    Attention is paid to the results of the study on the distribution of air pollutants by high chimney-stacks of electric power plants. An important product of the study is the integrated distribution model STACKS (Short Term Air-pollutant Concentrations Kema modelling System). The improvements and the extensions of STACKS are described in relation to the National Model, which has been used to estimate the environmental effects of individual chimney-stacks. The National Model shows unacceptable variations for high pollutant sources. Based on the results of STACKS revision of the National model has been taken into consideration. By means of the revised National Model a more realistic estimation of the environmental effects of electric power plants can be carried out
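
    Models of this family are typically built around a Gaussian plume kernel of the standard form below (schematic; the abstract does not reproduce STACKS's exact formulation), with source strength Q, wind speed u, effective stack height H and dispersion parameters \sigma_y, \sigma_z:

        C(x,y,z) = \frac{Q}{2\pi u \sigma_y \sigma_z} \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right) \left[ \exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right) \right].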

  14. Model based risk assessment - the CORAS framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune; Thunem, Atoosa P-J.

    2004-04-15

    Traditional risk analysis and assessment is based on failure-oriented models of the system. In contrast to this, model-based risk assessment (MBRA) utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The target models are then used as input sources for complementary risk analysis and assessment techniques, as well as a basis for the documentation of the assessment results. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tested with successful outcome through a series of seven trials within the telemedicine and e-commerce areas. The CORAS project in general and the CORAS application of MBRA in particular have contributed positively to the visibility of model-based risk assessment and thus to the disclosure of several potentials for further exploitation of various aspects within this important research field. In that connection, the CORAS methodology's possibilities for further improvement towards utilization in more complex architectures and also in other application domains such as the nuclear field can be addressed. The latter calls for adapting the framework to address nuclear standards such as IEC 60880 and IEC 61513. For this development we recommend applying a trial-driven approach within the nuclear field. The tool-supported approach for combining risk analysis and system development also fits well with the HRP proposal for developing an Integrated Design Environment (IDE) providing efficient methods and tools to support control room systems design. (Author)

  15. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML) for defining the relationships between models.

  16. Model-based explanation of plant knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Huuskonen, P.J. [VTT Electronics, Oulu (Finland). Embedded Software

    1997-12-31

    This thesis deals with computer explanation of knowledge related to design and operation of industrial plants. The needs for explanation are motivated through case studies and literature reviews. A general framework for analysing plant explanations is presented. Prototypes demonstrate key mechanisms for implementing parts of the framework. Power plants, steel mills, paper factories, and high energy physics control systems are studied to set requirements for explanation. The main problems are seen to be either lack or abundance of information. Design knowledge in particular is found missing at plants. Support systems and automation should be enhanced with ways to explain plant knowledge to the plant staff. A framework is formulated for analysing explanations of plant knowledge. It consists of three parts: 1. a typology of explanation, organised by the class of knowledge (factual, functional, or strategic) and by the target of explanation (processes, automation, or support systems), 2. an identification of explanation tasks generic for the plant domain, and 3. an identification of essential model types for explanation (structural, behavioural, functional, and teleological). The tasks use the models to create the explanations of the given classes. Key mechanisms are discussed to implement the generic explanation tasks. Knowledge representations based on objects and their relations form a vocabulary to model and present plant knowledge. A particular class of models, means-end models, are used to explain plant knowledge. Explanations are generated through searches in the models. Hypertext is adopted to communicate explanations over dialogue based on context. The results are demonstrated in prototypes. The VICE prototype explains the reasoning of an expert system for diagnosis of rotating machines at power plants. The Justifier prototype explains design knowledge obtained from an object-oriented plant design tool. Enhanced access mechanisms into on-line documentation are

  17. Blade element momentum modeling of inflow with shear in comparison with advanced model results

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Riziotis, V.; Zahle, Frederik

    2012-01-01

    shear is present in the inflow. This gives guidance on how the BEM modeling of shear should be implemented. Another result from the advanced vortex model computations is a clear indication of the influence of the ground, and the general tendency is a speed-up effect of the flow through the rotor giving

  18. Firm Based Trade Models and Turkish Economy

    Directory of Open Access Journals (Sweden)

    Nilüfer ARGIN

    2015-12-01

    Among international trade models, only firm-based trade models explain firms' actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually conduct intra-industry trade, and they can truly explain the globalization process. These approaches also cover multinational corporations, supply chains and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of firm-based trade models. We use UNCTAD data on exports by SITC Rev. 3 categorization to explain total exports and 255 products, and to calculate the intensive and extensive margins of Turkish firms.

  19. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F.

    1999-07-01

    A dynamical downscaling is presented that allows an estimation of potential effects of climate change on the North Sea. Therefore, the ocean general circulation model OPYC is adapted for application on a shelf by adding a lateral boundary formulation and a tide model. In this set-up the model is forced, first, with data from the ECMWF reanalysis for model validation and the study of the natural variability, and, second, with data from climate change experiments to estimate the effects of climate change on the North Sea. (orig.)

  20. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space-time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.

  1. Particle-based model for skiing traffic.

    Science.gov (United States)

    Holleczek, Thomas; Tröster, Gerhard

    2012-05-01

    We develop and investigate a particle-based model for ski slope traffic. Skiers are modeled as particles with a mass that are exposed to social and physical forces, which define the riding behavior of skiers during their descents on ski slopes. We also report position and speed data of 21 skiers recorded with GPS-equipped cell phones on two ski slopes. A comparison of these data with the trajectories resulting from computer simulations of our model shows a good correspondence. A study of the relationship among the density, speed, and flow of skiers reveals that congestion does not occur even with arrival rates of skiers exceeding the maximum ski lift capacity. In a sensitivity analysis, we identify the kinetic friction coefficient of skis on snow, the skier mass, the range of repelling social forces, and the arrival rate of skiers as the crucial parameters influencing the simulation results. Our model allows for the prediction of speed zones and skier densities on ski slopes, which is important in the prevention of skiing accidents.
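
    A minimal sketch of the particle idea described above, assuming invented parameter values (mass, friction coefficient, slope and the repelling-force law below are placeholders, not the paper's calibrated model): skiers are driven downhill by gravity, braked by kinetic friction, and pushed apart by a short-range social force.

        # Social-force skiing sketch; all parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n, dt, mass, mu, g = 20, 0.1, 75.0, 0.05, 9.81
        slope = np.deg2rad(15.0)
        pos = rng.uniform(0.0, 10.0, (n, 2))   # (downhill, across) positions [m]
        vel = np.zeros((n, 2))

        for _ in range(100):                   # 10 s of simulated descent
            accel = np.array([g * (np.sin(slope) - mu * np.cos(slope)), 0.0])
            force = np.tile(mass * accel, (n, 1))
            for i in range(n):                 # pairwise short-range repulsion
                d = pos[i] - pos
                r = np.linalg.norm(d, axis=1)
                mask = (r > 0.0) & (r < 5.0)
                force[i] += (200.0 * d[mask] / r[mask, None] ** 2).sum(axis=0)
            vel += force / mass * dt
            pos += vel * dt
        print(f"Mean downhill speed after 10 s: {vel[:, 0].mean():.1f} m/s")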

  3. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  4. CSPBuilder - CSP based Scientific Workflow Modelling

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Vinter, Brian

    2008-01-01

    This paper introduces a framework for building CSP based applications, targeted for clusters and next generation CPU designs. CPUs are produced with several cores today and every future CPU generation will feature increasingly more cores, resulting in a requirement for concurrency that has not previously been called for. The framework is CSP presented as a scientific workflow model, specialized for scientific computing applications. The purpose of the framework is to enable scientists to exploit large parallel computation resources, which has previously been hard due to the difficulty of concurrent

  5. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
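
    For reference, the weakest precondition predicate transformer mentioned above is defined by Dijkstra's classical rules for assignment and sequencing (standard definitions, not specific to this report):

        wp(x := E, \; R) = R[x := E], \qquad wp(S_1 ; S_2, \; R) = wp(S_1, \; wp(S_2, R)),

    which here supplies the preconditions under which the model reproduces the observed faulty behavior.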

  6. Effect of geometry of rice kernels on drying modeling results

    Science.gov (United States)

    Geometry of rice grain is commonly represented by sphere, spheroid or ellipsoid shapes in drying models. Models using simpler shapes are easy to solve mathematically; however, deviation from the true grain shape might lead to large errors in predictions of drying characteristics such as moistur...

  7. Spinal cord stimulation: modeling results and clinical data

    NARCIS (Netherlands)

    Struijk, Johannes J.; Struijk, J.J.; Holsheimer, J.; Barolat, Giancarlo; He, Jiping

    1992-01-01

    The potential distribution in volume conductor models of the spinal cord at cervical, midthoracic and low-thoracic levels, due to epidural stimulation, was calculated. Threshold stimuli of modeled myelinated dorsal column and dorsal root fibers were calculated and were compared with perception

  8. Quark cluster model of nuclei and lepton scattering results

    International Nuclear Information System (INIS)

    Vary, J.P.; Iowa State Univ. of Science and Technology, Ames

    1984-01-01

    A review of the quark cluster model (QCM) of nuclei is presented along with applications to deep inelastic lepton scattering and elastic lepton scattering experiments. In addition a sample comparison is made with high momentum transfer (p, π) data. The QCM prediction for the ratio of nuclear structure functions in the x > 1 domain is discussed as a critical test of the model

  9. How to: understanding SWAT model uncertainty relative to measured results

    Science.gov (United States)

    Watershed models are being relied upon to contribute to most policy-making decisions of watershed management, and the demand for an accurate accounting of complete model uncertainty is rising. Generalized likelihood uncertainty estimation (GLUE) is a widely used method for quantifying uncertainty i...

  10. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful addition to the metabolic engineering toolkit, and that they can result in actionable insights. Key concepts are developed and deliverable publications and results are presented.

  11. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    HADES experiment at GSI is the only high precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  12. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Modelling Deterministic Systems. N K Srinivasan graduated from Indian Institute of Science and obtained his Doctorate from Columbia University, New York. He has taught in several universities, and later did system analysis, wargaming and simulation for defence. His other areas of interest are reliability engineering.

  13. A Correlation-Based Transition Model using Local Variables. Part 1; Model Formation

    Science.gov (United States)

    Menter, F. R.; Langtry, R. B.; Likki, S. R.; Suzen, Y. B.; Huang, P. G.; Volker, S.

    2006-01-01

    A new correlation-based transition model has been developed, which is based strictly on local variables. As a result, the transition model is compatible with modern computational fluid dynamics (CFD) approaches, such as unstructured grids and massive parallel execution. The model is based on two transport equations, one for intermittency and one for the transition onset criterion in terms of momentum thickness Reynolds number. The proposed transport equations do not attempt to model the physics of the transition process (unlike, e.g., turbulence models) but form a framework for the implementation of correlation-based models into general-purpose CFD methods.
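
    Schematically, the intermittency transport equation in such correlation-based models takes the generic convection-diffusion form below (this is the shape of the equation, with production P_γ and destruction E_γ source terms, rather than the paper's calibrated version):

        \frac{\partial (\rho \gamma)}{\partial t} + \frac{\partial (\rho U_j \gamma)}{\partial x_j} = P_\gamma - E_\gamma + \frac{\partial}{\partial x_j} \left[ \left( \mu + \frac{\mu_t}{\sigma_\gamma} \right) \frac{\partial \gamma}{\partial x_j} \right],

    with an analogous equation transporting the local transition onset momentum-thickness Reynolds number.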

  14. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
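
    Temperature-based models of this family typically build on the classic Hargreaves-Samani form, shown below for orientation (the abstract does not reproduce the seventeen new formulae; a is an empirical coefficient and H_0 the extraterrestrial radiation):

        H = a \, H_0 \, \sqrt{T_{max} - T_{min}} ,

    with the Annandale and Allen models cited above being variants of this diurnal-temperature-range idea.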

  15. Family-based hip-hop to health: outcome results.

    Science.gov (United States)

    Fitzgibbon, Marian L; Stolley, Melinda R; Schiffer, Linda; Kong, Angela; Braunschweig, Carol L; Gomez-Perez, Sandra L; Odoms-Young, Angela; Van Horn, Linda; Christoffel, Katherine Kaufer; Dyer, Alan R

    2013-02-01

    This pilot study tested the feasibility of Family-Based Hip-Hop to Health, a school-based obesity prevention intervention for 3-5-year-old Latino children and their parents, and estimated its effectiveness in producing smaller average changes in BMI at 1-year follow-up. Four Head Start preschools administered through the Chicago Public Schools were randomly assigned to receive a Family-Based Intervention (FBI) or a General Health Intervention (GHI). Parents signed consent forms for 147 of the 157 children enrolled. Both the school-based and family-based components of the intervention were feasible, but attendance for the parent intervention sessions was low. Contrary to expectations, a downtrend in BMI Z-score was observed in both the intervention and control groups. While the data reflect a downward trend in obesity among these young Hispanic children, obesity rates remained higher at 1-year follow-up (15%) than those reported by the National Health and Nutrition Examination Survey (2009-2010) for 2-5-year-old children (12.1%). Developing evidence-based strategies for obesity prevention among Hispanic families remains a challenge. Copyright © 2012 The Obesity Society.

  16. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.
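
    For orientation, the standard mixture cure model referred to above writes the population survival function as a mixture of cured and uncured subjects (π(z) is the probability of being uncured given covariates z, and S_u the latency survival function given covariates x, the part the proposed residuals target):

        S_{pop}(t \mid x, z) = \pi(z) \, S_u(t \mid x) + 1 - \pi(z).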

  17. Exoplanets -New Results from Space and Ground-based Surveys

    Science.gov (United States)

    Udry, Stephane

    The exploration of the outer solar system and in particular of the giant planets and their environments is an on-going process with the Cassini spacecraft currently around Saturn, the Juno mission to Jupiter preparing to depart and two large future space missions planned to launch in the 2020-2025 time frame for the Jupiter system and its satellites (Europa and Ganymede) on the one hand, and the Saturnian system and Titan on the other hand [1,2]. Titan, Saturn's largest satellite, is the only other object in our Solar system to possess an extensive nitrogen atmosphere, host to an active organic chemistry, based on the interaction of N2 with methane (CH4). Following the Voyager flyby in 1980, Titan has been intensely studied from the ground-based large telescopes (such as the Keck or the VLT) and by artificial satellites (such as the Infrared Space Observatory and the Hubble Space Telescope) for the past three decades. Prior to Cassini-Huygens, Titan's atmospheric composition was thus known to us from the Voyager missions and also through the explorations by the ISO. Our perception of Titan had thus greatly been enhanced accordingly, but many questions remained as to the nature of the haze surrounding the satellite and the composition of the surface. The recent revelations by the Cassini-Huygens mission have managed to surprise us with many discoveries [3-8] and have yet to reveal more of the interesting aspects of the satellite. The Cassini-Huygens mission to the Saturnian system has been an extraordinary success for the planetary community since the Saturn-Orbit-Insertion (SOI) in July 2004 and again the very successful probe descent and landing of Huygens on January 14, 2005. One of its main targets was Titan. Titan was revealed to be a complex world more like the Earth than any other: it has a dense mostly nitrogen atmosphere and active climate and meteorological cycles where the working fluid, methane, behaves under Titan conditions the way that water does on

  18. New analytic results for speciation times in neutral models.

    Science.gov (United States)

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic approach, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic trees. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Furthermore, simulations are very time consuming for large n under both models.

  19. Box photosynthesis modeling results for WRF/CMAQ LSM

    Data.gov (United States)

    U.S. Environmental Protection Agency — Box Photosynthesis model simulations for latent heat and ozone at 6 different FLUXNET sites. This dataset is associated with the following publication: Ran, L., J....

  20. Model-based phase-shifting interferometer

    Science.gov (United States)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique replaces the traditional complicated system structure to achieve versatile, high-precision and quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI has large potential in modern optical shop testing.

  1. Physiologically Based Pharmacokinetic (PBPK) Modeling of ...

    Science.gov (United States)

    Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, inter-individual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. Objectives: To evaluate the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and one hybrid mouse strains to calibrate and extend existing physiologically-based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). A Bayesian population analysis of inter-strain variability was used to quantify variability in TCE metabolism. Results: Concentration-time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than those through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than DCA production (5-fold range), although uncertainty bounds for DCA exceeded the predicted variability. Conclusions:

  2. New Results in Optical Modelling of Quantum Well Solar Cells

    Directory of Open Access Journals (Sweden)

    Silvian Fara

    2012-01-01

    This project brought further advancements to the quantum well solar cell concept proposed by Keith Barnham. In this paper, the optical modelling of MQW solar cells was analyzed and we focussed on the following topics: (i) simulation of the refraction index and the reflectance, (ii) simulation of the absorption coefficient, (iii) simulation of the quantum efficiency for the absorption process, (iv) discussion and modelling of the quantum confinement effect, and (v) evaluation of datasheet parameters of the MQW cell.

  3. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

    The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)θy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take θ > 1 so ...

  4. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling gives a syntactic structure to source and target models; however, semantic requirements also have to be imposed on them. A given transformation is sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  5. Point Cloud Based Visibility Analysis : first experimental results

    NARCIS (Netherlands)

    Zhang, G.; van Oosterom, P.J.M.; Verbree, E.; Bregt, Arnold; Sarjakoski, Tapani; Lammeren, Ron van; Rip, Frans

    2017-01-01

    Visibility computed from a LiDAR point cloud offers several advantages compared to using a gridded digital height-model. With a higher resolution and detailed information, point cloud data can provide precise analysis as well as an opportunity to avoid the process of generating a surface

  6. Model-Based Fault Tolerant Control

    Science.gov (United States)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted take offs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms were developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
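
    The residual-testing idea at the core of such schemes can be sketched in a few lines; the residual stream, window length and threshold below are synthetic placeholders, not the engine algorithms developed in the task.

        # Windowed chi-square test on (synthetic) Kalman-filter innovations.
        import numpy as np

        rng = np.random.default_rng(0)
        residuals = rng.standard_normal(200)   # healthy innovations, unit variance
        residuals[120:] += 2.5                 # injected sensor-bias fault

        window, threshold = 20, 45.3           # ~99.9th pct of chi-square(20)
        for k in range(window, residuals.size):
            stat = np.sum(residuals[k - window:k] ** 2)
            if stat > threshold:
                print(f"Fault flagged at sample {k} (statistic {stat:.1f})")
                break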

  7. The animal model determines the results of Aeromonas virulence factors

    Directory of Open Access Journals (Sweden)

    Alejandro Romero

    2016-10-01

    The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, a bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of A. hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The implication of T3SS in the virulence of A. hydrophila was analysed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics only observed using invertebrate models. Infections using the mutant AH-1∆vapA, lacking the gene coding for the surface S-layer, suggested that this protein was not totally necessary to the bacteria once it was inside the host, but it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild type. Variations between models were evidenced using the AH-1∆rmlB, which lacks the O-antigen lipopolysaccharide (LPS), and the AH-1∆wahD, which lacks the O-antigen LPS and part of the LPS outer core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using the AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and the AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study

  8. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  9. Python-Based Applications for Hydrogeological Modeling

    Science.gov (United States)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The
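
    The convolution step described above is a one-liner in numpy; the unit response function and release history below are synthetic placeholders (in the study the URFs came from ATRANS).

        # Receptor concentration = source release history convolved with the URF.
        import numpy as np

        urf = 0.05 * np.exp(-0.3 * np.arange(30))   # response to a unit release
        release = np.zeros(30)
        release[:10] = 100.0                        # 10-year constant release

        concentration = np.convolve(release, urf)[:30]
        print(f"Peak receptor concentration: {concentration.max():.2f}")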

  10. Interpretation of EQA results and EQA-based trouble shooting

    OpenAIRE

    Kristensen, Gunn Berit Berge; Meijer, Piet

    2017-01-01

    Important objectives of External Quality Assessment (EQA) are to detect analytical errors and take corrective actions. The aim of this paper is to describe the knowledge required to interpret EQA results and present a structured approach on how to handle deviating EQA results. The value of EQA and how the EQA result should be interpreted depends on five key points: the control material, the target value, the number of replicates, the acceptance limits and between-lot variations in reagents used in...

  11. A New Explanation and Proof of the Paradoxical Scoring Results in Multidimensional Item Response Models.

    Science.gov (United States)

    Jordan, Pascal; Spiess, Martin

    2017-10-13

    In multidimensional item response models, paradoxical scoring effects can arise, wherein correct answers are penalized and incorrect answers are rewarded. For the most prominent class of IRT models, the class of linearly compensatory models, a general derivation of paradoxical scoring effects based on the geometry of item discrimination vectors is given, which furthermore corrects an error in an established theorem on paradoxical results. This approach highlights the very counterintuitive way in which item discrimination parameters (and also factor loadings) have to be interpreted in terms of their influence on the latent ability estimate. It is proven that, despite the error in the original proof, the key result concerning the existence of paradoxical effects remains true, although the actual relation to the item parameters is shown to be a more complicated function than previous results suggested. The new proof enables further insights into the actual mathematical causation of the paradox and generalizes the findings within the class of linearly compensatory models.
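
    For reference, a minimal statement of the linearly compensatory model in question, assuming a logistic link (generic notation, not the paper's):

        P(X_i = 1 \mid \theta) = \frac{1}{1 + \exp[-(\mathbf{a}_i^{\top}\theta + b_i)]}, \qquad \theta \in \mathbb{R}^k, \ \mathbf{a}_i \geq \mathbf{0}

    A paradoxical result occurs when changing one response from incorrect to correct lowers a component of the maximum-likelihood estimate of \theta, even though every discrimination vector \mathbf{a}_i is non-negative; the geometry of those vectors determines when this can happen.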

  12. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments performed at the Rostov NPP as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95&WWER program package. Parameters of delayed neutrons were acquired from ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed neutron fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of numerically modeled experiments, delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)

  13. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A Version Management System (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities such as version control on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for model-based VMS which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)

  14. Modeling the radiation transfer of discontinuous canopies: results for gap probability and single-scattering contribution

    Science.gov (United States)

    Zhao, Feng; Zou, Kai; Shang, Hong; Ji, Zheng; Zhao, Huijie; Huang, Wenjiang; Li, Cunjun

    2010-10-01

    In this paper we present an analytical model for the computation of radiation transfer of discontinuous vegetation canopies. Some initial results for gap probability and bidirectional gap probability of discontinuous vegetation canopies, which are important parameters determining the radiative environment of the canopies, are given and compared with a 3-D computer simulation model. In the model, negative exponential attenuation of light within individual plant canopies is assumed. The computation of gap probability is then resolved by determining the entry and exit points of the ray through the individual plants via their equations in space. For the bidirectional gap probability, which determines the single-scattering contribution of the canopy, a model based on gap statistical analysis was adopted to correct for the dependence of gap probabilities in the solar and viewing directions. The model incorporates structural characteristics such as plant size, leaf size, row spacing, foliage density, planting density, and leaf inclination distribution. Available experimental data are inadequate for a complete validation of the model, so it was evaluated with a three-dimensional computer simulation model for 3D vegetative scenes, which shows good agreement between the two models' results. This model should be useful for the quantification of light interception and the modeling of bidirectional reflectance distributions of discontinuous canopies.
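
    The "negative exponential attenuation" assumption corresponds to a Beer-Lambert type expression; in generic notation (symbols illustrative),

        P_{gap}(\Omega) = \exp[-G(\Omega)\, u\, s(\Omega)]

    where s(\Omega) is the path length through an individual crown along direction \Omega (obtained from the ray's entry and exit points), u is the foliage area volume density, and G(\Omega) is the mean projection of unit leaf area. The bidirectional gap probability for the solar and viewing directions is then approximately the product of the two single-direction probabilities times a correlation (hotspot) correction supplied by the gap statistical analysis.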

  15. Mars 2020 Model Based Systems Engineering Pilot

    Science.gov (United States)

    Dukes, Alexandra Marie

    2017-01-01

    The pilot study is led by the Integration Engineering group in NASA's Launch Services Program (LSP). The Integration Engineering (IE) group is responsible for managing the interfaces between the spacecraft and launch vehicle. This pilot investigates the utility of Model-Based Systems Engineering (MBSE) with respect to managing and verifying interface requirements. The main objectives of the pilot are to model several key aspects of the Mars 2020 integrated operations and interface requirements based on the design and verification artifacts from the Mars Science Laboratory (MSL) and to demonstrate how MBSE could be used by LSP to gain further insight on the interface between the spacecraft and launch vehicle as well as to enhance how LSP manages the launch service. The method used to accomplish this pilot started with familiarization with SysML, MagicDraw, and the Mars 2020 and MSL systems through books, tutorials, and NASA documentation. MSL was chosen as the focus of the model since its processes and verifications translate easily to the Mars 2020 mission. The study was further focused by modeling specialized systems and processes within MSL in order to demonstrate the utility of MBSE for the rest of the mission. The systems chosen were the In-Flight Disconnect (IFD) system and the Mass Properties process. The IFD was chosen as a system of focus since it is an interface between the spacecraft and launch vehicle which can demonstrate the usefulness of MBSE from a system perspective. The Mass Properties process was chosen as a process of focus since the verifications for mass properties occur throughout the lifecycle and can demonstrate the usefulness of MBSE from a multi-discipline perspective. Several iterations of both perspectives have been modeled and evaluated. While the pilot study will continue for another 2 weeks, pros and cons of using MBSE for LSP IE have been identified. Pros of using MBSE include an integrated view of the disciplines, requirements, and

  16. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
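
    A schematic of the division of labour described above, with hypothetical stand-ins for the proxy and original models (this is a sketch, not PEST code):

        import numpy as np

        def proxy_outputs(params):
            """Fast analytical surrogate linking all outputs to all parameters."""
            return np.array([params[0] + 0.5 * params[1] ** 2, np.sin(params[0])])

        def model_outputs(params):
            """Slow 'original' model; stubbed here for illustration."""
            return proxy_outputs(params) + 0.01

        def proxy_jacobian(params, eps=1e-6):
            """Populate the Jacobian by finite differences on the cheap proxy."""
            base = proxy_outputs(params)
            J = np.empty((base.size, params.size))
            for j in range(params.size):
                p = params.copy()
                p[j] += eps
                J[:, j] = (proxy_outputs(p) - base) / eps
            return J

        # One Gauss-Newton upgrade: sensitivities come from the proxy, while the
        # candidate parameter set is then re-tested by running the original model.
        observed = np.array([1.2, 0.8])
        params = np.array([1.0, 1.0])
        residuals = observed - model_outputs(params)
        step = np.linalg.lstsq(proxy_jacobian(params), residuals, rcond=None)[0]
        params = params + step   # accept or reject after re-running model_outputs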

  17. Physiologically based quantitative modeling of unihemispheric sleep.

    Science.gov (United States)

    Kedziora, D J; Abeysuriya, R G; Phillips, A J K; Robinson, P A

    2012-12-07

    Unihemispheric sleep has been observed in numerous species, including birds and aquatic mammals. While knowledge of its functional role has been improved in recent years, the physiological mechanisms that generate this behavior remain poorly understood. Here, unihemispheric sleep is simulated using a physiologically based quantitative model of the mammalian ascending arousal system. The model includes mutual inhibition between wake-promoting monoaminergic nuclei (MA) and sleep-promoting ventrolateral preoptic nuclei (VLPO), driven by circadian and homeostatic drives as well as cholinergic and orexinergic input to MA. The model is extended here to incorporate two distinct hemispheres and their interconnections. It is postulated that inhibitory connections between VLPO nuclei in opposite hemispheres are responsible for unihemispheric sleep, and it is shown that contralateral inhibitory connections promote unihemispheric sleep while ipsilateral inhibitory connections promote bihemispheric sleep. The frequency of alternating unihemispheric sleep bouts is chiefly determined by sleep homeostasis and its corresponding time constant. It is shown that the model reproduces dolphin sleep, and that the sleep regimes of humans, cetaceans, and fur seals, the latter both terrestrially and in a marine environment, require only modest changes in contralateral connection strength and homeostatic time constant. It is further demonstrated that fur seals can potentially switch between their terrestrial bihemispheric and aquatic unihemispheric sleep patterns by varying just the contralateral connection strength. These results provide experimentally testable predictions regarding the differences between species that sleep bihemispherically and unihemispherically. Copyright © 2012 Elsevier Ltd. All rights reserved.
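
    In generic flip-flop form, the postulated contralateral inhibition can be sketched as follows (notation illustrative, not the paper's):

        \tau \dot{V}_v^{L} = -V_v^{L} + \nu_{vm} Q_m^{L} + \nu_{cc} Q_v^{R} + D, \qquad Q(V) = \frac{Q_{max}}{1 + e^{-(V - \theta)/\sigma}}

    where V_v^{L} is the mean potential of the left-hemisphere VLPO, Q(V) a sigmoidal firing-rate function, D the combined circadian and homeostatic drive, \nu_{vm} < 0 the inhibition from the monoaminergic group, and \nu_{cc} < 0 the contralateral VLPO-VLPO inhibition; the right-hemisphere equation is symmetric, and sufficiently strong \nu_{cc} lets the two hemispheres settle into alternating sleep-like and wake-like states.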

  18. PV panel model based on datasheet values

    DEFF Research Database (Denmark)

    Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro

    2007-01-01

    This work presents the construction of a model for a PV panel using the single-diode five-parameters model, based exclusively on data-sheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell...
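
    For reference, the standard form of the single-diode five-parameter model (the five parameters being I_{ph}, I_0, n, R_s and R_{sh}):

        I = I_{ph} - I_0 \left[\exp\left(\frac{V + I R_s}{n N_s V_t}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}, \qquad V_t = kT/q

    where N_s is the number of series-connected cells and V_t the thermal voltage; the datasheet's short-circuit, open-circuit and maximum-power points provide the conditions from which the five parameters are extracted.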

  19. Model-Based Design for Embedded Systems

    CERN Document Server

    Nicolescu, Gabriela

    2009-01-01

    Model-based design allows teams to start the design process from a high-level model that is gradually refined through abstraction levels to ultimately yield a prototype. This book describes the main facets of heterogeneous system design. It focuses on multi-core methodological issues, real-time analysis, and modeling and validation

  20. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  1. Some results for the dynamic (s, S) inventory model *

    NARCIS (Netherlands)

    H.C. Tijms

    1971-01-01

    Summary: The periodic review, single item, stationary (s, S) inventory model is considered. There is a fixed lead time, a linear purchase cost, a fixed set-up cost, a holding and shortage cost function, a discount factor 0 < α ≤ 1 and backlogging of unfilled demand. The solution for the

  2. Recent numerical results on the two dimensional Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E. (SISSA, Trieste (Italy))

    1989-12-01

    A new method for simulating strongly correlated fermionic systems has been applied to the study of the ground state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations on 4 x 4 lattices, where very good agreement has been verified in all the correlation functions which have been studied: charge, magnetization and momentum distribution. (orig.).

  3. Analytical results for the Sznajd model of opinion formation

    Czech Academy of Sciences Publication Activity Database

    Slanina, František; Lavička, H.

    2003-01-01

    Roč. 35, - (2003), s. 279-288 ISSN 1434-6028 R&D Projects: GA ČR GA202/01/1091 Institutional research plan: CEZ:AV0Z1010914 Keywords : agent models * sociophysics Subject RIV: BE - Theoretical Physics Impact factor: 1.457, year: 2003

  4. A MYSQL-BASED DATA ARCHIVER: PRELIMINARY RESULTS

    International Nuclear Information System (INIS)

    Matthew Bickley; Christopher Slominski

    2008-01-01

    Following an evaluation of the archival requirements of the Jefferson Laboratory accelerator's user community, a prototyping effort was executed to determine if an archiver based on MySQL had sufficient functionality to meet those requirements. This approach was chosen because an archiver based on a relational database enables the development effort to focus on data acquisition and management, letting the database take care of storage, indexing and data consistency. It was clear from the prototype effort that there were no performance impediments to successful implementation of a final system. With our performance concerns addressed, the lab undertook the design and development of an operational system. The system is in its operational testing phase now. This paper discusses the archiver system requirements, some of the design choices and their rationale, and presents the acquisition, storage and retrieval performance

  5. Performance Results of CMMI-Based Process Improvement

    National Research Council Canada - National Science Library

    Gibson, Diane L; Goldenson, Dennis R; Kost, Keith

    2006-01-01

    .... There now is evidence that process improvement using the CMMI Product Suite can result in improvements in schedule and cost performance, product quality, return on investment and other measures of performance outcome...

  6. Modelling of the earth atmosphere contamination as result of cesium 137 deflation from contaminated territories

    International Nuclear Information System (INIS)

    Zhmura, G.M.; Zhmura, N.V.

    1998-01-01

    The results of calculations of the average annual ground-level atmospheric concentrations of cesium-137 over the territory of Belarus at the nodes of a (50*50) km grid are given. The calculations were made on the basis of model notions about dust-generating area sources. Analysis of the results shows that the average annual ground-level atmospheric concentrations of cesium-137 over the territory of Belarus vary by more than two orders of magnitude, from 1 to 400 µBq/m³, depending on the calculation point

  7. Core damage frequency (reactor design) perspectives based on IPE results

    International Nuclear Information System (INIS)

    Camp, A.L.; Dingman, S.E.; Forester, J.A.

    1996-01-01

    This paper provides perspectives gained from reviewing 75 Individual Plant Examination (IPE) submittals covering 108 nuclear power plant units. Variability both within and among reactor types is examined to provide perspectives regarding plant-specific design and operational features and modeling assumptions that play a significant role in the estimates of core damage frequencies in the IPEs. Human actions found to be important in boiling water reactors (BWRs) and in pressurized water reactors (PWRs) are presented and the events most frequently found important are discussed

  8. Use of the LQ model with large fraction sizes results in underestimation of isoeffect doses

    International Nuclear Information System (INIS)

    Sheu, Tommy; Molkentine, Jessica; Transtrum, Mark K.; Buchholz, Thomas A.; Withers, Hubert Rodney; Thames, Howard D.; Mason, Kathy A.

    2013-01-01

    Purpose: To test the appropriateness of the linear-quadratic (LQ) model to describe survival of jejunal crypt clonogens after split doses with variable (small 1–6 Gy, large 8–13 Gy) first dose, as a model of its appropriateness for both small and large fraction sizes. Methods: C3Hf/KamLaw mice were exposed to whole body irradiation using 300 kVp X-rays at a dose rate of 1.84 Gy/min, and the number of viable jejunal crypts was determined using the microcolony assay. A 14 Gy total dose was split into unequal first and second fractions separated by 4 h. Data were analyzed using the LQ model, the lethal potentially lethal (LPL) model, and a repair-saturation (RS) model. Results: Cell kill was greater in the group receiving the larger fraction first, creating an asymmetry in the plot of survival vs size of first dose, as opposed to the symmetric response predicted by the LQ model. There was a significant difference in the estimated βs (higher β after larger first doses), but no significant difference in the αs, when large doses were given first vs small doses first. This difference results in underestimation (based on the present data, by approximately 8%) of isoeffect doses using LQ model parameters based on small fraction sizes. While the LPL model also predicted a symmetric response inconsistent with the data, the RS model results were consistent with the observed asymmetry. Conclusion: The LQ model underestimates doses for isoeffective crypt-cell survival with large fraction sizes (in the present setting, >9 Gy)
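
    The symmetry prediction under test follows directly from the LQ survival expression for a split dose with complete repair in the 4 h interval (a standard derivation, restated here):

        \ln S = -\alpha(d_1 + d_2) - \beta(d_1^2 + d_2^2) = -\alpha D - \beta\left[d_1^2 + (D - d_1)^2\right], \qquad D = d_1 + d_2 = 14\ \mathrm{Gy}

    This is symmetric about d_1 = D/2, so the LQ model predicts identical survival whether the large or the small fraction is delivered first; the observed asymmetry is therefore evidence against the model at large fraction sizes.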

  9. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany); Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2000-07-01

    A dynamical downscaling for the North Sea is presented. The numerical model used for the study is the coupled ice-ocean model OPYC. In a hindcast of the years 1979 to 1993 it was forced with atmospheric forcing from the ECMWF reanalysis. The model's capability of simulating the observed mean state and variability in the North Sea is demonstrated by the hindcast. Two time scale ranges, from weekly to seasonal and longer than seasonal, are investigated. Shorter time scales, relevant for storm surges, are not captured by the model formulation. The main modes of variability of sea level, sea-surface circulation, sea-surface temperature, and sea-surface salinity are described and connections to atmospheric phenomena, like the NAO, are discussed. T106 "time-slice" simulations with a "2 x CO2" horizon are used to estimate the effects of a changing climate on the North Sea shelf sea. The "2 x CO2" changes in the surface forcing are accompanied by changes in the lateral oceanic boundary conditions taken from a global coupled climate model. For "2 x CO2" the time-mean sea level increases by up to 25 cm in the German Bight in winter, of which 15 cm are due to the surface forcing and 10 cm due to thermal expansion. This change is compared to the "natural" variability as simulated in the ECMWF integration and found not to be outside the range spanned by it. The variability of sea level on the weekly-to-seasonal time scales is significantly reduced in the scenario integration. The variability on the longer-than-seasonal time scales in the control and scenario runs is much smaller than in the ECMWF integration. This is traced back to the use of "time-slice" experiments. Discriminating between locally forced changes and changes induced at the lateral oceanic boundaries of the model in the circulation and

  10. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is by a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that the blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%; similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a
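
    Under the simple lipid-partition hypothesis stated above, the tissue/blood partition coefficient takes roughly the following form (a sketch; symbols illustrative):

        K_{t/b} \approx \frac{f_{lip}^{t} P_{o/w} + (1 - f_{lip}^{t})}{f_{lip}^{b} P_{o/w} + (1 - f_{lip}^{b})}

    where f_{lip} denotes the lipid fraction of tissue or blood and P_{o/w} the propofol oil/water partition coefficient, so tissue binding introduces no adjustable parameters beyond the liver clearance fitted per subject.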

  11. Characteristic-Based, Task-Based, and Results-Based: Three Value Systems for Assessing Professionally Produced Technical Communication Products.

    Science.gov (United States)

    Carliner, Saul

    2003-01-01

    Notes that technical communicators have developed different methodologies for evaluating the effectiveness of their work, such as editing, usability testing, and determining the value added. Explains that at least three broad value systems underlie the assessment practices: characteristic-based, task-based, and results-based. Concludes that the…

  12. Analysis of inelastic neutron scattering results on model compounds ...

    Indian Academy of Sciences (India)

    Keywords. Vibrational spectroscopy; nitrogenous bases; inelastic neutron scattering. PACS No. 63.20. ... Bz[x(y)] implies that this indole mode has x% of the benzene mode number y (after [10]); similarly ... the momentum transfer vector, Q, is essentially parallel to the incident beam for all energy transfers, at least ...

  13. 1-g model loading tests: methods and results

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav

    1999-01-01

    Roč. 2, č. 4 (1999), s. 371-381 ISSN 1436-6517. [Int.Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundation * model tests * sandy subsoil * bearing capacity * subsoil failure * volume deformation Subject RIV: JM - Building Engineering

  14. Sharing brain mapping statistical results with the neuroimaging data model

    Science.gov (United States)

    Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J.; Helmer, Karl G.; Jenkinson, Mark; Keator, David B.; Nichols, B. Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E.

    2016-01-01

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Result graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html. PMID:27922621

  15. DISCRETE DEFORMATION WAVE DYNAMICS IN SHEAR ZONES: PHYSICAL MODELLING RESULTS

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2016-01-01

    Full Text Available Observations of earthquake migration along active fault zones [Richter, 1958; Mogi, 1968] and related theoretical concepts [Elsasser, 1969] have laid the foundation for studying the problem of slow deformation waves in the lithosphere. Despite the fact that this problem has been under study for several decades and discussed in numerous publications, convincing evidence for the existence of deformation waves is still lacking. One of the causes is that comprehensive field studies to register such waves with special tools and equipment, which require sufficient organizational and technical resources, have not been conducted yet. The authors attempted to find a solution to this problem by physical simulation of a major shear zone in an elastic-viscous-plastic model of the lithosphere. The experiment setup is shown in Figure 1 (A). The model material and boundary conditions were specified in accordance with the similarity criteria (described in detail in [Sherman, 1984; Sherman et al., 1991; Bornyakov et al., 2014]). The montmorillonite clay-and-water paste was placed evenly on the two stamps of the installation and subjected to deformation as the active stamp (1) moved relative to the passive stamp (2) at a constant speed. The upper model surface was covered with fine sand in order to get high-contrast photos. Photos of the emerging shear zone were taken every second by a Basler acA2000-50gm digital camera. Figure 1 (B) shows an optical image of a fragment of the shear zone. The photos were processed by the digital image correlation method described in [Sutton et al., 2009]. This method estimates the distribution of components of displacement vectors and strain tensors on the model surface and their evolution over time [Panteleev et al., 2014, 2015]. Strain fields and displacements recorded in the optical images of the model surface were estimated in a rectangular box (220.00×72.17 mm) shown by a dot-and-dash line in Fig. 1 (A). To ensure a sufficient level of

  16. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been proved to avoid misunderstanding of the message, to save attention and to improve communication among different native speakers. However, besides the benefits that emoticons provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  17. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling’s segregation model and Axelrod’s spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in late 1990s, these models have been extended from using lattices into using small-world networks, scale-free networks, etc. The book also shows that the modern network science mainly driven by game-theorists and sociophysicists has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  18. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, and different interconnection and execution disciplines for event-based and time-based controllers, to meet the demands for more functionality at even lower prices and under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model, via wrapper files, back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  19. MCNP Modeling Results for Location of Buried TRU Waste Drums

    International Nuclear Information System (INIS)

    Steinman, D K; Schweitzer, J S

    2006-01-01

    In the 1960's, fifty-five gallon drums of TRU waste were buried in shallow pits on remote U.S. Government facilities such as the Idaho National Engineering Laboratory (now split into the Idaho National Laboratory and the Idaho Completion Project [ICP]). Subsequently, it was decided to remove the drums and the material that was in them from the burial pits and send the material to the Waste Isolation Pilot Plant in New Mexico. Several technologies have been tried to locate the drums non-intrusively with enough precision to minimize the chance for material to be spread into the environment. One of these technologies is the placement of steel probe holes in the pits into which wireline logging probes can be lowered to measure properties and concentrations of material surrounding the probe holes for evidence of TRU material. There is also a concern that large quantities of volatile organic compounds (VOC) are also present that would contaminate the environment during removal. In 2001, the Idaho National Engineering and Environmental Laboratory (INEEL) built two pulsed neutron wireline logging tools to measure TRU and VOC around the probe holes. The tools are the Prompt Fission Neutron (PFN) and the Pulsed Neutron Gamma (PNG), respectively. They were tested experimentally in surrogate test holes in 2003. The work reported here estimates the performance of the tools using Monte-Carlo modelling prior to field deployment. A MCNP model was constructed by INEEL personnel. It was modified by the authors to assess the ability of the tools to predict quantitatively the position and concentration of TRU and VOC materials disposed around the probe holes. The model was used to simulate the tools scanning the probe holes vertically in five centimetre increments. A drum was included in the model that could be placed near the probe hole and at other locations out to forty-five centimetres from the probe-hole in five centimetre increments. Scans were performed with no chlorine in the

  20. Solar activity variations of ionosonde measurements and modeling results

    Czech Academy of Sciences Publication Activity Database

    Altadill, D.; Arrazola, D.; Blanch, E.; Burešová, Dalia

    2008-01-01

    Roč. 42, č. 4 (2008), s. 610-616 ISSN 0273-1177 R&D Projects: GA AV ČR 1QS300120506 Grant - others:MCYT(ES) REN2003-08376-C02-02; CSIC(XE) 2004CZ0002; AGAUR(XE) 2006BE00112; AF Research Laboratory(XE) FA8718-L-0072 Institutional research plan: CEZ:AV0Z30420517 Keywords : mid-latitude ionosphere * bottomside modeling * ionospheric variability Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.860, year: 2008 http://www.sciencedirect.com/science/journal/02731177

  1. Human action perspectives based on individual plant examination results

    International Nuclear Information System (INIS)

    Forester, J.; Thompson, C.; Drouin, M.; Lois, E.

    1996-01-01

    This paper provides perspectives on human actions gained from reviewing 76 individual plant examination (IPE) submittals. Human actions found to be important in boiling water reactors (BWRs) and in pressurized water reactors (PWRs) are presented and the events most frequently found important are discussed. Since there are numerous factors that can influence the quantification of human error probabilities (HEPs) and introduce significant variability in the resulting HEPs (which in turn can influence which events are found to be important), the variability in HEPs for similar events across IPEs is examined to assess the extent to which variability in results is due to real versus artifactual differences. Finally, similarities and differences in human action observations across BWRs and PWRs are examined

  2. [Pulsed electromagnetic fields (PEMF)--results in evidence based medicine].

    Science.gov (United States)

    Pieber, Karin; Schuhfried, Othmar; Fialka-Moser, Veronika

    2007-01-01

    Therapy with electromagnetic fields has a very old tradition in medicine. The indications are widespread, whereas little is known about the effects. Controlled randomized studies with positive results for pulsed electromagnetic fields (PEMF) are available for osteotomies, the healing of skin wounds, and osteoarthritis. Comparison of the studies is difficult because of the different doses applied and intervals of therapy. Therefore recommendations regarding an optimal dose and interval are, depending on the disease, quite variable.

  3. Severe accident progression perspectives based on IPE results

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Drouin, M.

    1996-01-01

    Accident progression perspectives were gathered from the level 2 PRA analyses (the analysis of the accident after core damage has occurred involving the containment performance and the radionuclide release from the containment) described in the IPE submittals. Insights related to the containment failure modes, the releases associated with those failure modes, and the factors responsible for the types of containment failures and release sizes reported were obtained. Complete results are discussed in NUREG-1560 and summarized here

  4. Business Process Modelling based on Petri nets

    Directory of Open Access Journals (Sweden)

    Qin Jianglong

    2017-01-01

    Full Text Available Business process modelling is the way business processes are expressed. Business process modelling is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises achieve internal information system integration and reuse, but also help enterprises achieve external collaboration. Based on the prototype Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net as a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company shows that the modelling algorithm is correct and effective.
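
    A minimal sketch of the token-firing semantics underlying such a model, with illustrative time and cost annotations of the kind the extension adds (the place and transition names are hypothetical, and this is not the paper's algorithm):

        # Minimal Petri net: places hold tokens; a transition is enabled when all
        # of its input places hold a token, and firing moves tokens downstream.
        marking = {"order_received": 1, "stock_checked": 0, "shipped": 0}

        transitions = {
            "check_stock": {"in": ["order_received"], "out": ["stock_checked"],
                            "time": 2.0, "cost": 5.0},   # illustrative annotations
            "ship":        {"in": ["stock_checked"], "out": ["shipped"],
                            "time": 8.0, "cost": 20.0},
        }

        def enabled(name):
            return all(marking[p] > 0 for p in transitions[name]["in"])

        def fire(name):
            assert enabled(name), f"transition {name} is not enabled"
            for p in transitions[name]["in"]:
                marking[p] -= 1
            for p in transitions[name]["out"]:
                marking[p] += 1
            t = transitions[name]
            return t["time"], t["cost"]

        # Fire a simple process path and accumulate its time and cost.
        total_time = total_cost = 0.0
        for name in ["check_stock", "ship"]:
            dt, dc = fire(name)
            total_time += dt
            total_cost += dc
        print(marking, total_time, total_cost)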

  5. Scaling Relationships Based on Scaled Tank Mixing and Transfer Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Holmes, Aimee E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lee, Kearn P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kelly, Steven E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-01-01

    This report documents the statistical analyses performed (by Pacific Northwest National Laboratory for Washington River Protection Solutions) on data from 26 tests conducted using two scaled tanks (43 and 120 inches) in the Small Scale Mixing Demonstration platform. The 26 tests varied several test parameters, including mixer-jet nozzle velocity, base simulant, supernatant viscosity, and capture velocity. For each test, samples were taken pre-transfer and during five batch transfers. The samples were analyzed for the concentrations (lbs/gal slurry) of four primary components in the base simulants (gibbsite, stainless steel, sand, and ZrO2). The statistical analyses included modeling the component concentrations as functions of test parameters using stepwise regression with two different model forms. The resulting models were used in an equivalent performance approach to calculate values of scaling exponents (for a simple geometric scaling relationship) as functions of the parameters in the component concentration models. The resulting models and scaling exponents are displayed in tables and graphically. The sensitivities of component concentrations and scaling exponents to the test parameters are presented graphically. These results will serve as inputs to subsequent work by other researchers to develop scaling relationships that are applicable to full-scale tanks.
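
    The "simple geometric scaling relationship" referred to above is commonly a power law in the linear scale ratio; in generic notation (a sketch, not the report's exact form),

        y_{full} = y_{scaled} \left(\frac{D_{full}}{D_{scaled}}\right)^{a}

    where D is the tank diameter and a the scaling exponent; the equivalent-performance approach solves this relation for a at matched responses y (here, component concentrations), which is why the report expresses a as a function of the test parameters entering the concentration models.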

  6. Scaling Relationships Based on Scaled Tank Mixing and Transfer Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Holmes, Aimee E.; Heredia-Langner, Alejandro

    2013-09-18

    This report documents the statistical analyses performed (by Pacific Northwest National Laboratory for Washington River Protection Solutions) on data from 26 tests conducted using two scaled tanks (43 and 120 inches) in the Small Scale Mixing Demonstration platform. The 26 tests varied several test parameters, including mixer-jet nozzle velocity, base simulant, supernatant viscosity, and capture velocity. For each test, samples were taken pre-transfer and during five batch transfers. The samples were analyzed for the concentrations (lbs/gal slurry) of four primary components in the base simulants (gibbsite, stainless steel, sand, and ZrO2). The statistical analyses included modeling the component concentrations as functions of test parameters using stepwise regression with two different model forms. The resulting models were used in an equivalent performance approach to calculate values of scaling exponents (for a simple geometric scaling relationship) as functions of the parameters in the component concentration models. The resulting models and scaling exponents are displayed in tables and graphically. The sensitivities of component concentrations and scaling exponents to the test parameters are presented graphically. These results will serve as inputs to subsequent work by other researchers to develop scaling relationships that are applicable to full-scale tanks.

  7. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it handles simple emotions efficiently.

  8. Instance-Based Generative Biological Shape Modeling.

    Science.gov (United States)

    Peng, Tao; Wang, Wei; Rohde, Gustavo K; Murphy, Robert F

    2009-01-01

    Biological shape modeling is an essential task that is required for systems biology efforts to simulate complex cell behaviors. Statistical learning methods have been used to build generative shape models based on reconstructive shape parameters extracted from microscope image collections. However, such parametric modeling approaches are usually limited to simple shapes and easily-modeled parameter distributions. Moreover, to maximize the reconstruction accuracy, significant effort is required to design models for specific datasets or patterns. We have therefore developed an instance-based approach to model biological shapes within a shape space built upon diffeomorphic measurement. We also designed a recursive interpolation algorithm to probabilistically synthesize new shape instances using the shape space model and the original instances. The method is quite generalizable and therefore can be applied to most nuclear, cell and protein object shapes, in both 2D and 3D.

  9. Use of Physiologically Based Pharmacokinetic (PBPK) Models ...

    Science.gov (United States)

    EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment and addresses the quantification, extrapolation and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer reviewed scientific literature. Topics covered include: Data Quality Objectives; Experimental Framework; Required Data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.

  10. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set......Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation...

  11. Geometric deviation modeling by kinematic matrix based on Lagrangian coordinate

    Science.gov (United States)

    Liu, Weidong; Hu, Yueming; Liu, Yu; Dai, Wanyi

    2015-09-01

    Typical representation of dimension and geometric accuracy is limited to the self-representation of dimension and geometric deviation based on geometry variation thinking, yet the interactive effects of geometric variation and gesture variation of multiple rigid bodies are not included. In this paper, a kinematic matrix model based on Lagrangian coordinates is introduced, with the purpose of providing a unified model for geometric variation and gesture variation and their interactive and integrated analysis. A kinematic model with joint, local base and movable base is built. The ideal feature of the functional geometry is treated as the base body and the fitting feature as the adjacent movable body; the local base of the kinematic model is fixed onto the ideal geometry, and the movable base is fixed onto the fitting geometry. Furthermore, the geometric deviation is treated as the relative location or rotation variation between the movable base and the local base, and it is expressed by Lagrangian coordinates. Moreover, kinematic matrices based on Lagrangian coordinates for different types of geometric tolerance zones are constructed, and the total freedom of each kinematic model is discussed. Finally, the Lagrangian coordinate library and kinematic matrix library for geometric deviation modeling are illustrated, and an example of block and piston fits is introduced. Dimension and geometric tolerances of the shaft and hole fitting feature are constructed by kinematic matrix and Lagrangian coordinates, and the results indicate that the proposed kinematic matrix is capable and robust in modeling dimension and geometric tolerances.
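
    For small deviations, such a kinematic matrix can be sketched as the homogeneous transform between the local base and the movable base, with six Lagrangian coordinates (\alpha, \beta, \gamma, u, v, w) as deviation parameters (illustrative form, not the paper's exact construction):

        T = \begin{bmatrix} 1 & -\gamma & \beta & u \\ \gamma & 1 & -\alpha & v \\ -\beta & \alpha & 1 & w \\ 0 & 0 & 0 & 1 \end{bmatrix}

    A given tolerance zone then determines which of the six coordinates are free and bounds their admissible ranges, which is what the tolerance-zone-specific matrices encode.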

  12. Student Entrepreneurship in Hungary: Selected Results Based on GUESSS Survey

    Directory of Open Access Journals (Sweden)

    Andrea S. Gubik

    2016-12-01

    Full Text Available Objective: This study investigates students’ entrepreneurial activities and aims to answer questions regarding to what extent students utilize the knowledge gained during their studies and the personal connections acquired at universities, as well as what role a family business background plays in the development of students’ business start-ups. Research Design & Methods: This paper, based on the database of the GUESSS project, investigates 658 student entrepreneurs (so-called ‘active entrepreneurs’) who have already established businesses of their own. Findings: The rate of self-employment among Hungarian students who study in tertiary education and consider themselves to be entrepreneurs is high. Their motivations and entrepreneurial efforts differ from those of owners of larger companies; they do not necessarily intend to make the entrepreneurial path a career option in the long run. A family business background and family support play a determining role in entrepreneurship and business start-ups, while entrepreneurial training and courses offered at higher education institutions are not reflected in students’ entrepreneurial activities. Implications & Recommendations: Universities should offer not only conventional business courses (for example, business planning), but also new forms of education so that students meet various entrepreneurial tasks and problems, make decisions in different situations, and explore and acquaint themselves with entrepreneurship. Contribution & Value Added: The study provides a literature overview of youth entrepreneurship, describes the main characteristics of students’ enterprises and contributes to understanding the factors of youth entrepreneurship.

  13. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous time, differential equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, based on statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model can reveal more biological insight.
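
    A minimal instance of the thermodynamic occupancy idea for a single activator site, coupled to a differential-equation description of the kind mentioned above (generic, not the paper's full model):

        P_{bound} = \frac{[\mathrm{TF}]/K_d}{1 + [\mathrm{TF}]/K_d}, \qquad \frac{dm}{dt} = \beta P_{bound} - \gamma m

    where the dissociation constant K_d encodes the sequence-dependent binding affinity of the promoter site, \beta is the maximal transcription rate and \gamma the mRNA decay rate; fitting observed expression levels then identifies the transcription factor activities and regulatory parameters.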

  14. Model-Based Requirements Management in Gear Systems Design Based On Graph-Based Design Languages

    Directory of Open Access Journals (Sweden)

    Kevin Holder

    2017-10-01

    specification list and were analyzed in detail. As a second basis, the research method uses a conscious expansion of graph-based design languages towards their applicability for requirements management. This expansion allows the handling of requirements through a graph-based design language model. The first two results of the presented research consist of a model of the gear system and a detailed model of requirements, both modelled in a graph-based design language. Further results are generated by a combination of the two models into one holistic model.

  15. Combining forming results via weld models to powerful numerical assemblies

    NARCIS (Netherlands)

    Kose, K.; Rietman, Bert

    2004-01-01

    Forming simulations generally give satisfying results with respect to thinning, stresses, changed material properties and, with a proper springback calculation, the geometric form. The joining of parts by means of welding yields an extra change of the material properties and the residual stresses.

  16. Assessing flood risk at the global scale: model setup, results, and sensitivity

    International Nuclear Information System (INIS)

    Ward, Philip J; Jongman, Brenden; Weiland, Frederiek Sperna; Winsemius, Hessel C; Bouwman, Arno; Ligtvoet, Willem; Van Beek, Rens; Bierkens, Marc F P

    2013-01-01

    Globally, economic losses from flooding exceeded $19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return-periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million); and annual expected exposed GDP ($1383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined, impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures. (letter)
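
    The final step, computing annual expected impacts, is in generic form an integral of impacts over annual exceedance probabilities, approximated from the simulated return periods (notation illustrative):

        \mathrm{EAD} = \int_0^{p_{prot}} D(p)\, dp \approx \sum_i \frac{D(p_i) + D(p_{i+1})}{2} (p_i - p_{i+1}), \qquad p = 1/T

    where D(p) is the impact of the flood with annual exceedance probability p and the upper limit p_{prot} = 1/T_{prot} truncates the integral at the assumed protection standard, which is one reason the results are so sensitive to that standard.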

  17. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST – 1 show details for our two models (Model A and Model B). Performance of Model A after adding 32 negative datasets of MiRTif on our testing set (MiRecords) ...

  18. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  19. Wind Turbine Control: Robust Model Based Approach

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood

    ... This is because, on the one hand, control methods can decrease the cost of energy by keeping the turbine close to its maximum efficiency. On the other hand, they can reduce structural fatigue and therefore increase the lifetime of the wind turbine. The power produced by a wind turbine is proportional ... to the square of its rotor radius; therefore it seems reasonable to increase the size of the wind turbine in order to capture more power. However, as the size increases, the mass of the blades increases with the cube of the rotor size. This means that, in order to keep the structural feasibility and the mass of the whole structure ... reasonable, the ratio of mass to size should be reduced. This trend results in more flexible structures. Control of the flexible structure of a wind turbine in a wind field of stochastic nature is very challenging. In this thesis we examine a number of robust model based methods for wind turbine ...

  20. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that the data are multivariate normally distributed and that there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distribution. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated, and the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths for non-linear research.
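
    For reference, the traditional stochastic frontier against which the vine approach is compared is commonly written in the textbook form (not necessarily the paper's exact specification):

        \ln y_i = \beta_0 + \beta_K \ln K_i + \beta_L \ln L_i + v_i - u_i,
        \qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0

    where v_i is symmetric noise and u_i a one-sided inefficiency term; the vine-copula construction relaxes the joint-normality assumption behind this composed error.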

  1. New global ICT-based business models

    DEFF Research Database (Denmark)

    ... Universities. The book continues by describing, analyzing and showing how NEWGIBM was implemented in SMEs in different industrial companies/networks. Based on this effort, the researchers describe and analyze the current context, the experience of NEWGIBM and, finally, the emerging scenarios of NEWGIBM ... The NEWGIBM Cases Show? The Strategy Concept in Light of the Increased Importance of Innovative Business Models; Successful Implementation of Global BM Innovation; Globalisation Of ICT Based Business Models: Today And In 2020 ...

  2. Inquiry based learning as didactic model in distant learning

    OpenAIRE

    Rothkrantz, L.J.M.

    2015-01-01

    In recent years many universities have become involved in the development of Massive Open Online Courses (MOOCs). Unfortunately, an appropriate didactic model for cooperative network learning is lacking. In this paper we introduce inquiry-based learning as such a didactic model. Students are assumed to ask themselves questions while interacting with a learning text. The model has been tested on students of DUT taking a MOOC in mathematics. The didactic model and test results are presented.

  3. The Multipole Plasma Trap-PIC Modeling Results

    Science.gov (United States)

    Hicks, Nathaniel; Bowman, Amanda; Godden, Katarina

    2017-10-01

    A radio-frequency (RF) multipole structure is studied via particle-in-cell computer modeling, to assess the response of quasi-neutral plasma to the imposed RF fields. Several regimes, such as pair plasma, antimatter plasma, and conventional (ion-electron) plasma are considered. In the case of equal charge-to-mass ratio of plasma species, the effects of the multipole field are symmetric between positive and negative particles. In the case of a charge-to-mass disparity, the multipole RF parameters (frequency, voltage, structure size) may be chosen such that the light species (e.g. electrons) is strongly confined, while the heavy species (e.g. positive ions) does not respond to the RF field. In this case, the trapped negative space charge creates a potential well that then traps the positive species. 2D and 3D particle-in-cell simulations of this concept are presented, to assess plasma response and trapping dependences on multipole order, consequences of the formation of an RF plasma sheath, and the effects of an axial magnetic field. The scalings of trapped plasma parameters are explored in each of the mentioned regimes, to guide the design of prospective experiments investigating each. Supported by U.S. NSF/DOE Partnership in Basic Plasma Science and Engineering Grant PHY-1619615.
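
    The charge-to-mass asymmetry exploited here follows from the standard ponderomotive pseudopotential of an oscillating field (a textbook expression, quoted for context rather than taken from the abstract):

        \Phi_{\mathrm{pond}}(\mathbf{r}) = \frac{q^2 E_0^2(\mathbf{r})}{4 m \omega^2}

    For fixed RF amplitude E_0 and frequency \omega the effective well depth scales as q^2/m, so the light species is strongly confined while the heavy species barely responds.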

  4. Modeling Framework and Results to Inform Charging Infrastructure Investments

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, Marc W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

    The plug-in electric vehicle (PEV) market is experiencing rapid growth with dozens of battery electric (BEV) and plug-in hybrid electric (PHEV) models already available and billions of dollars being invested by automotive manufacturers in the PEV space. Electric range is increasing thanks to larger and more advanced batteries and significant infrastructure investments are being made to enable higher power fast charging. Costs are falling and PEVs are becoming more competitive with conventional vehicles. Moreover, new technologies such as connectivity and automation hold the promise of enhancing the value proposition of PEVs. This presentation outlines a suite of projects funded by the U.S. Department of Energy's Vehicle Technology Office to conduct assessments of the economic value and charging infrastructure requirements of the evolving PEV market. Individual assessments include national evaluations of PEV economic value (assuming 73M PEVs on the road in 2035), national analysis of charging infrastructure requirements (with community and corridor level resolution), and case studies of PEV ownership in Columbus, OH and Massachusetts.

  5. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
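
    The factorization step described above can be summarized in standard graphical-model notation (the specific structure is what the conditional independence tests learn from data):

        p(x_1, \dots, x_n) = \prod_{i=1}^{n} p\left(x_i \mid \mathrm{pa}(x_i)\right)

    where pa(x_i) denotes the parents of x_i in the learned network; each low-dimensional conditional can then be combined with a polynomial chaos representation of the form x_i \approx \sum_k c_{ik} \Psi_k(\xi) in standard random variables \xi.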

  6. Results from the coupled Michigan MHD model and the Rice Convection Model

    Science.gov (United States)

    de Zeeuw, D.; Sazykin, S.; Wolf, R.; Gombosi, T.; Powell, K.

    A new high performance Rice Convection Model (RCM) has been coupled to the adaptive-grid Michigan MHD model (BATSRUS). This fully coupled code allows us to self-consistently simulate the physics in the inner and middle magnetosphere. A study will be presented of the basic characteristics of the inner and middle magnetosphere in the context of a single coupled-code run with steady inputs. The analysis will include region-2 currents, shielding of the inner magnetosphere, partial ring currents, pressure distribution, magnetic field inflation, and distribution of pV^gamma. The coupled-code simulation will be compared with results from RCM runs and algorithms.

  7. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in
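
    The "infinite medium" time limit invoked above is usually estimated from the characteristic matrix-diffusion time; in commonly used notation (assumed here, not quoted from the paper):

        t_c \approx \frac{L_m^2}{D_a}

    where L_m is the available matrix penetration depth and D_a the apparent matrix diffusivity. When the transport time of interest exceeds t_c, the finite capacity of the matrix is felt, and a single-rate infinite-matrix model and a finite-capacity multirate model begin to diverge, consistent with the scale dependence reported here.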

  8. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  9. Updating Finite Element Model of a Wind Turbine Blade Section Using Experimental Modal Analysis Results

    Directory of Open Access Journals (Sweden)

    Marcin Luczak

    2014-01-01

    Full Text Available This paper presents selected results and aspects of the multidisciplinary and interdisciplinary research oriented towards the experimental and numerical study of the structural dynamics of a bend-twist coupled full-scale section of a wind turbine blade structure. The main goal of the conducted research is to validate the finite element model of the modified wind turbine blade section mounted in the flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by the natural frequency difference and the modal assurance criterion. Based on a sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were implemented to find the values of model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
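
    The modal assurance criterion used for the comparison is the standard one,

        \mathrm{MAC}(\phi_a, \phi_e) = \frac{\left|\phi_a^{T}\phi_e\right|^2}{\left(\phi_a^{T}\phi_a\right)\left(\phi_e^{T}\phi_e\right)}

    where \phi_a and \phi_e are the analytical and experimental mode shape vectors; a value close to 1 indicates consistent mode shapes.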

  10. Exploring model-based target discrimination metrics

    Science.gov (United States)

    Witus, Gary; Weathersby, Marshall

    2004-08-01

    Visual target discrimination has occurred when the observer can say "I see a target THERE!" and can designate the target location. Target discrimination occurs when a perceived shape is sufficiently similar to one or more of the instances the observer has been trained on. Marr defined vision as "knowing what is where by seeing." Knowing "what" requires prior knowledge. Target discrimination requires model-based visual processing. Model-based signature metrics attempt to answer the question "to what extent does the target in the image resemble a training image?" They attempt to represent the effects of high-level top-down visual cognition, in addition to low-level bottom-up effects. Recent advances in realistic 3D target rendering and computer-vision object recognition have made model-based signature metrics more practical. The human visual system almost certainly does NOT use the same processing algorithms as computer-vision object recognition, but some processing elements and the overall effects are similar. It remains to be determined whether model-based metrics explain the variance in human performance. The purpose of this paper is to explain and illustrate the model-based approach to signature metrics.

  11. Rule-based Modelling and Tunable Resolution

    Directory of Open Access Journals (Sweden)

    Russ Harmer

    2009-11-01

    Full Text Available We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.

  12. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  13. Agent-based modelling of cholera diffusion

    NARCIS (Netherlands)

    Augustijn-Beckers, Petronella; Doldersum, Tom; Useya, Juliana; Augustijn, Dionysius C.M.

    2016-01-01

    This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V.cholerae bacteria and hyperinfectious V. cholerae. Objective of the research is to test if runoff from open refuse

  14. Modeling and knowledge acquisition processes using case-based inference

    Directory of Open Access Journals (Sweden)

    Ameneh Khadivar

    2017-03-01

    Full Text Available The method of acquisition and presentation of organizational process knowledge has been considered by many KM researchers. In this research, a model for process knowledge acquisition and presentation is presented using the approach of case-based reasoning. The validity of the presented model was evaluated by conducting an expert panel. Then software was developed based on the presented model and implemented in Eghtesad Novin Bank of Iran. In this company, following the stages of the presented model, the knowledge-intensive processes were first identified, and then the process knowledge was stored in a knowledge base in the format of problem/solution/consequent. Retrieval of the knowledge was based on the similarity measure of the nearest-neighbor algorithm. To validate the implemented system, the results of the system were compared with the decisions made by the expert of the process.
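
    The retrieval step (nearest-neighbour similarity over stored problem/solution/consequent cases) can be sketched as follows; the feature encoding and weights are illustrative assumptions, not details from the paper:

        # Minimal nearest-neighbour retrieval over a case base of
        # problem/solution/consequent records. Features and weights are
        # hypothetical stand-ins for process-specific attributes.
        import numpy as np

        case_base = [
            {"problem": np.array([0.9, 0.2, 0.5]), "solution": "reassign approver",
             "consequent": "cycle time reduced"},
            {"problem": np.array([0.1, 0.8, 0.4]), "solution": "add validation step",
             "consequent": "error rate reduced"},
        ]
        weights = np.array([0.5, 0.3, 0.2])  # assumed feature importances

        def similarity(a, b):
            # Weighted inverse-distance similarity in (0, 1].
            return 1.0 / (1.0 + np.sqrt(np.sum(weights * (a - b) ** 2)))

        def retrieve(query):
            # Return the stored case whose problem is most similar to the query.
            return max(case_base, key=lambda c: similarity(query, c["problem"]))

        best = retrieve(np.array([0.8, 0.3, 0.5]))
        print(best["solution"], "->", best["consequent"])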

  15. Comparison of the 1981 INEL dispersion data with results from a number of different models

    Energy Technology Data Exchange (ETDEWEB)

    Lewellen, W S; Sykes, R I; Parker, S F

    1985-05-01

    The results from simulations by 12 different dispersion models are compared with observations from an extensive field experiment conducted by the Nuclear Regulatory Commission at the Idaho National Engineering Laboratory in July 1981. Comparisons were made on the basis of hourly SF6 samples taken at the surface, out to approximately 10 km from the 46 m release tower, both during and following 7 different 8-hour releases. Comparisons are also made for total integrated doses collected out to approximately 40 km. Three classes of models are used. Within the limited range appropriate for Class A models, this data comparison shows that neither the puff models nor the transport and diffusion models agree with the data any better than the simple Gaussian plume models do. The puff and transport and diffusion models do show a slight edge in performance in comparison with the total dose over the extended range appropriate for Class B models. The best model results for the hourly samples show approximately 40% calculated within a factor of two when a 15° uncertainty in plume position is permitted and it is assumed that higher data samples may occur at stations between the actual sample sites. This increases to 60% for the 12-hour integrated dose and 70% for the total integrated dose when the same performance measure is used. None of the models reproduce the observed patchy dose patterns. This patchiness is consistent with the discussion of the inherent uncertainty associated with time-averaged plume observations contained in our companion reports on the scientific critique of available models.
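
    The "simple Gaussian plume models" referred to here have the standard form for a continuous point source with ground reflection (usual notation):

        C(x, y, z) = \frac{Q}{2\pi u \sigma_y \sigma_z}
        \exp\left(-\frac{y^2}{2\sigma_y^2}\right)
        \left[\exp\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
        + \exp\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]

    with emission rate Q, wind speed u, effective release height H, and downwind-distance-dependent dispersion parameters \sigma_y(x) and \sigma_z(x).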

  16. Stirling cryocooler test results and design model verification

    International Nuclear Information System (INIS)

    Shimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1990-01-01

    This paper reports on progress in developing a long-life Stirling cycle cryocooler for space borne applications. It presents the results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the regenerator design code used in its development. This machine achieved a cold-end temperature of 65 K while carrying a 1/2 Watt cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for both sweeping and sealing the machine working volumes. In addition, the double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. A PC compatible design code was developed for this design approach that calculates regenerator loss including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls

  17. Component-based event composition modeling for CPS

    Science.gov (United States)

    Yin, Zhonghai; Chu, Yanan

    2017-06-01

    In order to combine the event-driven model with component-based architecture design, this paper proposes a component-based event composition model to realize CPS event processing. Firstly, the formal representations of components and attribute-oriented events are defined. Each component consists of subcomponents and the corresponding event sets. The attribute "type" is added to the attribute-oriented event definition so as to describe the responsiveness to the component. Secondly, the component-based event composition model is constructed. A concept lattice-based event algebra system is built to describe the relations between events, and the rules for drawing the Hasse diagram are discussed. Thirdly, as there are redundancies among composite events, two simplification methods are proposed. Finally, a communication-based train control system is simulated to verify the event composition model. Results show that the event composition model we have constructed can express composite events correctly and effectively.

  18. Gradient based filtering of digital elevation models

    DEFF Research Database (Denmark)

    Knudsen, Thomas; Andersen, Rune Carbuhn

    We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention of the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based on ...
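
    A minimal sketch of the idea for a regular-grid DTM: compute slopes, then apply a morphological smoothing only within domains whose gradient exceeds a threshold. The threshold and window size are illustrative assumptions:

        # Morphological smoothing of a DTM restricted to gradient-defined domains.
        # Threshold and window size are illustrative, not from the paper.
        import numpy as np
        from scipy import ndimage

        def filter_dtm(dtm, cell_size=1.0, slope_threshold=0.2, size=3):
            gy, gx = np.gradient(dtm, cell_size)
            slope = np.hypot(gx, gy)
            # Opening followed by closing as a simple morphological smoother.
            smoothed = ndimage.grey_closing(
                ndimage.grey_opening(dtm, size=size), size=size)
            # Replace values only where the local slope exceeds the threshold.
            return np.where(slope > slope_threshold, smoothed, dtm)

        dtm = np.cumsum(np.random.rand(100, 100), axis=0)  # synthetic surface
        print(filter_dtm(dtm).shape)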

  19. Accept & Reject Statement-Based Uncertainty Models

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); G. de Cooman; F. Hermans (Felienne)

    2015-01-01

    textabstractWe develop a framework for modelling and reasoning with uncertainty based on accept and reject statements about gambles. It generalises the frameworks found in the literature based on statements of acceptability, desirability, or favourability and clarifies their relative position. Next

  20. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modelin

  1. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.

  2. Modeling results for a linear simulator of a divertor

    Energy Technology Data Exchange (ETDEWEB)

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-06-23

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ~1 GW/m² along the magnetic fieldlines and > 10 MW/m² on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  3. Modeling results for a linear simulator of a divertor

    International Nuclear Information System (INIS)

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-01-01

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ∼ 1 GW/m² along the magnetic fieldlines and > 10 MW/m² on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  4. Exact results for survival probability in the multistate Landau-Zener model

    International Nuclear Information System (INIS)

    Volkov, M V; Ostrovsky, V N

    2004-01-01

    An exact formula is derived for the survival probability in the multistate Landau-Zener model in the special case where the initially populated state corresponds to the extremal (maximum or minimum) slope of a linear diabatic potential curve. The formula was originally guessed by S Brundobler and V Elser (1993 J. Phys. A: Math. Gen. 26 1211) based on numerical calculations. It is a simple generalization of the expression for the probability of diabatic passage in the famous two-state Landau-Zener model. Our result is obtained via analysis and summation of the entire perturbation theory series.
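
    In the notation commonly used for this result (slopes \beta_k of the diabatic energies and couplings V_k to the extremal-slope state; conventions may differ slightly from the cited paper), the survival probability reads

        P_{\mathrm{survival}} = \exp\left(-\frac{2\pi}{\hbar}
        \sum_{k \neq 0} \frac{|V_k|^2}{|\beta_0 - \beta_k|}\right)

    which, for a single coupled state, reduces to the familiar two-state Landau-Zener diabatic-passage probability \exp(-2\pi |V|^2 / \hbar |\beta_1 - \beta_2|).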

  5. Verification of Structural Simulation Results of Metal-based Additive Manufacturing by Means of Neutron Diffraction

    Science.gov (United States)

    Krol, T. A.; Seidel, C.; Schilp, J.; Hofmann, M.; Gan, W.; Zaeh, M. F.

    Metal-based additive processes are characterized by numerous transient physical effects, which exhibit an adverse influence on the production result. Hence, various research approaches for the optimization of, e.g., the structural part behavior exist for layered manufacturing. Increasingly, these approaches are based on finite element analysis in order to cope with the complexity. It should be kept in mind that the significance of the calculation results depends on the quality of the process model in the simulation environment. Based on a selected specimen, the current work demonstrates how the numerical accuracy of the residual stress state can be analyzed by utilizing neutron diffraction. For this purpose, different process parameter settings were examined.

  6. Improving the natural gas transporting based on the steady state simulation results

    International Nuclear Information System (INIS)

    Szoplik, Jolanta

    2016-01-01

    The work presents an example of the practical application of gas flow modeling results for an existing gas network, using real data on network load depending on the time of day and air temperature. The gas network load at network connections was estimated from real data on gas consumption by customers and weather data in 2010, using a two-parameter model based on the number of heating degree-days. The aim of this study was to elaborate a relationship between the pressure and the gas stream introduced into the gas network. It was demonstrated that practical application of the elaborated relationship in a gas reduction station allows automatic adjustment of the gas pressure in the network to the volume of the network load, while maintaining the gas pressure in the whole network at the lowest possible level. It was concluded from the results obtained that such an approach allows the amount of gas supplied to the network to be reduced by 0.4% of the annual network load. - Highlights: • Determination of the hourly nodal demand for gas by the consumers. • Analysis of the results of gas flow simulation in the pipeline network. • Elaboration of the relationship between gas pressure and the gas stream feeding the network. • Automatic gas pressure steering in the network depending on the network load. • Comparison of input gas pressure in the system without and with pressure steering.
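
    The two-parameter degree-day model mentioned above can be sketched as a simple regression; the base temperature and the data below are illustrative, not values from the study:

        # Two-parameter heating-degree-day model: daily demand Q = a + b * DD,
        # with DD = max(0, T_base - T_mean). All numbers are illustrative.
        import numpy as np

        T_BASE = 15.0  # assumed heating threshold, deg C

        def degree_days(t_mean):
            return np.maximum(0.0, T_BASE - np.asarray(t_mean))

        t_obs = np.array([-5.0, 0.0, 5.0, 10.0, 18.0])        # daily mean temperature
        q_obs = np.array([520.0, 410.0, 300.0, 190.0, 60.0])  # daily gas demand

        b, a = np.polyfit(degree_days(t_obs), q_obs, 1)  # fit the two parameters
        print(a + b * degree_days(7.5))  # predicted demand for a forecast day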

  7. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate ... principles such as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed ... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models...
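
    The SEA framework referred to here rests on a steady-state power balance between subsystems; for two coupled subsystems it reads, in standard notation,

        P_1 = \omega\left(\eta_1 E_1 + \eta_{12} E_1 - \eta_{21} E_2\right),
        \qquad
        0 = \omega\left(\eta_2 E_2 + \eta_{21} E_2 - \eta_{12} E_1\right)

    with injected power P_1, subsystem energies E_i, damping loss factors \eta_i and coupling loss factors \eta_{ij} linked by the reciprocity relation n_1 \eta_{12} = n_2 \eta_{21} for modal densities n_i. The diffuse-field and high-modal-overlap assumptions mentioned above are precisely what justify this energy balance.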

  8. Mathematical Modeling in Tobacco Control Research: Initial Results From a Systematic Review.

    Science.gov (United States)

    Feirman, Shari P; Donaldson, Elisabeth; Glasser, Allison M; Pearson, Jennifer L; Niaura, Ray; Rose, Shyanika W; Abrams, David B; Villanti, Andrea C

    2016-03-01

    The US Food and Drug Administration has expressed interest in using mathematical models to evaluate potential tobacco policies. The goal of this systematic review was to synthesize data from tobacco control studies that employ mathematical models. We searched five electronic databases on July 1, 2013 to identify published studies that used a mathematical model to project a tobacco-related outcome and developed a data extraction form based on the ISPOR-SMDM Modeling Good Research Practices. We developed an organizational framework to categorize these studies and identify models employed across multiple papers. We synthesized results qualitatively, providing a descriptive synthesis of included studies. The 263 studies in this review were heterogeneous with regard to their methodologies and aims. We used the organizational framework to categorize each study according to its objective and map the objective to a model outcome. We identified two types of study objectives (trend and policy/intervention) and three types of model outcomes (change in tobacco use behavior, change in tobacco-related morbidity or mortality, and economic impact). Eighteen models were used across 118 studies. This paper extends conventional systematic review methods to characterize a body of literature on mathematical modeling in tobacco control. The findings of this synthesis can inform the development of new models and the improvement of existing models, strengthening the ability of researchers to accurately project future tobacco-related trends and evaluate potential tobacco control policies and interventions. These findings can also help decision-makers to identify and become oriented with models relevant to their work. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Model Based Vision for Aircraft Position Determination

    Science.gov (United States)

    Sridhar, Banavar; Chatterji, Gano B.; Soni, Tarun; Showman, Robert D. (Technical Monitor)

    1994-01-01

    This paper investigates the use of imaging sensors to estimate the position of an aircraft with respect to the runway during landing. Passive vision techniques for estimating aircraft position during landing rely on a known runway model, images acquired by an onboard imaging sensor, orientation information provided by the inertial navigation system, and the position estimate provided by devices such as the global positioning system. Point features in the runway model are compared with the onboard sensor images of the features, and the difference between the two is used to correct the aircraft position and orientation. In this paper the sensitivity of point features is examined as a means of determining the accuracy of such position estimation techniques. Expressions are derived for the sensitivity of an image point to errors in the position and orientation of the sensor. Using these, the sensitivity of the image to aircraft position and orientation errors along a typical landing glide path is studied. A least squares technique based on this sensitivity analysis is described for the correction of position and orientation estimates. The final version of the paper will include results from the application of this analysis to real image sequences collected in flight.
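
    The least-squares correction described here is, in its usual Gauss-Newton form (notation assumed, not taken from the paper),

        \delta \mathbf{p} = \left(J^{T} J\right)^{-1} J^{T} \, \delta \mathbf{z}

    where J is the Jacobian of the predicted image feature coordinates with respect to sensor position and orientation (the sensitivities derived in the paper) and \delta\mathbf{z} stacks the differences between predicted and observed feature locations.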

  10. Identification of Anisotropic Criteria for Stratified Soil Based on Triaxial Tests Results

    Directory of Open Access Journals (Sweden)

    Tankiewicz Matylda

    2017-09-01

    Full Text Available The paper presents the identification methodology of anisotropic criteria based on triaxial test results. The considered material is varved clay – a sedimentary soil occurring in central Poland which is characterized by the so-called “layered microstructure”. The strength examination outcomes were identified by standard triaxial tests. The results include the estimated peak strength obtained for a wide range of orientations and confining pressures. Two models were chosen as potentially adequate for the description of the tested material, namely Pariseau and its conjunction with the Jaeger weakness plane. Material constants were obtained by fitting the model to the experimental results. The identification procedure is based on the least squares method. The optimal values of parameters are searched for between specified bounds by sequentially decreasing the distance between points and reducing the length of the searched range. For both considered models the optimal parameters have been obtained. The comparison of theoretical and experimental results as well as the assessment of the suitability of selected criteria for the specified range of confining pressures are presented.

  11. Identification of Anisotropic Criteria for Stratified Soil Based on Triaxial Tests Results

    Science.gov (United States)

    Tankiewicz, Matylda; Kawa, Marek

    2017-09-01

    The paper presents the identification methodology of anisotropic criteria based on triaxial test results. The considered material is varved clay - a sedimentary soil occurring in central Poland which is characterized by the so-called "layered microstructure". The strength examination outcomes were identified by standard triaxial tests. The results include the estimated peak strength obtained for a wide range of orientations and confining pressures. Two models were chosen as potentially adequate for the description of the tested material, namely Pariseau and its conjunction with the Jaeger weakness plane. Material constants were obtained by fitting the model to the experimental results. The identification procedure is based on the least squares method. The optimal values of parameters are searched for between specified bounds by sequentially decreasing the distance between points and reducing the length of the searched range. For both considered models the optimal parameters have been obtained. The comparison of theoretical and experimental results as well as the assessment of the suitability of selected criteria for the specified range of confining pressures are presented.

  12. Multi-Domain Modeling Based on Modelica

    Directory of Open Access Journals (Sweden)

    Liu Jun

    2016-01-01

    Full Text Available With the application of simulation technology to large-scale and multi-field problems, multi-domain unified modeling has become an effective way to solve these problems. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is a newly developed simulation platform featuring an object-oriented and non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article shows a single-degree-of-freedom mechanical vibration system modeled with the Modelica language's special connection mechanism in MWorks. This multi-domain modeling method is simple and feasible, offers high reusability, and is closer to the physical system, among many other advantages.

  13. Satellite data for systematic validation of wave model results in the Black Sea

    Science.gov (United States)

    Behrens, Arno; Staneva, Joanna

    2017-04-01

    With regard to the availability of traditional in situ wave measurements recorded by waverider buoys, the Black Sea is a data-sparse semi-enclosed sea. The only possibility for systematic validation of wave model results in such a regional area is the use of satellite data. In the frame of the COPERNICUS Marine Evolution System for the Black Sea, which requires wave predictions, the third-generation spectral wave model WAM is used. The operational system is demonstrated based on four years of systematic comparisons with satellite data. The aim of this investigation was to answer two questions: is the wave model able to provide a reliable description of the wave conditions in the Black Sea, and are the satellite measurements suitable for validation purposes on such a regional scale? Detailed comparisons between measured data and computed model results for the Black Sea, including yearly statistics, have been carried out for about 300 satellite overflights per year. The results are discussed with respect to the different verification schemes needed to review the forecasting skills of the operational system. The good agreement between measured and modeled data supports the expectation that the wave model provides reasonable results and that the satellite data are of good quality and offer an appropriate validation alternative to buoy measurements. This is the required step towards further use of these satellite data for assimilation into the wave fields to improve the wave predictions. Additional support for the good quality of the wave predictions is provided by comparisons between ADCP measurements that are available for a short time period in February 2012 and the corresponding model results at a location near the Bulgarian coast in the western Black Sea. Sensitivity tests with different wave model options and different driving wind fields have been carried out to identify the model configuration that provides the best wave predictions. In addition to the comparisons between measured
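
    Comparisons of this kind typically reduce each set of collocated model-satellite pairs to a few conventional statistics; a minimal sketch (the metric definitions are the standard ones, the data are placeholders):

        # Conventional validation statistics for collocated modelled and
        # observed significant wave heights. Data are placeholders.
        import numpy as np

        model = np.array([1.2, 2.1, 0.8, 3.0, 1.7])  # modelled Hs, m
        obs = np.array([1.0, 2.3, 0.9, 2.7, 1.8])    # altimeter Hs, m

        bias = np.mean(model - obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        scatter_index = rmse / np.mean(obs)
        print(f"bias={bias:.2f} m, rmse={rmse:.2f} m, SI={scatter_index:.2f}")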

  14. Model-Drive Architecture for Agent-Based Systems

    Science.gov (United States)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar) is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain level model specifications in order to generate software artifacts. The software artifacts generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  15. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    the conceptual model on which it is based. In this study, a number of model structural shortcomings were identified, such as a lack of dissolved phosphorus transport via infiltration excess overland flow, potential discrepancies in the particulate phosphorus simulation and a lack of spatial granularity. (4) Conceptual challenges, as conceptual models on which predictive models are built are often outdated, having not kept up with new insights from monitoring and experiments. For example, soil solution dissolved phosphorus concentration in INCA-P is determined by the Freundlich adsorption isotherm, which could potentially be replaced using more recently-developed adsorption models that take additional soil properties into account. This checklist could be used to assist in identifying why model performance may be poor or unreliable. By providing a model evaluation framework, it could help prioritise which areas should be targeted to improve model performance or model credibility, whether that be through using alternative calibration techniques and statistics, improved data collection, improving or simplifying the model structure or updating the model to better represent current understanding of catchment processes.
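
    For context, the Freundlich adsorption isotherm mentioned in point (4) has the standard form

        S = K_f \, C^{1/n}

    where S is the sorbed amount, C the soil-solution concentration, and K_f and n empirical constants; the suggested replacement models would, in effect, make these parameters functions of additional soil properties.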

  16. Agent-based model of soil water dynamics

    Science.gov (United States)

    Mewes, Benjamin; Schumann, Andreas

    2017-04-01

    In the last decade, agent-based modelling has become more and more popular in social science, biology and environmental modelling. The concept is designed to simulate systems that are highly dynamic and sensitive to small variations in their composition and their state. As hydrological systems often show dynamic and nonlinear behaviour, agent-based modelling can be an adequate way to model aquatic systems. Nevertheless, up to now only a few results on agent-based modelling are known in hydrology. Processes like the percolation of water through the soil are highly responsive to the state of the pedological system. To simulate these water fluxes correctly with known approaches like the Green-Ampt model or approximations to the Richards equation, small time steps and a high spatial discretisation are needed. In this study a new approach for modelling water fluxes in a soil column is presented: autonomous water agents that transport water through the soil while interacting with their environment, as well as with other agents, under physical laws. Setting up an agent-based model requires a predefined rule set for the behaviour of the autonomous agents. Moreover, we present some principal assumptions about the interactions not only between agents but also between agents and their environment. Our study shows that agent-based modelling in hydrology leads to very promising results, but we also have to face new challenges.
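
    For context, the Green-Ampt model named above gives the infiltration capacity in its standard form

        f(t) = K_s \left(1 + \frac{\psi_f \, \Delta\theta}{F(t)}\right)

    with saturated hydraulic conductivity K_s, wetting-front suction head \psi_f, moisture deficit \Delta\theta, and cumulative infiltration F(t). Because f depends on the evolving state F, conventional solvers need small time steps, which is the motivation given for the agent-based alternative.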

  17. A Multiobjective Optimization Including Results of Life Cycle Assessment in Developing Biorenewables-Based Processes.

    Science.gov (United States)

    Helmdach, Daniel; Yaseneva, Polina; Heer, Parminder K; Schweidtmann, Artur M; Lapkin, Alexei A

    2017-09-22

    A decision support tool has been developed that uses global multiobjective optimization based on 1) the environmental impacts, evaluated within the framework of full life cycle assessment; and 2) process costs, evaluated by using rigorous process models. This approach is particularly useful in developing biorenewable-based energy solutions and chemicals manufacturing, for which multiple criteria must be evaluated and optimization-based decision-making processes are particularly attractive. The framework is demonstrated by using a case study of the conversion of terpenes derived from biowaste feedstocks into reactive intermediates. A two-step chemical conversion/separation sequence was implemented as a rigorous process model and combined with a life cycle model. A life cycle inventory for crude sulfate turpentine was developed, as well as a conceptual process of its separation into pure terpene feedstocks. The performed single- and multiobjective optimizations demonstrate the functionality of the optimization-based process development and illustrate the approach. The most significant advance is the ability to perform multiobjective global optimization, resulting in identification of a region of Pareto-optimal solutions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
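
    The Pareto-optimal region mentioned above is defined by non-dominance; a minimal filter for two minimization objectives (e.g., process cost and life-cycle impact), with placeholder candidate points:

        # Pareto filter for two minimization objectives. Points are placeholders
        # standing in for (cost, life-cycle impact) evaluations of process designs.
        import numpy as np

        points = np.array([[3.0, 8.0], [4.0, 5.0], [6.0, 4.0], [5.0, 6.0], [9.0, 2.0]])

        def pareto_front(pts):
            # Keep points not dominated by any other point: q dominates p if
            # q <= p in every objective and q < p in at least one.
            keep = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
                if not dominated:
                    keep.append(i)
            return pts[keep]

        print(pareto_front(points))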

  18. Results from source-based and detector-based calibrations of a CLARREO calibration demonstration system

    Science.gov (United States)

    Angal, Amit; McCorkel, Joel; Thome, Kurt

    2016-09-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR) that provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of the SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracies. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5% (k=2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence in the GLAMR-based calibration of the Suitcase SOLARIS instrument. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches.

  19. Teacher Training by Means of a School-Based Model

    Science.gov (United States)

    Richter, Barry

    2016-01-01

    The purpose of the study was to explore how a school-based training model (SBTM) could help to address the shortage of teachers. This model also allows, among other aspects, for poor and disadvantaged students to study while they gain experience. This article reports on the results of the SBTM implemented by a South African university, whereby…

  20. Event-based Simulation Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    De Raedt, H.; Michielsen, K.; Jaeger, G; Khrennikov, A; Schlosshauer, M; Weihs, G

    2011-01-01

    We present a corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one. The event-based corpuscular model gives a unified

  1. An introduction to model-based cognitive neuroscience

    NARCIS (Netherlands)

    Forstmann, B.U.; Wagenmakers, E.-J.

    2015-01-01

    Two recent innovations, the emergence of formal cognitive models and the addition of cognitive neuroscience data to the traditional behavioral data, have resulted in the birth of a new, interdisciplinary field of study: model-based cognitive neuroscience. Despite the increasing scientific interest

  2. Event-Based Corpuscular Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    Michielsen, K.; Jin, F.; Raedt, H. De

    A corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one is presented. The event-based corpuscular model is shown to give a

  3. Development of Ensemble Model Based Water Demand Forecasting Model

    Science.gov (United States)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, which has led to various studies on energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups with respect to how they model and predict behavior in time series. One group considers embedded patterns such as seasonality, periodicity and trends, and the other uses autoregressive models based on short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the above models is that the predictability of water demand at sub-daily scales is limited because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are linear regression, polynomial regression, multivariate adaptive regression splines (MARS) and support vector machines (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, non-linear model, the ensemble forecasting model, uncertainty. Acknowledgements This subject is supported by Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)"
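
    A minimal sketch of the weighted multi-model combination described above; weighting forecasts inversely to each model's validation error is an assumption of this sketch (the paper derives weights via the bagging-style cross-validation):

        # Combine independent hourly demand forecasts with weights inversely
        # proportional to each model's validation RMSE. Numbers are placeholders.
        forecasts = {"linear": 102.0, "polynomial": 98.0, "mars": 101.0, "svm": 99.0}
        val_rmse = {"linear": 6.0, "polynomial": 4.0, "mars": 3.0, "svm": 5.0}

        inv = {k: 1.0 / v for k, v in val_rmse.items()}
        total = sum(inv.values())
        weights = {k: w / total for k, w in inv.items()}

        ensemble = sum(weights[k] * forecasts[k] for k in forecasts)
        print(f"ensemble forecast: {ensemble:.1f}")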

  4. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same adjustment method was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.

  5. A Nursing Practice Model Based on Christ: The Agape Model.

    Science.gov (United States)

    Eckerd, Nancy

    2017-06-07

    Nine out of 10 American adults believe Jesus was a real person, and almost two-thirds have made a commitment to Jesus Christ. Research further supports that spiritual beliefs and religious practices influence overall health and well-being. Christian nurses need a practice model that helps them serve as kingdom nurses. This article introduces the Agape Model, based on the agape love and characteristics of Christ, upon which Christian nurses may align their practice to provide Christ-centered care.

  6. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  7. Ecosystem health pattern analysis of urban clusters based on emergy synthesis: Results and implication for management

    International Nuclear Information System (INIS)

    Su, Meirong; Fath, Brian D.; Yang, Zhifeng; Chen, Bin; Liu, Gengyuan

    2013-01-01

    The evaluation of ecosystem health in urban clusters will help establish effective management that promotes sustainable regional development. To standardize the application of emergy synthesis and set pair analysis (EM–SPA) in ecosystem health assessment, a procedure for using EM–SPA models was established in this paper by combining the ability of emergy synthesis to reflect health status from a biophysical perspective with the ability of set pair analysis to describe extensive relationships among different variables. Based on the EM–SPA model, the relative health levels of selected urban clusters and their related ecosystem health patterns were characterized. The health states of three typical Chinese urban clusters – Jing-Jin-Tang, Yangtze River Delta, and Pearl River Delta – were investigated using the model. The results showed that the health status of the Pearl River Delta was relatively good; the health for the Yangtze River Delta was poor. As for the specific health characteristics, the Pearl River Delta and Yangtze River Delta urban clusters were relatively strong in Vigor, Resilience, and Urban ecosystem service function maintenance, while the Jing-Jin-Tang was relatively strong in organizational structure and environmental impact. Guidelines for managing these different urban clusters were put forward based on the analysis of the results of this study. - Highlights: • The use of integrated emergy synthesis and set pair analysis model was standardized. • The integrated model was applied on the scale of an urban cluster. • Health patterns of different urban clusters were compared. • Policy suggestions were provided based on the health pattern analysis

  8. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  9. Modelling, analysis and prediction of women's javelin throw results in the years 1946-2013

    Directory of Open Access Journals (Sweden)

    P Grycmann

    2016-01-01

    Full Text Available The main goals of our study of the women's javelin throw were twofold: first, to analyse the dynamics of female javelin throw results variability as a function of time (time period 1946-2014); second, to create a predictive model of the results for the upcoming 4 years. The study material consisted of databases covering the female track and field events obtained from the International Association of Athletics Federations. Prior to predicting the magnitude of results change dynamics in the time to follow, the adjustment of the trend function to empirical data was tested using the coefficients of convergence. Phase II of the investigation consisted of the construction of predictive models. The greatest decreases in result indexes were noted in 2000 (9.4%), 2005-2006 (8.7%) and 2009 (7.4%). The trend increase was only noted in the years 2006-2008. In general, until 1998 the mean result improved by 54.6% (with the 1946 result taken as 100%), whereas from 1999 through 2011 the result only increased by 1.3%. Based on the data and results variability analysis it might be presumed that, in the near future (2015-2018), results variability will increase by approximately 9.7%. The percent improvement of javelin throw distance calculated on the basis of the 1999 raw input data is 1.4% (end of 2014).
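
    As an illustration of the trend-fitting-and-extrapolation procedure described above, consider the short sketch below. The yearly mean distances are made-up placeholders (the study itself uses the IAAF database), the quadratic trend is just one candidate function, and the coefficient of determination stands in for the convergence check.

    ```python
    import numpy as np

    # Hypothetical yearly mean distances (m); the real study uses the IAAF database.
    years = np.array([1999, 2002, 2005, 2008, 2011, 2014])
    means = np.array([62.1, 62.3, 62.2, 62.6, 62.8, 63.0])

    x = years - years[0]                  # centre the regressor for conditioning
    coeffs = np.polyfit(x, means, deg=2)  # one candidate trend function
    trend = np.poly1d(coeffs)

    # Coefficient of determination as a simple goodness-of-fit check.
    r2 = 1 - np.sum((means - trend(x))**2) / np.sum((means - means.mean())**2)
    print("R^2 =", round(r2, 3))

    # Extrapolate the fitted trend over the prediction horizon 2015-2018.
    for year in range(2015, 2019):
        print(year, round(trend(year - years[0]), 2))
    ```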

  10. 3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss

    Science.gov (United States)

    Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.

    2012-12-01

    3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial & temporal distribution of waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss, lower-band and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*. The spatio-temporal correlations cannot be inferred from the

  11. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  12. Application of the IPCC model to a Brazilian landfill: First results

    International Nuclear Information System (INIS)

    Penteado, Roger; Cavalli, Massimo; Magnano, Enrico; Chiampo, Fulvia

    2012-01-01

    The Intergovernmental Panel on Climate Change has provided a methodology to estimate methane emissions from Municipal Solid Waste landfills, based on a First Order Decay (FOD) model that assumes biodegradation kinetics depending on the type of waste. This model can be used to estimate the national greenhouse gas emissions of industrialized countries, as well as the reductions of these emissions in developing ones when the Clean Development Mechanism, as defined by the Kyoto Protocol, is implemented. In this paper, the FOD model has been used to evaluate the biogas flow rates emitted by a Brazilian landfill, and the results have been compared to the extracted flow rates: these first results are useful to evidence the weight of the key parameters and to support a correct use of the model. - Highlights: ► Landfill biogas is a greenhouse gas and a fuel at the same time. ► In developing countries its collection can implement Kyoto Protocol mechanisms. ► Biogas collection and exploitation become part of energy policy. ► The project's economic balance is based on reliable estimates of the generated quantities.
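
    For illustration, a minimal sketch of the first order decay calculation follows: waste deposited in year x contributes k * L0 * W * exp(-k * (t - x)) to the methane generated in year t. The decay constant k, methane generation potential L0, and deposit masses below are invented placeholders, not the IPCC defaults for any specific waste type or climate.

    ```python
    import math

    k = 0.05      # decay rate constant, 1/yr (waste- and climate-dependent)
    L0 = 100.0    # methane generation potential, m3 CH4 per tonne of waste

    # Hypothetical tonnes of waste landfilled per year.
    deposits = {2000: 50000, 2001: 52000, 2002: 55000}

    def methane_generated(year):
        """CH4 generated in 'year' from all earlier deposits, m3/yr."""
        return sum(k * L0 * mass * math.exp(-k * (year - y))
                   for y, mass in deposits.items() if year >= y)

    for t in range(2000, 2006):
        print(t, round(methane_generated(t)))
    ```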

  13. An analytical model for nanoparticles concentration resulting from infusion into poroelastic brain tissue.

    Science.gov (United States)

    Pizzichelli, G; Di Michele, F; Sinibaldi, E

    2016-02-01

    We consider the infusion of a diluted suspension of nanoparticles (NPs) into poroelastic brain tissue, in view of relevant biomedical applications such as intratumoral thermotherapy. Indeed, the high impact of the related pathologies motivates the development of advanced therapeutic approaches, whose design also benefits from theoretical models. This study provides an analytical expression for the time-dependent NP concentration during the infusion into poroelastic brain tissue, which also accounts for particle binding onto cells (by recalling relevant results from colloid filtration theory). Our model is computationally inexpensive and, compared to fully numerical approaches, makes it possible to explicitly elucidate the role of the involved physical aspects (tissue poroelasticity, infusion parameters, NP physico-chemical properties, NP-tissue interactions underlying binding). We also present illustrative results based on parameters taken from the literature, considering clinically relevant ranges for the infusion parameters. Moreover, we thoroughly assess the model working assumptions besides discussing its limitations. While laying no claim to generality, our model can be used to support the development of more ambitious numerical approaches, towards the preliminary design of novel therapies based on NP infusion into brain tissue. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    Results from our work indicate that virtual worlds have the potential for serving as a proxy in allocating and populating behaviors that would be used within further agent-based modeling studies.

  15. A Multiagent Based Model for Tactical Planning

    Science.gov (United States)

    2002-10-01

    The conceptual model is built on the basis of agent theory. The agents have been developed under the same conceptual model and using similar Artificial Intelligence tools; we use four different stimulus/response agents. To implement the different agents we have used Artificial Intelligence techniques such as...

  16. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studie...... that resembles the body surface of an infant, where the model is based on simple geometric shapes and a hierarchical skeleton model....

  17. Quality Model Based on COTS Quality Attributes

    OpenAIRE

    Jawad Alkhateeb; Khaled Musa

    2013-01-01

    The quality of software is essential to corporations in making their commercial software. Good or poor quality of software plays an important role in some systems, such as embedded systems, real-time systems, and control systems, that play an important part in human life. Software products or commercial off-the-shelf software are usually programmed based on a software quality model. In the software engineering field, each quality model contains a set of attributes or characteristics that drives i...

  18. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Accordingly, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it realizes a combination of statistics and dynamics to a certain extent.

  19. Power-Based Setpoint Control : Experimental Results on a Planar Manipulator

    NARCIS (Netherlands)

    Dirksz, D. A.; Scherpen, J. M. A.

    In recent years the power-based modeling framework, developed in the sixties to model nonlinear electrical RLC networks, has been extended for modeling and control of a larger class of physical systems. In this brief we apply power-based integral control to a planar manipulator experimental setup.

  20. CEAI: CCM-based email authorship identification model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2013-11-01

    Full Text Available In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some more interesting and effective features for email authorship identification (e.g., the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included content features selected using Info Gain feature selection. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other baseline models. Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. (2010, 2013) [1,2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors, respectively, on the Enron dataset, while 89.5% accuracy has been achieved on the authors' constructed real email dataset. The results on the Enron dataset have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1,2].
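
    To make the extended feature set concrete, here is a toy sketch of how two of the cited features (the last punctuation mark used and the tendency to capitalize line starts) might be extracted from raw email text. The function name and the third feature are illustrative additions, not the authors' code.

    ```python
    import string

    def stylometric_features(email_text):
        """Toy versions of extended stylometric features for authorship analysis."""
        lines = [l for l in email_text.splitlines() if l.strip()]
        punct = [c for c in email_text if c in string.punctuation]
        last_punct = punct[-1] if punct else ""
        # Fraction of lines that begin with a capital letter.
        starts_capital = sum(l.lstrip()[0].isupper() for l in lines) / max(len(lines), 1)
        words = email_text.split()
        avg_word_len = sum(map(len, words)) / max(len(words), 1)
        return {
            "last_punctuation": last_punct,
            "capitalization_rate": starts_capital,
            "avg_word_length": avg_word_len,
        }

    print(stylometric_features("Hi John,\nplease see the attached file.\nBest, Anna!"))
    ```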

  1. Image-Based Multiresolution Implicit Object Modeling

    Directory of Open Access Journals (Sweden)

    Sarti Augusto

    2002-01-01

    Full Text Available We discuss two image-based 3D modeling methods based on a multiresolution evolution of a volumetric function's level set. In the former method, the role of the level set implosion is to fuse ("sew" and "stitch") together several partial reconstructions (depth maps) into a closed model. In the latter, the level set's implosion is steered directly by the texture mismatch between views. Both solutions share the characteristic of operating in an adaptive multiresolution fashion, in order to boost computational efficiency and robustness.

  2. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  3. Model Based Control of Reefer Container Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær

    This thesis is concerned with the development of model based control for the Star Cool refrigerated container (reefer) with the objective of reducing energy consumption. This project has been carried out under the Danish Industrial PhD programme and has been financed by Lodam together with the Da......

  4. Energetics based spike generation of a single neuron: simulation results and analysis

    Directory of Open Access Journals (Sweden)

    Nagarajan Venkateswaran

    2012-02-01

    Full Text Available Existing current-based models that capture spike activity, though useful in studying the information processing capabilities of neurons, fail to throw light on their internal functioning. It is imperative to develop a model that captures the spike train of a neuron as a function of its intracellular parameters for non-invasive diagnosis of diseased neurons. This is the first article to present such an integrated model, one that quantifies the interdependency between spike activity and intracellular energetics. The generated spike trains from our integrated model will throw greater light on the intracellular energetics than existing current-based models. Now, an abnormality in the spike of a diseased neuron can be linked to, and hence effectively analyzed at, the energetics level. The spectral analysis of the generated spike trains in a time-frequency domain will help identify abnormalities in the internals of a neuron. As a case study, the parameters of our model are tuned for Alzheimer's disease and its resultant spike trains are studied and presented.

  5. Case-Based Reasoning for Human Behavior Modeling

    Science.gov (United States)

    2006-02-16

    Case Based Reasoning for Human Behavior Modeling, CDRL A002 for Contract N00014-03-C-0178, February 16, 2006. The demands of maintaining a useful repository require that reuse be supported for human behavior modeling even if other model construction aids are also available... See also: Results of the Common Human Behavior Representation And Interchange System (CHRIS) Workshop, Fall 2001 Simulation Interoperability Workshop (2001).

  6. EPR-based material modelling of soils considering volume changes

    Science.gov (United States)

    Faramarzi, Asaad; Javadi, Akbar A.; Alani, Amir M.

    2012-11-01

    In this paper an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR), taking into account their volumetric behaviour. EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models. In particular, the capability of the developed EPR models in predicting the volume change behaviour of soils is illustrated. It is also shown that the developed EPR-based material models can be incorporated in finite element (FE) analysis. Two geotechnical examples are presented to verify the developed EPR-based FE model (EPR-FEM). The results of the EPR-FEM are compared with those of a standard FEM where conventional constitutive models are used to describe the material behaviour. The results show that EPR-FEM can be successfully employed to analyse geotechnical engineering problems. The advantages of the proposed EPR models are highlighted.
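
    As a rough illustration of the EPR idea, the sketch below shows only the least-squares stage for one hypothetical candidate term structure, fitted to synthetic triaxial-style data; the genetic search over term structures, which is the other half of EPR, is omitted, and all coefficients and data are invented.

    ```python
    import numpy as np

    # Synthetic triaxial-style data: axial strain eps_a and volumetric strain
    # eps_v mapped to deviator stress q, with small measurement noise.
    rng = np.random.default_rng(0)
    eps_a = rng.uniform(0.0, 0.1, 50)
    eps_v = rng.uniform(-0.02, 0.02, 50)
    q = 800 * eps_a - 3000 * eps_a**2 + 1500 * eps_v + rng.normal(0, 1, 50)

    # One candidate structure proposed by the evolutionary search:
    # q = a0 + a1*eps_a + a2*eps_a^2 + a3*eps_v
    X = np.column_stack([np.ones_like(eps_a), eps_a, eps_a**2, eps_v])
    coeffs, *_ = np.linalg.lstsq(X, q, rcond=None)
    print("fitted coefficients:", np.round(coeffs, 1))
    ```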

  7. Designing Network-based Business Model Ontology

    DEFF Research Database (Denmark)

    Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz

    2015-01-01

    is going to propose an e-business model ontology from the network point of view and its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and what kind of resources they own, as well as their connections and pre-conceptions of connections...... such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network, showing how it works....

  8. Results from Source-Based and Detector-Based Calibrations of a CLARREO Calibration Demonstration System

    Science.gov (United States)

    Angal, Amit; Mccorkel, Joel; Thome, Kurt

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR), which provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of the SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracies. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5% (k=2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence that GLAMR is operating at 5% (k=2) absolute uncertainties. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches. (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  9. Modelling of hydrophone based on a DFB fiber laser

    DEFF Research Database (Denmark)

    Hansen, Lars Voxen; Kullander, F.

    2004-01-01

    This paper deals with modeling of a DFB fiber laser based hydrophone. Both an analytical and a finite element model are developed to describe the acoustic response of the hydrophone. Results from the finite element model are compared to the analytical results. The small dimensions (length 3-6 cm......) and low frequency noise properties of DFB fiber lasers make them useful as hydrophones. Generally, for underwater surveillance applications or similar tasks the acoustic pressure sensitivity of the fiber laser needs to be enhanced by more than two orders of magnitude. Our models predict that this can...

  10. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others]

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model that interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and which is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare it to available observations. We focus on stratospheric dynamics and chemistry relevant to describing the stratospheric ozone layer. The current model version ECHAM4.L39(DLR)/CHEM can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to formerly applied model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  11. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
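
    A minimal sketch of the regularized l2 FIR prediction idea follows: outputs are predicted as a convolution of future inputs with impulse-response coefficients, and the input trajectory minimizes a tracking cost plus an input-rate penalty. The coefficients, horizon, and weighting below are illustrative, and the constraints and disturbance filter of the actual controller are omitted.

    ```python
    import numpy as np

    h = np.array([0.0, 0.4, 0.3, 0.2, 0.1])   # FIR (impulse-response) coefficients
    N = 10                                    # prediction horizon
    r = np.ones(N)                            # unit setpoint trajectory

    # Prediction matrix: y = Phi @ u, with past inputs assumed zero,
    # from the FIR model y_k = sum_i h_i * u_{k-i}.
    Phi = np.zeros((N, N))
    for k in range(N):
        for i in range(1, len(h)):
            if k - i >= 0:
                Phi[k, k - i] = h[i]

    # Regularized l2 objective: ||Phi u - r||^2 + lam * ||D u||^2,
    # where D forms input rates (first entry penalizes u_0 itself, as u_{-1} = 0).
    lam = 0.1
    D = np.eye(N) - np.eye(N, k=-1)
    u = np.linalg.solve(Phi.T @ Phi + lam * D.T @ D, Phi.T @ r)
    print(np.round(u, 2))
    ```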

  12. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives]; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives]; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives]

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The variety of uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  13. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e., the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data from a Terfenol-D actuator. It is clear that the proposed model is much closer to the experimental data than the modified PI model. The relative error is calculated as 12% and less than 1% with the modified PI model and the proposed model, respectively.
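
    The pseudoinverse step can be illustrated with a toy superposition of saturation operators. The operator shape min(x, r), the duplicated threshold that makes the matrix singular, and all numbers are illustrative; the point is only that a Moore-Penrose pseudoinverse still returns a usable least-squares weight vector where a plain inverse fails.

    ```python
    import numpy as np

    x = np.linspace(0, 1, 20)               # input (e.g. applied field)
    thresholds = np.array([0.2, 0.2, 0.6])  # duplicated threshold -> singular matrix

    # Each column is one saturation operator S_r(x) = min(x, r).
    A = np.minimum.outer(x, thresholds)
    # Synthetic "measured" output lying in the column space of A.
    y = 0.5 * np.minimum(x, 0.2) + 1.2 * np.minimum(x, 0.6)

    w = np.linalg.pinv(A) @ y               # weights despite rank deficiency
    print("weights:", np.round(w, 3))
    print("max reconstruction error:", np.abs(A @ w - y).max())
    ```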

  14. Implementing a continuum of care model for older people - results from a Swedish case study

    Directory of Open Access Journals (Sweden)

    Anna Duner

    2011-11-01

    Full Text Available Introduction: There is a need for integrated care and smooth collaboration between care-providing organisations and professions to create a continuum of care for frail older people. However, collaboration between organisations and professions is often problematic. The aim of this study was to examine the process of implementing a new continuum of care model in a complex organisational context, and illuminate some of the challenges involved. The introduced model strived to connect three organisations responsible for delivering health and social care to older people: the regional hospital, primary health care and municipal eldercare.Methods: The actions of the actors involved in the process of implementing the model were understood to be shaped by the actors' understanding, commitment and ability. This article is based on 44 qualitative interviews performed on four occasions with 26 key actors at three organisational levels within these three organisations.Results and conclusions: The results point to the importance of paying regard to the different cultures of the organisations when implementing a new model. The role of upper management emerged as very important. Furthermore, to be accepted, the model has to be experienced as effectively dealing with real problems in the everyday practice of the actors in the organisations, from the bottom to the top.

  15. Knowledge-Based Environmental Context Modeling

    Science.gov (United States)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient

  16. Turbulence modeling with fractional derivatives: Derivation from first principles and initial results

    Science.gov (United States)

    Epps, Brenden; Cushman-Roisin, Benoit

    2017-11-01

    Fluid turbulence is an outstanding unsolved problem in classical physics, despite 120+ years of sustained effort. Given this history, we assert that a new mathematical framework is needed to make a transformative breakthrough. This talk offers one such framework, based upon kinetic theory tied to the statistics of turbulent transport. Starting from the Boltzmann equation and "Lévy α-stable distributions", we derive a turbulence model that expresses the turbulent stresses in the form of a fractional derivative, where the fractional order is tied to the transport behavior of the flow. Initial results are presented herein, for the cases of Couette-Poiseuille flow and 2D boundary layers. Among other results, our model is able to reproduce the logarithmic Law of the Wall in shear turbulence.
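
    As background, a fractional derivative of order α can be evaluated numerically with the Grünwald-Letnikov expansion. The sketch below is a generic numerical illustration, not the authors' closed-form turbulence model; it computes the half-derivative of f(x) = x, whose exact value at x = 1 is 2/sqrt(pi) ≈ 1.128.

    ```python
    import numpy as np
    from scipy.special import binom

    def gl_fractional_derivative(f, alpha, h):
        """Grunwald-Letnikov approximation of the order-alpha derivative
        of samples f on a uniform grid with spacing h."""
        n = len(f)
        j = np.arange(n)
        w = (-1.0) ** j * binom(alpha, j)            # GL weights
        d = np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(n)])
        return d / h ** alpha

    x = np.linspace(0, 1, 101)
    half_deriv = gl_fractional_derivative(x, 0.5, x[1] - x[0])
    print(half_deriv[-1])   # approx 2/sqrt(pi) = 1.128 at x = 1
    ```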

  17. Solar Deployment System (SolarDS) Model: Documentation and Sample Results

    Energy Technology Data Exchange (ETDEWEB)

    Denholm, P.; Drury, E.; Margolis, R.

    2009-09-01

    The Solar Deployment System (SolarDS) model is a bottom-up, market penetration model that simulates the potential adoption of photovoltaics (PV) on residential and commercial rooftops in the continental United States through 2030. NREL developed SolarDS to examine the market competitiveness of PV based on regional solar resources, capital costs, electricity prices, utility rate structures, and federal and local incentives. The model uses the projected financial performance of PV systems to simulate PV adoption for building types and regions then aggregates adoption to state and national levels. The main components of SolarDS include a PV performance simulator, a PV annual revenue calculator, a PV financial performance calculator, a PV market share calculator, and a regional aggregator. The model simulates a variety of installed PV capacity for a range of user-specified input parameters. PV market penetration levels from 15 to 193 GW by 2030 were simulated in preliminary model runs. SolarDS results are primarily driven by three model assumptions: (1) future PV cost reductions, (2) the maximum PV market share assumed for systems with given financial performance, and (3) PV financing parameters and policy-driven assumptions, such as the possible future cost of carbon emissions.

  18. Stochastic Wake Modelling Based on POD Analysis

    Directory of Open Access Journals (Sweden)

    David Bastine

    2018-03-01

    Full Text Available In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD, three different stochastic models for the weighting coefficients of the POD modes are deduced resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding to our model a homogeneous turbulent field. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD calculations or elaborated experimental investigations. These numerically efficient models provide the added value of possible long-term studies. Depending on the aspects of interest, different minimalized models may be obtained.
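
    A compact sketch of the pipeline described above (POD of snapshot data followed by independent stochastic processes for the mode coefficients) is given below on synthetic stand-in data. AR(1) processes are used here as the simplest choice of independent stationary process; the paper's stochastic models and the LES fields are of course richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic snapshot matrix standing in for LES wake data:
    # rows = spatial points, columns = time snapshots.
    snapshots = rng.standard_normal((500, 200))
    mean_field = snapshots.mean(axis=1, keepdims=True)

    # POD via singular value decomposition of the fluctuations.
    U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
    n_modes = 6                                  # ~6 modes capture large-scale load dynamics
    modes = U[:, :n_modes]
    coeffs = (np.diag(s[:n_modes]) @ Vt[:n_modes]).T  # weighting-coefficient time series

    # Fit an independent AR(1) process to each coefficient and synthesize
    # a new realization of the coefficient time series.
    T = coeffs.shape[0]
    synthetic = np.zeros_like(coeffs)
    for j in range(n_modes):
        a = coeffs[:, j]
        rho = np.corrcoef(a[:-1], a[1:])[0, 1]   # lag-1 autocorrelation
        sigma = a.std() * np.sqrt(1.0 - rho**2)  # innovation std to match variance
        synthetic[0, j] = a.std() * rng.standard_normal()
        for t in range(1, T):
            synthetic[t, j] = rho * synthetic[t - 1, j] + sigma * rng.standard_normal()

    wake_realization = mean_field + modes @ synthetic.T
    print(wake_realization.shape)                # (500, 200)
    ```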

  19. Prototype-based models in machine learning

    NARCIS (Netherlands)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of

  20. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  1. Modelling Web-Based Instructional Systems

    NARCIS (Netherlands)

    Retalis, Symeon; Avgeriou, Paris

    2002-01-01

    The size and complexity of modern instructional systems, which are based on the World Wide Web, bring about great intricacy in their crafting, as there is not enough knowledge or experience in this field. This imposes the use of new instructional design models in order to achieve risk-mitigation,

  2. Ligand based pharmacophore modelling of anticancer histone ...

    African Journals Online (AJOL)

    like myocardium damage and bone marrow depression, even leading to cell death, have been observed in the treatment of cancer cells using HDAC inhibitors. The discovery and development of type-specific HDAC inhibitors is of both research and clinical interest. Ligand based pharmacophore modelling is playing a key ...

  3. Model-based auditing using REA

    NARCIS (Netherlands)

    Weigand, H.; Elsas, P.

    2012-01-01

    The recent financial crisis has renewed interest in the value of the owner-ordered auditing tradition that starts from society's long-term interest rather than management interest. This tradition uses a model-based auditing approach in which control requirements are derived in a principled way. A

  4. Agent Based Modeling as an Educational Tool

    Science.gov (United States)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. (Accompanying figure captions: a simulation using California GIS data; a simulation of high school student lunch popularity using an aerial photograph over a terrain value map.)

  5. Model-Based Development of Control Systems for Forestry Cranes

    Directory of Open Access Journals (Sweden)

    Pedro La Hera

    2015-01-01

    Full Text Available Model-based methods are used in industry for prototyping concepts based on mathematical models. With our forest industry partners, we have established a model-based workflow for rapid development of motion control systems for forestry cranes. Applying this working method, we can verify control algorithms, both theoretically and practically. This paper is an example of this workflow and presents four topics related to the application of nonlinear control theory. The first topic presents the system of differential equations describing the motion dynamics. The second topic presents nonlinear control laws formulated according to sliding mode control theory. The third topic presents a procedure for model calibration and control tuning that are a prerequisite to realize experimental tests. The fourth topic presents the results of tests performed on an experimental crane specifically equipped for these tasks. Results of these studies show the advantages and disadvantages of these control algorithms, and they highlight their performance in terms of robustness and smoothness.
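
    To give a flavour of the sliding mode control laws mentioned above, here is a minimal regulator for a double-integrator joint model. This stands in for one crane link after feedback linearisation; the surface slope, switching gain, and boundary layer are arbitrary illustrative values, not the calibrated crane parameters.

    ```python
    import numpy as np

    # Illustrative gains for a double-integrator model x'' = u.
    lam, K, phi = 2.0, 5.0, 0.05   # surface slope, switching gain, boundary layer
    dt = 0.001
    x, v = 1.0, 0.0                # initial position error and velocity

    for _ in range(5000):          # simulate 5 s
        s = v + lam * x                                 # sliding surface s = e_dot + lam*e
        u = -lam * v - K * np.clip(s / phi, -1.0, 1.0)  # equivalent + saturated switching term
        v += u * dt
        x += v * dt

    print("final position error:", round(x, 4))
    ```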

  6. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Prediction of shallow landslide susceptible locations is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness of fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
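
    The pixel-by-pixel model verification step can be sketched as follows. The indices shown are a common illustrative subset (the package computes eight), and the binary maps are randomly generated stand-ins for model output and landslide inventory.

    ```python
    import numpy as np

    def gof_indices(model_map, observed_map):
        """A few pixel-by-pixel goodness-of-fit indices for binary maps
        (1 = unstable/landslide, 0 = stable)."""
        m, o = model_map.ravel(), observed_map.ravel()
        tp = np.sum((m == 1) & (o == 1))
        tn = np.sum((m == 0) & (o == 0))
        fp = np.sum((m == 1) & (o == 0))
        fn = np.sum((m == 0) & (o == 1))
        return {
            "accuracy": (tp + tn) / m.size,
            "true_positive_rate": tp / (tp + fn),
            "false_positive_rate": fp / (fp + tn),
            "critical_success_index": tp / (tp + fp + fn),
        }

    rng = np.random.default_rng(2)
    observed = rng.integers(0, 2, (100, 100))
    model = observed.copy()
    model[rng.random((100, 100)) < 0.1] ^= 1   # flip 10% of pixels as "model error"
    print(gof_indices(model, observed))
    ```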

  7. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek

    2016-04-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg-Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  8. Multiagent-Based Model For ESCM

    Directory of Open Access Journals (Sweden)

    Delia MARINCAS

    2011-01-01

    Full Text Available Web based applications for Supply Chain Management (SCM) are now a necessity for every company in order to meet the increasing customer demands, to face the global competition and to make profit. A multiagent-based approach is appropriate for eSCM because it shows many of the characteristics a SCM system should have. For this reason, we have proposed a multiagent-based eSCM model which configures a virtual SC and automates the SC activities: selling, purchasing, manufacturing, planning, inventory, etc. This model will allow a better coordination of the supply chain network and will increase the effectiveness of the Web and intelligent technologies employed in eSCM software.

  9. Modeling acquaintance networks based on balance theory

    Directory of Open Access Journals (Sweden)

    Vukašinović Vida

    2014-09-01

    Full Text Available An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that involves well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, the future interactions are more likely to happen between the actors that are connected with stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turned out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network, because the properties of the IB model more closely matched those of the e-mail URV network than the other models.

  10. Model-Based Knowing: How Do Students Ground Their Understanding About Climate Systems in Agent-Based Computer Models?

    Science.gov (United States)

    Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.

    2017-12-01

    This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.

  11. Cost Benefit Analysis Modeling Tool for Electric vs. ICE Airport Ground Support Equipment – Development and Results

    Energy Technology Data Exchange (ETDEWEB)

    James Francfort; Kevin Morrow; Dimitri Hochard

    2007-02-01

    This report documents efforts to develop a computer tool for modeling the economic payback for comparative airport ground support equipment (GSE) that are propelled by either electric motors or gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains modeling assumptions, methodology, a user’s manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.

  12. Mechanics model for actin-based motility.

    Science.gov (United States)

    Lin, Yuan

    2009-02-01

    We present here a mechanics model for the force generation by actin polymerization. The possible adhesions between the actin filaments and the load surface, as well as the nucleation and capping of filament tips, are included in this model on top of the well-known elastic Brownian ratchet formulation. A closed form solution is provided from which the force-velocity relationship, summarizing the mechanics of polymerization, can be drawn. Model predictions on the velocity of moving beads driven by actin polymerization are consistent with experimental observations. This model also seems capable of explaining the enhanced actin-based motility of Listeria monocytogenes and beads in the presence of vasodilator-stimulated phosphoprotein, as observed in recent experiments.
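
    For orientation, the force-velocity relationship at the heart of Brownian-ratchet formulations is often written v(F) = v0 * exp(-F * delta / kBT). The sketch below evaluates this classic textbook form with order-of-magnitude parameter values; it is not the enhanced closed-form solution derived in the paper.

    ```python
    import numpy as np

    kBT = 4.1e-21    # thermal energy at room temperature, J
    delta = 2.7e-9   # gap the load must open per added monomer, m (illustrative)
    v0 = 0.5e-6      # load-free polymerization velocity, m/s (illustrative)

    def velocity(F):
        """Classic Brownian-ratchet force-velocity relation."""
        return v0 * np.exp(-F * delta / kBT)

    for F in [0.0, 1e-12, 5e-12, 10e-12]:   # loads in the pN range
        print(f"F = {F*1e12:4.1f} pN -> v = {velocity(F)*1e9:7.2f} nm/s")
    ```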

  13. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  14. Evaluation of the WAMME model surface fluxes using results from the AMMA land-surface model intercomparison project

    Energy Technology Data Exchange (ETDEWEB)

    Boone, Aaron Anthony [GAME-CNRM, Meteo-France, Toulouse (France)]; Poccard-Leclercq, Isabelle [Universite de Nantes, LETG-Geolittomer, Nantes (France)]; Xue, Yongkang; Feng, Jinming [University of California at Los Angeles, Los Angeles, CA (United States)]; Rosnay, Patricia de [European Centre for Medium Range Weather Forecasting, Reading (United Kingdom)]

    2010-07-15

    The West African monsoon (WAM) circulation and intensity have been shown to be influenced by the land surface in numerous numerical studies using regional scale and global scale atmospheric climate models (RCMs and GCMs, respectively) over the last several decades. The atmosphere-land surface interactions are modulated by the magnitude of the north-south gradient of the low level moist static energy, which is highly correlated with the steep latitudinal gradients of the vegetation characteristics and coverage, land use, and soil properties over this zone. The African Monsoon Multidisciplinary Analysis (AMMA) has organised comprehensive activities in data collection and modelling to further investigate the significance of land-atmosphere feedbacks. Surface energy fluxes simulated by an ensemble of land surface models from the AMMA Land-surface Model Intercomparison Project (ALMIP) have been used as a proxy for the best estimate of the "real world" values in order to evaluate GCM and RCM simulations under the auspices of the West African Monsoon Modelling Experiment (WAMME) project, since such large-scale observations do not exist. The ALMIP models have been forced in off-line mode using forcing based on a mixture of satellite, observational, and numerical weather prediction data. The ALMIP models were found to agree well over the region where land-atmosphere coupling is deemed to be most important (notably the Sahel), with a high signal-to-noise ratio (generally from 0.7 to 0.9) in the ensemble and an inter-model coefficient of variation between 5 and 15%. Most of the WAMME models simulated spatially averaged net radiation values over West Africa which were consistent with the ALMIP estimates; however, the partitioning of this energy between sensible and latent heat fluxes was significantly different: WAMME models tended to simulate larger (by nearly a factor of two) monthly latent heat fluxes than ALMIP. This results from a positive precipitation

  15. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

    Full Text Available This study presents a review of biodegradability modeling efforts including a detailed assessment of two models developed using an artificial intelligence based methodology. Validation results for these models using an independent, quality reviewed database, demonstrate that the models perform well when compared to another commonly used biodegradability model, against the same data. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data for biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include such things as surface interface impacts on biodegradability for example.

  16. Assessment of soil-structure interaction practice based on synthesized results from Lotung experiment - earthquake response

    International Nuclear Information System (INIS)

    Hadjian, A.H.; Tseng, W.S.; Tang, Y.K.; Tang, H.T.; Stepp, J.C.

    1991-01-01

    On the assumption that the foundation can be appropriately modeled, it would be difficult to distinguish between the computational capabilities of the SASSI, CLASSI and SUPERALUSH/CLASSI methods of SSI analysis. Given the appropriate model, all three methodologies would produce very similar valid results. However, both CLASSI (Bechtel) and Soil-Spring methods should be used cautiously within their known limitations. The use of FLUSH should be limited to essentially 2D problems. More than the computational methods, the differences in the seismic response results obtained are due to the modeling of the soil-structure system and the characterization of the input motions. A number of insights have been obtained with respect to the validity of SSI analysis methodologies for earthquake response. Among these are the following: vertical wave propagation assumption in performing SSI is adequate to describe the wave field; equivalent linear analysis of soil response for SSI analysis, such as performed by the SHAKE code, provides acceptable results; a significant but non-permanent degradation of soil modulus occurs during earthquakes; the development of soil stiffness degradation and damping curves as a function of strain, based on geophysical and laboratory tests, requires improvement to reduce variability and uncertainty; backfill stiffness plays an important role in determining impedance functions and possibly input motions; scattering of ground motion due to embedment is an important element in performing SSI analysis. (author)

  17. Preliminary Results from Electric Arc Furnace Off-Gas Enthalpy Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nimbalkar, Sachin U [ORNL; Thekdi, Arvind [E3M Inc; Keiser, James R [ORNL; Storey, John Morse [ORNL

    2015-01-01

    This article describes electric arc furnace (EAF) off-gas enthalpy models developed at Oak Ridge National Laboratory (ORNL) to calculate overall heat availability (sensible and chemical enthalpy) and recoverable heat values (steam or power generation potential) for existing EAF operations and to test ORNL's new EAF waste heat recovery (WHR) concepts: a Regenerative Drop-out Box System and a Fluidized Bed System. The two EAF off-gas enthalpy models described in this paper are: (1) an Overall Waste Heat Recovery Model, which calculates total heat availability in off-gases of existing EAF operations; and (2) a Regenerative Drop-out Box System Model, in which hot EAF off-gases alternately pass through one of two refractory heat sinks that store heat and then transfer it to another gaseous medium. These models calculate the sensible and chemical enthalpy of EAF off-gases based on the off-gas chemical composition, temperature, and mass flow rate during tap-to-tap time, and variations in those parameters in terms of actual values over time. The models provide heat transfer analysis for the aforementioned concepts to confirm the overall system and major component sizing (preliminary) to assess the practicality of the systems. Real-time EAF off-gas composition (e.g., CO, CO2, H2, and H2O), volume flow, and temperature data from one EAF operation were used to test the validity and accuracy of the modeling work. The EAF off-gas data were used to calculate the sensible and chemical enthalpy of the EAF off-gases to generate steam and power. The article provides detailed results from the modeling work that are important to the success of ORNL's EAF WHR project. The EAF WHR project aims to develop and test new concepts and materials that allow cost-effective recovery of sensible and chemical heat from high-temperature gases discharged from EAFs.
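
    As a rough illustration of the bookkeeping such enthalpy models perform, the sketch below computes the sensible and chemical enthalpy rate of an off-gas stream from its composition, temperature, and mass flow. The heat capacities and heating values are approximate constants chosen for illustration; they are not the values used in the ORNL models.

    ```python
    # Illustrative sketch (not ORNL's model): sensible + chemical enthalpy rate
    # of an off-gas stream. cp and LHV values are rough, constant approximations.
    CP = {"CO": 1.10, "CO2": 1.23, "H2": 14.8, "H2O": 2.25, "N2": 1.11}  # kJ/(kg*K), approx.
    LHV = {"CO": 10_100, "H2": 120_000}                                  # kJ/kg, approx.

    def offgas_enthalpy_rate(mass_frac, m_dot, T_gas, T_ref=298.15):
        """Return (sensible_kW, chemical_kW) for one sampled time step."""
        sensible = m_dot * sum(y * CP[sp] * (T_gas - T_ref) for sp, y in mass_frac.items())
        chemical = m_dot * sum(y * LHV.get(sp, 0.0) for sp, y in mass_frac.items())
        return sensible, chemical

    # Example: 10 kg/s of off-gas at 1500 K (hypothetical composition)
    print(offgas_enthalpy_rate({"CO": 0.15, "CO2": 0.25, "H2": 0.02, "H2O": 0.08, "N2": 0.50},
                               m_dot=10.0, T_gas=1500.0))
    ```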

  18. 3D Temperature Distribution Model Based on Thermal Infrared Image

    Directory of Open Access Journals (Sweden)

    Tong Jia

    2017-01-01

    Full Text Available This paper aims to study the construction of a 3D temperature distribution reconstruction system based on binocular vision technology. A traditional calibration method cannot be used directly, because the thermal infrared camera is sensitive only to temperature; the thermal infrared camera is therefore calibrated separately. The belief propagation algorithm is also investigated, and its smoothness model is improved for stereo matching to reduce the mismatching rate. Finally, the 3D temperature distribution model is built based on the matching of the 3D point cloud with 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.
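
    The final mapping step can be pictured as a standard pinhole projection of the 3D point cloud into the separately calibrated thermal image. The sketch below assumes given intrinsics K and extrinsics R, t; it is a schematic of the idea, not the authors' implementation.

    ```python
    import numpy as np

    # Hedged sketch: assign a temperature to each 3D point by projecting it into
    # the thermal image; K, R, t and temp_image are assumed inputs.
    def colorize_point_cloud(points_3d, temp_image, K, R, t):
        """points_3d: (N,3) in world frame; returns per-point temperatures (NaN if off-image)."""
        cam = (R @ points_3d.T + t.reshape(3, 1)).T        # world -> camera frame
        uvw = (K @ cam.T).T                                # apply intrinsics
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)    # perspective divide
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = temp_image.shape
        temps = np.full(len(points_3d), np.nan)
        ok = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        temps[ok] = temp_image[v[ok], u[ok]]
        return temps
    ```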

  19. A comprehensive gaze stabilization controller based on cerebellar internal models

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Falotico, Egidio; Tolu, Silvia

    2017-01-01

    based on the coordination of VCR and VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot...... and on the iCub simulator, validating the robustness of the proposed control method. The first set of experiments focused on the controller response to a set of disturbance frequencies along the vertical plane. The second shows the performances of the system under three-dimensional disturbances. The last set...

  20. Targeted screening of individuals at high risk for pancreatic cancer: results of a simulation model.

    Science.gov (United States)

    Pandharipande, Pari V; Heberle, Curtis; Dowling, Emily C; Kong, Chung Yin; Tramontano, Angela; Perzan, Katherine E; Brugge, William; Hur, Chin

    2015-04-01

    To identify when, from the standpoint of relative risk, magnetic resonance (MR) imaging-based screening may be effective in patients with a known or suspected genetic predisposition to pancreatic cancer. The authors developed a Markov model of pancreatic ductal adenocarcinoma (PDAC). The model was calibrated to National Cancer Institute Surveillance, Epidemiology, and End Results registry data and informed by the literature. A hypothetical screening strategy was evaluated in which all population individuals underwent one-time MR imaging screening at age 50 years. Screening outcomes for individuals with an average risk for PDAC ("base case") were compared with those for individuals at an increased risk to assess for differential benefits in populations with a known or suspected genetic predisposition. Effects of varying key inputs, including MR imaging performance, surgical mortality, and screening age, were evaluated with a sensitivity analysis. In the base case, screening resulted in a small number of cancer deaths averted (39 of 100 000 men, 38 of 100 000 women) and a net decrease in life expectancy (-3 days for men, -4 days for women), which was driven by unnecessary pancreatic surgeries associated with false-positive results. Life expectancy gains were achieved if an individual's risk for PDAC exceeded 2.4 (men) or 2.7 (women) times that of the general population. When relative risk increased further, for example to 30 times that of the general population, averted cancer deaths and life expectancy gains increased substantially (1219 of 100 000 men, life expectancy gain: 65 days; 1204 of 100 000 women, life expectancy gain: 71 days). In addition, results were sensitive to MR imaging specificity and the surgical mortality rate. Although PDAC screening with MR imaging for the entire population is not effective, individuals with even modestly increased risk may benefit. © RSNA, 2014. Online supplemental material is available for this article.
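
    The cohort bookkeeping behind such a Markov model can be sketched in a few lines. The states and transition probabilities below are hypothetical placeholders, not the calibrated values of the published model.

    ```python
    import numpy as np

    # Minimal Markov-cohort sketch (hypothetical rates): states = [healthy, cancer, dead];
    # a relative-risk multiplier scales cancer incidence, as in the screening analysis.
    def life_years(rel_risk, p_inc=0.0001, p_cancer_death=0.3, p_other_death=0.01, years=50):
        state = np.array([1.0, 0.0, 0.0])          # cohort starts healthy at age 50
        total = 0.0
        for _ in range(years):
            P = np.array([
                [1 - rel_risk * p_inc - p_other_death, rel_risk * p_inc, p_other_death],
                [0.0, 1 - p_cancer_death - p_other_death, p_cancer_death + p_other_death],
                [0.0, 0.0, 1.0],
            ])
            state = state @ P                      # one annual cycle
            total += state[0] + state[1]           # person-years lived this cycle
        return total

    # Screening benefit would enter by moving some incident cancers to a curable state;
    # here we only show the cohort arithmetic such models are built on.
    print(life_years(rel_risk=1.0), life_years(rel_risk=30.0))
    ```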

  1. Comparison of TS and ANN Models with the Results of Emission Scenarios in Rainfall Prediction

    Directory of Open Access Journals (Sweden)

    S. Babaei Hessar

    2016-02-01

    the best performance. A multilayer perceptron with 10 neurons in the hidden layer and an output layer of five neurons had the lowest MSE and the highest correlation coefficient in modeling the values of annual precipitation, so the MLP was determined to be the best neural network structure for rainfall prediction. According to the results, precipitation predicted by the ANN model was very close to the results of the A2 and B1 scenarios, whereas TS differed significantly from these scenarios. Average rainfall predicted by the A2 and B1 scenarios differs more at the Urmia station than at the other stations. Based on the B1 scenario, precipitation will increase by 11 percent over the next two decades; it will decrease by 10.7 percent according to the A2 emissions scenario. According to the ANN models and the A2 and B1 scenarios, rainfall will increase at the Tabriz and Khoy stations. However, according to the TS model, rainfall will decline by 5.94 and 3.63 percent at these two stations, respectively. Conclusion: Global warming and climate change are expected to have adverse effects on groundwater and surface water resources. Different models are used for simulating these effects, but the conformity of these models with the results of climate scenarios is an issue that has not been addressed. In the present research, the agreement of the TS model, the ANN model, and the climate change scenarios was investigated. Results show that under the emissions scenarios, precipitation will increase at the Tabriz and Khoy stations during the next two decades; for the Urmia station, the B1 and A2 scenarios predict an 11 percent increase and a 10.5 percent decline, respectively. The investigations of Roshan et al. (4) and Golmohammad et al. (7) show an increasing trend in the rainfall rate, confirming the results of this study. According to the results, the performance of the ANN model is better than that of the TS model for rainfall prediction, and its result is similar to the climate change scenarios. Similar results have been reported by Wang et
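
    For readers unfamiliar with the reported network shape, the sketch below fits a one-hidden-layer perceptron with 10 neurons on synthetic stand-in data. The predictors and the scikit-learn setup are assumptions for illustration; the paper's network also had five output neurons.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for station climate predictors and annual precipitation.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                                   # hypothetical predictors
    y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=200)    # precipitation proxy

    # One hidden layer of 10 neurons, matching the architecture the abstract reports.
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[:150], y[:150])
    mse = np.mean((model.predict(X[150:]) - y[150:]) ** 2)
    print(f"test MSE: {mse:.4f}")
    ```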

  2. The Internationalisation Cube. A Tentative Model for the Study of Organisational Designs and the Results of Internationalisation in Higher Education.

    Science.gov (United States)

    van Dijk, Hans; Meijer, Kees

    1997-01-01

    Based on results of a Dutch study concerning internationalization of higher education, a model for analyzing the internal institutional process of decision making for, organization of, and implementation of international activities is described. The model positions institutions according to three dimensions: policy (importance attached to…

  3. SLS Model Based Design: A Navigation Perspective

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  4. Which Dimensions of Patient-Centeredness Matter? - Results of a Web-Based Expert Delphi Survey.

    Directory of Open Access Journals (Sweden)

    Jördis M Zill

    Full Text Available Present models and definitions of patient-centeredness revealed a lack of conceptual clarity. Based on a prior systematic literature review, we developed an integrative model with 15 dimensions of patient-centeredness. The aims of this study were to (1) validate and (2) prioritize these dimensions. A two-round web-based Delphi study was conducted. 297 international experts were invited to participate. In round one they were asked to (1) give an individual rating on a nine-point scale on relevance and clarity of the dimensions, (2) add missing dimensions, and (3) prioritize the dimensions. In round two, experts received feedback about the results of round one and were asked to reflect on and re-rate their own results. The cut-off for the validation of a dimension was a median < 7 on one of the criteria. 105 experts participated in round one and 71 in round two. In round one, one new dimension was suggested and included for discussion in round two. In round two, this dimension did not reach sufficient ratings to be included in the model. Eleven dimensions reached a median ≥ 7 on both criteria (relevance and clarity). Four dimensions had a median < 7 on one or both criteria. The five dimensions rated as most important were: patient as a unique person, patient involvement in care, patient information, clinician-patient communication, and patient empowerment. 11 out of the 15 dimensions have been validated through experts' ratings. Further research on the four dimensions that received insufficient ratings is recommended. The priority order of the dimensions can help researchers and clinicians to focus on the most important dimensions of patient-centeredness. Overall, the model provides a useful framework that can be used in the development of measures, interventions, and medical education curricula, as well as the adoption of a new perspective in health policy.

  5. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords : goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  6. Model transformation based information system modernization

    Directory of Open Access Journals (Sweden)

    Olegas Vasilecas

    2013-03-01

    Full Text Available Information systems become dated increasingly fast because of the rapidly changing business environment. Usually, small changes are not sufficient to adapt complex legacy information systems to changing business needs; new functionality should be installed with the requirement of putting business data at the smallest possible risk. This paper analyses information systems modernization problems and proposes a method for information system modernization. The method involves transformation of program code into an abstract syntax tree metamodel (ASTM) and a model-based transformation from ASTM into a knowledge discovery model (KDM). The method is validated on an example for the SQL language.
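
    The core idea, code to syntax tree to coarser knowledge model, can be illustrated on a small scale. The sketch below uses Python's own ast module rather than the ASTM/KDM tooling of the paper, and the resulting "knowledge model" dictionary is a made-up stand-in for a KDM instance.

    ```python
    import ast

    # Toy sketch of the transformation chain: parse source into a syntax tree,
    # then lift the tree into a coarser model of callable units and their calls.
    def code_to_knowledge_model(source: str) -> dict:
        tree = ast.parse(source)
        model = {"functions": {}}
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                calls = [n.func.id for n in ast.walk(node)
                         if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
                model["functions"][node.name] = {"calls": calls}
        return model

    print(code_to_knowledge_model("def f():\n    g()\n\ndef g():\n    pass\n"))
    ```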

  7. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the statistical analysis of results of computer-based testing for the evaluation of educational achievements of students. The issues are relevant because computer-based testing in Russian universities has become an important method for evaluating the educational achievements of students and the quality of the modern educational process. The use of modern methods and programs for statistical analysis of computer-based testing results and assessment of the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, and generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience, and some results of its usage by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.
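
    Two of the classic item statistics such an analysis produces can be computed directly from a 0/1 answer matrix, as sketched below. This is a generic illustration, not the StatInfo implementation.

    ```python
    import numpy as np

    # Item difficulty and point-biserial discrimination from an answer matrix
    # (rows = students, columns = items; 1 = correct).
    def item_stats(answers):
        answers = np.asarray(answers, dtype=float)
        difficulty = answers.mean(axis=0)                  # fraction answering correctly
        totals = answers.sum(axis=1)
        # discrimination: correlation of each item with the rest-of-test score
        discrimination = [np.corrcoef(answers[:, j], totals - answers[:, j])[0, 1]
                          for j in range(answers.shape[1])]
        return difficulty, np.array(discrimination)

    A = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 0, 0]]
    diff, disc = item_stats(A)
    print(diff, disc)
    ```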

  8. Image-Based Visual Servoing for Manipulation Via Predictive Control – A Survey of Some Results

    Directory of Open Access Journals (Sweden)

    Corneliu Lazăr

    2016-09-01

    Full Text Available In this paper, a review of predictive control algorithms developed by the authors for visual servoing of robots in manipulation applications is presented. Using these algorithms, a predictive control framework was created for image-based visual servoing (IBVS) systems. First, considering point features, in 2008 we introduced an internal model predictor based on the interaction matrix. Second, distinct from the set-point trajectory, in 2011 we introduced the reference trajectory, using the concept from predictive control. Finally, by minimizing a sum of squares of predicted errors, the optimal input trajectory was obtained. The new concept of predictive control for IBVS systems was employed to develop a cascade structure for motion control of robot arms. Simulation results obtained with a simulator for predictive IBVS systems are also presented.
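
    The prediction step of such an internal model predictor can be sketched for a single point feature: the interaction matrix L maps the camera velocity to the image-feature velocity, s_dot = L v. The matrix below is the standard point-feature interaction matrix; the numerical values are illustrative, and a predictive controller would roll this prediction forward over a horizon and minimize the sum of squared predicted errors.

    ```python
    import numpy as np

    # One-step feature prediction via the point-feature interaction matrix.
    def predict_features(s, Z, v, dt):
        """s = (x, y): normalized image coordinates; Z: depth; v: 6-DOF camera velocity."""
        x, y = s
        L = np.array([
            [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
            [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
        ])
        return s + dt * (L @ v)

    s_next = predict_features(np.array([0.1, -0.05]), Z=1.5,
                              v=np.array([0.01, 0, 0.02, 0, 0, 0.1]), dt=0.04)
    print(s_next)
    ```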

  9. Complexity and agent-based modelling in urban research

    DEFF Research Database (Denmark)

    Fertner, Christian

    Urbanisation processes are the result of a broad variety of actors or actor groups and their behaviour and decisions, based on different experiences, knowledge, resources, values etc. The decisions made are often on a micro/individual level but result in macro/collective behaviour. In urban research...... influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers in simulation made new approaches in modelling possible, as for example agent-based modelling (ABM), dealing...... is (still) seen very critically regarding its usefulness and explanatory power....

  10. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation......We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning...... a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed...

  11. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    result. For large-city reconstruction, CityEngine is a good product. Agisoft PhotoScan creates a much better 3D model, with good texture quality and automatic processing. This image-based comparative study is therefore useful for the 3D city user community, and it provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using image-based techniques.

  12. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age
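
    Both model families elaborate on the same epidemic core. A minimal chain-binomial stochastic SIR, sketched below with illustrative parameters, shows the quantities on which the two approaches are compared: incidence, peak timing, and epidemic size as a function of R0.

    ```python
    import numpy as np

    # Hedged sketch: chain-binomial stochastic SIR (parameters illustrative, not
    # the paper's disease parameterization).
    def sir(N=60_000_000, I0=10, R0=1.5, gamma=1 / 2.5, days=300, seed=1):
        rng = np.random.default_rng(seed)
        beta = R0 * gamma
        S, I, R = N - I0, I0, 0
        history = []
        for _ in range(days):
            new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))   # new infections today
            new_rec = rng.binomial(I, 1 - np.exp(-gamma))          # new recoveries today
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
            history.append(I)
        return np.array(history)

    print("peak day:", sir().argmax())
    ```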

  13. Numerical modeling of parallel-plate based AMR

    DEFF Research Database (Denmark)

    In this work we present an improved 2-dimensional numerical model of a parallel-plate based AMR. The model includes heat transfer in fluid and magnetocaloric domains respectively. The domains are coupled via inner thermal boundaries. The MCE is modeled either as an instantaneous change between high...... comparison with experiment. This is used as a firm basis for predicting and optimizing performance of a large variety of regenerator configurations in order to study and learn the trends, tendencies and even absolute values of temperature span and cooling powers for the optimal (and buildable) designs...... in the direction not resolved through a realistic description of the thermal resistance between localized points in the bed and the ambient. The results show that the additions to the model place numerical modeling of AMR very close to the corresponding experimental results. Thus, the model is verified by direct...

  14. Development of a treatment planning system for BNCT based on positron emission tomography data: preliminary results

    Science.gov (United States)

    Cerullo, N.; Daquino, G. G.; Muzi, L.; Esposito, J.

    2004-01-01

    Present standard treatment planning (TP) for glioblastoma multiforme (GBM - a kind of brain tumor), used in all boron neutron capture therapy (BNCT) trials, requires the construction (based on CT and/or MRI images) of a 3D model of the patient head, in which several regions, corresponding to different anatomical structures, are identified. The model is then employed by a computer code to simulate radiation transport in human tissues. The assumption is always made that considering a single value of boron concentration for each specific region will not lead to significant errors in dose computation. The concentration values are estimated "indirectly", on the basis of previous experience and blood sample analysis. This paper describes an original approach, with the introduction of data on the in vivo boron distribution, acquired by a positron emission tomography (PET) scan after labeling the BPA (borono-phenylalanine) with the positron emitter 18F. The feasibility of this approach was first tested with good results using the code CARONTE. Now a complete TPS is under development. The main features of the first version of this code are described and the results of a preliminary study are presented. Significant differences in dose computation arise when the two different approaches ("standard" and "PET-based") are applied to the TP of the same GBM case.
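
    The difference between the two approaches comes down to how the boron concentration enters the voxel-wise dose: one value per region versus a per-voxel PET-derived map. The sketch below illustrates this with made-up numbers and an illustrative dose factor; it is not the CARONTE or TPS dose engine.

    ```python
    import numpy as np

    # Boron dose per voxel ~ (thermal flux) x (boron concentration); k is an
    # illustrative kerma-like factor, not a physically calibrated constant.
    def boron_dose(flux, boron_ppm, k=1e-12):
        return k * flux * boron_ppm

    flux = np.random.default_rng(0).uniform(1e8, 1e9, size=(4, 4))   # flux map (a.u.)
    region_value = 15.0                                              # ppm, one value per region
    pet_map = region_value * (1 + 0.3 * np.random.default_rng(1).normal(size=(4, 4)))

    print("uniform  :", boron_dose(flux, region_value).sum())
    print("PET-based:", boron_dose(flux, pet_map).sum())
    ```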

  15. A model of synthesis based on functional reasoning

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp; Zavbi, R.

    2002-01-01

    In this paper we propose a model of how to carry out functional reasoning. The model is based on the domain theory, and it links the stepwise determination of the artefact's characteristics during the design process to different ways of carrying out functional reasoning found in the literature.... The model proposes a set of mental objects and a number of ways of carrying out functional reasoning available to the engineering designer. The result of the research presented in this paper is the building of a hypothesis "in the form of a model" with explanatory power....

  16. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a dynamic road-link dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
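
    The estimation idea can be sketched as summing length over mean probe speed across dynamically divided sub-links. The data below are hypothetical, and the function is a schematic of the approach rather than the paper's algorithm.

    ```python
    # Link travel-time estimate from connected-vehicle probes: divide the link into
    # sub-segments and sum length / mean probe speed per segment (hypothetical data).
    def link_travel_time(segments):
        """segments: list of (length_m, [probe speeds in m/s]) per dynamic sub-link."""
        total = 0.0
        for length, speeds in segments:
            mean_v = sum(speeds) / len(speeds)
            total += length / mean_v
        return total

    segments = [(300.0, [12.1, 11.4, 13.0]),   # free-flowing stretch
                (150.0, [3.2, 2.8]),           # queued stretch near the intersection
                (250.0, [9.5, 10.2, 8.8])]
    print(f"estimated travel time: {link_travel_time(segments):.1f} s")
    ```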

  17. On a Three Dimensional Vision Based Collision Avoidance Model

    Science.gov (United States)

    Parzani, Céline; Filbet, Francis

    2017-08-01

    This paper presents a three dimensional collision avoidance approach for aerial vehicles inspired by coordinated behaviors in biological groups. The proposed strategy aims to enable a group of vehicles to converge to a common destination point avoiding collisions with each other and with moving obstacles in their environment. The interaction rules lead the agents to adapt their velocity vectors through a modification of the relative bearing angle and the relative elevation. Moreover the model satisfies the limited field of view constraints resulting from individual perception sensitivity. From the proposed individual based model, a mean-field kinetic model is derived. Simulations are performed to show the effectiveness of the proposed model.

  18. Mesoscopic models for DNA stretching under force: New results and comparison with experiments.

    Science.gov (United States)

    Manghi, Manoel; Destainville, Nicolas; Palmeri, John

    2012-10-01

    Single-molecule experiments on double-stranded B-DNA stretching have revealed one or two structural transitions, when increasing the external force. They are characterized by a sudden increase of DNA contour length and a decrease of the bending rigidity. The nature and the critical forces of these transitions depend on DNA base sequence, loading rate, salt conditions and temperature. It has been proposed that the first transition, at forces of 60-80 pN, is a transition from B to S-DNA, viewed as a stretched duplex DNA, while the second one, at stronger forces, is a strand peeling resulting in single-stranded DNAs (ssDNA), similar to thermal denaturation. But due to experimental conditions these two transitions can overlap, for instance for poly(dA-dT). In an attempt to propose a coherent picture compatible with this variety of experimental observations, we derive an analytical formula using a coupled discrete worm-like chain-Ising model. Our model takes into account bending rigidity, discreteness of the chain, linear and non-linear (for ssDNA) bond stretching. In the limit of zero force, this model simplifies into a coupled model already developed by us for studying thermal DNA melting, establishing a connection with previous fitting parameter values for denaturation profiles. Our results are summarized as follows: i) ssDNA is fitted, using an analytical formula, over a nano-Newton range with only three free parameters, the contour length, the bending modulus and the monomer size; ii) a surprisingly good fit on this force range is possible only by choosing a monomer size of 0.2 nm, almost 4 times smaller than the ssDNA nucleobase length; iii) mesoscopic models are not able to fit B to ssDNA (or S to ss) transitions; iv) an analytical formula for fitting B to S transitions is derived in the strong force approximation and for long DNAs, which is in excellent agreement with exact transfer matrix calculations; v) this formula fits perfectly well poly(dG-dC) and
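
    For context, the continuous worm-like chain that these mesoscopic models discretize obeys the classic Marko-Siggia interpolation between force and relative extension, sketched below. The parameter values are typical dsDNA numbers, not fits from the paper.

    ```python
    import numpy as np

    # Marko-Siggia worm-like-chain interpolation: F(z) for relative extension z = x/L.
    # Mesoscopic DNA models add discreteness, bond stretching, and an Ising variable
    # for the basepair state on top of this backbone.
    def wlc_force(x_over_L, P_nm=50.0, T=298.0):
        """Force in pN; P_nm: persistence length in nm."""
        kT = 1.380649e-23 * T * 1e21          # thermal energy in pN*nm
        z = np.asarray(x_over_L)
        return (kT / P_nm) * (0.25 / (1 - z) ** 2 - 0.25 + z)

    for z in (0.5, 0.9, 0.97):
        print(f"x/L = {z:.2f} -> F = {wlc_force(z):.1f} pN")
    ```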

  19. Knowledge-based geometric modeling in construction

    DEFF Research Database (Denmark)

    Bonev, Martin; Hvam, Lars

    2012-01-01

    A wider application of IT-based solutions, such as configuration systems and the implementation of modeling standards, has facilitated the trend to produce mass customized products to support inter alia the specification process of the increasing product variety. However, not all industries have...... realized the full potential of using product and process modelling tools as well as the implementation of configuration systems to support their business processes. Especially in the building industry, where Engineer-to-Order (ETO) manufacturers provide complex custom tailored products, up to now, often...... a considerably high amount of their resources is required for designing and specifying the majority of their product assortment. As design decisions are hereby based on knowledge and experience about behaviour and applicability of construction techniques and materials for a predefined design situation, smart

  20. Business Models for NFC based mobile payments

    Directory of Open Access Journals (Sweden)

    Johannes Sang Un Chae

    2015-01-01

    Full Text Available Purpose: The purpose of the paper is to develop a business model framework for NFC based mobile payment solutions consisting of four mutually interdependent components: the value service, value network, value architecture, and value finance. Design: Using a comparative case study method, the paper investigates Google Wallet and ISIS Mobile Wallet and their underlying business models. Findings: Google Wallet and ISIS Mobile Wallet are focusing on providing an enhanced customer experience with their mobile wallet through a multifaceted value proposition. The delivery of its offering requires cooperation from multiple stakeholders and the creation of an ecosystem. Furthermore, they focus on the scalability of their value propositions. Originality / value: The paper offers an applicable business model framework that allows practitioners and academics to study current and future mobile payment approaches.

  1. Business Models for NFC Based Mobile Payments

    DEFF Research Database (Denmark)

    Chae, Johannes Sang-Un; Hedman, Jonas

    2015-01-01

    from multiple stakeholders and the creation of an ecosystem. Furthermore, they focus on the scalability of their value propositions. Originality / value: The paper offers an applicable business model framework that allows practitioners and academics to study current and future mobile payment approaches.......Purpose: The purpose of the paper is to develop a business model framework for NFC based mobile payment solutions consisting of four mutually interdependent components: the value service, value network, value architecture, and value finance. Design: Using a comparative case study method, the paper...... investigates Google Wallet and ISIS Mobile Wallet and their underlying business models. Findings: Google Wallet and ISIS Mobile Wallet are focusing on providing an enhanced customer experience with their mobile wallet through a multifaceted value proposition. The delivery of its offering requires cooperation...

  2. Optimal portfolio model based on WVAR

    OpenAIRE

    Hao, Tianyu

    2012-01-01

    This article is focused on using a new measurement of risk, Weighted Value at Risk (WVaR), to develop a new method of portfolio construction. Starting from the TVaR solution problem and based on MATLAB software, it uses the historical simulation method (avoiding the assumption that the return distribution is normal) and builds on the results of previous studies; the U.S. Nasdaq composite index is studied, combining Simpson's formula for the solution of TVaR with a deeper study of it; then, through the representation of WVAR for...
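
    The historical simulation step the abstract refers to amounts to reading a quantile off the empirical return distribution, with no normality assumption. A minimal sketch on synthetic fat-tailed returns follows; WVaR itself additionally weights the tail, which is omitted here.

    ```python
    import numpy as np

    # Historical-simulation VaR: the empirical (1 - alpha) quantile of returns,
    # reported as a positive loss number. No distributional assumption.
    def historical_var(returns, alpha=0.99):
        return -np.quantile(np.asarray(returns), 1 - alpha)

    rng = np.random.default_rng(0)
    returns = rng.standard_t(df=4, size=1000) * 0.01   # synthetic fat-tailed daily returns
    print(f"99% 1-day VaR: {historical_var(returns):.4f}")
    ```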

  3. Model-Based Analysis of Hand Radiographs

    Science.gov (United States)

    Levitt, Tod S.; Hedgcock, Marcus W.

    1989-05-01

    As a step toward computer-assisted imagery interpretation, we are developing algorithms for computed radiography that allow a computer to recognize specific bones and joints, and to identify variations from normal in size, shape and density. In this paper we report on our approach to model-based computer recognition of hands in radiographs. First, image processing produces hypotheses of the imaged bones. Multiple hypotheses of the size and orientation of the imaged anatomy are matched against stored 3D models of the relevant bones, obtained from statistically valid population studies. Probabilities of the hypotheses are accrued using Bayesian inference techniques whose evaluation is guided by the structure of the hand model and the observed image-derived evidence such as anti-parallel edges, local contrast, etc. High-probability matches between the hand model and the image data can cue additional image-processing-based search for bones, joints and soft tissue to confirm hypotheses of the location of the imaged hand. At this point multiple disease detection techniques, automated bone age identification, etc. can be employed.

  4. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    Full Text Available A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance.
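
    The distinctive ingredient is the hidden layer of Chebyshev orthogonal basis functions. The sketch below builds such a basis via the recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x); this is only the part of the model that replaces sigmoid activations, not the full IENN.

    ```python
    import numpy as np

    # Chebyshev basis T_0..T_{n-1} evaluated on inputs scaled to [-1, 1].
    def chebyshev_features(x, n):
        """Returns an (len(x), n) design matrix of Chebyshev polynomial activations."""
        T = np.empty((len(x), n))
        T[:, 0] = 1.0
        if n > 1:
            T[:, 1] = x
        for k in range(2, n):
            T[:, k] = 2 * x * T[:, k - 1] - T[:, k - 2]   # three-term recurrence
        return T

    x = np.linspace(-1, 1, 5)
    print(chebyshev_features(x, 4))
    ```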

  5. Numerical analysis of modeling based on improved Elman neural network.

    Science.gov (United States)

    Jie, Shao; Li, Wang; WeiSong, Zhao; YaQin, Zhong; Malekian, Reza

    2014-01-01

    A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance.

  6. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    authors, 89% for 25 authors, and 81% for 50 authors, respectively on Enron data set, while 89.5% accuracy has been achieved on authors' constructed real email data set. The results on Enron data set have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1...

  7. Calculating gait kinematics using MR-based kinematic models.

    Science.gov (United States)

    Scheys, Lennart; Desloovere, Kaat; Spaepen, Arthur; Suetens, Paul; Jonkers, Ilse

    2011-02-01

    Rescaling generic models is the most frequently applied approach in generating biomechanical models for inverse kinematics. Nevertheless it is well known that this procedure introduces errors in calculated gait kinematics due to: (1) errors associated with palpation of anatomical landmarks, (2) inaccuracies in the definition of joint coordinate systems. Based on magnetic resonance (MR) images, more accurate, subject-specific kinematic models can be built that are significantly less sensitive to both error types. We studied the difference between the two modelling techniques by quantifying differences in calculated hip and knee joint kinematics during gait. In a clinically relevant patient group of 7 pediatric cerebral palsy (CP) subjects with increased femoral anteversion, gait kinematic were calculated using (1) rescaled generic kinematic models and (2) subject-specific MR-based models. In addition, both sets of kinematics were compared to those obtained using the standard clinical data processing workflow. Inverse kinematics, calculated using rescaled generic models or the standard clinical workflow, differed largely compared to kinematics calculated using subject-specific MR-based kinematic models. The kinematic differences were most pronounced in the sagittal and transverse planes (hip and knee flexion, hip rotation). This study shows that MR-based kinematic models improve the reliability of gait kinematics, compared to generic models based on normal subjects. This is the case especially in CP subjects where bony deformations may alter the relative configuration of joint coordinate systems. Whilst high cost impedes the implementation of this modeling technique, our results demonstrate that efforts should be made to improve the level of subject-specific detail in the joint axes determination. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. MSFC Stream Model Preliminary Results: Modeling Recent Leonid and Perseid Encounters

    Science.gov (United States)

    Cooke, William J.; Moser, Danielle E.

    2004-01-01

    The cometary meteoroid ejection model of Jones and Brown (1996b) was used to simulate ejection from comet 55P/Tempel-Tuttle during the last 12 revolutions, and the last 9 apparitions of 109P/Swift-Tuttle. Using cometary ephemerides generated by the Jet Propulsion Laboratory's (JPL) HORIZONS Solar System Data and Ephemeris Computation Service, two independent ejection schemes were simulated. In the first case, ejection was simulated in 1 hour time steps along the comet's orbit while it was within 2.5 AU of the Sun. In the second case, ejection was simulated to occur at the hour the comet reached perihelion. A 4th order variable step-size Runge-Kutta integrator was then used to integrate meteoroid position and velocity forward in time, accounting for the effects of radiation pressure, Poynting-Robertson drag, and the gravitational forces of the planets, which were computed using JPL's DE406 planetary ephemerides. An impact parameter was computed for each particle approaching the Earth to create a flux profile, and the results compared to observations of the 1998 and 1999 Leonid showers, and the 1993 and 2004 Perseids.
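
    A fixed-step fourth-order Runge-Kutta step, the simpler cousin of the variable-step integrator used in the study, is sketched below. The force model f here is a toy two-body problem; the study's right-hand side additionally included planetary gravity, radiation pressure, and Poynting-Robertson drag.

    ```python
    import numpy as np

    # Classic RK4 step for y' = f(t, y).
    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Toy check on two-body motion (mu = 1, circular orbit); state y = [r, v].
    f = lambda t, y: np.hstack([y[2:], -y[:2] / np.linalg.norm(y[:2]) ** 3])
    y = np.array([1.0, 0.0, 0.0, 1.0])
    for _ in range(6283):                # ~one period (2*pi) with h = 0.001
        y = rk4_step(f, 0.0, y, 0.001)
    print(y[:2])                         # should be back near the start (1, 0)
    ```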

  9. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
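
    The variational idea, adjusting the wind stress drag coefficient to minimize a quadratic model-observation misfit, can be caricatured in a few lines. The surge "model" below is a deliberately crude stand-in for the 2-D unstructured-grid solver, and plain gradient descent replaces the adjoint machinery.

    ```python
    import numpy as np

    # Stand-in surge model: surge height grows with Cd * wind^2 (illustrative only).
    def surge_model(Cd, wind_speed):
        return Cd * wind_speed ** 2 / 9.81

    # Identify Cd by minimizing J(Cd) = sum (model - obs)^2 via its analytic gradient.
    def assimilate(wind, obs, Cd0=1.0e-3, lr=5e-5, iters=100):
        Cd = Cd0
        for _ in range(iters):
            resid = surge_model(Cd, wind) - obs
            grad = np.sum(2 * resid * wind ** 2 / 9.81)   # dJ/dCd
            Cd -= lr * grad
        return Cd

    wind = np.array([18.0, 22.0, 26.0])
    obs = surge_model(2.6e-3, wind)                # "true" Cd = 2.6e-3 in this toy
    print(f"identified Cd = {assimilate(wind, obs):.2e}")
    ```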

  10. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on the model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome this shortcoming, we utilize the advantages of uniform design (space-filling designs and uniform scattering theory) to seek the optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model, which is obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
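
    A minimal sketch of the selection strategy follows: evaluate a handful of scattered (C, gamma) points by cross-validation instead of a dense grid. The design table is hand-written for illustration (a real uniform design table would be used), and the digits dataset stands in for face data.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Few, scattered design points in (log10 C, log10 gamma) space instead of a grid.
    design = [(0.0, -4.0), (1.0, -2.5), (2.0, -1.0), (3.0, -5.5), (-1.0, -3.0)]

    X, y = load_digits(return_X_y=True)
    best = max(design, key=lambda p: cross_val_score(
        SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean())
    print("best (log10 C, log10 gamma):", best)
    ```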

  11. Simulation-Based Internal Models for Safer Robots

    Directory of Open Access Journals (Sweden)

    Christian Blum

    2018-01-01

    Full Text Available In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.

  12. Modeling and Results for Creating Oblique Fields in a Magnetic Flux Leakage Survey Tool

    Science.gov (United States)

    Simek, James C.

    2010-02-01

    Integrity management programs designed to maintain safe pipeline systems quite often use survey results from in-line inspection (ILI) tools in addition to data from other sources. Commonly referred to as "smart pigs," one of the most widely used types are those based upon the magnetic flux leakage technique, typically used to detect and quantify metal loss zones. The majority of pipelines surveyed to date have used tools with the magnetic field direction axially aligned with the length of the pipeline. In order to enable detection and quantification of extremely narrow metal loss features or certain types of weld zone anomalies, tools employing magnetic circuits that direct the magnetic fields around the pipe circumference have been designed and are used in segments where these feature categories are a primary concern. Modeling and laboratory test data of metal loss features will be used to demonstrate the response of extremely narrow metal loss zones as the features are rotated relative to the induced field direction. Based upon these results, the basis for developing a magnetizer capable of creating fields oblique to either pipeline axis will be presented, along with the magnetic field profile models of several configurations.

  13. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
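
    The flavor of ARR-based reasoning can be shown with a toy three-sensor system: each redundancy relation ties a subset of sensors together, violated relations implicate their sensors, and satisfied relations exonerate theirs. The relations and tolerance below are invented for illustration; the paper's algorithm does this inference systematically for arbitrary systems.

    ```python
    # Toy analytical redundancy relations over three sensors a, b, c.
    def check_arrs(readings, tol=0.5):
        a, b, c = readings["a"], readings["b"], readings["c"]
        arrs = [
            (abs(a + b - c) < tol, {"a", "b", "c"}),   # relation 1: model says a + b = c
            (abs(a - 2 * b) < tol, {"a", "b"}),        # relation 2: model says a = 2b
        ]
        suspects, exonerated = set(), set()
        for ok, sensors in arrs:
            (exonerated if ok else suspects).update(sensors)
        return suspects - exonerated   # sensors implicated only by violated relations

    # Relation 1 fails, relation 2 holds, so only c can explain the inconsistency.
    print(check_arrs({"a": 2.0, "b": 1.0, "c": 9.0}))   # -> {'c'}
    ```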

  14. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    Science.gov (United States)

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are

  15. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
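
    Variance-based first-order indices can also be estimated with a generic Monte Carlo pick-freeze scheme, sketched below on an additive test function. Extended-FAST arrives at the same quantities via frequency analysis, so this illustrates the target quantities rather than the FAST algorithm itself.

    ```python
    import numpy as np

    # Saltelli pick-freeze estimator for first-order indices S_1..S_k of model(X),
    # with factors sampled uniformly on [0, 1].
    def first_order_indices(model, k, N=100_000, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(N, k))
        B = rng.uniform(size=(N, k))
        yA, yB = model(A), model(B)
        varY = np.var(np.concatenate([yA, yB]))
        S = np.empty(k)
        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                       # freeze factor i at B's values
            S[i] = np.mean(yB * (model(ABi) - yA)) / varY
        return S

    # Additive test function: exact indices are 1/14, 4/14, 9/14.
    print(first_order_indices(lambda X: X[:, 0] + 2 * X[:, 1] + 3 * X[:, 2], k=3))
    ```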

  16. New borehole-derived results on temperatures at the base of the Fennoscandian ice sheet

    Science.gov (United States)

    Rath, Volker; Vogt, Christian; Mottaghy, Darius; Kukkonen, Ilmo; Tarasov, Lev

    2014-05-01

    During the last few years, a database of deep boreholes (>1000 m) in the area of the Fennoscandian ice sheet has been collected, including boreholes from Russia, Poland, Finland, Sweden and Norway. All of these are supposed to have recorded local basal ice conditions during the last glacial cycle. However, at each of these sites we are confronted with particular problems of interpretation. Here, we will concentrate on two very deep boreholes, namely the Outokumpu ICDP borehole (OKU, ~2500 m) and a set of boreholes of intermediate depth (up to 1300 m) in the immediate neighbourhood of the Kola superdeep borehole SG3. In the first case, OKU, we have developed a strategy combining a traditional variational inversion of the Tikhonov type with an MCMC approach for the exploration of the associated uncertainty. A wide distribution around the result of the variational approach was chosen, with a time-dependent temporal correlation length reflecting the loss of resolution back in time. The results fit very well with region-independent results from different proxies, multi-proxy reconstructions, and instrumental data. They are also consistent with surface temperatures derived from recent calibrated ice sheet models. The SAT-GST offset independently derived from shallow borehole observations in the area was a crucial step in obtaining these results. The second case, SG3, has been studied a long time, and no final result has been obtained regarding the question whether the observed heat flow density profile is caused by paleoclimate, fluid flow, or both. Earlier studies, as well as forward modelling using the results of the aforementioned ice sheet model, indicate that paleoclimate alone cannot explain the observations. We tested the model derived from the set of shallow boreholes against the temperature log from the main superdeep SG3 borehole, which, in contrast to these, transects the main high-permeability zone. The comparison led to favorable results, and is also
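
    The variational backbone of such inversions is Tikhonov-regularized least squares. A generic sketch on a toy underdetermined problem follows; the matrices are random stand-ins, not a borehole heat-conduction forward model.

    ```python
    import numpy as np

    # Tikhonov-regularized least squares: m = argmin ||G m - d||^2 + lam ||L m||^2,
    # solved via the normal equations.
    def tikhonov(G, d, lam, L=None):
        n = G.shape[1]
        L = np.eye(n) if L is None else L
        return np.linalg.solve(G.T @ G + lam * (L.T @ L), G.T @ d)

    # Toy: recover a smooth "history" from few, noisy, indirect observations.
    rng = np.random.default_rng(0)
    G = rng.uniform(size=(20, 40))                     # stand-in forward operator
    m_true = np.sin(np.linspace(0, np.pi, 40))
    d = G @ m_true + rng.normal(scale=0.05, size=20)
    m_hat = tikhonov(G, d, lam=1.0)
    print(np.round(m_hat[:5], 2))
    ```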

  17. Flow based vs. demand based energy-water modelling

    Science.gov (United States)

    Rozos, Evangelos; Nikolopoulos, Dionysis; Efstratiadis, Andreas; Koukouvinos, Antonios; Makropoulos, Christos

    2015-04-01

    The water flow in hydro-power generation systems is often used downstream to cover other types of demand, such as irrigation and water supply. However, the typical case is that the energy demand (operation of the hydro-power plant) and the water demand do not coincide. Furthermore, the water inflow into a reservoir is a stochastic process. Things become more complicated if renewable resources (wind turbines or photovoltaic panels) are included in the system. For this reason, the assessment and optimization of the operation of hydro-power systems are challenging tasks that require computer modelling. This modelling should not only simulate the water budget of the reservoirs and the energy production/consumption (pumped storage), but should also take into account the constraints imposed by the natural or artificial water network using a flow routing algorithm. HYDRONOMEAS, for example, uses an elegant mathematical approach (digraph) to calculate the flow in a water network based on: the demands (input timeseries), the water availability (simulated), and the capacity of the transmission components (properties of channels, rivers, pipes, etc.). The input timeseries of demand should be estimated by another model and linked to the corresponding network nodes. A model that could be used to estimate these timeseries is UWOT. UWOT is a bottom-up urban water cycle model that simulates the generation, aggregation and routing of water demand signals. In this study, we explore the potential of UWOT in simulating the operation of complex hydrosystems that include energy generation. The evident advantage of this approach is the use of a single model instead of one for the estimation of demands and another for the system simulation. An application of UWOT in a large-scale system is attempted in mainland Greece in an area extending over 130×170 km². The challenges, the peculiarities and the advantages of this approach are examined and critically discussed.

  18. Model based management of a reservoir system

    Energy Technology Data Exchange (ETDEWEB)

    Scharaw, B.; Westerhoff, T. [Fraunhofer IITB, Ilmenau (Germany). Anwendungszentrum Systemtechnik; Puta, H.; Wernstedt, J. [Technische Univ. Ilmenau (Germany)

    2000-07-01

    The main goals of reservoir management systems are the prevention of flood damage, the catchment of raw water, and keeping all of the quality parameters within their limits, besides controlling the water flows. In consideration of these goals, a system model of the complete reservoir system Ohra-Schmalwasser-Tambach Dietharz was developed. This model has been used to develop optimized strategies for the minimization of raw water production cost, for the maximization of electrical energy production, and to cover flood situations as well. Therefore, a proper forecast of the inflow to the reservoir from the catchment areas (especially flooding rivers) and of the biological processes in the reservoir is important. The forecast model for the inflow to the reservoir is based on the catchment area model of Lorent and Gevers. It uses area precipitation, water supply from the snow cover, evapotranspiration and soil wetness data to calculate the amount of flow in rivers. The other aim of the project is to ensure the raw water quality using quality models as well; a quality-driven raw water supply will then be possible. (orig.)

  19. Vertex finding by sparse model-based clustering

    Science.gov (United States)

    Frühwirth, R.; Eckstein, K.; Frühwirth-Schnatter, S.

    2016-10-01

    The application of sparse model-based clustering to the problem of primary vertex finding is discussed. The observed z-positions of the charged primary tracks in a bunch crossing are modeled by a Gaussian mixture. The mixture parameters are estimated via Markov Chain Monte Carlo (MCMC). Sparsity is achieved by an appropriate prior on the mixture weights. The results are shown and compared to clustering by the expectation-maximization (EM) algorithm.
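
    The EM baseline that the MCMC results are compared against can be written compactly for the 1-D track z-positions. The sketch below fits a two-component Gaussian mixture where each component plays the role of a vertex candidate; the data are synthetic and no sparsity prior is used.

    ```python
    import numpy as np

    # Plain EM for a k-component 1-D Gaussian mixture over track z-positions.
    def em_gmm_1d(z, k=2, iters=100):
        z = np.asarray(z, dtype=float)
        mu = np.quantile(z, np.linspace(0.1, 0.9, k))   # spread the initial means
        sig = np.full(k, z.std())
        w = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibility of each component for each track
            dens = w * np.exp(-0.5 * ((z[:, None] - mu) / sig) ** 2) / sig
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: reweighted parameter updates
            Nk = r.sum(axis=0)
            mu = (r * z[:, None]).sum(axis=0) / Nk
            sig = np.sqrt((r * (z[:, None] - mu) ** 2).sum(axis=0) / Nk)
            w = Nk / len(z)
        return w, mu, sig

    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(-3.0, 0.05, 40), rng.normal(2.0, 0.05, 25)])
    print(np.round(em_gmm_1d(z)[1], 2))   # two "vertex" positions near -3 and 2
    ```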

  20. Dermal uptake of phthalates from clothing: Comparison of model to human participant results

    DEFF Research Database (Denmark)

    Morrison, G. C.; Weschler, Charles J.; Beko, G.

    2017-01-01

    In this research, we extend a model of transdermal uptake of phthalates to include a layer of clothing. When compared with experimental results, this model better estimates dermal uptake of diethylphthalate and di-n-butylphthalate (DnBP) than a previous model. The model predictions are consistent...