WorldWideScience

Sample records for ii model validation

  1. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs...... with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  2. Filament winding cylinders. II - Validation of the process model

    Science.gov (United States)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  3. High Performance Computing Application: Solar Dynamo Model Project II, Corona and Heliosphere Component Initialization, Integration and Validation

    Science.gov (United States)

    2015-06-24

    ...allocate solar heating into any location of the corona. Its total contribution depended on the integration of the unsigned magnetic flux at 1 Rs... (Report No. AFRL-RD-PS-TR-2015-0028)

  4. Validated Competing Event Model for the Stage I-II Endometrial Cancer Population

    Energy Technology Data Exchange (ETDEWEB)

    Carmona, Ruben; Gulaya, Sachin; Murphy, James D. [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California (United States); Rose, Brent S. [Harvard Radiation Oncology Program, Harvard Medical School, Boston, Massachusetts (United States); Wu, John; Noticewala, Sonal [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California (United States); McHale, Michael T. [Department of Reproductive Medicine, Division of Gynecologic Oncology, University of California San Diego, La Jolla, California (United States); Yashar, Catheryn M. [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California (United States); Vaida, Florin [Department of Family and Preventive Medicine, Biostatistics and Bioinformatics, University of California San Diego Medical Center, San Diego, California (United States); Mell, Loren K., E-mail: lmell@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California (United States)

    2014-07-15

    Purpose/Objective(s): Early-stage endometrial cancer patients are at higher risk of noncancer mortality than of cancer mortality. Competing event models incorporating comorbidity could help identify women most likely to benefit from treatment intensification. Methods and Materials: 67,397 women with stage I-II endometrioid adenocarcinoma after total hysterectomy diagnosed from 1988 to 2009 were identified in Surveillance, Epidemiology, and End Results (SEER) and linked SEER-Medicare databases. Using demographic and clinical information, including comorbidity, we sought to develop and validate a risk score to predict the incidence of competing mortality. Results: In the validation cohort, increasing competing mortality risk score was associated with increased risk of noncancer mortality (subdistribution hazard ratio [SDHR], 1.92; 95% confidence interval [CI], 1.60-2.30) and decreased risk of endometrial cancer mortality (SDHR, 0.61; 95% CI, 0.55-0.78). Controlling for other variables, Charlson Comorbidity Index (CCI) = 1 (SDHR, 1.62; 95% CI, 1.45-1.82) and CCI >1 (SDHR, 3.31; 95% CI, 2.74-4.01) were associated with increased risk of noncancer mortality. The 10-year cumulative incidences of competing mortality within low-, medium-, and high-risk strata were 27.3% (95% CI, 25.2%-29.4%), 34.6% (95% CI, 32.5%-36.7%), and 50.3% (95% CI, 48.2%-52.6%), respectively. With increasing competing mortality risk score, we observed a significant decline in omega (ω), indicating a diminishing likelihood of benefit from treatment intensification. Conclusion: Comorbidity and other factors influence the risk of competing mortality among patients with early-stage endometrial cancer. Competing event models could improve our ability to identify patients likely to benefit from treatment intensification.
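
    A minimal sketch of the competing-risks bookkeeping behind such a model: the nonparametric (Aalen-Johansen) cumulative incidence of one cause of death in the presence of a competing cause. It is a generic Python illustration with made-up follow-up data, not the authors' SEER-Medicare analysis or their risk score.

    ```python
    import numpy as np

    def cumulative_incidence(time, event, cause=1):
        """Aalen-Johansen cumulative incidence for one cause under competing risks.
        Event coding (hypothetical): 0 = censored, 1 = cause of interest, 2 = competing cause."""
        time = np.asarray(time, float)
        event = np.asarray(event, int)
        event_times = np.unique(time[event > 0])   # distinct times at which any event occurs
        surv = 1.0                                  # overall event-free survival just before t
        cif, running = np.zeros_like(event_times), 0.0
        for i, t in enumerate(event_times):
            at_risk = np.sum(time >= t)
            d_any = np.sum((time == t) & (event > 0))
            d_cause = np.sum((time == t) & (event == cause))
            running += surv * d_cause / at_risk     # increment from failures of this cause at t
            cif[i] = running
            surv *= 1.0 - d_any / at_risk           # update overall survival past t
        return event_times, cif

    # Toy data: cause 1 = noncancer death, cause 2 = endometrial cancer death
    times, cuminc = cumulative_incidence([2, 3, 3, 5, 8, 9], [1, 0, 2, 1, 2, 0], cause=1)
    print(dict(zip(times, np.round(cuminc, 3))))
    ```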

  5. [Factor models of the Beck Depression Inventory-II. Validation with coronary patients and a critique of Ward's model].

    Science.gov (United States)

    del Pino Pérez, Antonio; Ibáñez Fernández, Ignacio; Bosa Ojeda, Francisco; Dorta González, Ruth; Gaos Miezoso, María Teresa

    2012-02-01

    The objective of this study was to validate, in a sample of 205 coronary patients, a factor model for the BDI-II, especially a model that would allow depressive symptoms to be modeled after explicitly removing bias related to somatic symptoms of depression that overlap those of heart disease. Exploratory and confirmatory factor analyses for ordinal data were conducted. A one-factor model, six correlated two-factor models and, derived from them, seven models with a single General Depression factor and two uncorrelated group factors were analyzed. Exploratory analysis extracted two factors, Somatic-affective and Cognitive. Confirmatory factor analyses showed the worst fit for the one-factor model. Two-factor models were surpassed in goodness of fit by the models with a general factor and group factors. Among these, the General, Somatic-affective and Cognitive (G-Sa-C) model obtained by Beck with students is noteworthy. The reduced General, Somatic and Cognitive (G-S-C) model of Ward showed the worst goodness of fit. Our model surpasses the cutoff criteria of all fit indices. We conclude that including a general factor and group factors in all the models surpasses the results of the G-S-C model and therefore calls it into question. The G-Sa-C model is strengthened.

  6. Tools for system validation. Dynamic modelling of the direct condenser at Sandvik II in Vaexjoe; Hjaelpmedel foer systemvalidering. Dynamisk modellering av direktkondensorn paa Sandvik II i Vaexjoe

    Energy Technology Data Exchange (ETDEWEB)

    Raaberg, Martin [Dynasim AB, Lund (Sweden); Tuszynski, Jan [Sycon Energikonsult AB, Malmoe (Sweden)

    2002-04-01

    The project reported here aimed to test the suitability of existing computer tools for modelling of energy processes. The suggested use for the models is in early tests and validations of new, refurbished or modernised thermal plants. The technique presented in this report should be applicable for clarifying the scope of delivery and testing for both the process and the control system. The validation process can thus be simplified, allowing risk reduction and predictability of the commissioning. The main delays and economic losses often occur during commissioning. This report should prove the feasibility of purchase routines in which purchaser, vendor and quality inspection use a common model of the process to validate system requirements and specifications; later on it is used to validate structure and to predefine testing. Thanks to agreement on the common model, early tests can be conducted on complex systems, minimizing the investment risks. The modelling reported here concerns the direct condenser at Sandvik II, a power and heating plant owned by Vaexjoe Energi AB in Sweden. We have chosen the direct condenser because it is an existing, well-documented and well-defined subsystem of high complexity in both structure and operation. Heavy transients have made commissioning and test runs of similar condensers throughout Sweden costly and troublesome. The work resulted in an open, general, and physically correct model. The model can easily be re-dimensioned through physical parameters in common use. The control system modelled corresponds to the actual control system at the Sandvik II plant. Improvement or deeper validation of the controllers was not included in this work. The suitability is shown through four simulation cases. Three cases are based on registered plant operation during a turbine trip. The first test case uses present plant data, the second an old steam valve actuator and the third uses the old actuator and an error in level

  7. Study on Lumped Kinetic Model for FDFCC II. Validation and Prediction of Model

    Institute of Scientific and Technical Information of China (English)

    Wu Feiyue; Weng Huixin; Luo Shixian

    2008-01-01

    On the basis of the previously formulated 9-lump kinetic model for gasoline catalytic upgrading and the 12-lump kinetic model for heavy oil FCC, this paper develops a combined kinetic model for a typical FDFCC process after analyzing the coupling and combination of these two models. The model is verified against commercial data; the results show that it can predict the product yields and their quality well, with relative errors between the predicted main products of the unit and the commercial data of less than five percent. Furthermore, the combined model is used to predict and optimize the operating conditions for the gasoline riser and the heavy oil riser in FDFCC. This paper can therefore offer guidance for FDFCC operation and is instructive for model research and development of such multi-reactor, combined processes.

  8. Fluids with competing interactions. II. Validating a free energy model for equilibrium cluster size

    Science.gov (United States)

    Bollinger, Jonathan A.; Truskett, Thomas M.

    2016-08-01

    Using computer simulations, we validate a simple free energy model that can be analytically solved to predict the equilibrium size of self-limiting clusters of particles in the fluid state governed by a combination of short-range attractive and long-range repulsive pair potentials. The model is a semi-empirical adaptation and extension of the canonical free energy-based result due to Groenewold and Kegel [J. Phys. Chem. B 105, 11702-11709 (2001)], where we use new computer simulation data to systematically improve the cluster-size scalings with respect to the strengths of the competing interactions driving aggregation. We find that one can adapt a classical nucleation like theory for small energetically frustrated aggregates provided one appropriately accounts for a size-dependent, microscopic energy penalty of interface formation, which requires new scaling arguments. This framework is verified in part by considering the extensive scaling of intracluster bonding, where we uncover a superlinear scaling regime distinct from (and located between) the known regimes for small and large aggregates. We validate our model based on comparisons against approximately 100 different simulated systems comprising compact spherical aggregates with characteristic (terminal) sizes between six and sixty monomers, which correspond to wide ranges in experimentally controllable parameters.
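
    A schematic of how a free energy model of this type singles out a preferred cluster size: minimize a per-monomer free energy that combines bulk cohesion, a surface penalty, and a long-range repulsive term. The functional form and coefficients below are illustrative placeholders, not the scalings fitted in the paper.

    ```python
    import numpy as np

    # Schematic free energy per monomer of an n-particle cluster (illustrative form only):
    # bulk cohesion (-eps), surface penalty (~ n^(-1/3)), long-range repulsion (~ n^(2/3)).
    def free_energy_per_monomer(n, eps=6.0, gamma=10.0, lam=0.05):
        return -eps + gamma * n ** (-1.0 / 3.0) + lam * n ** (2.0 / 3.0)

    n = np.arange(2, 400)
    n_star = n[np.argmin(free_energy_per_monomer(n))]
    print("preferred (terminal) cluster size ~", n_star, "monomers")  # minimum near gamma/(2*lam)
    ```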

  9. From steady-state to synchronized yeast glycolytic oscillations II: model validation.

    Science.gov (United States)

    du Preez, Franco B; van Niekerk, David D; Snoep, Jacky L

    2012-08-01

    In an accompanying paper [du Preez et al., (2012) FEBS J279, 2810-2822], we adapt an existing kinetic model for steady-state yeast glycolysis to simulate limit-cycle oscillations. Here we validate the model by testing its capacity to simulate a wide range of experiments on dynamics of yeast glycolysis. In addition to its description of the oscillations of glycolytic intermediates in intact cells and the rapid synchronization observed when mixing out-of-phase oscillatory cell populations (see accompanying paper), the model was able to predict the Hopf bifurcation diagram with glucose as the bifurcation parameter (and one of the bifurcation points with cyanide as the bifurcation parameter), the glucose- and acetaldehyde-driven forced oscillations, glucose and acetaldehyde quenching, and cell-free extract oscillations (including complex oscillations and mixed-mode oscillations). Thus, the model was compliant, at least qualitatively, with the majority of available experimental data for glycolytic oscillations in yeast. To our knowledge, this is the first time that a model for yeast glycolysis has been tested against such a wide variety of independent data sets. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database and can be accessed at http://jjj.biochem.sun.ac.za/database/dupreez/index.html. © 2012 The Authors Journal compilation © 2012 FEBS.
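
    To get a feel for glycolytic limit-cycle oscillations without the full kinetic model, the sketch below integrates the classic two-variable Sel'kov caricature of glycolysis with SciPy. This is a pedagogical stand-in, not the du Preez et al. model; the parameter values are a commonly used choice that places the system in the oscillatory regime.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def selkov(t, z, a=0.08, b=0.6):
        """Sel'kov model: x ~ ADP, y ~ F6P; sustained oscillations for these parameters."""
        x, y = z
        return [-x + a * y + x**2 * y,
                b - a * y - x**2 * y]

    sol = solve_ivp(selkov, (0.0, 200.0), [1.0, 1.0], max_step=0.1)
    x_late = sol.y[0][sol.t > 100.0]                 # discard the initial transient
    print("oscillation amplitude ~", round(x_late.max() - x_late.min(), 2))
    ```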

  10. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    Science.gov (United States)

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  11. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    Energy Technology Data Exchange (ETDEWEB)

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquid submodels, and fundamental knowledge to the US kraft pulp industry. Volume 2 contains the last section of Appendix I, Radiative heat transfer in kraft recovery boilers, and the first section of Appendix II, The effect of temperature and residence time on the distribution of carbon, sulfur, and nitrogen between gaseous and condensed phase products from low temperature pyrolysis of kraft black liquor.

  12. Failure mode transition in AHSS resistance spot welds. Part II: Experimental investigation and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Pouranvari, M., E-mail: mpouranvari@yahoo.com [Young Researchers Club, Dezful Branch, Islamic Azad University, Dezful (Iran, Islamic Republic of); Marashi, S.P.H.; Safanama, D.S. [Mining and Metallurgical Engineering Department, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2011-11-15

    Highlights: • Interfacial to pullout failure mode transition for AHSS RSWs is experimentally studied. • Relation between failure mode and metallurgical factors of AHSS RSW is studied. • HAZ softening reduces the FZ size required to ensure pullout failure. • HAZ softening enhances the energy absorption capability of AHSS RSW. • Good agreement between model prediction and experimental results was observed. - Abstract: The objective of this paper is to investigate and analyze the transition criteria from interfacial to pullout failure mode in AHSS resistance spot welds during the tensile-shear test, using both experimental and analytical approaches. Spot welds were made on three dual phase steel grades: DP600, DP780 and DP980. A low-strength drawing quality special killed (DQSK) steel and AISI 304 austenitic stainless steel were also tested as baselines for comparison. The microstructure and mechanical strength of the welds were characterized using metallographic techniques and tensile-shear testing. Correlations among the critical fusion zone (FZ) size required to ensure the pullout failure mode, weld microstructure and weld hardness characteristics were developed. It was found that the critical FZ size increases in the order DQSK, DP600, DP980, DP780 and AISI 304. No direct relationship was found between the tensile strength of the base metal and the critical FZ size. It was concluded that the low hardness ratio of the FZ to the pullout failure location and the high susceptibility to forming shrinkage voids are the two primary reasons for the high tendency of AHSS to fail in interfacial mode. HAZ softening can improve RSW mechanical performance in terms of load bearing capacity and energy absorption capability. This phenomenon promotes the pullout failure (PF) mode at smaller FZ sizes, which explains the smaller critical FZ size measured for DP980 in comparison with DP780. The results obtained from the model were compared to the experimental results and the literature

  13. INACTIVATION OF CRYPTOSPORIDIUM OOCYSTS IN A PILOT-SCALE OZONE BUBBLE-DIFFUSER CONTACTOR - II: MODEL VALIDATION AND APPLICATION

    Science.gov (United States)

    The ADR model developed in Part I of this study was successfully validated with experimental data obtained for the inactivation of C. parvum and C. muris oocysts with a pilot-scale ozone-bubble diffuser contactor operated with treated Ohio River water. Kinetic parameters, required...

  14. Organizational Effectiveness: Development and Validation of Integrated Models. Report II. Empirical Studies of Organizational Effectiveness Using Multivariate Models

    Science.gov (United States)

    1982-04-01

    This report is in three parts, each summarizing an...

  15. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety of models with regards to their purpose, character, field of application and time dimension inherently calls for a similar diversity in validation approaches. A classification of models in terms of the mentioned elements is presented and used to shed light on possible types of validation leading... ... of models has been somewhat narrow-minded, reducing the notion of validation to establishment of truth. This article puts forward the diversity in applications of simulation models that demands a corresponding diversity in the notion of validation.

  16. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  17. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    Energy Technology Data Exchange (ETDEWEB)

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquid submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  18. Lidar measurements during a haze episode in Penang, Malaysia and validation of the ECMWF MACC-II model

    Science.gov (United States)

    Khor, Wei Ying; Lolli, Simone; Hee, Wan Shen; Lim, Hwee San; Jafri, M. Z. Mat; Benedetti, Angela; Jones, Luke

    2015-04-01

    Haze is a phenomenon which occurs when a large amount of fine particulate matter is suspended in the atmosphere. During March 2014, a prolonged haze event occurred in Penang, Malaysia. The haze conditions were measured and monitored using a ground-based Lidar system. By using the measurements obtained, we evaluated the performance of the ECMWF MACC-II model. Lidar measurements showed that there was a thick aerosol layer confined in the planetary boundary layer (PBL), with extinction coefficients exceeding 0.3 km-1. The model, however, underestimated the aerosol conditions observed in Penang. Backward trajectory analysis was performed to identify aerosol sources and transport. It is speculated that the aerosols came from the North-East, carried by the North-East monsoon wind, and that some originated from the central eastern coast of Sumatra along the Straits of Malacca.

  19. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part II: Probabilistic model and validation

    Science.gov (United States)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    In Part I of this study, some new theorems, corollaries and lemmas on circularly-symmetric complex normal ratio distribution have been mathematically proved. This part II paper is dedicated to providing a rigorous treatment of statistical properties of raw scalar transmissibility functions at an arbitrary frequency line. On the basis of statistics of raw FFT coefficients and circularly-symmetric complex normal ratio distribution, explicit closed-form probabilistic models are established for both multivariate and univariate scalar transmissibility functions. Also, remarks on the independence of transmissibility functions at different frequency lines and the shape of the probability density function (PDF) of the univariate case are presented. The statistical structures of the probabilistic models are concise, compact and easy to implement with a low computational effort. They hold for general stationary vector processes, either Gaussian stochastic processes or non-Gaussian stochastic processes. The accuracy of the proposed models is verified using a numerical example as well as field test data of a high-rise building and a long-span cable-stayed bridge. This study yields new insights into the qualitative analysis of the uncertainty of scalar transmissibility functions, which paves the way for developing new statistical methodologies for modal analysis, model updating or damage detection using responses only, without input information.
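
    The raw scalar transmissibility discussed above is simply the ratio of the raw FFT coefficients of two response channels at a given frequency line. The snippet below builds it from synthetic records (all signals and numbers are invented) and shows the segment-to-segment scatter whose distribution the paper characterizes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_seg, n = 200, 1024

    # Two synthetic stationary responses driven by the same broadband excitation
    # (stand-ins for measured structural responses at two sensor locations).
    drive = rng.standard_normal((n_seg, n))
    x1 = 0.1 * np.cumsum(drive, axis=1) + 0.05 * rng.standard_normal((n_seg, n))
    x2 = 0.2 * np.cumsum(drive, axis=1) + 0.05 * rng.standard_normal((n_seg, n))

    X1 = np.fft.rfft(x1, axis=1)          # raw FFT coefficients, one row per segment
    X2 = np.fft.rfft(x2, axis=1)
    T = X1 / X2                           # raw scalar transmissibility at every frequency line

    k = 40                                # inspect a single frequency line
    print("|T| at line", k, ": mean", np.abs(T[:, k]).mean().round(3),
          ", std", np.abs(T[:, k]).std().round(3))
    ```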

  20. Multi-dimensional boron transport modeling in subchannel approach: Part II. Validation of CTF boron tracking model and adding boron precipitation model

    Energy Technology Data Exchange (ETDEWEB)

    Ozdemir, Ozkan Emre, E-mail: ozdemir@psu.edu; Avramova, Maria N., E-mail: mna109@psu.edu

    2014-10-15

    Highlights: • Validation of implemented multi-dimensional subchannel boron transport model. • Extension of boron transport model to entrained droplets. • Implementation of boron precipitation model. • Testing of the boron precipitation model under transient condition. - Abstract: The risk of small-break loss of coolant accident (SB-LOCA) and other reactivity initiated transients caused by boron dilution in the light water reactors (LWRs), and the complications of tracking the soluble boron concentration experimentally inside the primary coolant have stimulated the interest in computational studies for accurate boron tracking simulations in nuclear reactors. In Part I of this study, the development and implementation of a multi-dimensional boron transport model with modified Godunov scheme based on a subchannel approach within the COBRA-TF (CTF) thermal-hydraulic code was presented. The modified Godunov scheme approach with a physical diffusion term was determined to provide the most accurate and precise solution. Current paper extends these conclusions and presents the model validation studies against experimental data from the Rossendorf coolant mixing model (ROCOM) test facility. In addition, the importance of the two-phase flow characteristics in modeling boron transient are emphasized, especially during long-term cooling period after the loss of coolant accident (LOCA) condition in pressurized water reactors (PWRs). The CTF capabilities of boron transport modeling are further improved based on the three-field representation of the two-phase flow utilized in the code. The boron transport within entrained droplets is modeled, and a model for predicting the boron precipitation under transient conditions is developed and tested. It is aimed to extend the applicability of CTF to reactor transient simulations, and particularly to a large-break loss of coolant accident (LB-LOCA) analysis.
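
    As a minimal illustration of the numerics involved, the snippet below advects a boron concentration slug along a single 1-D channel with a first-order upwind (Godunov-type) update. It is a toy stand-in with invented numbers, not the CTF implementation or its modified Godunov scheme.

    ```python
    import numpy as np

    nx, length, u, cfl = 200, 4.0, 1.0, 0.5        # cells, channel length [m], velocity [m/s], CFL
    dx = length / nx
    dt = cfl * dx / u
    x = (np.arange(nx) + 0.5) * dx                 # cell-center coordinates
    c = np.where(x < 1.0, 2000.0, 0.0)             # initial boron slug [ppm]

    for _ in range(200):                           # advance 2 s of physical time
        flux = u * c                               # upwind flux for u > 0
        c[1:] -= dt / dx * (flux[1:] - flux[:-1])
        c[0] = 2000.0                              # simple Dirichlet inlet condition

    front = x[np.argmax(c < 1000.0)]               # where concentration first drops below half
    print(f"boron front advected to x = {front:.2f} m (exact answer: 3.00 m)")
    ```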

  1. Exchange couplings for Mn ions in CdTe: Validity of spin models for dilute magnetic II-VI semiconductors

    Science.gov (United States)

    Linneweber, Thorben; Bünemann, Jörg; Löw, Ute; Gebhard, Florian; Anders, Frithjof

    2017-01-01

    We employ density-functional theory (DFT) in the generalized gradient approximation (GGA) and its extensions GGA+U and GGA+Gutzwiller to calculate the magnetic exchange couplings between pairs of Mn ions substituting Cd in a CdTe crystal at very small doping. DFT(GGA) overestimates the exchange couplings by a factor of 3 because it underestimates the charge-transfer gap in Mn-doped II-VI semiconductors. Fixing the nearest-neighbor coupling J1 to its experimental value in GGA+U, in GGA+Gutzwiller, or by a simple scaling of the DFT(GGA) results provides acceptable values for the exchange couplings at second-, third-, and fourth-neighbor distances in Cd(Mn)Te, Zn(Mn)Te, Zn(Mn)Se, and Zn(Mn)S. In particular, we recover the experimentally observed relation J4 > J2, J3. The filling of the Mn 3d shell is not integer, which puts the underlying Heisenberg description into question. However, using a few-ion toy model the picture of a slightly extended local moment emerges, so that an integer 3d-shell filling is not a prerequisite for equidistant magnetization plateaus, as seen in experiment.

  2. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities - Phase II

    Energy Technology Data Exchange (ETDEWEB)

    Strons, Philip [Argonne National Lab. (ANL), Argonne, IL (United States); Bailey, James L. [Argonne National Lab. (ANL), Argonne, IL (United States); Davis, John [Argonne National Lab. (ANL), Argonne, IL (United States); Grudzinski, James [Argonne National Lab. (ANL), Argonne, IL (United States); Hlotke, John [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-01

    In this report we present the results of the Phase II analysis and testing of the flow patterns encountered in the Alpha Gamma Hot Cell Facility (AGHCF), as well as the results from an opportunity to expand upon field test work from Phase I by the use of a Class IIIb laser. The addition to the Phase I work is covered before proceeding to the results of the Phase II work, followed by a summary of findings.

  3. Validated dynamic flow model

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2011-01-01

    The purpose of this deliverable 2.5 is to use fresh experimental data for validation and selection of a flow model to be used for control design in WP3-4. Initially the idea was to investigate the models developed in WP2. However, in the project it was agreed to include and focus on an additive...... model turns out not to be useful for prediction of the flow. Moreover, standard Box Jenkins model structures and multiple output auto regressive models prove to be superior as they can give useful predictions of the flow.

  4. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott (Michigan); (Kansas); (HWMRI)

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.
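
    The headline number in this comparison is a Cα RMSD after optimal rigid superposition. A generic Kabsch-based RMSD routine is sketched below on random stand-in coordinates; it is not tied to the CT296 structures themselves.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between two (N, 3) coordinate sets after optimal rigid superposition (Kabsch)."""
        P = P - P.mean(axis=0)                       # center both coordinate sets
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)            # SVD of the covariance matrix H = P^T Q
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against an improper rotation
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation mapping P onto Q
        return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

    # Sanity check: a rigidly rotated copy of random "Calpha" coordinates gives RMSD ~ 0
    rng = np.random.default_rng(1)
    P = rng.normal(size=(50, 3))
    theta = 0.7
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    print(round(kabsch_rmsd(P, P @ Rz.T), 6))
    ```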

  5. MEDSLIK-II, a Lagrangian marine oil spill model for short-term forecasting – Part 2: Numerical simulations and validations

    Directory of Open Access Journals (Sweden)

    M. De Dominicis

    2013-03-01

    In this paper we use MEDSLIK-II, a Lagrangian marine oil spill model described in Part 1 of this paper (De Dominicis et al., 2013), to simulate oil slick transport and transformation processes for realistic oceanic cases where satellite or drifting-buoy data are available for verification. The model is coupled with operational oceanographic currents, atmospheric analysis winds and remote-sensing data for initialization. The sensitivity of the oil spill simulations to several model parameterizations is analyzed and the results are validated using surface drifters and SAR (Synthetic Aperture Radar) images in different regions of the Mediterranean Sea. It is found that the forecast skill of Lagrangian trajectories largely depends on the accuracy of the Eulerian ocean currents: the operational models give useful estimates of the currents, but high frequency (hourly) and high spatial resolution are required, and the Stokes drift velocity often has to be added, especially in coastal areas. From a numerical point of view, it is found that a realistic oil concentration reconstruction is obtained using an oil tracer grid resolution of about 100 m, with at least 100 000 Lagrangian particles. Moreover, sensitivity experiments with respect to uncertain model parameters show that knowledge of the oil type and slick thickness are, among all the others, the key model parameters affecting the simulation results. Considering a maximum spatial error of the order of three times the horizontal resolution of the Eulerian ocean currents to be acceptable for the simulated trajectories, the predictability skill for particle trajectories is from 1 to 2.5 days depending on the specific current regime. This suggests that re-initialization of the simulations is required every day.
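
    The predictability criterion used above (trajectory error no larger than about three times the ocean-model grid spacing) reduces to a short calculation: track the separation between the observed drifter and the simulated particle and find when it first exceeds the threshold. The positions and the 6.5 km resolution below are invented for illustration.

    ```python
    import numpy as np

    t_hours = np.arange(72)                                    # hourly positions over three days
    obs = np.c_[0.5 * t_hours, 0.1 * t_hours]                  # observed drifter track [km]
    sim = np.c_[0.5 * t_hours + 0.20 * t_hours**1.2,           # simulated particle, growing error
                0.1 * t_hours - 0.15 * t_hours**1.2]

    separation = np.hypot(*(obs - sim).T)                      # distance between the tracks [km]
    threshold = 3 * 6.5                                        # 3 x assumed model resolution [km]

    exceeded = separation > threshold
    horizon = t_hours[np.argmax(exceeded)] if exceeded.any() else t_hours[-1]
    print(f"predictability horizon ~ {horizon} h (threshold {threshold:.1f} km)")
    ```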

  6. Validation Studies for the Diet History Questionnaire II

    Science.gov (United States)

    Data show that the DHQ I instrument provides reasonable nutrient estimates, and three studies were conducted to assess its validity/calibration. There have been no such validation studies with the DHQ II.

  7. Model validation, science and application

    NARCIS (Netherlands)

    Builtjes, P.J.H.; Flossmann, A.

    1998-01-01

    Over the last years there has been a growing interest in establishing a proper validation of atmospheric chemistry-transport (ATC) models. Model validation deals with the comparison of model results with experimental data, and in this way addresses both model uncertainty and uncertainty in, and adequac

  8. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
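
    Of the method families listed, the area metric is the most compact to illustrate: it is the area between the empirical CDF of the model predictions and that of the experimental observations. The sketch below uses synthetic samples and is a generic illustration, not the paper's implementation.

    ```python
    import numpy as np

    def area_metric(model_samples, data_samples):
        """Area between the empirical CDFs of model output and experimental data (smaller = closer)."""
        grid = np.sort(np.concatenate([model_samples, data_samples]))
        F_m = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
        F_d = np.searchsorted(np.sort(data_samples), grid, side="right") / len(data_samples)
        # Both CDFs are constant between consecutive grid points, so this sum is exact.
        return float(np.sum(np.abs(F_m - F_d)[:-1] * np.diff(grid)))

    rng = np.random.default_rng(0)
    model_out = rng.normal(10.0, 1.0, 500)      # many cheap model evaluations
    experiments = rng.normal(10.3, 1.2, 60)     # fewer, noisier experimental observations
    print(round(area_metric(model_out, experiments), 3))
    ```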

  9. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories, and the efforts to strengthen their reliability, presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  10. The influence of synoptic airflow on UK daily precipitation extremes. Part II: regional climate model and E-OBS data validation

    Energy Technology Data Exchange (ETDEWEB)

    Maraun, Douglas [Leibniz Institute of Marine Sciences (IFM-GEOMAR), Duesternbrooker Weg 20, 24105, Kiel (Germany); Osborn, Timothy J. [School of Environmental Sciences, Climatic Research Unit, Norwich (United Kingdom); Rust, Henning W. [Freie Universitaet Berlin, Institut fuer Meteorologie, Berlin (Germany)

    2012-07-15

    We investigate how well the variability of extreme daily precipitation events across the United Kingdom is represented in a set of regional climate models and the E-OBS gridded data set. Instead of simply evaluating the climatologies of extreme precipitation measures, we develop an approach to validate the representation of physical mechanisms controlling extreme precipitation variability. In part I of this study we applied a statistical model to investigate the influence of the synoptic scale atmospheric circulation on extreme precipitation using observational rain gauge data. More specifically, airflow strength, direction and vorticity are used as predictors for the parameters of the generalised extreme value (GEV) distribution of local precipitation extremes. Here we employ this statistical model for our validation study. In a first step, the statistical model is calibrated against a gridded precipitation data set provided by the UK Met Office. In a second step, the same statistical model is calibrated against 14 ERA40 driven 25 km resolution RCMs from the ENSEMBLES project and the E-OBS gridded data set. Validation indices describing relevant physical mechanisms are derived from the statistical models for observations and RCMs and are compared using pattern standard deviation, pattern correlation and centered pattern root mean squared error as validation measures. The results for the different RCMs and E-OBS are visualised using Taylor diagrams. We show that the RCMs adequately simulate moderately extreme precipitation and the influence of airflow strength and vorticity on precipitation extremes, but show deficits in representing the influence of airflow direction. Also very rare extremes are misrepresented, but this result is afflicted with a high uncertainty. E-OBS shows considerable biases, in particular in regions of sparse data. The proposed approach might be used to validate other physical relationships in regional as well as global climate models. (orig.)
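
    As a reference point for the statistical machinery, the snippet below fits a stationary GEV distribution to synthetic precipitation maxima with SciPy and derives a return level. In the study the GEV location and scale additionally depend on airflow strength, direction and vorticity; that covariate dependence is omitted here.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(42)
    # Synthetic annual maxima of daily precipitation [mm/day] (made-up "observations")
    annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=60, random_state=rng)

    c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)     # maximum-likelihood GEV fit
    rl_50 = genextreme.ppf(1.0 - 1.0 / 50.0, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"shape={c_hat:.2f}, loc={loc_hat:.1f}, scale={scale_hat:.1f}, 50-yr return level={rl_50:.1f}")
    ```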

  11. The Portuguese long version of the Copenhagen Psychosocial Questionnaire II (COPSOQ II) - a validation study.

    Science.gov (United States)

    Rosário, Susel; Azevedo, Luís F; Fonseca, João A; Nienhaus, Albert; Nübling, Matthias; da Costa, José Torres

    2017-01-01

    Psychosocial risks are now widely recognised as one of the biggest challenges for occupational safety and health (OSH) and a major public health concern. The aim of this paper is to investigate the Portuguese long version of the Copenhagen Psychosocial Questionnaire II (COPSOQ II), in order to analyse the psychometric properties of the instrument and to validate it. The Portuguese COPSOQ II was issued to a total of 745 Portuguese employees from both private and public organisations across several economic sectors at baseline and again 2 weeks later. Methodological quality appraisal was based on COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) recommendations. An analysis of the psychometric properties of the long version of COPSOQ II (internal consistency, intraclass correlation coefficient, floor and ceiling effects, response rate, missing values, mean and standard deviation, exploratory factor analysis) was performed to determine the validity and reliability of the instrument. The COPSOQ II had a response rate of 60.6% (test) and a follow-up response rate of 59.5% (retest). In general, the Cronbach's alpha of the COPSOQ scales (test and retest) was above the conventional threshold of 0.70. The test-retest reliability estimated by the intraclass correlation coefficient (ICC) showed high reliability for most of the scales, above the conventional 0.7, except for eight scales. The proportion of missing values was less than 1.3%, except for two scales. The average scores and standard deviations showed similar results to the original Danish study, except for eight scales. All of the scales had low floor and ceiling effects, with one exception. Overall, the exploratory factor analysis presented good results in 27 scales assuming a reflective measurement model. The hypothesized factor structure under a reflective model was not supported in 14 scales and for some but not all of these scales the explanation may be a formative
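
    Cronbach's alpha, the main internal-consistency statistic reported above, is easy to compute directly from an item-score matrix. The sketch below uses a made-up four-item scale rather than COPSOQ II responses.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

    # Toy scale: five respondents answering four items on a 1-5 scale (invented data)
    scores = np.array([[4, 5, 4, 4],
                       [2, 2, 3, 2],
                       [5, 4, 5, 5],
                       [3, 3, 2, 3],
                       [4, 4, 4, 5]])
    print(round(cronbach_alpha(scores), 2))   # ~0.9, above the conventional 0.70 threshold
    ```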

  12. Testing and validating environmental models

    Science.gov (United States)

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series
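
    One concrete way to follow the "explicit benchmark" recommendation is to report the same skill score for the candidate model and for a naive alternative. The sketch below compares Nash-Sutcliffe efficiencies of a hypothetical model and a persistence benchmark on an invented series.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([3.1, 2.8, 3.5, 4.2, 3.9, 3.0, 2.7, 3.3])     # observed series (made up)
    model = np.array([3.0, 2.9, 3.4, 4.0, 4.1, 3.1, 2.9, 3.2])   # candidate model output
    persistence = np.r_[obs[0], obs[:-1]]                        # benchmark: previous observation

    print("model NSE:", round(nash_sutcliffe(obs, model), 2),
          "| persistence benchmark NSE:", round(nash_sutcliffe(obs, persistence), 2))
    ```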

  13. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models.

    Science.gov (United States)

    Shi, Leming; Campbell, Gregory; Jones, Wendell D; Campagne, Fabien; Wen, Zhining; Walker, Stephen J; Su, Zhenqiang; Chu, Tzu-Ming; Goodsaid, Federico M; Pusztai, Lajos; Shaughnessy, John D; Oberthuer, André; Thomas, Russell S; Paules, Richard S; Fielden, Mark; Barlogie, Bart; Chen, Weijie; Du, Pan; Fischer, Matthias; Furlanello, Cesare; Gallas, Brandon D; Ge, Xijin; Megherbi, Dalila B; Symmans, W Fraser; Wang, May D; Zhang, John; Bitter, Hans; Brors, Benedikt; Bushel, Pierre R; Bylesjo, Max; Chen, Minjun; Cheng, Jie; Cheng, Jing; Chou, Jeff; Davison, Timothy S; Delorenzi, Mauro; Deng, Youping; Devanarayan, Viswanath; Dix, David J; Dopazo, Joaquin; Dorff, Kevin C; Elloumi, Fathi; Fan, Jianqing; Fan, Shicai; Fan, Xiaohui; Fang, Hong; Gonzaludo, Nina; Hess, Kenneth R; Hong, Huixiao; Huan, Jun; Irizarry, Rafael A; Judson, Richard; Juraeva, Dilafruz; Lababidi, Samir; Lambert, Christophe G; Li, Li; Li, Yanen; Li, Zhen; Lin, Simon M; Liu, Guozhen; Lobenhofer, Edward K; Luo, Jun; Luo, Wen; McCall, Matthew N; Nikolsky, Yuri; Pennello, Gene A; Perkins, Roger G; Philip, Reena; Popovici, Vlad; Price, Nathan D; Qian, Feng; Scherer, Andreas; Shi, Tieliu; Shi, Weiwei; Sung, Jaeyun; Thierry-Mieg, Danielle; Thierry-Mieg, Jean; Thodima, Venkata; Trygg, Johan; Vishnuvajjala, Lakshmi; Wang, Sue Jane; Wu, Jianping; Wu, Yichao; Xie, Qian; Yousef, Waleed A; Zhang, Liang; Zhang, Xuegong; Zhong, Sheng; Zhou, Yiming; Zhu, Sheng; Arasappan, Dhivya; Bao, Wenjun; Lucas, Anne Bergstrom; Berthold, Frank; Brennan, Richard J; Buness, Andreas; Catalano, Jennifer G; Chang, Chang; Chen, Rong; Cheng, Yiyu; Cui, Jian; Czika, Wendy; Demichelis, Francesca; Deng, Xutao; Dosymbekov, Damir; Eils, Roland; Feng, Yang; Fostel, Jennifer; Fulmer-Smentek, Stephanie; Fuscoe, James C; Gatto, Laurent; Ge, Weigong; Goldstein, Darlene R; Guo, Li; Halbert, Donald N; Han, Jing; Harris, Stephen C; Hatzis, Christos; Herman, Damir; Huang, Jianping; Jensen, Roderick V; Jiang, Rui; Johnson, Charles D; Jurman, Giuseppe; Kahlert, Yvonne; Khuder, Sadik A; Kohl, Matthias; Li, Jianying; Li, Li; Li, Menglong; Li, Quan-Zhen; Li, Shao; Li, Zhiguang; Liu, Jie; Liu, Ying; Liu, Zhichao; Meng, Lu; Madera, Manuel; Martinez-Murillo, Francisco; Medina, Ignacio; Meehan, Joseph; Miclaus, Kelci; Moffitt, Richard A; Montaner, David; Mukherjee, Piali; Mulligan, George J; Neville, Padraic; Nikolskaya, Tatiana; Ning, Baitang; Page, Grier P; Parker, Joel; Parry, R Mitchell; Peng, Xuejun; Peterson, Ron L; Phan, John H; Quanz, Brian; Ren, Yi; Riccadonna, Samantha; Roter, Alan H; Samuelson, Frank W; Schumacher, Martin M; Shambaugh, Joseph D; Shi, Qiang; Shippy, Richard; Si, Shengzhu; Smalter, Aaron; Sotiriou, Christos; Soukup, Mat; Staedtler, Frank; Steiner, Guido; Stokes, Todd H; Sun, Qinglan; Tan, Pei-Yi; Tang, Rong; Tezak, Zivana; Thorn, Brett; Tsyganova, Marina; Turpaz, Yaron; Vega, Silvia C; Visintainer, Roberto; von Frese, Juergen; Wang, Charles; Wang, Eric; Wang, Junwei; Wang, Wei; Westermann, Frank; Willey, James C; Woods, Matthew; Wu, Shujian; Xiao, Nianqing; Xu, Joshua; Xu, Lei; Yang, Lun; Zeng, Xiao; Zhang, Jialu; Zhang, Li; Zhang, Min; Zhao, Chen; Puri, Raj K; Scherf, Uwe; Tong, Weida; Wolfinger, Russell D

    2010-08-01

    Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
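
    The workflow MAQC-II evaluated (build a classifier with internal cross-validation, then confirm it on data never used for training) can be mimicked in a few lines with scikit-learn. Everything below (synthetic data, classifier and metric choices) is an illustrative stand-in, not one of the consortium's actual pipelines.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import matthews_corrcoef
    from sklearn.model_selection import cross_val_score, train_test_split

    # Synthetic "expression matrix": 200 samples x 500 features, binary endpoint
    X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)

    # Hold out an external validation set first, mimicking MAQC-II's untouched test data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    clf = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)
    cv_accuracy = cross_val_score(clf, X_tr, y_tr, cv=5).mean()              # internal estimate
    mcc = matthews_corrcoef(y_te, clf.fit(X_tr, y_tr).predict(X_te))         # external check
    print(f"5-fold CV accuracy = {cv_accuracy:.2f}, external-set MCC = {mcc:.2f}")
    ```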

  14. A dynamic model of some malaria-transmitting anopheline mosquitoes of the Afrotropical region. II. Validation of species distribution and seasonal variations.

    Science.gov (United States)

    Lunde, Torleif M; Balkew, Meshesha; Korecha, Diriba; Gebre-Michael, Teshome; Massebo, Fekadu; Sorteberg, Asgeir; Lindtjørn, Bernt

    2013-02-25

    The first part of this study aimed to develop a model for Anopheles gambiae s.l. with separate parametrization schemes for Anopheles gambiae s.s. and Anopheles arabiensis. The characterizations were constructed based on literature from the past decades. This part of the study focuses on the model's ability to separate the mean state of the two species of the An. gambiae complex in Africa. The model is also evaluated with respect to capturing the temporal variability of An. arabiensis in Ethiopia. Before conclusions and guidance based on models can be made, models need to be validated. The model used in this paper is described in part one (Malaria Journal 2013, 12:28). For the validation of the model, a database of 5,935 points on the presence of An. gambiae s.s. and An. arabiensis was constructed. An additional 992 points were collected on the presence of An. gambiae s.l. These data were used to assess if the model could recreate the spatial distribution of the two species. The dataset is made available in the public domain. This is followed by a case study from Madagascar where the model's ability to recreate the relative fraction of each species is investigated. In the last section the model's ability to reproduce the temporal variability of An. arabiensis in Ethiopia is tested. The model was compared with data from four papers, and one field survey covering two years. Overall, the model has a realistic representation of seasonal and year-to-year variability in mosquito densities in Ethiopia. The model is also able to describe the distribution of An. gambiae s.s. and An. arabiensis in sub-Saharan Africa. This implies that the model can be used for seasonal and long-term predictions of changes in the burden of malaria. Before models can be used to improve human health, or to guide which interventions are to be applied where, there is a need to understand the system of interest. Validation is an important part of this process. It is also found that one of the main

  15. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  16. Validating Dart Model

    Directory of Open Access Journals (Sweden)

    Mazur Jolanta

    2014-12-01

    The primary objective of the study was to quantitatively test the DART model, which, despite being one of the most popular representations of the co-creation concept, has so far been studied almost solely with qualitative methods. To this end, the researchers developed a multiple measurement scale and employed it in interviewing managers. The statistical evidence for the adequacy of the model was obtained through CFA with AMOS software. The findings suggest that the DART model may not be an accurate representation of co-creation practices in companies. From the data analysis it was evident that the building blocks of DART had too much conceptual overlap to be an effective framework for quantitative analysis. It was also implied that the phenomenon of co-creation is so rich and multifaceted that it may be more adequately captured by a measurement model in which co-creation is conceived as a third-level factor with two layers of intermediate latent variables.

  17. The five-factor model of the Positive and Negative Syndrome Scale - II : A ten-fold cross-validation of a revised model

    NARCIS (Netherlands)

    van der Gaag, Mark; Hoffman, Tonko; Remijsen, Mila; Hijman, Ron; de Haan, Lieuwe; van Meijel, Berno; van Harten, Peter N.; Valmaggia, Lucia; de Hert, Marc; Cuijpers, Anke; Wiersma, Durk

    Objective: The lack of fit of 25 previously published five-factor models for the PANSS items can be due to the statistics used. The purpose of this study was to use a 'new' statistical method to develop and confirm an improved five-factor model. The improved model is both complex and stable.

  18. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.

  19. A practical approach to validating a PD model

    NARCIS (Netherlands)

    Medema, Lydian; Koning, Ruud H.; Lensink, Robert; Medema, M.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their credi

  20. A Practical Approach to Validating a PD Model

    NARCIS (Netherlands)

    Medema, L.; Koning, de R.; Lensink, B.W.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their credi

  1. Validation of the Curiosity and Exploration Inventory-II (CEI-II) Among Chinese University Students in Hong Kong.

    Science.gov (United States)

    Ye, Shengquan; Ng, Ting Kin; Yim, Kin Hang; Wang, Jun

    2015-01-01

    This study aimed at validating the Curiosity and Exploration Inventory-II (CEI-II; Kashdan et al., 2009) in a Chinese context. A total of 294 Chinese first-year undergraduate students in Hong Kong completed the CEI-II and measures of satisfaction with university life, the Big Five personality traits, and human values. The results of exploratory structural equation modeling, parallel analysis, and confirmatory factor analysis supported a 1-factor solution and did not replicate the original 2-factor structure. Time invariance of the 1-factor structure was obtained among 242 participants who completed the questionnaires again after 4 months. The latent means and correlation indicated that curiosity as measured by the CEI-II was quite stable over the period of investigation. The CEI-II was found to be positively correlated with satisfaction with university life, extraversion, agreeableness, conscientiousness, openness to experience, and openness to change values, but negatively with neuroticism and conservation values. The results of hierarchical multiple regression analyses showed that the CEI-II score had incremental validity above and beyond the Big Five personality traits in predicting human values and satisfaction with university life.

  2. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, which are investigated by direct infrared imaging, showing that even at low-current operation such gradients are present in fuel cell operation and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allows for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  3. A statistical-dynamical scheme for reconstructing ocean forcing in the Atlantic. Part II: methodology, validation and application to high-resolution ocean models

    Energy Technology Data Exchange (ETDEWEB)

    Minvielle, Marie; Cassou, Christophe; Terray, Laurent; Najac, Julien [CERFACS/CNRS, Climate Modelling and Global Change Team, Toulouse (France); Bourdalle-Badie, Romain [CERFACS/CNRS, Climate Modelling and Global Change Team, Toulouse (France); MERCATOR Parc Technologique du Canal, Ramonville St Agne (France)

    2011-02-15

    A novel statistical-dynamical scheme has been developed to reconstruct the sea surface atmospheric variables necessary to force an ocean model. Multiple linear regressions are first built over a so-called learning period and over the entire Atlantic basin from the observed relationship between the surface wind conditions, or predictands, and the anomalous large-scale atmospheric circulations, or predictors. The latter are estimated in the extratropics by 500 hPa geopotential height weather regimes and in the tropics by low-level wind classes. The transfer function, further combined with an analog step, is then used to reconstruct all the surface variables fields over 1958-2002. We show that the proposed hybrid scheme is very skillful in reproducing the mean state, the seasonal cycle and the temporal evolution of all the surface ocean variables at interannual timescale. Deficiencies are found in the level of variance, especially in the tropics. It is underestimated for 2-m temperature and humidity as well as for surface radiative fluxes in the interannual frequency band, while it is slightly overestimated at higher frequency. Decomposition into empirical orthogonal functions (EOF) shows that the spatial and temporal coherence of the forcing fields is however very well captured by the reconstruction method. For dynamical downscaling purposes, reconstructed fields are then interpolated and used to carry out a high-resolution oceanic simulation using the NATL4 (1/4°) model integrated over 1979-2001. This simulation is compared to a reference experiment where the original observed forcing fields are prescribed instead. Mean states between the two experiments are virtually undistinguishable both in terms of surface fluxes and ocean dynamics estimated by the barotropic and the meridional overturning streamfunctions. The 3-dimensional variance of the simulated ocean is well preserved at interannual timescale both for temperature and salinity except in the tropics where it is
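
    As a rough orientation to the regression step described above, the sketch below fits a linear transfer function between synthetic large-scale predictors (standing in for the weather-regime and wind-class indices) and a local surface variable over a learning period, then reconstructs the variable outside that period. Variable names, dimensions and data are illustrative assumptions; the paper's analog step and actual predictor set are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    n_days, n_regimes = 1000, 4
    regime_index = rng.normal(size=(n_days, n_regimes))   # stand-in for large-scale predictors
    true_coeffs = np.array([2.0, -1.0, 0.5, 0.0])
    surface_wind = regime_index @ true_coeffs + rng.normal(scale=0.3, size=n_days)

    # Fit the transfer function on a learning period, then reconstruct the rest of the record.
    learn = slice(0, 700)
    coeffs, *_ = np.linalg.lstsq(regime_index[learn], surface_wind[learn], rcond=None)
    reconstructed = regime_index[700:] @ coeffs
    rmse = float(np.sqrt(np.mean((reconstructed - surface_wind[700:]) ** 2)))
    print("reconstruction RMSE:", round(rmse, 3))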

  4. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results. Part 2; Some Validation and Interpretation of the Assimilated Results

    Science.gov (United States)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations relative to the November 1986-November 1988 mean reference, in a nonlinear reduced-gravity model of the Indian Ocean. Data have been assimilated during one year starting in November 1986 with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal, which can be derived from expendable bathythermograph measurements performed in the Indian Ocean at that time. The topography simulated with assimilation on November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated to the one the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation is rather correcting in the interior of the model domain for inadequate boundary conditions with the Pacific.

  5. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Part II. Distribution Circuit Modeling and Validation

    Science.gov (United States)

    Khoshkbar Sadigh, Arash

    and the power rating for loads, is presented to prioritize which loads, lines and cables the meters should be installed at to have the most effect on model validation.

  6. Validation for a recirculation model.

    Science.gov (United States)

    LaPuma, P T

    2001-04-01

    Recent Clean Air Act regulations designed to reduce volatile organic compound (VOC) emissions have placed new restrictions on painting operations. Treating large volumes of air which contain dilute quantities of VOCs can be expensive. Recirculating some fraction of the air allows an operator to comply with environmental regulations at reduced cost. However, there is a potential impact on employee safety because indoor pollutants will inevitably increase when air is recirculated. A computer model was developed, written in Microsoft Excel 97, to predict compliance costs and indoor air concentration changes with respect to changes in the level of recirculation for a given facility. The model predicts indoor air concentrations based on product usage and mass balance equations. This article validates the recirculation model using data collected from a C-130 aircraft painting facility at Hill Air Force Base, Utah. Air sampling data and air control cost quotes from vendors were collected for the Hill AFB painting facility and compared to the model's predictions. The model's predictions for strontium chromate and isocyanate air concentrations were generally between the maximum and minimum air sampling points with a tendency to predict near the maximum sampling points. The model's capital cost predictions for a thermal VOC control device ranged from a 14 percent underestimate to a 50 percent overestimate of the average cost quotes. A sensitivity analysis of the variables is also included. The model is demonstrated to be a good evaluation tool in understanding the impact of recirculation.
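
    For orientation only, the sketch below shows the kind of steady-state, well-mixed mass balance such a recirculation model rests on: the predicted indoor concentration rises as the recirculated fraction grows and the control-device efficiency falls. The function, parameter names and numbers are assumptions for illustration, not the Excel model's actual equations.

    # Steady state: generation = removal by exhaust + removal by the control device
    # treating the recirculated stream (single well-mixed zone).
    def steady_state_concentration(g_mg_min, q_m3_min, recirc_fraction, control_efficiency):
        """Indoor concentration (mg/m3).

        g_mg_min           : contaminant generation rate from product usage (mg/min)
        q_m3_min           : total supply airflow (m3/min)
        recirc_fraction    : fraction of supply air that is recirculated (0-1)
        control_efficiency : removal efficiency of the device on recirculated air (0-1)
        """
        effective_flow = q_m3_min * ((1.0 - recirc_fraction)
                                     + recirc_fraction * control_efficiency)
        return g_mg_min / effective_flow

    # More recirculation with an imperfect control device raises indoor levels.
    for r in (0.0, 0.5, 0.8):
        c = steady_state_concentration(50.0, 1000.0, r, 0.95)
        print(f"recirculation fraction {r:.1f}: {c:.4f} mg/m3")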

  7. Validation of Magnetospheric Magnetohydrodynamic Models

    Science.gov (United States)

    Curtis, Brian

    Magnetospheric magnetohydrodynamic (MHD) models are commonly used for both prediction and modeling of Earth's magnetosphere. To date, very little validation has been performed to determine their limits, uncertainties, and differences. In this work, we applied several validation techniques commonly used in the atmospheric sciences to MHD-based models of Earth's magnetosphere for the first time. The validation techniques of parameter variability/sensitivity analysis and comparison to other models were used on the OpenGGCM, BATS-R-US, and SWMF magnetospheric MHD models to answer several questions about how these models compare. The questions include: (1) the difference between the models' predictions prior to and following a reversal of Bz in the upstream interplanetary magnetic field (IMF) from positive to negative, (2) the influence of the preconditioning duration, and (3) the differences between models under extreme solar wind conditions. A differencing visualization tool was developed and used to address these three questions. We find: (1) For a reversal in IMF Bz from positive to negative, the OpenGGCM magnetopause is closest to Earth as it has the weakest magnetic pressure near Earth. The differences in magnetopause positions between BATS-R-US and SWMF are explained by the influence of the ring current, which is included in SWMF. Densities are highest for SWMF and lowest for OpenGGCM. The OpenGGCM tail currents differ significantly from BATS-R-US and SWMF; (2) A longer preconditioning time allowed the magnetosphere to relax more, giving different positions for the magnetopause with all three models before the IMF Bz reversal. There were differences greater than 100% for all three models before the IMF Bz reversal. The differences in the current sheet region for the OpenGGCM were small after the IMF Bz reversal. The BATS-R-US and SWMF differences decreased after the IMF Bz reversal to near zero; (3) For extreme conditions in the solar

  8. Software Validation via Model Animation

    Science.gov (United States)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
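
    The comparison step can be pictured with the hedged harness below: the formal model (evaluated with PVSio in the paper) and the software implementation are run on the same randomly generated inputs, and any disagreement beyond a tolerance is flagged. The functions `spec` and `code` are hypothetical toy stand-ins, not the actual PVS models or flight software.

    import random

    def disagreements(formal_model, implementation, test_cases, tol=1e-9):
        """Return the inputs where the two evaluations differ by more than tol."""
        failures = []
        for x in test_cases:
            expected = formal_model(x)
            actual = implementation(x)
            if abs(expected - actual) > tol:
                failures.append((x, expected, actual))
        return failures

    # Toy stand-ins for a simple kinematic position computation.
    spec = lambda t: 100.0 + 5.0 * t                    # formal model, "exact" arithmetic
    code = lambda t: (100.0 + 5.0 * t) * (1 + 1e-12)    # implementation with rounding error
    cases = [random.uniform(0.0, 3600.0) for _ in range(1000)]
    print(len(disagreements(spec, code, cases, tol=1e-6)), "cases beyond tolerance")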

  9. Ensuring the Validity of the Micro Foundation in DSGE Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    The presence of i) stochastic trends, ii) deterministic trends, and/or iii) stochastic volatility in DSGE models may imply that the agents' objective functions attain infinite values. We say that such models do not have a valid micro foundation. The paper derives sufficient conditions which...... ensure that the objective functions of the households and the firms are finite even when various trends and stochastic volatility are included in a standard DSGE model. Based on these conditions we test the validity of the micro foundation in six DSGE models from the literature. The models of Justiniano...... in these models remains to be established....

  10. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of- knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  11. Obstructive lung disease models: what is valid?

    Science.gov (United States)

    Ferdinands, Jill M; Mannino, David M

    2008-12-01

    Use of disease simulation models has led to scrutiny of model methods and demand for evidence that models credibly simulate health outcomes. We sought to describe recent obstructive lung disease simulation models and their validation. Medline and EMBASE were used to identify obstructive lung disease simulation models published from January 2000 to June 2006. Publications were reviewed to assess model attributes and four types of validation: first-order (verification/debugging), second-order (comparison with studies used in model development), third-order (comparison with studies not used in model development), and predictive validity. Six asthma and seven chronic obstructive pulmonary disease models were identified. Seven (54%) models included second-order validation, typically by comparing observed outcomes to simulations of source study cohorts. Seven (54%) models included third-order validation, in which modeled outcomes were usually compared qualitatively for agreement with studies independent of the model. Validation endpoints included disease prevalence, exacerbation, and all-cause mortality. Validation was typically described as acceptable, despite near-universal absence of criteria for judging adequacy of validation. Although over half of recent obstructive lung disease simulation models report validation, inconsistencies in validation methods and lack of detailed reporting make assessing adequacy of validation difficult. For simulation modeling to be accepted as a tool for evaluating clinical and public health programs, models must be validated to credibly simulate health outcomes of interest. Defining the required level of validation and providing guidance for quantitative assessment and reporting of validation are important future steps in promoting simulation models as practical decision tools.

  12. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  13. Factorial validity and measurement invariance across intelligence levels and gender of the overexcitabilities questionnaire-II (OEQ-II).

    Science.gov (United States)

    Van den Broeck, Wim; Hofmans, Joeri; Cooremans, Sven; Staels, Eva

    2014-03-01

    The concept of overexcitability, derived from Dabrowski's theory of personality development, offers a promising approach for the study of the developmental dynamics of giftedness. The present study aimed at (a) examining the factorial structure of the Overexcitabilities Questionnaire-II scores (OEQ-II) and (b) testing measurement invariance of these scores across intelligence and gender. A sample of 641 Dutch-speaking adolescents from 11 to 15 years old, 363 girls and 278 boys, participated in this study. Results showed that a model without cross-loadings did not fit the data well (using confirmatory factor analysis), whereas a factor model in which all cross-loadings were included yielded fit statistics that were in support of the factorial structure of the OEQ-II scores (using exploratory structural equation modeling). Furthermore, our findings supported the assumption of (partial) strict measurement invariance of the OEQ-II scores across intelligence levels and across gender. Such levels of measurement invariance allow valid comparisons between factor means and factor relationships across groups. In particular, the gifted group scored significantly higher on intellectual and sensual overexcitability (OE) than the nongifted group, girls scored higher on emotional and sensual OE than boys, and boys scored higher on intellectual and psychomotor OE than girls.

  14. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    Science.gov (United States)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  15. Ensuring the Validity of the Micro Foundation in DSGE Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    The presence of i) stochastic trends, ii) deterministic trends, and/or iii) stochastic volatility in DSGE models may imply that the agents' objective functions attain infinite values. We say that such models do not have a valid micro foundation. The paper derives sufficient conditions which...... ensure that the objective functions of the households and the firms are finite even when various trends and stochastic volatility are included in a standard DSGE model. Based on these conditions we test the validity of the micro foundation in six DSGE models from the literature. The models of Justiniano...... & Primiceri (American Economic Review, forthcoming) and Fernández-Villaverde & Rubio-Ramírez (Review of Economic Studies, 2007) do not satisfy these sufficient conditions, or any other known set of conditions ensuring finite values for the objective functions. Thus, the validity of the micro foundation...
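
    A familiar textbook special case illustrates the kind of condition involved (this example is not the paper's general result): with CRRA utility and consumption growing deterministically at gross rate \(\gamma\), so that \(C_t = C_0 \gamma^t\), lifetime utility becomes

    \[
      \sum_{t=0}^{\infty} \beta^{t} \frac{C_t^{1-\sigma}}{1-\sigma}
        = \frac{C_0^{1-\sigma}}{1-\sigma} \sum_{t=0}^{\infty} \left(\beta\,\gamma^{1-\sigma}\right)^{t},
    \]

    which is finite if and only if \(\beta\,\gamma^{1-\sigma} < 1\); conditions of this type restrict the discount factor relative to trend growth and risk aversion.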

  16. On validation of multibody musculoskeletal models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; de Zee, Mark; Andersen, Michael Skipper;

    2012-01-01

    This paper reviews the opportunities to validate multibody musculoskeletal models in view of the current transition of musculoskeletal modelling from a research topic to a practical simulation tool in product design, healthcare and other important applications. This transition creates a new need...... for improvement of the validation of multibody musculoskeletal models are pointed out and directions for future research in the field are proposed. It is our hope that a more structured approach to model validation can help to improve the credibility of musculoskeletal models....

  17. Optimal Data Split Methodology for Model Validation

    CERN Document Server

    Morrison, Rebecca; Terejanu, Gabriel; Miki, Kenji; Prudhomme, Serge

    2011-01-01

    The decision to incorporate cross-validation into validation processes of mathematical models raises an immediate question - how should one partition the data into calibration and validation sets? We answer this question systematically: we present an algorithm to find the optimal partition of the data subject to certain constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to predictions of a given quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This framework also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and the computational scientists, who strive to determine if the model satisfies both the modeler's and decision maker's requirements. We also note...
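
    As a toy illustration of the partitioning question (not the constrained-optimization algorithm of the paper), the sketch below enumerates candidate splits of a small one-dimensional data set and keeps the one whose validation points lie farthest from the calibration points, i.e. the split that most challenges the calibrated model under this simple surrogate criterion.

    from itertools import combinations

    def best_split(x_values, n_validation):
        """Brute-force search over validation subsets of size n_validation."""
        best, best_score = None, -1.0
        indices = range(len(x_values))
        for val_idx in combinations(indices, n_validation):
            cal_idx = [i for i in indices if i not in val_idx]
            # Score: mean distance of each validation point to its nearest calibration point.
            score = sum(min(abs(x_values[v] - x_values[c]) for c in cal_idx)
                        for v in val_idx) / n_validation
            if score > best_score:
                best_score, best = score, (cal_idx, list(val_idx))
        return best, best_score

    data = [0.10, 0.20, 0.25, 0.90, 1.50, 1.60, 2.40]
    (calibration, validation), score = best_split(data, n_validation=2)
    print("calibration:", calibration, "validation:", validation, "score:", round(score, 3))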

  18. Validation of systems biology models

    NARCIS (Netherlands)

    Hasdemir, D.

    2015-01-01

    The paradigm shift from qualitative to quantitative analysis of biological systems brought a substantial number of modeling approaches to the stage of molecular biology research. These include but certainly are not limited to nonlinear kinetic models, static network models and models obtained by the

  19. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  20. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  1. Validation of the THIRST steam generator thermalhydraulic code against the CLOTAIRE phase II experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Pietralik, J.M.; Campagna, A.O.; Frisina, V.C

    1999-04-01

    Steam generator thermalhydraulic codes are frequently used to calculate both global and local parameters inside a steam generator. The global parameters include heat transfer output, recirculation ratio, outlet temperatures, and pressure drops for operating and abnormal conditions. The local parameters are used in further analyses of flow-induced vibration, fretting wear, sludge deposition, and flow-accelerated corrosion. For these purposes, detailed, 3-dimensional 2-phase flow and heat transfer parameters are needed. To make the predictions more accurate and reliable, the codes need to be validated in geometries representative of real conditions. One such study is an international co-operative experimental program called CLOTAIRE, which is based in France. The CANDU Owners Group (COG) participated in the first two phases of the program. The results of the validation of Phase 1 were presented at the 1994 Steam Generator and Heat Exchanger Conference, and the results of the validation of Phase II are the subject of this report. THIRST is a thermalhydraulic, finite-volume code used to predict flow and heat transfer in steam generators. The local results of CLOTAIRE Phase II were used to validate the code. The results consist of the measurements of void fraction and axial gas-phase velocity in the U-bend region. The measurements were done using bi-optical probes. A comparison of global results indicates that the THIRST predictions, with the Chisholm void fraction model, are within 2% to 3% of the experimental results. Using THIRST with the homogeneous void fraction model, the global results were less accurate but still gave very good predictions; the greatest error was 10% for the separator pressure drop. Comparisons of the local predictions for void fraction and axial gas-phase velocity show good agreement. The Chisholm void fraction model generally gives better agreement with the experimental data, whereas the homogeneous model tends to overpredict the void fraction

  2. Validation of the THIRST steam generator thermalhydraulic code against the CLOTAIRE phase II experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Pietralik, J.M.; Campagna, A.O.; Frisina, V.C. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    1998-07-01

    Steam generator thermalhydraulic codes are used frequently to calculate both global and local parameters inside the steam generator. The former include heat transfer output, recirculation ratio, outlet temperatures, and pressure drops for operating and abnormal conditions. The latter are used in further analyses of flow-induced vibration, fretting wear, sludge deposition, and flow accelerated corrosion. For these purposes, detailed, three-dimensional two-phase flow and heat transfer parameters are needed. To make the predictions more accurate and reliable, the codes need to be validated in geometries representative of real conditions. One such study is an international cooperative experimental program called CLOTAIRE based in France. COG participated in the first two phases of the program; the results of the validation of Phase 1 were presented at the 1994 Steam Generator and Heat Exchanger Conference, and the results of the validation of Phase II are the subject of this paper. THIRST is a thermalhydraulic, finite volume code to predict the flow and heat transfer in steam generators. The local results of CLOTAIRE Phase II have been used to validate the code. These consist of the measurements of void fraction and axial gas-phase velocity in the U-bend region. The measurements were done using bi-optical probes. A comparison of global results indicates that the THIRST predictions, with the Chisholm void fraction model, are within 2 to 3% of the experimental results. Using THIRST with the homogeneous void fraction model, the global results were less accurate but still well predicted, with the greatest error being 10% for the separator pressure drop. Comparisons of the local predictions for void fraction and axial gas-phase velocity show good agreement. The Chisholm void fraction model generally gives better agreement with the experimental data while the homogeneous model tends to overpredict the void fraction and underpredict the gas velocity. (author)
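
    The two void-fraction closures compared in these validations can be written down for orientation: the textbook homogeneous model (slip ratio S = 1) and one common form of the Chisholm slip-ratio correlation. The expressions and the approximate steam-water densities below are illustrative and are not taken from the THIRST source.

    def void_fraction_homogeneous(x, rho_l, rho_g):
        # Homogeneous flow: both phases travel at the same velocity (S = 1).
        return 1.0 / (1.0 + ((1.0 - x) / x) * (rho_g / rho_l))

    def void_fraction_chisholm(x, rho_l, rho_g):
        # Chisholm slip ratio: S = sqrt(1 - x * (1 - rho_l / rho_g)), S >= 1.
        s = (1.0 - x * (1.0 - rho_l / rho_g)) ** 0.5
        return 1.0 / (1.0 + s * ((1.0 - x) / x) * (rho_g / rho_l))

    rho_l, rho_g = 777.0, 25.4   # approximate saturated water/steam densities near 5 MPa, kg/m3
    for quality in (0.05, 0.10, 0.20):
        print(quality,
              round(void_fraction_homogeneous(quality, rho_l, rho_g), 3),
              round(void_fraction_chisholm(quality, rho_l, rho_g), 3))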

  3. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  4. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    D J Ewins

    2000-06-01

    In this paper, a review is presented of the various methods which are available for the purpose of performing a systematic comparison and correlation between two sets of vibration data. In the present case, the application of interest is in conducting this correlation process as a prelude to model correlation or updating activity.

  5. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    Energy Technology Data Exchange (ETDEWEB)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
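
    To make the discretization idea concrete, the sketch below assembles and solves a one-dimensional steady conduction problem with a uniform volumetric heat source by a simple finite-difference scheme of the same general kind; the geometry, properties and boundary values are invented for illustration and are unrelated to any HYDRA-II input or algorithm.

    import numpy as np

    n = 21                          # nodes across a 0.1 m slab
    dx = 0.1 / (n - 1)
    k = 0.5                         # assumed thermal conductivity, W/(m K)
    q = 2000.0                      # assumed volumetric heat source, W/m3
    t_left, t_right = 320.0, 300.0  # fixed boundary temperatures, K

    # Discretize k * d2T/dx2 + q = 0:  T[i-1] - 2*T[i] + T[i+1] = -q*dx^2/k
    a = np.zeros((n, n))
    b = np.zeros(n)
    a[0, 0] = a[-1, -1] = 1.0
    b[0], b[-1] = t_left, t_right
    for i in range(1, n - 1):
        a[i, i - 1], a[i, i], a[i, i + 1] = 1.0, -2.0, 1.0
        b[i] = -q * dx * dx / k
    temperature = np.linalg.solve(a, b)
    print("peak temperature:", round(float(temperature.max()), 2), "K")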

  6. Validation of the Hot Strip Mill Model

    Energy Technology Data Exchange (ETDEWEB)

    Richard Shulkosky; David Rosberg; Jerrud Chapman

    2005-03-30

    The Hot Strip Mill Model (HSMM) is an off-line, PC based software originally developed by the University of British Columbia (UBC) and the National Institute of Standards and Technology (NIST) under the AISI/DOE Advanced Process Control Program. The HSMM was developed to predict the temperatures, deformations, microstructure evolution and mechanical properties of steel strip or plate rolled in a hot mill. INTEG process group inc. undertook the current task of enhancing and validating the technology. With the support of 5 North American steel producers, INTEG process group tested and validated the model using actual operating data from the steel plants and enhanced the model to improve prediction results.

  7. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, L.F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  8. Evaluating the Validity and Reliability of PDQ-II and Comparison with DDST-II for Two Step Developmental Screening

    Directory of Open Access Journals (Sweden)

    Anooshirvan Kazemnejad

    2011-09-01

    Full Text Available Objective: This research was designed to identify the validity and reliability of the Prescreening Developmental Questionnaire 2 (PDQ-II) in Tehran in comparison with the Denver Developmental Screening Test-II (DDST-II). Methods: After translation and back translation, the final Persian version of the test was verified by three pediatricians and also by reviewing relevant literature for content validity. The test was performed on 237 children ranging from 0 to 6 years old, recruited by convenient sampling, from four health care clinics in Tehran city. They were also evaluated by DDST-II simultaneously. Interrater methods and Cronbach's α were used to determine the reliability of the test. The Kappa agreement coefficient between PDQ and DDST-II was determined. The data was analyzed by SPSS software. Findings: All of the questions in PDQ had satisfactory content validity. The total Cronbach's α coefficients of the 0-9 months, 9-24 months, 2-4 years and 4-6 years questionnaires were 0.951, 0.926, 0.950 and 0.876, respectively. The Kappa measure of agreement for interrater tests was 0.89. The estimated agreement coefficient between PDQ and DDST-II was 0.383. Based on two different categorizing possibilities for questionable scores, that is, "Delayed" or "Normal", the sensitivity and specificity of PDQ were determined to be 35.7-63% and 75.8-92.2%, respectively. Conclusion: PDQ has good content validity and reliability and moderate sensitivity and specificity in comparison with the DDST-II, but considering their relatively weak agreement coefficient, using it along with DDST-II for a two-stage developmental screening process remains doubtful.
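
    The agreement and screening statistics quoted above can, in principle, be reproduced from a 2x2 table of PDQ versus DDST-II classifications; the sketch below shows the computations on invented counts, which are not the study data.

    def cohen_kappa(a, b, c, d):
        """a = both delayed, b = PDQ delayed only, c = DDST-II delayed only, d = both normal."""
        n = a + b + c + d
        p_observed = (a + d) / n
        p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)
        return (p_observed - p_expected) / (1.0 - p_expected)

    def sensitivity_specificity(tp, fn, tn, fp):
        """PDQ screening result scored against DDST-II as the reference."""
        return tp / (tp + fn), tn / (tn + fp)

    tp, fn, tn, fp = 15, 20, 180, 15      # hypothetical counts
    print("kappa:", round(cohen_kappa(tp, fp, fn, tn), 3))
    print("sensitivity, specificity:", sensitivity_specificity(tp, fn, tn, fp))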

  9. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen...... Metro became operational in autumn 2002. We observed that forecasts from the demand sub-models agree well with the data from the 2000 national travel survey, with the mode choice forecasts in particular being a good match with the observed modal split. The results of the 2000 car assignment model...... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  10. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  11. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we present a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
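
    Typical quantitative performance measures for a fitted logistic model on a holdout sample include a concordance statistic (AUC) and classification accuracy at a chosen cutoff. The sketch below computes both on invented predictions and outcomes; it is a generic illustration, and the specific properties and measures defined in the paper may differ.

    def auc(scores, labels):
        # Probability that a randomly chosen positive outranks a randomly chosen negative.
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def accuracy(scores, labels, cutoff=0.5):
        return sum((s >= cutoff) == bool(y) for s, y in zip(scores, labels)) / len(labels)

    p_hat = [0.90, 0.80, 0.70, 0.65, 0.40, 0.35, 0.30, 0.20]   # predicted probabilities
    y_obs = [1,    1,    0,    1,    0,    1,    0,    0]      # observed outcomes
    print("AUC:", auc(p_hat, y_obs), "accuracy:", accuracy(p_hat, y_obs))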

  12. Feature extraction for structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois [Los Alamos National Laboratory; Farrar, Charles [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Nishio, Mayuko [UNIV OF TOKYO; Worden, Keith [UNIV OF SHEFFIELD; Takeda, Nobuo [UNIV OF TOKYO

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered were sensitivity, dimensionality, type of response, and presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
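
    The outlier-detection idea mentioned above can be sketched as follows: feature vectors extracted from candidate model runs are scored by their Mahalanobis distance from the cloud of experimentally derived feature vectors, and runs lying far from that cloud are flagged. The data and the three-dimensional feature vector below are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    experimental_features = rng.normal(size=(200, 3))     # e.g. signal statistics per test
    mean = experimental_features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(experimental_features, rowvar=False))

    def mahalanobis_distance(feature_vector):
        d = feature_vector - mean
        return float(np.sqrt(d @ cov_inv @ d))

    consistent_run = mean + 0.1                           # close to the experimental cloud
    discrepant_run = mean + np.array([4.0, 0.0, -3.0])    # far from the experimental cloud
    print(round(mahalanobis_distance(consistent_run), 2),
          round(mahalanobis_distance(discrepant_run), 2))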

  13. Validating the Beck Depression Inventory-II for Hong Kong Community Adolescents

    Science.gov (United States)

    Byrne, Barbara M.; Stewart, Sunita M.; Lee, Peter W. H.

    2004-01-01

    The primary purpose of this study was to test for the validity of a Chinese version of the Beck Depression Inventory-II (C-BDI-II) for use with Hong Kong community (i.e., nonclinical) adolescents. Based on a randomized triadic split of the data (N = 1460), we conducted exploratory factor analysis on Group 1 (n = 486) and confirmatory factor…

  14. Modeling and Analysis of NGC System using Ptolemy II

    Directory of Open Access Journals (Sweden)

    Archana Sreekumar

    2015-09-01

    Full Text Available Model-based system design has been used in real-time embedded systems for validating and testing during the development lifecycle. Computation models - the synchronous dataflow (SDF) model and the Discrete Event (DE) model - have been used, and a finite state machine has been integrated with the SDF and DE modeling domains for simulating the functionalities in the system. Here, a case study of the resource-augmented Navigation, Guidance and Control (NGC) unit of onboard computers in a satellite launch vehicle has been selected as a framework, and a fault-tolerant algorithm has been modeled and simulated with Ptolemy II. Feasibility of the scheduling of the fault-tolerant algorithm has been analyzed, and dependencies existing between different components and processes in the system have been investigated. Future work consists of modeling the original functionality of the NGC units inside each state of the FSM, which can then be validated for correct performance. Non-deterministic communication and clock drifts can be accounted for in the model.

  15. Expert and construct validity of the Simbionix GI Mentor II endoscopy simulator for colonoscopy

    NARCIS (Netherlands)

    Koch, A.D.; Buzink, S.N.; Heemskerk, J.; Botden, S.M.B.I.; Veenendaal, R; Jakimowicz, J.J.; Schoon, E.J.

    2007-01-01

    Objectives The main objectives of this study were to establish expert validity (a convincing realistic representation of colonoscopy according to experts) and construct validity (the ability to discriminate between different levels of expertise) of the Simbionix GI Mentor II virtual reality (VR) si

  16. Validity evidence based on internal structure of scores on the Spanish version of the Self-Description Questionnaire-II.

    Science.gov (United States)

    Ingles, Cándido J; Torregrosa, María S; Hidalgo, María D; Nuñez, Jose C; Castejón, Juan L; García-Fernández, Jose M; Valles, Antonio

    2012-03-01

    The aim of this study was to analyze the reliability and validity evidence of scores on the Spanish version of the Self-Description Questionnaire II (SDQ-II). The instrument was administered to a sample of 2022 Spanish students (51.1% boys) from grades 7 to 10. Confirmatory factor analysis (CFA) was used to examine validity evidence based on internal structure drawn from the scores on the SDQ-II. CFA replicated the correlated 11 first-order factor structure. Furthermore, hierarchical confirmatory factor analysis (HCFA) was used to examine the hierarchical ordering of self-concept, as measured by scores on the Spanish version of the SDQ-II. Although a series of HCFA models were tested to assess the organization of academic and non-academic components, support for those hierarchical models was weaker than for the correlated 11 first-order factor structure. Results also indicated that scores on the Spanish version of the SDQ-II had internal consistency and test-retest reliability estimates within an acceptable range.

  17. Regimes of validity for balanced models

    Science.gov (United States)

    Gent, Peter R.; McWilliams, James C.

    1983-07-01

    Scaling analyses are presented which delineate the atmospheric and oceanic regimes of validity for the family of balanced models described in Gent and McWilliams (1983a). The analyses follow and extend the classical work of Charney (1948) and others. The analyses use three non-dimensional parameters which represent the flow scale relative to the Earth's radius, the dominance of turbulent or wave-like processes, and the dominant component of the potential vorticity. For each regime, the models that are accurate both at leading order and through at least one higher order of accuracy in the appropriate small parameter are then identified. In particular, it is found that members of the balanced family are the appropriate models of higher-order accuracy over a broad range of parameter regimes. Examples are also given of particular atmospheric and oceanic phenomena which are in the regimes of validity for the different balanced models.

  18. Development of multi-component diesel surrogate fuel models – Part II: Validation of the integrated mechanisms in 0-D kinetic and 2-D CFD spray combustion simulations

    DEFF Research Database (Denmark)

    Poon, Hiew Mun; Pang, Kar Mun; Ng, Hoon Kiat;

    2016-01-01

    The aim of this study is to develop compact yet comprehensive multi-component diesel surrogate fuel models for computational fluid dynamics (CFD) spray combustion modelling studies. The fuel constituent reduced mechanisms including n-hexadecane (HXN), 2,2,4,4,6,8,8-heptamethylnonane (HMN......), cyclohexane (CHX) and toluene developed in Part I are applied in this work. They are combined to produce two different versions of multi-component diesel surrogate models in the form of MCDS1 (HXN + HMN) and MCDS2 (HXN + HMN + toluene + CHX). The integrated mechanisms are then comprehensively validated in zero... fuel model for diesel fuels with CN values ranging from 15 to 100. It also shows that MCDS2 is a more appropriate surrogate model for fuels with aromatics and cyclo-paraffinic contents, particularly when soot calculation is of main interest....

  19. Validation of Hadronic Models in GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Koi, Tatsumi; Wright, Dennis H.; /SLAC; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; /CERN; Heikkinen, Aatos; /Helsinki Inst. of Phys.; Truscott, Pete; Lei, Fan; /QinetiQ; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models from thermal neutron interactions to ultra relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin target measurements. In most cases, good agreement is available between Monte Carlo prediction and experimental data; however, several problems have been detected which require some improvement in the models.

  20. Validation of hadronic models in GEANT4

    CERN Document Server

    Koi, Tatsumi; Folger, Günter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; Heikkinen, Aatos; Truscott, Pete; Lei, Fan; Wellisch, Hans-Peter

    2007-01-01

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models from thermal neutron interactions to ultra relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin-target measurements. In most cases, good agreement is available between Monte Carlo prediction and experimental data; however, several problems have been detected which require some improvement in the models.

  1. A Decision Support System (GesCoN) for Managing Fertigation in Vegetable Crops. Part II – Model calibration and validation under different environmental growing conditions on field grown tomato

    Directory of Open Access Journals (Sweden)

    Giulia eConversa

    2015-07-01

    Full Text Available The GesCoN model was evaluated for its capability to simulate growth, nitrogen uptake and productivity of open field tomato grown under different environmental and cultural conditions. Five datasets collected from experimental trials carried out in Foggia (IT) were used for calibration, and 13 datasets collected from trials conducted in Foggia, Perugia (IT) and Florida (USA) were used for validation. The goodness of fit was assessed by comparing the observed and simulated shoot dry weight (SDW) and N crop uptake during the crop seasons, total dry weight (TDW), N uptake and fresh yield (TFY). In the SDW model calibration, the relative RMSE (RRMSE) values fell within the good 10 to 15% range, and percent BIAS (PBIAS) ranged between -11.5% and 7.4%. The Nash-Sutcliffe efficiency (NSE) was very close to the optimal value of 1. In the N uptake calibration, RRMSE and PBIAS were very low (7% and -1.78, respectively) and NSE was close to 1. The validation of SDW (RRMSE = 16.7%; NSE = 0.96) and N uptake (RRMSE = 16.8%; NSE = 0.96) showed the good accuracy of GesCoN. Model under- or overestimation of the SDW and N uptake occurred when higher or lower N rates and/or a more or less efficient system were used compared to the calibration trial. The in-season adjustment, using the SDWcheck procedure, greatly improved model simulations both in the calibration and in the validation phases. The TFY prediction was quite good except in Florida, where a large overestimation (+16%) was linked to a different harvest index (0.53) compared to the cultivars used for model calibration and validation in the Italian areas. The soil water content at the 10-30 cm depth appears to be well simulated by the software, and GesCoN proved to be able to adaptively control potential yield and DW accumulation under limited soil N availability scenarios and consequently to modify fertilizer application. The DSS well simulates SDW accumulation and N uptake of different tomato genotypes grown under Mediterranean and subtropical
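
    The goodness-of-fit statistics reported above have standard definitions, reproduced in the sketch below with synthetic observed and simulated values rather than GesCoN output; note that the sign convention for PBIAS varies between studies, so the one used here is an assumption.

    import numpy as np

    def rrmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((sim - obs) ** 2)) / np.mean(obs) * 100.0

    def pbias(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sum(sim - obs) / np.sum(obs) * 100.0

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

    observed_sdw  = [0.4, 1.1, 2.3, 3.8, 5.0]   # synthetic shoot dry weight, t/ha
    simulated_sdw = [0.5, 1.0, 2.5, 3.6, 5.3]
    print(round(rrmse(observed_sdw, simulated_sdw), 1),
          round(pbias(observed_sdw, simulated_sdw), 1),
          round(nse(observed_sdw, simulated_sdw), 3))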

  2. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc., Albuquerque, NM (United States)]

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.

  3. Model validation in soft systems practice

    Energy Technology Data Exchange (ETDEWEB)

    Checkland, P. [Univ. of Lancaster (United Kingdom)

    1995-03-01

    The concept of 'a model' usually evokes the connotation 'model of part of the real world'. That is an almost automatic response. It makes sense especially in relation to the way the concept has been developed and used in natural science. Classical operational research (OR), with its scientific aspirations, and systems engineering, use the concept in the same way and in addition use models as surrogates for the real world, on which experimentation is cheap. In these fields the key feature of a model is representativeness. In soft systems methodology (SSM) models are not of part of the world; they are only relevant to debate about the real world and are used in a cyclic learning process. The paper shows how the different concepts of validation in classical OR and SSM lead to a way of sharply defining the nature of 'soft OR'. 21 refs.

  4. EXODUS II: A finite element data model

    Energy Technology Data Exchange (ETDEWEB)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  5. River water quality modelling: II

    DEFF Research Database (Denmark)

    Shanahan, P.; Henze, Mogens; Koncsos, L.

    1998-01-01

    The U.S. EPA QUAL2E model is currently the standard for river water quality modelling. While QUAL2E is adequate for the regulatory situation for which it was developed (the U.S. wasteload allocation process), there is a need for a more comprehensive framework for research and teaching. Moreover......, and to achieve robust model calibration. Mass balance problems arise from failure to account for mass in the sediment as well as in the water column and due to the fundamental imprecision of BOD as a state variable.

  6. Ship Grounding on Rock - II. Validation and Application

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1997-01-01

    with errors less than 10%. The rock penetration to fracture is predictedwith errors of 10-15%. The sensitivity to uncertain input parameters is discussed. Analysis of an accidental grounding that was recorded in 1975, also shows that the theoretical model canreproduce the observed damage. Finally...

  7. Information systems validation using formal models

    Directory of Open Access Journals (Sweden)

    Azadeh Sarram

    2014-03-01

    Full Text Available During the past few years, there has been growing interest in using the unified modeling language (UML) to capture functional requirements. However, the lack of a tool to check the accuracy and logic of diagrams in this language makes a formal model indispensable. In this study, the initial UML model of a system is converted to a colored Petri net in order to examine the precision of the model. For this purpose, first the definitions of priority and implementation tags for the UML activity diagram are provided; the diagram is then turned into a colored Petri net. Second, the proposed model expresses the translated tags in terms of net transitions, and monitors are used to check the system characteristics. Finally, an executable model of the UML activity diagram is provided so that the designer can simulate the model and use the simulation results to detect and refine its problems. In addition, checking the results shows that the proposed method enhances the authenticity and accuracy of early models and increases the rate of system validation compared with previous methods.

  8. Bayesian structural equation modeling method for hierarchical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang Xiaomo [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: xiaomo.jiang@vanderbilt.edu; Mahadevan, Sankaran [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: sankaran.mahadevan@vanderbilt.edu

    2009-04-15

    A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.
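
    For orientation, the Bayesian hypothesis-testing step in approaches of this kind is usually expressed through a Bayes factor comparing the hypothesis that the model agrees with the system-level data against its complement; the generic form below is standard notation and is not quoted from the paper.

    \[
    B = \frac{P(D \mid H_0)}{P(D \mid H_1)}, \qquad
    P(H_0 \mid D) = \frac{B\,\pi_0}{B\,\pi_0 + (1 - \pi_0)},
    \]

    where H_0 is the hypothesis that the model prediction is consistent with the observed data D, H_1 its complement, and \pi_0 the prior probability of H_0; with equal priors (\pi_0 = 0.5) the model confidence reduces to B/(B + 1).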

  9. Supo Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Wass, Alexander Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-14

    This report describes the continuation of the Computational Fluid Dynamics (CFD) model of the Supo cooling system described in the report Supo Thermal Model Development [1], by Cynthia Buechler. The goal of this report is to estimate the natural convection heat transfer coefficient (HTC) of the system using the CFD results and to compare those results to the remaining past operational data. In addition, the correlation for determining radiolytic gas bubble size is reevaluated using the larger simulation sample size. The background, solution vessel geometry, mesh, material properties, and boundary conditions are developed in the same manner as in the previous report, although the material properties and boundary conditions are now determined using the appropriate experimental results for each individual power level.

  10. Cyclopentane combustion. Part II. Ignition delay measurements and mechanism validation

    KAUST Repository

    Rachidi, Mariam El

    2017-06-12

    This study reports cyclopentane ignition delay measurements over a wide range of conditions. The measurements were obtained using two shock tubes and a rapid compression machine, and were used to test a detailed low- and high-temperature mechanism of cyclopentane oxidation that was presented in part I of this study (Al Rashidi et al., 2017). The ignition delay times of cyclopentane/air mixtures were measured over the temperature range of 650–1350 K at pressures of 20 and 40 atm and equivalence ratios of 0.5, 1.0 and 2.0. The ignition delay times simulated using the detailed chemical kinetic model of cyclopentane oxidation show very good agreement with the experimental measurements, as well as with the cyclopentane ignition and flame speed data available in the literature. The agreement is significantly improved compared to previous models developed and investigated at higher temperatures. Reaction path and sensitivity analyses were performed to provide insights into the ignition-controlling chemistry at low, intermediate and high temperatures. The results obtained in this study confirm that cycloalkanes are less reactive than their non-cyclic counterparts. Moreover, cyclopentane, a high octane number and high octane sensitivity fuel, exhibits minimal low-temperature chemistry and is considerably less reactive than cyclohexane. This study presents the first experimental low-temperature ignition delay data of cyclopentane, a potential fuel-blending component of particular interest due to its desirable antiknock characteristics.

  11. The Mg II index for upper atmosphere modelling

    Directory of Open Access Journals (Sweden)

    G. Thuillier

    Full Text Available The solar radio flux at 10.7 cm has been used in upper atmosphere density modelling because of its correlation with EUV radiation and its long and complete observational record. A proxy for solar chromospheric activity, the Mg II index, was derived by Heath and Schlesinger (1986) from Nimbus-7 data. This index makes it possible to describe the solar-activity-driven changes in the UV solar spectral irradiance. The use of this new proxy in upper atmosphere density modelling is considered here. First, this is supported by the 99.9% correlation between the solar radio flux (F10.7) and the Mg II index over a period of 19 years, with, however, large differences on time scales of days to months. Secondly, a correlation between EUV emissions and the Mg II index has been shown recently, suggesting that this index may also be used to describe the EUV variations. Using the same density dataset, a model was first run with the F10.7 index as the solar forcing function and then with the Mg II index. Comparison of their respective predictions to partial density data showed a 3–8% higher precision when the modelling uses the Mg II index rather than F10.7. An external validation, by means of orbit computation, resulted in a 20–40% smaller RMS of the tracking residuals. A density dataset spanning an entire solar cycle, together with Mg II data, is required to construct a density model that is as accurate and unbiased as possible.

    Key words. Atmospheric composition and structure (middle atmosphere – composition and chemistry; thermosphere – composition and chemistry) – History of geophysics (atmospheric sciences)

  12. Model validation of channel zapping quality

    OpenAIRE

    Kooij, R.; Nicolai, F.; Ahmed, K.; Brunnström, K.

    2009-01-01

    In an earlier paper we showed that the perceived quality of channel zapping is related to the perceived quality of download time in web browsing, as suggested by ITU-T Rec. G.1030. We showed this by performing subjective tests, resulting in an excellent fit with a 0.99 correlation. This was what we call a lean-forward experiment and gave the rule-of-thumb result that the zapping time must be less than 0.43 sec to be good ( > 3.5 on the MOS scale). To validate the model we have done new subjective ...

  13. Belbin Revisited: The Construct Validity of the Interplace II Team Role Instrument

    NARCIS (Netherlands)

    D. van Dierendonck (Dirk); R. Groen (Rob)

    2008-01-01

    textabstractIn the present study the construct validity of the revised edition of the Belbin Team Roles measure, the so-called Interplace II program, is tested. Three parallel parts were used to determine someone’s team roles. The sample included 1434 persons who were asked to fill out the self-perc

  14. Concurrent and Predictive Validity of the Phelps Kindergarten Readiness Scale-II

    Science.gov (United States)

    Duncan, Jennifer; Rafter, Erin M.

    2005-01-01

    The purpose of this research was to establish the concurrent and predictive validity of the Phelps Kindergarten Readiness Scale, Second Edition (PKRS-II; L. Phelps, 2003). Seventy-four kindergarten students of diverse ethnic backgrounds enrolled in a northeastern suburban school participated in the study. The concurrent administration of the…

  15. Validation of the Offending-Related Attitudes Questionnaire of CRIME-PICS II Scale (Chinese)

    Science.gov (United States)

    Chui, Wing Hong; Wu, Joseph; Kwok, Yan Yuen; Liu, Liu

    2017-01-01

    This study examined the factor structure, reliability, and validity of the first part of the Chinese version of the CRIME-PICS II Scale, a self-administrated instrument assessing offending-related attitudes. Data were collected from three samples: male Hong Kong young offenders, female Mainland Chinese prisoners, and Hong Kong college students.…

  16. Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.

    Science.gov (United States)

    Kelly, Kevin R.; Jugovic, Heidi

    2001-01-01

    Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)…

  17. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Full Text Available Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the required standards, through the attribution of scores to every item in the research instrument. In today's globalized economic scenarios, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. On this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer, who strives to keep technologies up to date in order to meet all-inclusive business continuity planning policies. For such a policy to be effective, however, it has to be fully embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile companies located in the ABC region of Metropolitan São Paulo. This sampling was based on the representativeness of those companies, particularly with regard to their implementation of information technology in the region. The study concludes by presenting evidence of the discriminant validity of four key dimensions of the security policy: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of the Cronbach's alpha structure of these security items shows not only that the capacity of those companies to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.
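
    Because the analysis rests on Cronbach's alpha, a minimal sketch of how that coefficient is computed from an item-score matrix may be useful; the formula is standard, and the response data below are invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses (rows: respondents, columns: items).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```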

  18. Doubtful outcome of the validation of the Rome II questionnaire: validation of a symptom based diagnostic tool

    Directory of Open Access Journals (Sweden)

    Nylin Henry BO

    2009-12-01

    Full Text Available Abstract Background Questionnaires are used in research and clinical practice. For gastrointestinal complaints the Rome II questionnaire is internationally known but not validated. The aim of this study was to validate a printed and a computerized version of Rome II, translated into Swedish. Results from various analyses are reported. Methods Volunteers from a population-based colonoscopy study were included (n = 1011), together with patients seeking general practice (n = 45) and patients visiting a gastrointestinal specialists' clinic (n = 67). The questionnaire consists of 38 questions concerning gastrointestinal symptoms and complaints. Diagnoses are made according to a special coding key. Our validation included analyses of the translation, feasibility, predictability, reproducibility and reliability. Kappa values and overall agreement were measured. The factor structures were confirmed using a principal component analysis and Cronbach's alpha was used to test the internal consistency. Results and Discussion Translation and back translation showed good agreement. The questionnaire was easy to understand and use. The reproducibility test showed kappa values of 0.60 for GERS, 0.52 for FD, and 0.47 for IBS. Kappa values and overall agreement for the predictability when the diagnoses by the questionnaire were compared to the diagnoses by the clinician were 0.26 and 90% for GERS, 0.18 and 85% for FD, and 0.49 and 86% for IBS. Corresponding figures for the agreement between the printed and the digital version were 0.50 and 92% for GERS, 0.64 and 95% for FD, and 0.76 and 95% for IBS. Cronbach's alpha coefficient for GERS was 0.75 with a span per item of 0.71 to 0.76. For FD the figures were 0.68 and 0.54 to 0.70 and for IBS 0.61 and 0.56 to 0.66. The Rome II questionnaire has never been thoroughly validated before even if diagnoses made by the Rome criteria have been compared to diagnoses made in clinical practice. Conclusion The accuracy of the Swedish version of

  19. Kinetic modeling of light limitation and sulfur deprivation effects in the induction of hydrogen production with Chlamydomonas reinhardtii. Part II: Definition of model-based protocols and experimental validation.

    Science.gov (United States)

    Degrenne, B; Pruvost, J; Titica, M; Takache, H; Legrand, J

    2011-10-01

    Photosynthetic hydrogen production under light by the green microalga Chlamydomonas reinhardtii was investigated in a torus-shaped PBR in sulfur-deprived conditions. Culture conditions, represented by the dry biomass concentration of the inoculum, sulfate concentration, and incident photon flux density (PFD), were optimized based on a previously published model (Fouchard et al., 2009. Biotechnol Bioeng 102:232-245). This allowed a strictly autotrophic production, whereas the sulfur-deprived protocol is usually applied in photoheterotrophic conditions. Experimental results combined with additional information from kinetic simulations emphasize effects of sulfur deprivation and light attenuation in the PBR in inducing anoxia and hydrogen production. A broad range of PFD was tested (up to 500 µmol photons m⁻² s⁻¹). Maximum hydrogen productivities were 1.0 ± 0.2 mL H₂/h/L (or 25 ± 5 mL H₂/m²/h) and 3.1 ± 0.4 mL H₂/h/L (or 77.5 ± 10 mL H₂/m²/h), at 110 and 500 µmol photons m⁻² s⁻¹, respectively. These values approached a maximum specific productivity of approximately 1.9 ± 0.4 mL H₂/h/g of biomass dry weight, clearly indicative of a limitation in cell capacity to produce hydrogen. The efficiency of the process and further optimizations are discussed.

  20. Validation of the multimedia version of the RDC/TMD axis II questionnaire in Portuguese

    Directory of Open Access Journals (Sweden)

    Ricardo Figueiredo Cavalcanti

    2010-06-01

    Full Text Available OBJECTIVE: The aim of the study was to validate the multimedia version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis II Questionnaire in the Portuguese language. MATERIAL AND METHODS: The sample comprised 30 patients with signs and symptoms of temporomandibular disorders (TMD), evaluated at the Orofacial Pain Control Center of the Dental School of the University of Pernambuco, Brazil, between April and June 2006. Data collection was performed using the following instruments: the Simplified Anamnestic Index (SAI) and the RDC/TMD Axis II written and multimedia versions. The validation process consisted of analyzing the internal consistency of the scales. Concurrent and convergent validity were evaluated by Spearman's rank correlation. In addition, testing and analysis of reproducibility by the weighted Kappa statistic and Spearman's rank correlation test were performed. RESULTS: The multimedia version of the RDC/TMD Axis II questionnaire in Portuguese was considered consistent (Cronbach's alpha = 0.94), reproducible (Spearman 0.670 to 0.913, p<0.01), and valid (p<0.01). CONCLUSION: The questionnaire showed valid and reproducible results, and represents an instrument of practical application in epidemiological studies of TMD in the Brazilian population.

  1. Validation of the filament winding process model

    Science.gov (United States)

    Calius, Emilo P.; Springer, George S.; Wilson, Brian A.; Hanson, R. Scott

    1987-01-01

    Tests were performed toward validating the WIND model developed previously for simulating the filament winding of composite cylinders. In these tests two 24 in. long, 8 in. diam and 0.285 in. thick cylinders, made of IM-6G fibers and HBRF-55 resin, were wound at + or - 45 deg angle on steel mandrels. The temperatures on the inner and outer surfaces and inside the composite cylinders were recorded during oven cure. The temperatures inside the cylinders were also calculated by the WIND model. The measured and calculated temperatures were then compared. In addition, the degree of cure and resin viscosity distributions inside the cylinders were calculated for the conditions which existed in the tests.

  2. Assessment model validity document FARF31

    Energy Technology Data Exchange (ETDEWEB)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria [Kemakta Konsult AB, Stockholm (Sweden)

    2004-08-01

    The prime goal of model validation is to build confidence in the model concept and in the model being fit for its intended purpose. In other words: Does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport is made to cover the associated uncertainties. Thus, the flow and transport, including radioactive chain decay, are preferably calculated in the same model framework. A rather sophisticated concept is necessary to be able to model flow and radionuclide transport in the near field and far field of a deep repository, also including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-founded simplifications. For this reason, the far-field code FARF31 is made relatively simple, and calculates transport by using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among the advantages are that it is a fast, simple and robust code, which enables the handling of many realisations with a wide spread in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual porosity continuum approach, where it is assumed that the rock can be divided into two distinct domains with different types of porosity.
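
    For orientation, a stream-tube, dual-porosity transport description of this kind is conventionally written as a one-dimensional advection–dispersion equation for the flowing (fracture) water coupled to diffusion into the rock matrix, with radioactive decay; the schematic form below uses textbook notation and is not quoted from the FARF31 documentation.

    \[
    R_f\,\frac{\partial c_f}{\partial t}
      + v\,\frac{\partial c_f}{\partial x}
      = D_L\,\frac{\partial^2 c_f}{\partial x^2}
      + a_w D_e \left.\frac{\partial c_m}{\partial z}\right|_{z=0}
      - \lambda R_f c_f,
    \qquad
    R_m\,\frac{\partial c_m}{\partial t}
      = D_e\,\frac{\partial^2 c_m}{\partial z^2}
      - \lambda R_m c_m,
    \]

    where c_f and c_m are the concentrations in the flowing water and in the matrix pore water, v the water velocity along the stream tube, D_L the longitudinal dispersion coefficient, D_e the effective matrix diffusivity, z the coordinate into the matrix, a_w the flow-wetted surface area per volume of mobile water, R_f and R_m retardation factors, and λ the decay constant; chain decay adds a production term from the parent nuclide to each equation.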

  3. [Catalonia's primary healthcare accreditation model: a valid model].

    Science.gov (United States)

    Davins, Josep; Gens, Montserrat; Pareja, Clara; Guzmán, Ramón; Marquet, Roser; Vallès, Roser

    2014-07-01

    There are few experiences of accreditation models validated by primary care teams (EAP). The aim of this study was to detail the process of design, development, and subsequent validation of the consensus EAP accreditation model of Catalonia. An Operating Committee of the Health Department of Catalonia revised models proposed by the European Foundation for Quality Management, the Joint Commission International and the Institut Català de la Salut and proposed 628 essential standards to the technical group (25 experts in primary care and quality of care) to establish consensus standards. The consensus document was piloted in 30 EAPs for the purpose of validating the contents, testing standards and identifying evidence. Finally, we carried out a survey to assess acceptance and validation of the document. The Technical Group agreed on a total of 414 essential standards. The pilot selected a total of 379. Mean compliance with the standards of the final document in the 30 EAPs was 70.4%. The standards relating to results had the lowest fulfilment percentage. The survey showed that 83% of the EAPs found it useful and 78% found the content of the accreditation manual suitable as a tool to assess the quality of the EAP and to identify opportunities for improvement. On the downside they highlighted its complexity and laboriousness. We have a model that fits the reality of the EAP, and covers all relevant issues for the functioning of an excellent EAP. The model developed in Catalonia is easy to understand.

  4. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to it; complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
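
    To make the inference step concrete, the toy sketch below (an illustration, not the authors' implementation) propagates qualitative deviations (+1, 0, -1) through a signed directed graph by forward inference, producing the qualitative trend pattern that simulated outputs would be compared against.

```python
# Toy signed-directed-graph forward inference (illustrative; single-parent graph,
# so conflict resolution between competing influences is deliberately omitted).
EDGES = {
    "feed_rate":   [("level", +1)],
    "level":       [("outflow", +1)],
    "heater_duty": [("temperature", +1)],
    "temperature": [("pressure", +1)],
}

def propagate(initial):
    """Propagate qualitative deviations (+1, 0, -1) until a fixed point is reached."""
    state = dict(initial)
    changed = True
    while changed:
        changed = False
        for cause, effects in EDGES.items():
            for effect, sign in effects:
                implied = sign * state.get(cause, 0)
                if implied != 0 and state.get(effect, 0) != implied:
                    state[effect] = implied
                    changed = True
    return state

# Fault scenario: feed rate steps up while heater duty steps down.
print(propagate({"feed_rate": +1, "heater_duty": -1}))
# -> expected qualitative trends to compare with the simulation model's outputs
```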

  5. Examining the structure, reliability, and validity of the Chinese personal growth initiative scale-II: evidence for the importance of intentional self-change among Chinese.

    Science.gov (United States)

    Yang, Hongfei; Chang, Edward C

    2014-01-01

    We examined the factor structure, reliability, and validity of the Chinese version of the Personal Growth Initiative Scale-II (CPGIS-II) using data from a sample of 927 Chinese university students. Consistent with previous findings, confirmatory factor analyses supported a 4-factor model of the CPGIS-II. Reliability analyses indicated that the 4 CPGIS-II subscales, namely Readiness for Change, Planfulness, Using Resources, and Intentional Behavior, demonstrated good internal consistency reliability and adequate test-retest reliability across a 4-week period. In addition, evidence for convergent and incremental validity was found in relation to measures of positive and negative psychological adjustment. Finally, results of hierarchical regression analyses indicated that the 4 personal growth initiative dimensions, especially planfulness, accounted for additional unique variance in psychological adjustment beyond resilience. Some implications for using the CPGIS-II in Chinese are discussed.

  6. Unit testing, model validation, and biological simulation

    Science.gov (United States)

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models. PMID:27635225
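
    To give a flavor of what a model validation test looks like alongside ordinary unit tests, here is a minimal, self-contained unittest example in the spirit described above; the stand-in simulation and the physiological bounds are invented for illustration and are not taken from the OpenWorm code base.

```python
import math
import unittest

def simulate_membrane_potential(duration_ms=100, dt_ms=0.1):
    """Stand-in 'model': a damped oscillation around a -65 mV resting potential."""
    steps = int(duration_ms / dt_ms)
    return [-65.0 + 20.0 * math.exp(-t * dt_ms / 30.0) * math.sin(t * dt_ms)
            for t in range(steps)]

class MembranePotentialValidationTest(unittest.TestCase):
    """Model validation tests: the simulated output must satisfy scientific
    expectations (hypothetical bounds chosen purely for illustration)."""

    def test_stays_in_plausible_range(self):
        trace = simulate_membrane_potential()
        self.assertTrue(all(-90.0 <= v <= 40.0 for v in trace),
                        "membrane potential left the plausible range")

    def test_returns_to_rest(self):
        trace = simulate_membrane_potential()
        self.assertAlmostEqual(trace[-1], -65.0, delta=5.0)

if __name__ == "__main__":
    unittest.main()
```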

  7. Photon Number Conserving Models of H II Bubbles during Reionization

    CERN Document Server

    Paranjape, Aseem; Padmanabhan, Hamsa

    2015-01-01

    Traditional excursion set based models of H II bubble growth during the epoch of reionization are known to violate photon number conservation, in the sense that the mass fraction in ionized bubbles in these models does not equal the ratio of the number of ionizing photons produced by sources and the number of hydrogen atoms in the intergalactic medium. We demonstrate that this problem arises from a fundamental conceptual shortcoming of the excursion set approach (already recognised in the literature on this formalism) which only tracks average mass fractions instead of the exact, stochastic source counts. With this insight, we build an approximately photon number conserving Monte Carlo model of bubble growth based on partitioning regions of dark matter into halos. Our model, which is formally valid for white noise initial conditions (ICs), shows dramatic improvements in photon number conservation, as well as substantial differences in the bubble size distribution, as compared to traditional models. We explore...
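
    The conservation condition at issue can be stated compactly in the usual excursion-set notation; the expression below is the standard global form of the constraint (with recombinations either neglected or absorbed into the efficiency), not a new result of the paper.

    \[
    Q_{\mathrm{HII}} \;=\; \frac{N_\gamma}{N_{\mathrm{H}}} \;=\; \zeta\, f_{\mathrm{coll}},
    \]

    where Q_HII is the mass fraction of the intergalactic medium in ionized bubbles, N_γ the cumulative number of ionizing photons produced by sources, N_H the number of hydrogen atoms, ζ the ionizing efficiency per unit collapsed mass, and f_coll the collapse fraction. Traditional excursion-set bubble models do not satisfy this equality exactly, which is the shortcoming the paper addresses.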

  8. Full-Scale Cookoff Model Validation Experiments

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end heated experiment and less than 30 percent recovered in the side heated test.

  9. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid.

  10. The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.S.; Beers, T.C.; Sivarani, T.; Johnson, J.A.; An, D.; Wilhelm, R.; Prieto, C.Allende; Koesterke, L.; Re Fiorentin, P.; Bailer-Jones, C.A.L.; Norris, J.E.

    2007-10-01

    The authors validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-1) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, they quantify the typical uncertainty of the SSPP values, σ([Fe/H]) = 0.13 dex for stars in the range 4500 K ≤ T_eff ≤ 7500 K and 2.0 ≤ log g ≤ 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 ≤ [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; they find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by approximately 0.3 dex.

  11. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  12. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior.

  13. Reliability and validity of the test of gross motor development-II in Korean preschool children: applying AHP.

    Science.gov (United States)

    Kim, Chung-Il; Han, Dong-Wook; Park, Il-Hyeok

    2014-04-01

    The Test of Gross Motor Development-II (TGMD-II) is a frequently used assessment tool for measuring motor ability. The purpose of this study is to investigate the reliability and validity of TGMD-II's weighting scores (by comparing pre-weighted TGMD-II scores with post ones) as well as examine applicability of the TGMD-II on Korean preschool children. A total of 121 Korean children (three kindergartens) participated in this study. There were 65 preschoolers who were 5-years-old (37 boys and 28 girls) and 56 preschoolers who were 6-years-old (34 boys and 22 girls). For internal consistency, reliability, and construct validity, only one researcher evaluated all of the children using the TGMD-II in the following areas: running; galloping; sliding; hopping; leaping; horizontal jumping; overhand throwing; underhand rolling; striking a stationary ball; stationary dribbling; kicking; and catching. For concurrent validity, the evaluator measured physical fitness (strength, flexibility, power, agility, endurance, and balance). The key findings were as follows: first, the reliability coefficient and the validity coefficient between pre-weighted and post-weighted TGMD-II scores were quite similar. Second, the research showed adequate reliability and validity of the TGMD-II for Korean preschool children. The TGMD-II is a proper instrument to test Korean children's motor development. Yet, applying relative weighting on the TGMD-II should be a point of consideration.

  14. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U. S . Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  15. Control of uncertain systems by feedback linearization with neural networks augmentation. Part II. Controller validation by numerical simulation

    Directory of Open Access Journals (Sweden)

    Adrian TOADER

    2010-09-01

    Full Text Available The paper was conceived in two parts. Part I, previously published in this journal, highlighted the main steps of adaptive output feedback control for non-affine uncertain systems having a known relative degree. The main paradigm of this approach was feedback linearization (dynamic inversion) with neural network augmentation. Meanwhile, based on new contributions by the authors, a new paradigm, that of the robust servomechanism problem solution, has been added to the controller architecture. The current Part II of the paper presents the validation of the controller thus obtained, using the longitudinal channel of a hovering VTOL-type aircraft as the mathematical model.

  16. Validation of Biomarker-based risk prediction models

    OpenAIRE

    Taylor, Jeremy M.G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2008-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Inte...

  17. Empirical data validation for model building

    Science.gov (United States)

    Kazarian, Aram

    2008-03-01

    Optical Proximity Correction (OPC) has become an integral and critical part of process development for advanced technologies with challenging k1 requirements. OPC solutions in turn require stable, predictive models to be built that can project the behavior of all structures. These structures must comprehend all geometries that can occur in the layout in order to define the optimal corrections by feature, and thus enable a manufacturing process with acceptable margin. The model is built upon two main component blocks. First is knowledge of the process conditions, which includes the optical parameters (e.g., illumination source, wavelength, lens characteristics) as well as mask definition, resist parameters and process film stack information. Second is the empirical critical dimension (CD) data collected using this process on specific test features, the results of which are used to fit and validate the model and to project resist contours for all allowable feature layouts. The quality of the model therefore is highly dependent on the integrity of the process data collected for this purpose. Since the test pattern suite generally extends to below the resolution limit that the process can support with adequate latitude, the CD measurements collected can often be quite noisy with marginal signal-to-noise ratios. In order for the model to be reliable and a best representation of the process behavior, it is necessary to scrutinize empirical data to ensure that it is not dominated by measurement noise or flyer/outlier points. The primary approach for generating a clean, smooth and dependable empirical data set should be a replicated measurement sampling that can help to statistically reduce measurement noise by averaging. However, it can often be impractical to collect the amount of data needed to ensure a clean data set by this method. An alternate approach is studied in this paper to further smooth the measured data by means of curve fitting to identify remaining
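
    The cleaning strategy described, replicate averaging combined with flyer rejection, can be summarized in a short sketch; the robust median/MAD screen, the 3.5-sigma threshold, and the measurements below are illustrative choices, not the paper's actual procedure.

```python
import statistics

def clean_cd_measurements(replicates, z_thresh=3.5):
    """Average replicated CD measurements after discarding flyers, using a
    robust (median/MAD) screen so a single outlier cannot mask itself."""
    med = statistics.median(replicates)
    mad = statistics.median(abs(x - med) for x in replicates)
    if mad == 0:
        return statistics.mean(replicates), list(replicates)
    robust_sigma = 1.4826 * mad  # MAD-to-sigma factor for normally distributed data
    kept = [x for x in replicates if abs(x - med) <= z_thresh * robust_sigma]
    return statistics.mean(kept), kept

# Hypothetical replicate CD measurements (nm) for one test feature.
raw = [45.1, 44.8, 45.3, 44.9, 52.0, 45.0]   # 52.0 is an obvious flyer
cd, kept = clean_cd_measurements(raw)
print(f"cleaned CD = {cd:.2f} nm from {len(kept)} of {len(raw)} replicates")
```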

  18. Model validation of channel zapping quality

    Science.gov (United States)

    Kooij, Robert; Nicolai, Floris; Ahmed, Kamal; Brunnström, Kjell

    2009-02-01

    In an earlier paper we showed that the perceived quality of channel zapping is related to the perceived quality of download time in web browsing, as suggested by ITU-T Rec. G.1030. We showed this by performing subjective tests, resulting in an excellent fit with a 0.99 correlation. This was what we call a lean-forward experiment and gave the rule-of-thumb result that the zapping time must be less than 0.43 sec to be good (> 3.5 on the MOS scale). To validate the model we have done new subjective experiments. These experiments included lean-backward zapping, i.e., sitting on a sofa with a remote control. The subjects are more forgiving in this case and the requirement could be relaxed to 0.67 sec. We also conducted subjective experiments where the zapping times vary. We found that the MOS rating decreases if zapping delay times are varying. In our experiments we assumed uniformly distributed delays, where the variance cannot be larger than the mean delay. We found that, in order to obtain a MOS rating of at least 3.5, the maximum allowed variance, and thus also the maximum allowed mean zapping delay, is 0.46 sec.

  19. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only incurs considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, so that it can be used as a stop criterion for sequential sampling.
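
    A rough sketch of the underlying idea, using the predictive mean and standard deviation of a Gaussian process (kriging) surrogate as a validation measure and stop criterion, is given below with scikit-learn; the test function, design, and threshold are invented for illustration and this is not the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_response(x):
    return np.sin(3.0 * x) + 0.5 * x          # hypothetical expensive simulation

# Small design standing in for a sequentially grown sample plan.
X_train = np.linspace(0.0, 2.0, 6).reshape(-1, 1)
y_train = true_response(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# Validation measure: average predictive standard deviation over the domain.
X_check = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_check, return_std=True)
avg_std = std.mean()
print(f"average predictive std = {avg_std:.4f}")
if avg_std < 0.05:                              # arbitrary illustrative threshold
    print("surrogate judged accurate enough; stop sampling")
else:
    print("keep adding samples")
```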

  20. Validity of covariance models for the analysis of geographical variation

    DEFF Research Database (Denmark)

    Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio

    2014-01-01

    attention lately and show that the conditions under which they are valid mathematical models have been overlooked so far. 3. We provide rigorous results for the construction of valid covariance models in this family. 4. We also outline how to construct alternative covariance models for the analysis...

  1. SAGE III aerosol extinction validation in the Arctic winter: comparisons with SAGE II and POAM III

    Directory of Open Access Journals (Sweden)

    L. W. Thomason

    2006-11-01

    Full Text Available The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm except for a 10–20% bias at both wavelengths. In addition, the 452 to 1020 nm extinction ratio shows a consistent bias of ~30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent though the comparisons show a much higher variability and larger bias than SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.

  2. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    Science.gov (United States)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent though the comparisons show a much higher variability and larger bias than SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.

  3. Line emission from H II blister models

    Science.gov (United States)

    Rubin, R. H.

    1984-01-01

    Numerical techniques to calculate the thermal and geometric properties of line emission from H II 'blister' regions are presented. It is assumed that the density distributions of the H II regions are a function of two dimensions, with rotational symmetry specifying the shape in three-dimensions. The thermal and ionization equilibrium equations of the problem are solved by spherical modeling, and a spherical sector approximation is used to simplify the three-dimensional treatment of diffuse ionizing radiation. The global properties of H II 'blister' regions near the edges of a molecular cloud are simulated by means of the geometry/density distribution, and the results are compared with observational data. It is shown that there is a monotonic increase of peak surface brightness from the i = 0 deg (pole-on) observational position to the i = 90 deg (edge-on) position. The enhancement of the line peak intensity from the edge-on to the pole-on positions is found to depend on the density, stratification, ionization, and electron temperature weighting. It is found that as i increases, the position of peak line brightness of the lower excitation species is displaced to the high-density side of the high excitation species.

  4. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  5. Validity and reliability of the Turkish version of the Essentials of Magnetism Scale (EOM II).

    Science.gov (United States)

    Yıldırım, D; Kısa, S; Hisar, F

    2012-12-01

    To test the validity and reliability of the Turkish version of the Essentials of Magnetism II Scale (EOMII) for use by staff nurses in assessing what is essential to quality patient care. This study consisted of 385 nurses from four Joint Commission International-accredited hospitals. The EOMII scale was translated using a back-translation technique. The statistical analysis was carried out using Cronbach's alpha to test the internal consistency of the scale, while the factor analysis was carried out using principal component analysis together with varimax rotation and Kaiser normalization to test its construct validity. The total mean score of all the items of the scale was found to be 155.33 (minimum 77, maximum 219) with a standard deviation of 29.45. All the items showed a statistically significant correlation, indicating a high level of reliability. Cronbach's alpha consistencies in subgroups were between 0.70 and 0.87. In this study, job satisfaction and quality outcomes show signs of convergence as in the original scale, which indicates that the scale has high construct validity (P < 0.01). Transcultural differences in the quality of nursing services can only be compared with reliable and valid instruments. This study shows that the Turkish version of the EOMII scale is a valid and reliable instrument to assess the nurses' working environment and to provide quality patient care in Turkey. © 2012 The Authors. International Nursing Review © 2012 International Council of Nurses.

  6. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
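
    For reference, the annualized prediction error quoted in comparisons like this is commonly the relative difference between modeled and measured annual energy; that definition, and the monthly values below, are assumptions made purely for illustration.

```python
# Illustrative only: modeled vs. measured monthly AC energy (MWh) for one system.
modeled  = [110, 120, 150, 165, 180, 185, 190, 182, 160, 140, 115, 105]
measured = [108, 118, 155, 160, 185, 188, 186, 180, 158, 138, 112, 101]

annual_error_pct = 100.0 * (sum(modeled) - sum(measured)) / sum(measured)
print(f"annualized prediction error: {annual_error_pct:+.2f}%")
```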

  7. SDG-based Model Validation in Chemical Process Simulation

    Institute of Scientific and Technical Information of China (English)

    张贝克; 许欣; 马昕; 吴重光

    2013-01-01

    Signed directed graph (SDG) theory provides algorithms and methods that can be applied directly to chemical process modeling and analysis to validate simulation models, and is a basis for the development of a software environment that can automate the validation activity. This paper concentrates on the pretreatment stage of model validation. We use the validation scenarios and standard sequences generated by a well-established SDG model to validate the trends fitted from the simulation model. The results help to find potential problems, assess possible bugs in the simulation model, and resolve them effectively. A case study on a simulation model of a boiler is presented to demonstrate the effectiveness of this method.

  8. Model for Use of Sociometry to Validate Attitude Measures.

    Science.gov (United States)

    McGuiness, Thomas P.; Stank, Peggy L.

    A study concerning the development and validation of an instrument intended to measure Goal II of quality education is presented. This goal is that quality education should help every child acquire understanding and appreciation of persons belonging to social, cultural and ethnic groups different from his own. The rationale for measurement…

  9. A Stochastic Closure for Two-Moment Bulk Microphysics of Warm Clouds: Part II, Validation

    CERN Document Server

    Collins, David

    2016-01-01

    The representation of clouds and associated processes of rain and snow formation remains one of the major uncertainties in climate and weather prediction models. In a companion paper (Part I), we systematically derived a two moment bulk cloud microphysics model for collision and coalescence in warm rain based on the kinetic coalescence equation (KCE) and used stochastic approximations to close the higher order moment terms, and do so independently of the collision kernel. Conservation of mass and consistency of droplet number concentration of the evolving cloud properties were combined with numerical simulations to reduce the parametrization problem to three key parameters. Here, we constrain these three parameters based on the physics of collision and coalescence resulting in a "region of validity." Furthermore, we theoretically validate the new bulk model by deriving a subset of the "region of validity" that contains stochastic parameters that skillfully reproduces an existing model based on an a priori dro...

  10. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
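
    A compact sketch of the validation recipe described (penalized logistic regression assessed with cross-validation and a permutation test), using scikit-learn on synthetic data; the feature set, penalty strength, and fold structure are placeholders, not the clinical NTCP model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score, permutation_test_score

# Synthetic stand-in for dose/volume predictors and a binary complication outcome.
X, y = make_classification(n_samples=200, n_features=20, n_informative=4, random_state=0)

# L1-penalized (LASSO-type) logistic regression as the NTCP-style model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc_scores = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
print(f"cross-validated AUC: {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")

# Permutation test: is the observed performance better than chance?
score, perm_scores, p_value = permutation_test_score(
    model, X, y, scoring="roc_auc", cv=cv, n_permutations=200, random_state=0)
print(f"permutation test p-value: {p_value:.3f}")
```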

  11. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
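
    The following sketch illustrates the kind of workflow described in this record: repeated double cross-validation around an L1-penalized (LASSO) logistic model, plus a permutation test of the resulting AUC. It uses scikit-learn on synthetic data; the predictors, outcome and all settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: repeated double cross-validation and a permutation test for a
# LASSO-penalized logistic NTCP-style model, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     cross_val_score, permutation_test_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                                        # candidate predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=120) > 0).astype(int)  # toxicity yes/no

# Inner loop: choose the L1 penalty; outer loop: estimate performance (AUC).
lasso_logit = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear"),
    param_grid={"C": np.logspace(-2, 2, 9)},
    scoring="roc_auc",
    cv=StratifiedKFold(5, shuffle=True, random_state=1),
)

# Repeating the outer CV exposes the variability (instability) of the estimate.
outer_aucs = [
    cross_val_score(lasso_logit, X, y, scoring="roc_auc",
                    cv=StratifiedKFold(5, shuffle=True, random_state=rep)).mean()
    for rep in range(10)
]
print("repeated double-CV AUC: %.2f +/- %.2f" % (np.mean(outer_aucs), np.std(outer_aucs)))

# Permutation test: is the observed AUC better than chance?
auc, perm_aucs, p_value = permutation_test_score(
    lasso_logit, X, y, scoring="roc_auc",
    cv=StratifiedKFold(5, shuffle=True, random_state=1), n_permutations=100)
print("AUC = %.2f, permutation p = %.3f" % (auc, p_value))
```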

  12. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... are continuous or discrete. With both simulated data, and a real data set related to geolocation of seals, we demonstrate both the potential and the limitations of the techniques. Our results fill a need for convenient methods for validating a state space model, or alternatively, rejecting it while indicating...

  13. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    Science.gov (United States)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source for human exposures to potential carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
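
    A minimal illustration of the probabilistic step mentioned at the end of this abstract: Monte Carlo propagation of uncertain inputs to a distribution of indoor air concentration. The attenuation relation below is a deliberately simplified placeholder, not the three-dimensional numerical model of the study, and all input distributions are assumptions.

```python
# Sketch: Monte Carlo propagation of uncertain soil/building parameters to an
# indoor-air concentration distribution (placeholder model, illustrative inputs).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Assumed (illustrative) distributions for the most uncertain parameters.
source_conc = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # soil-gas conc., ug/m3
attenuation = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n)   # dimensionless alpha
air_exchange = rng.uniform(0.25, 1.0, size=n)                       # building ventilation, 1/h

# Placeholder relation: indoor concentration ~ alpha * source, scaled by ventilation.
indoor_conc = attenuation * source_conc * (0.5 / air_exchange)

for q in (0.05, 0.50, 0.95):
    print(f"{int(q * 100):>2d}th percentile indoor air conc.: {np.quantile(indoor_conc, q):.3g} ug/m3")
print("P(indoor conc. > 0.1 ug/m3) =", np.mean(indoor_conc > 0.1))
```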

  14. Prospects and problems for standardizing model validation in systems biology.

    Science.gov (United States)

    Gross, Fridolin; MacLeod, Miles

    2017-10-01

    There are currently no widely shared criteria by which to assess the validity of computational models in systems biology. Here we discuss the feasibility and desirability of implementing validation standards for modeling. Having such a standard would facilitate journal review, interdisciplinary collaboration, model exchange, and be especially relevant for applications close to medical practice. However, even though the production of predictively valid models is considered a central goal, in practice modeling in systems biology employs a variety of model structures and model-building practices. These serve a variety of purposes, many of which are heuristic and do not seem to require strict validation criteria and may even be restricted by them. Moreover, given the current situation in systems biology, implementing a validation standard would face serious technical obstacles mostly due to the quality of available empirical data. We advocate a cautious approach to standardization. However even though rigorous standardization seems premature at this point, raising the issue helps us develop better insights into the practices of systems biology and the technical problems modelers face validating models. Further it allows us to identify certain technical validation issues which hold regardless of modeling context and purpose. Informal guidelines could in fact play a role in the field by helping modelers handle these. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Translation, adaptation and validation of a Portuguese version of the Moorehead-Ardelt Quality of Life Questionnaire II.

    Science.gov (United States)

    Maciel, João; Infante, Paulo; Ribeiro, Susana; Ferreira, André; Silva, Artur C; Caravana, Jorge; Carvalho, Manuel G

    2014-11-01

    The prevalence of obesity has increased worldwide. An assessment of the impact of obesity on health-related quality of life (HRQoL) requires specific instruments. The Moorehead-Ardelt Quality of Life Questionnaire II (MA-II) is a widely used instrument to assess HRQoL in morbidly obese patients. The objective of this study was to translate and validate a Portuguese version of the MA-II. The study included forward and backward translations of the original MA-II. The reliability of the Portuguese MA-II was estimated using the internal consistency and test-retest methods. For validation purposes, the Spearman's rank correlation coefficient was used to evaluate the correlation between the Portuguese MA-II and the Portuguese versions of two other questionnaires, the 36-item Short Form Health Survey (SF-36) and the Impact of Weight on Quality of Life-Lite (IWQOL-Lite). One hundred and fifty morbidly obese patients were randomly assigned to test the reliability and validity of the Portuguese MA-II. Good internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.80, and a very good agreement in terms of test-retest reliability was recorded, with an overall intraclass correlation coefficient (ICC) of 0.88. The total sums of MA-II scores and each item of MA-II were significantly correlated with all domains of SF-36 and IWQOL-Lite. A statistically significant negative correlation was found between the MA-II total score and BMI. Moreover, age, gender and surgical status were independent predictors of MA-II total score. A reliable and valid Portuguese version of the MA-II was produced, thus enabling the routine use of MA-II in the morbidly obese Portuguese population.
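
    For readers unfamiliar with the reliability and validity statistics cited here, the sketch below computes Cronbach's alpha and a Spearman correlation against a comparator scale on synthetic questionnaire data; item counts, scales and scores are illustrative assumptions, not the study data.

```python
# Sketch: internal consistency (Cronbach's alpha) and convergent validity
# (Spearman correlation against a comparator scale) on synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
latent = rng.normal(size=150)
items = np.column_stack([latent + rng.normal(scale=0.8, size=150) for _ in range(6)])  # 6 items
comparator = latent * 8 + rng.normal(scale=5, size=150)                                # external scale

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

total_score = items.sum(axis=1)
rho, p = spearmanr(total_score, comparator)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
print(f"Spearman rho vs. comparator = {rho:.2f} (p = {p:.3g})")
```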

  16. Validation of aerosol measurements by the satellite sensors SAM II and Sage

    Science.gov (United States)

    Russell, P. B.; Mccormick, M. P.; Swissler, T. J.

    1982-01-01

    A global data base on stratospheric aerosols has been obtained with the aid of the sensors SAM II and SAGE since the satellites carrying the sensors were launched in October 1978 and February 1979, respectively. Several major comparative experiments have been conducted to acquire correlative data for validating the extinction profiles measured by these satellite sensors. The present investigation presents results from the first two of these experiments, which were conducted at Sondrestrom, Greenland, in November 1978, and at Poker Flat, Alaska, in July 1979. In both experiments, extinction profiles derived from the correlative sensors (dustsonde, lidar, filter, wire impactor) agreed, to within their respective uncertainties, with the extinction profiles measured by SAM II and SAGE (which in turn agreed with each other).

  17. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow...

  18. Prospects and problems for standardizing model validation in systems biology

    NARCIS (Netherlands)

    Gross, Fridolin; MacLeod, Miles Alexander James

    2017-01-01

    There are currently no widely shared criteria by which to assess the validity of computational models in systems biology. Here we discuss the feasibility and desirability of implementing validation standards for modeling. Having such a standard would facilitate journal review, interdisciplinary coll

  19. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes validating OPNET models of key devices for the next generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into scalability and performance of future deployed networks. Because validated models of key Cisco equipment we

  20. Foundational Issues in Statistical Modeling: Statistical Model Specification and Validation

    Directory of Open Access Journals (Sweden)

    Aris Spanos

    2011-01-01

    Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substantive and pragmatic criteria are of questionable value when the estimated model is statistically misspecified. Foisting one's favorite model on the data often yields estimated models which are both statistically and substantively misspecified, but one has no way to delineate between the two sources of error and apportion blame. The paper argues that the error statistical approach can address this Duhemian ambiguity by distinguishing between statistical and substantive premises and viewing empirical modeling in a piecemeal way with a view to delineating the various issues more effectively. It is also argued that Hendry's general-to-specific procedure does a much better job in model selection than the theory-driven and Akaike-type procedures, primarily because of its error-statistical underpinnings.

  1. Validation of a Conceptual Assessment Tool in E&M II

    CERN Document Server

    Ryan, Qing X; Baily, Charles; Pollock, Steven J

    2014-01-01

    As part of an ongoing project to investigate student learning in upper-division electrodynamics (E&M II), the PER research group at the University of Colorado Boulder has developed a tool to assess student conceptual understanding: the CURrENT (Colorado UppeR-division ElectrodyNamics Test). The result is an open-ended post-test diagnostic with 6 multi-part questions, an optional 3-question pretest, and an accompanying grading rubric. This instrument is motivated in part by our faculty-consensus learning goals, and is intended to help measure the effectiveness of transformed pedagogy. In addition, it provides insights into student thinking and student difficulties in the covered topical areas. In this paper, we present preliminary measures of the validity and reliability of the instrument and scoring rubric. These include expert validation and student interviews, inter-rater reliability measures, and classical test statistics.

  2. Belbin Revisited: The Construct Validity of the Interplace II Team Role Instrument

    OpenAIRE

    Dierendonck, Dirk; Groen, Rob

    2008-01-01

    In the present study the construct validity of the revised edition of the Belbin Team Roles measure, the so-called Interplace II program, is tested. Three parallel parts were used to determine someone’s team roles. The sample included 1434 persons who were asked to fill out the self-perception inventory and the self-perception assessment, whereas the observer assessment sheet was filled out by at least four observers. The inter-rater reliability appeared to be satisfactory across ...

  3. Validation of SAGE II aerosol measurements by comparison with correlative sensors

    Science.gov (United States)

    Swissler, T. J.

    1986-01-01

    The SAGE II limb-scanning radiometer carried on the Earth Radiation Budget Satellite functions at wavelengths of 0.385, 0.45, 0.525, and 1.02 microns to identify vertical profiles of aerosol density by atmospheric extinction measurements from cloud tops upward. The data are being validated by correlating the satellite data with data gathered with, e.g., lidar, sunphotometer, and dustsonde instruments. Work thus far has shown that the 1 micron measurements from the ground and satellite are highly correlated and are therefore accurate to within measurement uncertainty.

  4. Toward Validation of the Diagnostic-Prescriptive Model

    Science.gov (United States)

    Ysseldyke, James E.; Sabatino, David A.

    1973-01-01

    Criticized are recent research efforts to validate the diagnostic prescriptive model of remediating learning disabilities, and proposed is a 6-step psychoeducational model designed to ascertain links between behavioral differences and instructional outcomes. (DB)

  5. Models for Validation of Prior Learning (VPL)

    DEFF Research Database (Denmark)

    Ehlers, Søren

    would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian while universities in the United States of America (USA) were developing ways to obtain credits to those students which was coming with experiences from working life....

  6. Modeling and Simulation Behavior Validation Methodology and Extension Model Validation for the Individual Soldier

    Science.gov (United States)

    2015-03-01

    Historical Methods: The three historical methods of validation are rationalism, empiricism, and positive economics. Rationalism requires that... Empiricism requires every assumption and outcome to be empirically validated. Positive economics requires only that the model's outcome(s) be correct...historical methods of rationalism, empiricism, and positive economics into a multistage process of validation. This validation method consists of (1

  7. Spectral modeling of Type II SNe

    Science.gov (United States)

    Dessart, Luc

    2015-08-01

    The red supergiant phase represents the final stage of evolution in the life of moderate-mass (8-25 Msun) massive stars. Hidden from view, the core changes its structure considerably, progressing through the advanced stages of nuclear burning, and eventually becomes degenerate. Upon reaching the Chandrasekhar mass, this Fe or ONeMg core collapses, leading to the formation of a proto neutron star. A type II supernova results if the shock that forms at core bounce eventually wins over the envelope accretion and reaches the progenitor surface. The electromagnetic display of such core-collapse SNe starts with this shock breakout, and persists for months as the ejecta releases the energy deposited initially by the shock or continuously through radioactive decay. Over a timescale of weeks to months, the originally optically-thick ejecta thins out and turns nebular. SN radiation contains a wealth of information about the explosion physics (energy, explosive nucleosynthesis) and the progenitor properties (structure and composition). Polarised radiation also offers signatures that can help constrain the morphology of the ejecta. In this talk, I will review the current status of type II SN spectral modelling, and emphasise that a proper solution requires a time-dependent treatment of the radiative transfer problem. I will discuss the wealth of information that can be gleaned from spectra as well as light curves, from both early times (photospheric phase) and late times (nebular phase). I will discuss the diversity of Type II SNe properties and how they are related to the diversity of the red supergiant stars from which they originate. SN radiation offers an alternate means of constraining the properties of red supergiant stars. To wrap up, I will illustrate how SNe II-P can also be used as probes, for example to constrain the metallicity of their environment.

  8. Validation of the Millon Clinical Multiaxial Inventory for Axis II disorders: does it meet the Daubert standard?

    Science.gov (United States)

    Rogers, R; Salekin, R T; Sewell, K W

    1999-08-01

    Relevant to forensic practice, the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) established the boundaries for the admissibility of scientific evidence that take into account its trustworthiness as assessed via evidentiary reliability. In conducting forensic evaluations, psychologists and other mental health professionals must be able to offer valid diagnoses, including Axis II disorders. The most widely available measure of personality disorders is the Millon Clinical Multiaxial Inventory (MCMI) and its subsequent revisions (MCMI-II and MCMI-III). We address the critical question, "Do the MCMI-II and MCMI-III meet the requirements of Daubert?" Fundamental problems in the scientific validity and error rates for the MCMI-III appear to preclude its admissibility under Daubert for the assessment of Axis II disorders. We address the construct validity of the MCMI and MCMI-II via a meta-analysis of 33 studies. The resulting multitrait-multimethod approach allowed us to address their convergent and discriminant validity through method effects (Marsh, 1990). With reference to Daubert, the results suggest a circumscribed use for the MCMI-II with good evidence of construct validity for Avoidant, Schizotypal, and Borderline personality disorders.

  9. Validation of Numerical Shallow Water Models for Tidal Lagoons

    Energy Technology Data Exchange (ETDEWEB)

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.

  10. Using virtual reality to validate system models

    Energy Technology Data Exchange (ETDEWEB)

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks), or informal in the case of code inspections. The authors believe that an essential type of evidence of the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  11. The development and validation of a CT-based radiomics signature for the preoperative discrimination of stage I-II and stage III-IV colorectal cancer

    Science.gov (United States)

    He, Lan; Chen, Xin; Ma, Zelan; Dong, Di; Tian, Jie; Liang, Changhong; Liu, Zaiyi

    2016-01-01

    Objectives: To investigate the predictive ability of a radiomics signature for preoperative staging (I-II vs. III-IV) of primary colorectal cancer (CRC). Methods: This study consisted of 494 consecutive patients (training dataset: n=286; validation dataset: n=208) with stage I-IV CRC. A radiomics signature was generated using a LASSO logistic regression model. The association between the radiomics signature and CRC staging was explored. The classification performance of the radiomics signature was explored with respect to the receiver operating characteristic (ROC) curve. Results: The 16-feature-based radiomics signature was an independent predictor for staging of CRC, which could successfully categorize CRC into stage I-II and III-IV (p < 0.0001) in the training and validation datasets. The median of the radiomics signature for stage III-IV was higher than for stage I-II in the training and validation datasets. As for the classification performance of the radiomics signature in CRC staging, the AUC was 0.792 (95% CI: 0.741-0.853) with a sensitivity of 0.629 and a specificity of 0.874. The signature in the validation dataset obtained an AUC of 0.708 (95% CI: 0.698-0.718) with a sensitivity of 0.611 and a specificity of 0.680. Conclusions: A radiomics signature was developed and validated to be a significant predictor for discrimination of stage I-II from III-IV CRC, which may serve as a complementary tool for preoperative tumor staging in CRC. PMID:27120787
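
    A hedged sketch of the general approach (a LASSO-penalized logistic signature evaluated by AUC on a held-out cohort) is shown below using scikit-learn; the feature matrices are synthetic and only the cohort sizes are taken from the abstract, so it is not the study's radiomics pipeline.

```python
# Sketch: build a LASSO-based "signature" and check its AUC on a held-out
# validation cohort, on synthetic feature data.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_train, n_valid, n_features = 286, 208, 150          # cohort sizes from the abstract
X_train = rng.normal(size=(n_train, n_features))
X_valid = rng.normal(size=(n_valid, n_features))
w = np.zeros(n_features)
w[:16] = rng.normal(size=16)                          # only a few informative features
y_train = (X_train @ w + rng.normal(size=n_train) > 0).astype(int)   # stage III-IV = 1
y_valid = (X_valid @ w + rng.normal(size=n_valid) > 0).astype(int)

scaler = StandardScaler().fit(X_train)
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5,
                             scoring="roc_auc").fit(scaler.transform(X_train), y_train)

signature_train = model.decision_function(scaler.transform(X_train))
signature_valid = model.decision_function(scaler.transform(X_valid))
print("selected features:", int(np.sum(model.coef_ != 0)))
print("training AUC:   %.3f" % roc_auc_score(y_train, signature_train))
print("validation AUC: %.3f" % roc_auc_score(y_valid, signature_valid))
```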

  12. Extending Model Checking to Object Process Validation

    NARCIS (Netherlands)

    Rein, van H.

    2002-01-01

    Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent models

  13. Run II Analysis Framework and Initial Validation Studies for $H \\rightarrow ZZ^{*} \\rightarrow 4\\ell$ Analysis

    CERN Document Server

    Abidi, Syed Haider

    This undergraduate thesis focuses on the development of a user analysis framework for the ATLAS Run 2 $H \\rightarrow ZZ^{*} \\rightarrow 4\\ell$ analysis. The Run 1 analysis model is investigated and requirements and constraints for a new model are derived. Based on these and the new ATLAS software upgrades, the design of a new code base is outlined and implemented. Initial validation studies using this framework are also presented.

  14. Cross-validation criteria for SETAR model selection

    NARCIS (Netherlands)

    de Gooijer, J.G.

    2001-01-01

    Three cross-validation criteria, denoted C, C_c, and C_u, are proposed for selecting the orders of a self-exciting threshold autoregressive (SETAR) model when both the delay and the threshold value are unknown. The derivation of C is within a natural cross-validation framework. The criterion C_c is si

  15. Angular Distribution Models for Top-of-Atmosphere Radiative Flux Estimation from the Clouds and the Earth's Radiant Energy System Instrument on the Tropical Rainfall Measuring Mission Satellite. Part II; Validation

    Science.gov (United States)

    Loeb, N. G.; Loukachine, K.; Wielicki, B. A.; Young, D. F.

    2003-01-01

    Top-of-atmosphere (TOA) radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) are estimated from empirical angular distribution models (ADMs) that convert instantaneous radiance measurements to TOA fluxes. This paper evaluates the accuracy of CERES TOA fluxes obtained from a new set of ADMs developed for the CERES instrument onboard the Tropical Rainfall Measuring Mission (TRMM). The uncertainty in regional monthly mean reflected shortwave (SW) and emitted longwave (LW) TOA fluxes is less than 0.5 W/sq m, based on comparisons with TOA fluxes evaluated by direct integration of the measured radiances. When stratified by viewing geometry, TOA fluxes from different angles are consistent to within 2% in the SW and 0.7% (or 2 W/sq m) in the LW. In contrast, TOA fluxes based on ADMs from the Earth Radiation Budget Experiment (ERBE) applied to the same CERES radiance measurements show a 10% relative increase with viewing zenith angle in the SW and a 3.5% (9 W/sq m) decrease with viewing zenith angle in the LW. Based on multiangle CERES radiance measurements, 1° regional instantaneous TOA flux errors from the new CERES ADMs are estimated to be 10 W/sq m in the SW and 3.5 W/sq m in the LW. The errors show little or no dependence on cloud phase, cloud optical depth, and cloud infrared emissivity. An analysis of cloud radiative forcing (CRF) sensitivity to differences between ERBE and CERES TRMM ADMs, scene identification, and directional models of albedo as a function of solar zenith angle shows that ADM and clear-sky scene identification differences can lead to an 8 W/sq m root-mean-square (rms) difference in 1° daily mean SW CRF and a 4 W/sq m rms difference in LW CRF. In contrast, monthly mean SW and LW CRF differences reach 3 W/sq m. CRF is found to be relatively insensitive to differences between the ERBE and CERES TRMM directional models.

  16. Amendment to Validated dynamic flow model

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2011-01-01

    The purpose of WP2 is to establish flow models relating the wind speed at turbines in a farm. Until now, active control of power reference has not been included in these models as only data with standard operation has been available. In this report the first data series with power reference...... excitations from the Thanet farm are used for trying to update some of the models discussed in D2.5. Because of very limited amount of data only simple dynamic transfer function models can be obtained. The three obtained data series are somewhat different. Only the first data set seems to have the front...... turbine in undisturbed flow. For this data set both the multiplicative model and in particular the simple first order transfer function model can predict the down wind wind speed from upwind wind speed and loading....

  17. Gear Windage Modeling Progress - Experimental Validation Status

    Science.gov (United States)

    Kunz, Rob; Handschuh, Robert F.

    2008-01-01

    In the Subsonics Rotary Wing (SRW) Project being funded for propulsion work at NASA Glenn Research Center, performance of the propulsion system is of high importance. In current rotorcraft drive systems many gearing components operate at high rotational speed (pitch line velocity > 24000 ft/min). In our testing of high speed helical gear trains at NASA Glenn we have found that the work done on the air-oil mist within the gearbox can become a significant part of the power loss of the system. This loss mechanism is referred to as windage. The effort described in this presentation is to understand the variables that affect windage and to develop a good experimental database to validate the analytical project being conducted at Penn State University by Dr. Rob Kunz under a NASA SRW NRA. The presentation provides an update on the status of these efforts.

  18. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect inc...

  19. EMMD-Prony approach for dynamic validation of simulation models

    Institute of Scientific and Technical Information of China (English)

    Ruiyang Bai

    2015-01-01

    Model validation and updating is critical to model credibility growth. In order to assess model credibility quantitatively and locate model error precisely, a new dynamic validation method based on extremum field mean mode decomposition (EMMD) and the Prony method is proposed in this paper. Firstly, complex dynamic responses from models and real systems are processed into stationary components by EMMD. These components always have definite physical meanings which can be the evidence about rough model error location. Secondly, the Prony method is applied to identify the features of each EMMD component. Amplitude similarity, frequency similarity, damping similarity and phase similarity are defined to describe the similarity of dynamic responses. Then quantitative validation metrics are obtained based on the improved entropy weight and energy proportion. Precise model error location is realized based on the physical meanings of these features. The application of this method in aircraft controller design provides evidence about its feasibility and usability.
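
    The Prony step referred to above can be sketched as follows: fit damped exponentials to a uniformly sampled response and read off amplitude, frequency, damping and phase. The EMMD decomposition itself is not reproduced; the input is assumed to be a single, already extracted stationary component, and the test signal is synthetic.

```python
# Sketch of a basic Prony fit: identify amplitude, frequency, damping and phase
# of exponentially damped sinusoids from a uniformly sampled signal.
import numpy as np

def prony(x: np.ndarray, p: int, dt: float):
    """Fit x[n] as a sum of p complex exponential modes (Prony's method)."""
    N = len(x)
    # 1) Linear prediction: coefficients of the characteristic polynomial.
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(A, -x[p:N], rcond=None)[0]
    # 2) Roots of the polynomial give the discrete-time poles z_k.
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Least-squares fit of complex amplitudes h_k in x[n] = sum_k h_k * z_k**n.
    V = np.vander(z, N, increasing=True).T            # shape (N, p), V[n, k] = z_k**n
    h = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    damping = np.log(np.abs(z)) / dt                  # sigma_k (1/s)
    freq = np.angle(z) / (2 * np.pi * dt)             # f_k (Hz)
    return np.abs(h), freq, damping, np.angle(h)

# Synthetic single damped sinusoid: 2.0 * exp(-0.3 t) * cos(2*pi*5 t + 0.4)
dt = 0.01
t = np.arange(0, 2, dt)
x = 2.0 * np.exp(-0.3 * t) * np.cos(2 * np.pi * 5.0 * t + 0.4)
amp, freq, damping, phase = prony(x, p=2, dt=dt)      # one real mode = 2 complex poles
k = int(np.argmax(freq))                              # pick the positive-frequency pole
print(f"f = {freq[k]:.2f} Hz, damping = {damping[k]:.2f} 1/s, "
      f"amplitude = {2 * amp[k]:.2f}, phase = {phase[k]:.2f} rad")
```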

  20. Uncertainty Quantification and Validation for RANS Turbulence Models

    Science.gov (United States)

    Oliver, Todd; Moser, Robert

    2011-11-01

    Uncertainty quantification and validation procedures for RANS turbulence models are developed and applied. The procedures used here rely on a Bayesian view of probability. In particular, the uncertainty quantification methodology requires stochastic model development, model calibration, and model comparison, all of which are pursued using tools from Bayesian statistics. Model validation is also pursued in a probabilistic framework. The ideas and processes are demonstrated on a channel flow example. Specifically, a set of RANS models--including Baldwin-Lomax, Spalart-Allmaras, k-ɛ, k-ω, and v2-f--and uncertainty representations are analyzed using DNS data for fully-developed channel flow. Predictions of various quantities of interest and the validity (or invalidity) of the various models for making those predictions will be examined. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  1. Validation of a national hydrological model

    Science.gov (United States)

    McMillan, H. K.; Booker, D. J.; Cattoën, C.

    2016-10-01

    Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
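
    The two scores used in this validation, Nash-Sutcliffe efficiency and percent bias, are simple to compute; the sketch below shows one common definition of each on synthetic daily flow series (the sign convention for percent bias varies between authors, so the one used here is an assumption).

```python
# Sketch: Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) for one gauge.
import numpy as np

def nash_sutcliffe(obs: np.ndarray, sim: np.ndarray) -> float:
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs: np.ndarray, sim: np.ndarray) -> float:
    """PBIAS = 100 * sum(sim - obs) / sum(obs); positive means over-prediction here."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

rng = np.random.default_rng(3)
observed = np.exp(rng.normal(1.0, 0.6, size=365))           # daily flows, m3/s (synthetic)
simulated = observed * rng.lognormal(0.05, 0.2, size=365)   # an imperfect "model"

print(f"NSE   = {nash_sutcliffe(observed, simulated):.2f}")
print(f"PBIAS = {percent_bias(observed, simulated):+.1f} %")
```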

  2. Modeling interactions of Hg(II) and bauxitic soils.

    Science.gov (United States)

    Weerasooriya, Rohan; Tobschall, Heinz J; Bandara, Atula

    2007-11-01

    The adsorptive interactions of Hg(II) with gibbsite-rich soils (hereafter SOIL-g) were modeled by 1-pK surface complexation theory using the charge distribution multi-site ion competition model (CD MUSIC) incorporating the basic Stern layer model (BSM) to account for electrostatic effects. The model calibrations were performed for the experimental data of synthetic gibbsite-Hg(II) adsorption. When [NaNO3] >= 0.01 M, the Hg(II) adsorption density values of gibbsite, Gamma_Hg(II), showed a negligible variation with ionic strength. However, the Gamma_Hg(II) values show a marked variation with [Cl-]. When [Cl-] >= 0.01 M, the Gamma_Hg(II) values showed a significant reduction with pH. The Hg(II) adsorption behavior in NaNO3 was modeled assuming a homogeneous solid surface. The introduction of high affinity sites, i.e., >Al(s)OH, at a low concentration (typically about 0.045 sites nm^-2) is required to model Hg(II) adsorption in NaCl. According to IR spectroscopic data, the bauxitic soil (SOIL-g) is characterized by gibbsite and bayerite. These mineral phases were not treated discretely in modeling of Hg(II) and soil interactions. The CD MUSIC/BSM model combination can be used to model Hg(II) adsorption on bauxitic soil. Organic matter seems to play a role in Hg(II) binding when pH > 8. The Hg(II) adsorption in the presence of excess Cl- ions required the selection of high affinity sites in modeling.

  3. Toward a validation process for model based safety analysis

    OpenAIRE

    Adeline, Romain; Cardoso, Janette; Darfeuil, Pierre; Humbert, Sophie; Seguin, Christel

    2010-01-01

    Today, Model Based processes become more and more widespread to achieve the analysis of a system. However, there is no formal testing approach to ensure that the formal model is compliant with the real system. In the paper, we choose to study AltaRica model. We present a general process to well construct and validate an AltaRica formal model. The focus is made on this validation phase, i.e. verifying the compliance between the model and the real system. For it, the proposed process recommends...

  4. Construction and validation of detailed kinetic models for the combustion of gasoline surrogates; Construction et validation de modeles cinetiques detailles pour la combustion de melanges modeles des essences

    Energy Technology Data Exchange (ETDEWEB)

    Touchard, S.

    2005-10-15

    The irreversible reduction of oil resources, CO2 emission control and the application of increasingly strict standards on pollutant emissions lead researchers worldwide to work on reducing pollutant formation and improving engine yields, especially by using homogeneous charge combustion of lean mixtures. The numerical simulation of fuel blend oxidation is an essential tool to study the influence of fuel formulation and engine conditions on auto-ignition and on pollutant emissions. Automatic generation helps to obtain detailed kinetic models, especially at low temperature, where the number of reactions quickly exceeds a thousand. The main purpose of this study is the generation and the validation of detailed kinetic models for the oxidation of gasoline blends using the EXGAS software. This work has implied an improvement of the computation rules for thermodynamic and kinetic data, which were validated by numerical simulation using the CHEMKIN II software. A large part of this work has concerned the understanding of the low-temperature oxidation chemistry of the C5 and larger alkenes. Low- and high-temperature mechanisms were proposed and validated for 1-pentene, 1-hexene, the binary mixtures 1-hexene/iso-octane, 1-hexene/toluene and iso-octane/toluene, and the ternary mixture 1-hexene/toluene/iso-octane. Simulations were also done for propene, 1-butene and iso-octane with former models including the modifications proposed in this PhD work. While the generated models allowed us to simulate the auto-ignition delays of the studied molecules and blends with good agreement, some uncertainties still remain for some reaction paths leading to the formation of cyclic products in the case of alkene oxidation at low temperature. It would also be interesting to carry on this work towards combustion models of gasoline blends at low temperature. (author)

  5. Comparison with CLPX II airborne data using DMRT model

    Science.gov (United States)

    Xu, X.; Liang, D.; Andreadis, K.M.; Tsang, L.; Josberger, E.G.

    2009-01-01

    In this paper, we considered a physics-based model which uses numerical solutions of Maxwell's equations in three-dimensional simulations and applies them within dense media radiative transfer (DMRT) theory. The model is validated against two specific datasets from the second Cold Land Processes Experiment (CLPX II) in Alaska and Colorado. The data were all obtained from Ku-band (13.95 GHz) observations using the airborne imaging polarimetric scatterometer (POLSCAT). Snow is a densely packed medium. To take into account the collective scattering and incoherent scattering, the analytical Quasi-Crystalline Approximation (QCA) and the Numerical Maxwell Equation Method of 3-D simulation (NMM3D) are used to calculate the extinction coefficient and phase matrix. The DMRT equations were solved by an iterative solution up to 2nd order for the case of small optical thickness, and a full multiple scattering solution obtained by decomposing the diffuse intensities into Fourier series was used when the optical thickness exceeded unity. It was shown that the model predictions agree with the field experiment not only in co-polarization but also in cross-polarization. For the Alaska region, the input snow structure data were obtained from in situ ground observations, while for the Colorado region the VIC model was used to obtain the snow profile. © 2009 IEEE.

  6. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    [Abstract not available; the record contains only figure-list residue from the report: "Tools and instrumentation, direction vernier"; "Tools and instrumentation, bracket attached to rail"; "Plan A lock approach, upstream approach"; "Numerical model".]

  7. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
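
    Sensitivity and specificity, the two evaluation measures discussed here, reduce to a confusion matrix over presence/absence cells; the sketch below illustrates the calculation on synthetic occurrence grids, not the breeding-bird data of the study.

```python
# Sketch: sensitivity and specificity of a presence/absence prediction against a
# later survey, on synthetic data.
import numpy as np

rng = np.random.default_rng(11)
observed_t2 = rng.integers(0, 2, size=500)                    # presence (1) / absence (0) at t2
predicted_t2 = np.where(rng.random(500) < 0.8, observed_t2,   # ~80% of cells predicted correctly
                        1 - observed_t2)

tp = np.sum((predicted_t2 == 1) & (observed_t2 == 1))
tn = np.sum((predicted_t2 == 0) & (observed_t2 == 0))
fp = np.sum((predicted_t2 == 1) & (observed_t2 == 0))
fn = np.sum((predicted_t2 == 0) & (observed_t2 == 1))

sensitivity = tp / (tp + fn)   # correctly classified presences
specificity = tn / (tn + fp)   # correctly classified absences
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```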

  8. Photoionization models for giant h ii regions

    Directory of Open Access Journals (Sweden)

    Grażyna Stasińska

    2000-01-01

    We review the sources of uncertainty in photoionization models of giant H II regions. We also discuss the problem of the electron temperature in light of model fits to three giant H II regions.

  9. Validation of Model Forecasts of the Ambient Solar Wind

    Science.gov (United States)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  10. Traffic modelling validation of advanced driver assistance systems

    NARCIS (Netherlands)

    Tongeren, R. van; Gietelink, O.J.; Schutter, B. de; Verhaegen, M.

    2007-01-01

    This paper presents a microscopic traffic model for the validation of advanced driver assistance systems. This model describes single-lane traffic and is calibrated with data from a field operational test. To illustrate the use of the model, a Monte Carlo simulation of single-lane traffic scenarios

  11. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois;

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  12. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Germina [ORNL; Gauld, Ian C [ORNL

    2011-01-01

    This report is one of the several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % {sup 235}U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  13. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    2007-01-01

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation us
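
    The comparison raised in this record can be sketched as follows: validate a simple two-point limited sampling model either on a separate patient group or by leave-one-out cross-validation on the development group. The pharmacokinetic data, time points and linear LSM form below are illustrative assumptions.

```python
# Sketch: independent-group validation vs. leave-one-out cross-validation of a
# limited sampling model that predicts AUC from two concentration time points.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
n = 60
c2h = rng.lognormal(1.0, 0.3, size=n)                            # concentration at 2 h
c6h = rng.lognormal(0.3, 0.3, size=n)                            # concentration at 6 h
auc_full = 4.0 * c2h + 9.0 * c6h + rng.normal(0, 1.5, size=n)    # "measured" full AUC

X, y = np.column_stack([c2h, c6h]), auc_full
dev, val = slice(0, 40), slice(40, 60)                           # development vs separate group

# (a) Separate validation group.
lsm = LinearRegression().fit(X[dev], y[dev])
rmse_val = np.sqrt(np.mean((lsm.predict(X[val]) - y[val]) ** 2))

# (b) Leave-one-out cross-validation on the development data only.
pred_cv = cross_val_predict(LinearRegression(), X[dev], y[dev], cv=LeaveOneOut())
rmse_cv = np.sqrt(np.mean((pred_cv - y[dev]) ** 2))

print(f"RMSE, separate validation group: {rmse_val:.2f}")
print(f"RMSE, leave-one-out CV:          {rmse_cv:.2f}")
```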

  14. HYDRODYNAMICAL MODELS OF TYPE II-P SUPERNOVA LIGHT CURVES

    Directory of Open Access Journals (Sweden)

    M. C. Bersten

    2009-01-01

    We present progress in light curve models of type II-P supernovae (SNe II-P) obtained using a newly developed, one-dimensional hydrodynamic code. Using simple initial models (polytropes), we reproduced the global behavior of the observed light curves and analyzed the sensitivity of the light curves to variations of the free parameters.

  15. Validation of Air Traffic Controller Workload Models

    Science.gov (United States)

    1979-09-01

    SAR) tapes during the data reduction phase of the project. Kentron International Limited provided the software support for the project. This included... ETABS) or to revised traffic control procedures. The models also can be used to verify productivity benefits after new configurations have been...collected and processed manually. A preliminary comparison has been made between standard NAS Stage A and ETABS operations at Miami.

  16. Reduced-complexity modeling of braided rivers: Assessing model performance by sensitivity analysis, calibration, and validation

    Science.gov (United States)

    Ziliani, L.; Surian, N.; Coulthard, T. J.; Tarantola, S.

    2013-12-01

    This paper addresses an important question of modeling stream dynamics: How may numerical models of braided stream morphodynamics be rigorously and objectively evaluated against a real case study? Using simulations from the Cellular Automaton Evolutionary Slope and River (CAESAR) reduced-complexity model (RCM) of a 33 km reach of a large gravel bed river (the Tagliamento River, Italy), this paper aims to (i) identify a sound strategy for calibration and validation of RCMs, (ii) investigate the effectiveness of multiperformance model assessments, (iii) assess the potential of using CAESAR at mesospatial and mesotemporal scales. The approach used has three main steps: first sensitivity analysis (using a screening method and a variance-based method), then calibration, and finally validation. This approach allowed us to analyze 12 input factors initially and then to focus calibration only on the factors identified as most important. Sensitivity analysis and calibration were performed on a 7.5 km subreach, using a hydrological time series of 20 months, while validation was performed on the whole 33 km study reach over a period of 8 years (2001-2009). CAESAR was able to reproduce the macromorphological changes of the study reach and gave good results for annual bed load sediment estimates, which turned out to be consistent with measurements in other large gravel bed rivers, but showed a poorer performance in reproducing the characteristics of the braided channel (e.g., braiding intensity). The approach developed in this study can be effectively applied in other similar RCM contexts, allowing the use of RCMs not only in an explorative manner but also in obtaining quantitative results and scenarios.
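
    As a simple stand-in for the screening step of the strategy described above, the sketch below perturbs each input factor one at a time over its range and ranks factors by their effect on a scalar output; the response function, factor names and ranges are illustrative assumptions, not the CAESAR model.

```python
# Sketch: one-at-a-time screening of input factors for a toy scalar model output.
import numpy as np

def toy_model(params: np.ndarray) -> float:
    """Placeholder for one model run returning a scalar output (e.g. bedload volume)."""
    erosion_rate, lateral_coeff, manning_n, d50 = params
    return 1e3 * erosion_rate * (1 + 4 * lateral_coeff) / (manning_n * np.sqrt(d50))

names = ["erosion_rate", "lateral_coeff", "manning_n", "d50"]
bounds = np.array([[1e-6, 1e-4], [1e-4, 1e-2], [0.02, 0.05], [0.01, 0.08]])
nominal = bounds.mean(axis=1)

effects = {}
for i, name in enumerate(names):
    outputs = []
    for value in np.linspace(bounds[i, 0], bounds[i, 1], 9):   # sweep one factor at a time
        params = nominal.copy()
        params[i] = value
        outputs.append(toy_model(params))
    effects[name] = max(outputs) - min(outputs)                # crude sensitivity measure

for name, effect in sorted(effects.items(), key=lambda kv: -kv[1]):
    print(f"{name:>14s}: output range {effect:.3g}")
```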

  17. Preliminary validation of the Spanish version of the Multiple Stimulus Types Ambiguity Tolerance Scale (MSTAT-II).

    Science.gov (United States)

    Arquero, José L; McLain, David L

    2010-05-01

    Despite widespread interest in ambiguity tolerance and other information-related individual differences, existing measures are conceptually dispersed and psychometrically weak. This paper presents the Spanish version of MSTAT-II, a short, stimulus-oriented, and psychometrically improved measure of an individual's orientation toward ambiguous stimuli. Results obtained reveal adequate reliability, validity, and temporal stability. These results support the use of MSTAT-II as an adequate measure of ambiguity tolerance.

  18. Experimental Validation of RELAP5 and TRACE5 for Licensing Studies of the Boron Injection System of Atucha II

    Directory of Open Access Journals (Sweden)

    Alejandro I. Lazarte

    2011-01-01

    This paper presents an experimental validation of RELAP5 and TRACE5 for licensing studies of the Atucha II-PHWR nuclear power plant. A scaled experimental facility, representing the boron injection system of Atucha II, was built. The system is of fundamental importance for loss of coolant accidents (LOCA) and anticipated transients without scram (ATWS). The experiment consists of the discharge of a tank that represents the boron tank, filled with air or a mixture of air and water, into a discharge tank that represents the moderator tank. Both tanks are connected by a pipe which includes a valve and an orifice plate to model the pressure losses due to the fittings in the real system. The pressure and water level measured in the tanks are compared with the RELAP5 and TRACE5 predictions. The codes predict the pressure in the tanks accurately. However, both codes overpredict the heat transfer at the boron tank air-water interface, which produces a greater expansion of the air and leads to a small discrepancy in the boron tank level prediction.

  19. Experiments for foam model development and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F. (Honeywell Federal Manufacturing and Technologies, Kansas City Plant, Kansas City, MO); Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  20. Spatial statistical modeling of shallow landslides—Validating predictions for different landslide inventories and rainfall events

    Science.gov (United States)

    von Ruette, Jonas; Papritz, Andreas; Lehmann, Peter; Rickli, Christian; Or, Dani

    2011-10-01

    Statistical models that exploit the correlation between landslide occurrence and geomorphic properties are often used to map the spatial occurrence of shallow landslides triggered by heavy rainfalls. In many landslide susceptibility studies, the true predictive power of the statistical model remains unknown because the predictions are not validated with independent data from other events or areas. This study validates statistical susceptibility predictions with independent test data. The spatial incidence of landslides, triggered by an extreme rainfall in a study area, was modeled by logistic regression. The fitted model was then used to generate susceptibility maps for another three study areas, for which event-based landslide inventories were also available. All the study areas lie in the northern foothills of the Swiss Alps. The landslides had been triggered by heavy rainfall either in 2002 or 2005. The validation was designed such that the first validation study area shared the geomorphology and the second the triggering rainfall event with the calibration study area. For the third validation study area, both geomorphology and rainfall were different. All explanatory variables were extracted for the logistic regression analysis from high-resolution digital elevation and surface models (2.5 m grid). The model fitted to the calibration data comprised four explanatory variables: (i) slope angle (effect of gravitational driving forces), (ii) vegetation type (grassland and forest; root reinforcement), (iii) planform curvature (convergent water flow paths), and (iv) contributing area (potential supply of water). The area under the Receiver Operating Characteristic (ROC) curve (AUC) was used to quantify the predictive performance of the logistic regression model. The AUC values were computed for the susceptibility maps of the three validation study areas (validation AUC), the fitted susceptibility map of the calibration study area (apparent AUC: 0.80) and another

  1. Functional state modelling approach validation for yeast and bacteria cultivations

    Science.gov (United States)

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
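
    The parameter identification step described here can be sketched with an evolutionary optimizer fitting a Monod-type growth model to synthetic batch-style data; SciPy's differential evolution is used below as a stand-in for the genetic algorithm of the study, and the model, data and bounds are illustrative.

```python
# Sketch: evolutionary parameter identification for a simple Monod growth model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def batch_model(t, y, mu_max, Ks, Yxs):
    """Biomass X and substrate S for simple Monod growth (no feed term, for brevity)."""
    X, S = y
    mu = mu_max * S / (Ks + S)
    return [mu * X, -mu * X / Yxs]

# Synthetic "measurements" generated with known parameters plus noise.
true_params = (0.45, 0.10, 0.50)
t_obs = np.linspace(0, 10, 15)
sol = solve_ivp(batch_model, (0, 10), [0.2, 10.0], t_eval=t_obs, args=true_params)
rng = np.random.default_rng(2)
X_obs = sol.y[0] + rng.normal(0, 0.05, size=t_obs.size)

def objective(p):
    sim = solve_ivp(batch_model, (0, 10), [0.2, 10.0], t_eval=t_obs, args=tuple(p))
    return np.sum((sim.y[0] - X_obs) ** 2)

result = differential_evolution(objective,
                                bounds=[(0.1, 1.0), (0.01, 1.0), (0.2, 0.8)],
                                seed=1, maxiter=100)
print("identified (mu_max, Ks, Yxs):", np.round(result.x, 3))
```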

  2. Functional state modelling approach validation for yeast and bacteria cultivations.

    Science.gov (United States)

    Roeva, Olympia; Pencheva, Tania

    2014-09-03

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach.

  3. Validation of Orthorectified Interferometric Radar Imagery and Digital Elevation Models

    Science.gov (United States)

    Smith, Charles M.

    2004-01-01

    This work was performed under NASA's Verification and Validation (V&V) Program as an independent check of data supplied by EarthWatch, Incorporated, through the Earth Science Enterprise Scientific Data Purchase (SDP) Program. This document serves as the basis of reporting results associated with validation of orthorectified interferometric radar imagery and digital elevation models (DEM). This validation covers all datasets provided under the first campaign (Central America & Virginia Beach) plus three earlier missions (Indonesia, Red River, and Denver) for a total of 13 missions.

  4. Validation of a Model of the Domino Effect?

    CERN Document Server

    Larham, Ron

    2008-01-01

    A recent paper proposing a model of the limiting speed of the domino effect is discussed with reference to its need, and the need of models in general, for validation against experimental data. It is shown that the proposed model diverges significantly from experimentally derived speed estimates over a significant range of domino spacings, using data from the existing literature and this author's own measurements. Hence, had its use had economic importance, use of the model outside its range of validity could have led to losses of one sort or another for its users.

  5. Resampling procedures to validate dendro-auxometric regression models

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in the building of the model itself. The more a model resists attempts to demonstrate its groundlessness, the more its reliability increases. In recent decades many new methods that exploit the computational speed of modern computers have been formulated. Here we show the results obtained by the application of a bootstrap resampling procedure as a validation tool.
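
    A minimal sketch of what a bootstrap validation of a regression model can look like is given below. The height-diameter relationship, the synthetic data, and the optimism-correction variant are illustrative assumptions, not the procedure actually used in the paper.

```python
# Minimal sketch of bootstrap validation of a regression model on synthetic
# tree data; the model form and data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
dbh = rng.uniform(10, 60, 200)                        # diameter at breast height [cm]
height = 1.3 + 0.45 * dbh + rng.normal(0, 2.0, 200)   # tree height [m], synthetic
X, y = dbh.reshape(-1, 1), height

apparent_r2 = r2_score(y, LinearRegression().fit(X, y).predict(X))

# Optimism correction: refit on bootstrap samples, compare performance on the
# bootstrap sample with performance on the original data.
optimism = []
for _ in range(500):
    idx = rng.integers(0, len(y), len(y))             # resample with replacement
    boot_model = LinearRegression().fit(X[idx], y[idx])
    r2_boot = r2_score(y[idx], boot_model.predict(X[idx]))
    r2_orig = r2_score(y, boot_model.predict(X))
    optimism.append(r2_boot - r2_orig)

print(f"apparent R2: {apparent_r2:.3f}, "
      f"optimism-corrected R2: {apparent_r2 - np.mean(optimism):.3f}")
```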

  6. Validation of an Efficient Outdoor Sound Propagation Model Using BEM

    DEFF Research Database (Denmark)

    Quirós-Alpera, S.; Henriquez, Vicente Cutanda; Jacobsen, Finn

    2001-01-01

    An approximate, simple and practical model for prediction of outdoor sound propagation exists based on ray theory, diffraction theory and Fresnel-zone considerations [1]. This model, which can predict sound propagation over non-flat terrain, has been validated for combinations of flat ground, hills...... and barriers, but it still needs to be validated for configurations that involve combinations of valleys and barriers. In order to do this a boundary element model has been implemented in MATLAB to serve as a reliable reference....

  7. Validation of a Model for Ice Formation around Finned Tubes

    Directory of Open Access Journals (Sweden)

    Kamal A. R. Ismai

    2016-09-01

    Full Text Available Although phase change materials are an attractive option for thermal storage applications, their main drawback is a slow thermal response during charging and discharging processes due to their low thermal conductivity. The present study validates a model developed by the authors some years ago on radial fins as a method to improve the thermal performance of PCM in a horizontal storage system. The developed model for the radial finned tube is based on pure conduction and the enthalpy approach, and was discretized by the finite difference method. Experiments were performed specifically to validate the model and its numerical predictions.

  8. PASTIS: Bayesian extrasolar planet validation II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    CERN Document Server

    Santerne, A; Almenara, J -M; Bouchy, F; Deleuil, M; Figueira, P; Hébrard, G; Moutou, C; Rodionov, S; Santos, N C

    2015-01-01

    The statistical validation of transiting exoplanets has proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses provide useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars, using the model implemented into the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anti-correlated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We r...

  9. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a

  11. Reverse electrodialysis : A validated process model for design and optimization

    NARCIS (Netherlands)

    Veerman, J.; Saakes, M.; Metz, S. J.; Harmsen, G. J.

    2011-01-01

    Reverse electrodialysis (RED) is a technology to generate electricity using the entropy of the mixing of sea and river water. A model is made of the RED process and validated experimentally. The model is used to design and optimize the RED process. It predicts very small differences between counter-

  12. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a network

  13. Measurements for validation of high voltage underground cable modelling

    DEFF Research Database (Denmark)

    Bak, Claus Leth; Gudmundsdottir, Unnur Stella; Wiechowski, Wojciech Tomasz

    2009-01-01

    This paper discusses studies concerning cable modelling for long high voltage AC cable lines. In investigating the possibilities of using long cables instead of overhead lines, the simulation results must be trustworthy. Therefore a model validation is of great importance. This paper describes...

  14. Model validation for karst flow using sandbox experiments

    Science.gov (United States)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Some of those methods are the single continuum, the double continuum and the discrete network of conduits coupled with the single continuum. Several mathematical and computing models are available in the literature for each approach. In this study one computer model has been selected from each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are: Modflow 2005, Modflow CFPV1 and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required for each model can be measured. The sandbox experiment was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulation and the real data. This model validation will allow us to compare the accuracy of each model and its applicability in karst. We will also be able to evaluate whether the results of the complex models improve substantially on those of the simple models, especially because some models require complex parameters that are difficult to measure in the real world.

  15. Composing, Analyzing and Validating Software Models

    Science.gov (United States)

    Sheldon, Frederick T.

    1998-10-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Grp). The principal work this summer has been to review and refine the agenda items that were carried forward from last summer. Formal specifications provide good support for designing a functionally correct system, however they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  16. Validation of a terrestrial food chain model.

    Science.gov (United States)

    Travis, C C; Blaylock, B P

    1992-01-01

    An increasingly important topic in risk assessment is the estimation of human exposure to environmental pollutants through pathways other than inhalation. The Environmental Protection Agency (EPA) has recently developed a computerized methodology (EPA, 1990) to estimate indirect exposure to toxic pollutants from Municipal Waste Combustor emissions. This methodology estimates health risks from exposure to toxic pollutants from the terrestrial food chain (TFC), soil ingestion, drinking water ingestion, fish ingestion, and dermal absorption via soil and water. Of these, one of the most difficult to estimate is exposure through the food chain. This paper estimates the accuracy of the EPA methodology for estimating food chain contamination. To our knowledge, no data exist on measured concentrations of pollutants in food grown around Municipal Waste Incinerators, and few field-scale studies have been performed on the uptake of pollutants in the food chain. Therefore, to evaluate the EPA methodology, we compare actual measurements of background contaminant levels in food with estimates made using EPA's computerized methodology. Background levels of contaminants in air, water, and soil were used as input to the EPA food chain model to predict background levels of contaminants in food. These predicted values were then compared with the measured background contaminant levels. Comparisons were performed for dioxin, pentachlorophenol, polychlorinated biphenyls, benzene, benzo(a)pyrene, mercury, and lead.

  17. On the development and validation of QSAR models.

    Science.gov (United States)

    Gramatica, Paola

    2013-01-01

    The fundamental and more critical steps that are necessary for the development and validation of QSAR models are presented in this chapter as best practices in the field. These procedures are discussed in the context of predictive QSAR modelling that is focused on achieving models of the highest statistical quality and with external predictive power. The most important and most used statistical parameters needed to verify the real performances of QSAR models (of both linear regression and classification) are presented. Special emphasis is placed on the validation of models, both internally and externally, as well as on the need to define model applicability domains, which should be done when models are employed for the prediction of new external compounds.

  18. Validation of the Health-Promoting Lifestyle Profile II for Hispanic male truck drivers in the Southwest.

    Science.gov (United States)

    Mullins, Iris L; O'Day, Trish; Kan, Tsz Yin

    2013-08-01

    The aims of the study were to validate the English and Spanish versions of the Health-Promoting Lifestyle Profile II (HPLP II) with Hispanic male truck drivers and to determine if there were any differences in drivers' responses based on driving responsibility. The methods included a descriptive correlation design, the HPLP II (English and Spanish versions), and a demographic questionnaire. Fifty-two Hispanic drivers participated in the study. There were no significant differences in long haul and short haul drivers' responses to the HPLP II. Cronbach's alpha for the Spanish version was .97 and the subscale alphas ranged from .74 to .94. The English version alpha was .92 and the subscale alphas ranged from .68 to .84. Findings suggest the subscales of Health Responsibility, Physical Activities, Nutrition, and Spirituality Growth on the HPLP II Spanish and English versions may not adequately assess health-promoting behaviors and cultural influences for the Hispanic male population in the southwestern border region.

  19. A prediction model for ocular damage - Experimental validation.

    Science.gov (United States)

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamical model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW with a spot size of 1.9 mm. Also, the measurements were taken with two different sensing systems, an infrared camera and a fibre optic probe placed within the tissue. The temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD model. To the best of our knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature and thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. Copyright © 2015 Elsevier Ltd. All rights reserved.
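
    The damage criterion named in this abstract, the Arrhenius integral, can be written as Omega = ∫ A·exp(−Ea/(R·T(t))) dt and evaluated over any temperature history. The short sketch below is only illustrative: the rate coefficients and the synthetic temperature trace are placeholder assumptions, not the values used in the PMOD.

```python
# Hedged sketch: evaluate an Arrhenius damage integral over a temperature
# history; A and Ea are placeholder coefficients, not the PMOD values.
import numpy as np

A = 3.1e99        # frequency factor [1/s]       (placeholder)
Ea = 6.28e5       # activation energy [J/mol]    (placeholder)
R = 8.314         # universal gas constant [J/(mol K)]

t = np.linspace(0.0, 60.0, 6001)                  # 60 s exposure, as in the experiments
T = 310.0 + 15.0 * (1.0 - np.exp(-t / 10.0))      # synthetic tissue temperature [K]

rate = A * np.exp(-Ea / (R * T))                  # damage rate at each instant
omega = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))   # trapezoidal integral

print(f"Omega = {omega:.3g} (Omega >= 1 is usually read as irreversible damage)")
```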

  20. Design, simulation, and experimental verification of a computer model and enhanced position estimator for the NPS AUV II

    OpenAIRE

    Warner, David C.

    1991-01-01

    A full six-degree-of-freedom computer model of the Naval Postgraduate School Autonomous Underwater Vehicle (NPS AUV II) is developed. Hydrodynamic Coefficients are determined by geometric similarity with an existing swimmer delivery vehicle and analysis of initial open loop AUV II trials. Comparisons between simulated and experimental results demonstrate the validity of the model and the techniques used. A reduced order observer of lateral velocity was produced to provide an input for an enha...

  1. The hypothetical world of CoMFA and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Oprea, T.I. [Los Alamos National Lab., NM (United States)

    1996-12-31

    CoMFA is a technique used to establish the three-dimensional similarity of molecular structures, in relationship to a target property. Because the risk of chance correlation is high, validation is required for all CoMFA models. The following validation steps should be performed: the choice of alignment rules (superimposition and conformer criteria) has to use experimental data when available, or different (alternate) hypotheses; statistical methods (e.g., cross-validation with randomized groups) have to emphasize simplicity, robustness, predictivity and explanatory power. When several CoMFA-QSAR models on similar targets and/or structures are available, qualitative lateral validation can be applied. This meta-analysis for CoMFA models offers a broader perspective on the similarities and differences between compared biological targets, with potential applications in rational drug design [e.g., selectivity, efficacy] and environmental toxicology. Examples that focus on validation of CoMFA models include the following steroid-binding proteins: aromatase, the estrogen and the androgen receptors, a monoclonal antibody against progesterone and two steroid binding globulins.

  2. The Validation of Climate Models: The Development of Essential Practice

    Science.gov (United States)

    Rood, R. B.

    2011-12-01

    It is possible from both a scientific and philosophical perspective to state that climate models cannot be validated. However, with the realization that the scientific investigation of climate change is as much a subject of politics as of science, maintaining this formal notion of "validation" has significant consequences. For example, it relegates the bulk of work of many climate scientists to an exercise of model evaluation that can be construed as ill-posed. Even within the science community this motivates criticism of climate modeling as an exercise of weak scientific practice. Stepping outside of the science community, statements that validation is impossible are used in political arguments to discredit the scientific investigation of climate, to maintain doubt about projections of climate change, and hence, to prohibit the development of public policy to regulate the emissions of greenhouse gases. With the acceptance of the impossibility of validation, scientists often state that the credibility of models can be established through an evaluation process. A robust evaluation process leads to the quantitative description of the modeling system against a standard set of measures. If this process is standardized as institutional practice, then this provides a measure of model performance from one modeling release to the next. It is argued, here, that such a robust and standardized evaluation of climate models can be structured and quantified as "validation." Arguments about the nuanced meaning of validation and evaluation are a subject about which the climate modeling community needs to develop a standard. It does injustice to a body of science-based knowledge to maintain that validation is "impossible." Rather than following such a premise, which immediately devalues the knowledge base, it is more useful to develop a systematic, standardized approach to robust, appropriate validation. This stands to represent the complexity of the Earth's climate and its

  3. Validity of NBME Parts I and II for the Selection of Residents: The Case of Orthopaedic Surgery.

    Science.gov (United States)

    Case, Susan M.

    The predictive validity of scores on the National Board of Medical Examiners (NBME) Part I and Part II examinations for the selection of residents in orthopaedic surgery was investigated. Use of NBME scores has been criticized because of the time lag between taking Part I and entering residency and because Part I content is not directly linked to…

  4. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  5. Development of the AGREE II, part 2: assessment of validity of items and tools to support application.

    NARCIS (Netherlands)

    Brouwers, M.C.; Kho, M.E.; Browman, G.P.; Burgers, J.S.; Cluzeau, F.; Feder, G.; Fervers, B.; Graham, I.D.; Hanna, S.E.; Makarski, J.

    2010-01-01

    BACKGROUND: We established a program of research to improve the development, reporting and evaluation of practice guidelines. We assessed the construct validity of the items and user's manual in the beta version of the AGREE II. METHODS: We designed guideline excerpts reflecting high- and low-quality

  6. PowerShades II. Optimisation and validation of highly transparent photovoltaic. Final report

    Energy Technology Data Exchange (ETDEWEB)

    2010-07-15

    The objective of the project is continued development and validation of a novel Danish photovoltaic product with the work title "PowerShade". The PowerShade insulating glazing unit (IGU) is a combination of a strong solar shading device and a power producing photovoltaic coating. The core technology in the PowerShade IGU is a thin film silicon photovoltaic generator applied to a micro structured substrate. The geometry of the substrate provides the unique combination of properties that characterizes the PowerShade module - strong progressive shading, high transparency, and higher electrical output than other semitransparent photovoltaic products with similar transparencies. The project activities fall in two categories, namely development of the processing/product and validation of the product properties. The development part of the project is focussed on increasing the efficiency of the photovoltaic generator by changing from a single-stack type cell to a tandem-stack type cell. The inclusion of PowerShade cells in insulating glazing (IG) units is also addressed in this project. The validation part of the project aims at validation of stability, thermal and optical properties as well as validation of the electrical yield of the product. The validation of thermal and optical properties has been done using full size modules installed in a test facility built during the 2006-08 "PowerShades" project. The achieved results will be vital in the coming realisation of a commercial product. Initial processing steps have been automated, and more efficient tandem-type solar cells have been developed. A damp heat test of an IGU has been carried out without any degradation of the solar cell. The PowerShade module assembly concept has been further developed and discussed with different automation equipment vendors and a pick-and-place tool developed. PowerShade's influence on the indoor climate has been modelled and verified by

  7. Importance of Computer Model Validation in Pyroprocessing Technology Development

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Y. E.; Li, Hui; Yim, M. S. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-05-15

    In this research, we developed a plan for experimental validation of one of the computer models developed for ER process modeling, i.e., the ERAD code. Several candidate surrogate materials are selected for the experiment considering their chemical and physical properties. Molten salt-based pyroprocessing technology is being examined internationally as an alternative to aqueous technology for treating spent nuclear fuel. The central process in pyroprocessing is electrorefining (ER), which separates uranium from transuranic elements and fission products present in spent nuclear fuel. ER is a widely used process in the minerals industry to purify impure metals. Studies of ER using actual spent nuclear fuel materials are problematic for both technical and political reasons. Therefore, the initial effort for ER process optimization is made by using computer models. A number of models have been developed for this purpose. But as validation of these models is incomplete and oftentimes problematic, the simulation results from these models are inherently uncertain.

  8. Validation of the Brazilian Portuguese version of the Beck Depression Inventory-II in a community sample.

    Science.gov (United States)

    Gomes-Oliveira, Marcio Henrique; Gorenstein, Clarice; Lotufo Neto, Francisco; Andrade, Laura Helena; Wang, Yuan Pang

    2012-12-01

    The Beck Depression Inventory (BDI) is used worldwide for detecting depressive symptoms. This questionnaire has been revised (1996) to match the DSM-IV criteria for a major depressive episode. We assessed the reliability and the validity of the Brazilian Portuguese version of the BDI-II for non-clinical adults. The questionnaire was applied to 60 college students on two occasions. Afterwards, 182 community-dwelling adults completed the BDI-II, the Self-Report Questionnaire, and the K10 Scale. Trained psychiatrists performed face-to-face interviews with the respondents using the Structured Clinical Interview (SCID-I), the Montgomery-Åsberg Depression Scale, and the Hamilton Anxiety Scale. Descriptive analysis, signal detection analysis (Receiver Operating Characteristics), correlation analysis, and discriminant function analysis were performed to investigate the psychometric properties of the BDI-II. The intraclass correlation coefficient of the BDI-II was 0.89, and the Cronbach's alpha coefficient of internal consistency was 0.93. Taking the SCID as the gold standard, the cut-off point of 10/11 was the best threshold for detecting depression, yielding a sensitivity of 70% and a specificity of 87%. The concurrent validity (a correlation of 0.63-0.93 with scales applied simultaneously) and the predictive ability of the severity level (over 65% correct classification) were acceptable. The BDI-II is reliable and valid for measuring depressive symptomatology among Portuguese-speaking Brazilian non-clinical populations.
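
    The threshold analysis reported above — scanning candidate cut-off scores against a reference diagnosis and reading off sensitivity and specificity — can be sketched as follows. The simulated scores, prevalence, and the Youden-index selection rule are assumptions for illustration, not the study's data or exact procedure.

```python
# Sketch of cut-off selection for a screening score against a reference
# diagnosis; all scores and diagnoses below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 182
depressed = rng.random(n) < 0.15                     # reference diagnosis (synthetic)
scores = np.where(depressed,
                  rng.normal(20, 7, n),              # questionnaire totals if depressed
                  rng.normal(6, 5, n)).clip(0, 63).round()

best = None
for cutoff in range(1, 40):                          # "positive" if score >= cutoff
    positive = scores >= cutoff
    sens = np.mean(positive[depressed])              # true-positive rate
    spec = np.mean(~positive[~depressed])            # true-negative rate
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        best = (youden, cutoff, sens, spec)

print(f"best cut-off >= {best[1]} (sensitivity {best[2]:.0%}, specificity {best[3]:.0%})")
```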

  9. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This workshop on turbulent viscosity models and their experimental validation was organized by the 'convection' section of the French society of thermal engineers. Of the 9 papers presented during this workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbomachinery or other energy-related applications, and have been selected for ETDE. (J.S.)

  10. Nonequilibrium stage modelling of dividing wall columns and experimental validation

    Science.gov (United States)

    Hiller, Christoph; Buck, Christina; Ehlers, Christoph; Fieg, Georg

    2010-11-01

    Dealing with complex process units like dividing wall columns shifts the focus to the determination of suitable modelling approaches. For this purpose a nonequilibrium stage model is developed. The successful validation is achieved by an experimental investigation of fatty alcohol mixtures under vacuum conditions at pilot scale. The aim is the recovery of high-purity products. The proposed model predicts the product qualities and temperature profiles very well.

  11. Human surrogate models of neuropathic pain: validity and limitations.

    Science.gov (United States)

    Binder, Andreas

    2016-02-01

    Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Although different models are available and different spontaneous and evoked symptoms and signs are inducible, 2 key questions need to be answered: are human surrogate models conceptually valid, i.e., do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research?

  12. Blast Load Simulator Experiments for Computational Model Validation: Report 2

    Science.gov (United States)

    2017-02-01

    O’Daniel, 2016. Blast load simulator experiments for computational model validation – Report 1. ERDC/GSL TR-16-27. Vicksburg, MS: U.S. Army Engineer Research and Development Center. [Report cover: ERDC/GSL TR-16-27, Blast Load Simulator Experiments for Computational Model Validation, Report 2, Geotechnical and Structures Laboratory.] Approved for public release; distribution is unlimited. The U.S. Army Engineer Research and Development Center (ERDC) solves the nation’s toughest

  13. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  14. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  15. MODEL-BASED VALIDATION AND VERIFICATION OF ANOMALIES IN LEGISLATION

    Directory of Open Access Journals (Sweden)

    Vjeran Strahonja

    2006-12-01

    Full Text Available An anomaly in legislation is the absence of completeness, consistency and other desirable properties, caused by different semantic, syntactic or pragmatic reasons. In general, the detection of anomalies in legislation comprises validation and verification. The basic idea of the research, as presented in this paper, is modelling legislation by capturing domain knowledge of legislation and specifying it in a generic way by using commonly agreed and understandable modelling concepts of the Unified Modelling Language (UML). Models of legislation make it possible to understand the system better, support the detection of anomalies and help to improve the quality of legislation by validation and verification. By implementing a model-based approach, the object of validation and verification moves from the legislation to its model. The business domain of legislation has two distinct aspects: a structural or static aspect (functionality, business data, etc.) and a behavioural or dynamic part (states, transitions, activities, sequences, etc.). Because anomalies can occur on two different levels, on the level of a model or on the level of legislation itself, a framework for validation and verification of legal regulation and its model is discussed. The presented framework includes some significant types of semantic and syntactic anomalies. Some ideas for the assessment of pragmatic anomalies of models were found in the field of software quality metrics. Thus pragmatic features and attributes can be determined that could be relevant for evaluation purposes of models. Based on analogous standards for the evaluation of software, a qualitative and quantitative scale can be applied to determine the value of some feature for a specific model.

  16. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    Science.gov (United States)

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting.
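
    One way to picture the statistical side of such an external validation study is to apply a frozen risk model to a new cohort and report both discrimination (the AUC) and calibration (intercept and slope of the linear predictor). The sketch below does this on synthetic data; the cohorts, predictors and the simplified calibration check are assumptions, not a specific published model.

```python
# Hedged sketch of external validation of a binary risk model on a new cohort;
# cohorts and coefficients are synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def cohort(n, shift=0.0):
    X = rng.normal(0, 1, (n, 3))                     # three risk factors
    logit = -2 + shift + X @ np.array([0.8, 0.5, 0.3])
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return X, y

X_dev, y_dev = cohort(3000)                          # development cohort
X_ext, y_ext = cohort(3000, shift=0.4)               # external cohort, different case mix

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

lp = X_ext @ model.coef_.ravel() + model.intercept_[0]  # linear predictor, external cohort
auc = roc_auc_score(y_ext, lp)                           # discrimination

# Calibration: regress the external outcome on the linear predictor;
# slope near 1 and intercept near 0 indicate good calibration.
recal = LogisticRegression().fit(lp.reshape(-1, 1), y_ext)
print(f"external AUC {auc:.2f}, calibration slope {recal.coef_[0][0]:.2f}, "
      f"intercept {recal.intercept_[0]:.2f}")
```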

  17. Validation of spectral gas radiation models under oxyfuel conditions

    Energy Technology Data Exchange (ETDEWEB)

    Becher, Johann Valentin

    2013-05-15

    Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4-5.4 µm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3%. The exponential-wide-band model showed a deviation of 6%. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2% and was therefore recommended as a reference model for the

  18. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a usable tune. We show the real time comparisons of simulation and experiment and explore the successes and limitations of this close coupled approach.

  19. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    Science.gov (United States)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  20. System Modeling, Validation, and Design of Shape Controllers for NSTX

    Science.gov (United States)

    Walker, M. L.; Humphreys, D. A.; Eidietis, N. W.; Leuer, J. A.; Welander, A. S.; Kolemen, E.

    2011-10-01

    Modeling of the linearized control response of plasma shape and position has become fairly routine in the last several years. However, such response models rely on the input of accurate values of model parameters such as conductor and diagnostic sensor geometry and conductor resistivity or resistance. Confidence in use of such a model therefore requires that some effort be spent in validating that the model has been correctly constructed. We describe the process of constructing and validating a response model for NSTX plasma shape and position control, and subsequent use of that model for the development of shape and position controllers. The model development, validation, and control design processes are all integrated within a Matlab-based toolset known as TokSys. The control design method described emphasizes use of so-called decoupling control, in which combinations of coil current modifications are designed to modify only one control parameter at a time, without perturbing any other control parameter values. Work supported by US DOE under DE-FG02-99ER54522 and DE-AC02-09CH11466.

  1. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    Science.gov (United States)

    Apostolakis, J.; Asai, M.; Bagulya, A.; Brown, J. M. C.; Burkhardt, H.; Chikuma, N.; Cortes-Giraldo, M. A.; Elles, S.; Grichine, V.; Guatelli, S.; Incerti, S.; Ivanchenko, V. N.; Jacquemier, J.; Kadri, O.; Maire, M.; Pandola, L.; Sawkey, D.; Toshito, T.; Urban, L.; Yamashita, T.

    2015-12-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed.

  2. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    CERN Document Server

    Apostolakis, J; Bagulya, A; Brown, J M C; Burkhardt, H; Chikuma, N; Cortes-Giraldo, M A; Elles, S; Grichine, V; Guatelli, S; Incerti, S; Ivanchenko, V N; Jacquemier, J; Kadri, O; Maire, M; Pandola, L; Sawkey, D; Toshito, T; Urban, L; Yamashita, T

    2015-01-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed.

  4. Cross-validation model assessment for modular networks

    CERN Document Server

    Kawamoto, Tatsuro

    2016-01-01

    Model assessment of the stochastic block model is a crucial step in identification of modular structures in networks. Although this has typically been done according to the principle that a parsimonious model with a large marginal likelihood or a short description length should be selected, another principle is that a model with a small prediction error should be selected. We show that the leave-one-out cross-validation estimate of the prediction error can be efficiently obtained using belief propagation for sparse networks. Furthermore, the relations among the objectives for model assessment enable us to determine the exact cause of overfitting.
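
    As a generic illustration of the leave-one-out estimate mentioned above (the paper's contribution is a belief-propagation approximation for network models, which this toy example does not attempt to reproduce), the prediction error of a simple classifier can be estimated as follows; the data and the model choice are arbitrary assumptions.

```python
# Generic leave-one-out cross-validation estimate of the prediction error,
# shown on an arbitrary toy classification problem (not a network model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
X = rng.normal(0, 1, (80, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 80) > 0).astype(int)

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Refit with one observation held out, then test on that observation.
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    errors.append(model.predict(X[test_idx])[0] != y[test_idx][0])

print(f"LOO estimate of the prediction error: {np.mean(errors):.3f}")
```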

  5. Model Validation for Shipboard Power Cables Using Scattering Parameters

    Institute of Scientific and Technical Information of China (English)

    Lukas Graber; Diomar Infante; Michael Steurer; William W. Brey

    2011-01-01

    Careful analysis of transients in shipboard power systems is important to achieve long lifetimes of the components in future all-electric ships. In order to accomplish results with high accuracy, it is recommended to validate cable models as they have significant influence on the amplitude and frequency spectrum of voltage transients. The authors propose comparison of model and measurement using scattering parameters. They can be easily obtained from measurement and simulation and deliver broadband information about the accuracy of the model. The measurement can be performed using a vector network analyzer. The process to extract scattering parameters from simulation models is explained in detail. Three different simulation models of a 5 kV XLPE power cable have been validated. The chosen approach delivers an efficient tool to quickly estimate the quality of a model.
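
    A hedged sketch of the comparison idea follows: overlay a "measured" and a "simulated" reflection coefficient over frequency and summarise the deviation. The series-RLC surrogate for the cable and the synthetic traces are assumptions made only to keep the example self-contained; real traces would come from the vector network analyzer and from the cable model under test.

```python
# Hedged sketch: compare two S11 traces over frequency and report the largest
# magnitude deviation in dB; the RLC "cable" and both traces are stand-ins.
import numpy as np

f = np.logspace(3, 7, 400)                    # 1 kHz .. 10 MHz
w = 2 * np.pi * f
Z0 = 50.0                                     # reference impedance [ohm]

def s11(R, L, C):
    Z = R + 1j * w * L + 1 / (1j * w * C)     # series RLC as a crude cable surrogate
    return (Z - Z0) / (Z + Z0)

s11_model = s11(0.2, 1e-6, 2e-10)             # "simulation"
s11_meas = s11(0.25, 1.1e-6, 2.1e-10) * (
    1 + 0.01 * np.random.default_rng(6).normal(size=f.size))  # "measurement"

dev_db = 20 * np.log10(np.abs(s11_meas)) - 20 * np.log10(np.abs(s11_model))
print(f"max |S11| deviation: {np.max(np.abs(dev_db)):.2f} dB")
```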

  6. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  7. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

    Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated a model of the transducer's electro-mechanical impulse response must be included. We examine an adapted one dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed... to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling, one real valued Pz27 parameter set, manufacturer supplied, and one complex valued parameter set found in literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show...

  8. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  9. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided i

  10. Validation of Geant4 hadronic physics models at intermediate energies

    Science.gov (United States)

    Banerjee, Sunanda; Geant4 Hadronic Group

    2010-04-01

    GEANT4 provides a number of physics models at intermediate energies (corresponding to incident momenta in the range 1-20 GeV/c). Recently, these models have been validated with existing data from a number of experiments: (1) inclusive proton and neutron production with a variety of beams (π-, π+, p) at different energies between 1 and 9 GeV/c on a number of nuclear targets (from beryllium to uranium); (2) inclusive pion/kaon/proton production from 14.6 GeV/c proton beams on nuclear targets (from beryllium to gold); (3) inclusive pion production from pion beams between 3-13 GeV/c on a number of nuclear targets (from beryllium to lead). The results of simulation/data comparison for different GEANT4 models are discussed in the context of validating the models and determining their usage in physics lists for high energy application. Due to the increasing number of validations becoming available, and the requirement that they be done at regular intervals corresponding to the GEANT4 release schedule, automated methods of validation are being developed.

  11. Empirical validation data sets for double skin facade models

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance and this in turn requires access to high quality experimental data. Three sets of accurate emp...

  12. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  13. Improving Perovskite Solar Cells: Insights From a Validated Device Model

    NARCIS (Netherlands)

    Sherkar, Tejas S.; Momblona, Cristina; Gil-Escrig, Lidon; Bolink, Henk J.; Koster, L. Jan Anton

    2017-01-01

    To improve the efficiency of existing perovskite solar cells (PSCs), a detailed understanding of the underlying device physics during their operation is essential. Here, a device model has been developed and validated that describes the operation of PSCs and quantitatively explains the role of conta

  14. ID Model Construction and Validation: A Multiple Intelligences Case

    Science.gov (United States)

    Tracey, Monica W.; Richey, Rita C.

    2007-01-01

    This is a report of a developmental research study that aimed to construct and validate an instructional design (ID) model that incorporates the theory and practice of multiple intelligences (MI). The study consisted of three phases. In phase one, the theoretical foundations of multiple Intelligences and ID were examined to guide the development…

  15. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach.

  16. Bibliometric Modeling Processes and the Empirical Validity of Lotka's Law.

    Science.gov (United States)

    Nicholls, Paul Travis

    1989-01-01

    Examines the elements involved in fitting a bibliometric model to empirical data, proposes a consistent methodology for applying Lotka's law, and presents the results of an empirical test of the methodology. The results are discussed in terms of the validity of Lotka's law and the suitability of the proposed methodology. (49 references) (CLB)
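
    As a concrete illustration of fitting a bibliometric model of the kind discussed above, Lotka's law f(x) = C / x^n can be fitted to an observed author-productivity distribution by least squares on the log-log form and then checked against the data. The counts below are invented for the example, and the cumulative-deviation check is only one of several goodness-of-fit tests one might apply.

```python
# Illustrative fit of Lotka's law f(x) = C / x**n to a synthetic
# author-productivity distribution; the counts are made-up placeholders.
import numpy as np

papers = np.arange(1, 11)                                     # x: papers per author
authors = np.array([520, 130, 62, 32, 21, 15, 11, 8, 6, 5])   # f(x): number of authors

slope, intercept = np.polyfit(np.log10(papers), np.log10(authors), 1)
n_exponent, C = -slope, 10 ** intercept                        # log f = log C - n log x

expected = C / papers ** n_exponent
max_dev = np.max(np.abs(np.cumsum(authors) - np.cumsum(expected))) / authors.sum()
print(f"Lotka exponent n = {n_exponent:.2f}, C = {C:.0f}, "
      f"max cumulative deviation = {max_dev:.3f}")
```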

  18. Validity and Reliability Determination of Denver Developmental Screening Test-II in 0-6 Year-Olds in Tehran.

    Science.gov (United States)

    Shahshahani, Soheila; Vameghi, Roshanak; Azari, Nadia; Sajedi, Firoozeh; Kazemnejad, Anooshirvan

    2010-09-01

    This research was designed to identify the validity and reliability of the Persian version of the Denver Developmental Screening Test II (DDST-II) in Iranian children, in order to provide an appropriate developmental screening tool for Iranian child health workers. At first a precise translation of the test was done by three specialists in English literature and then it was revised by three pediatricians familiar with developmental domains. Then, DDST-II was performed on 221 children ranging from 0 to 6 years, in four Child Health Clinics in the north, south, east and west regions of Tehran city. In order to determine the agreement coefficient, these children were also evaluated by the ASQ test. Because the ASQ is designed for use with 4-60 month-old children, children who were outside this range were evaluated by developmental pediatricians. Available sampling was used. The obtained data were analyzed with SPSS software. Developmental disorders were observed in 34% of children who were examined by DDST-II, and in 12% of children who were examined by the ASQ test. The estimated consistency coefficient between DDST-II and ASQ was 0.21, which is weak, and between DDST-II and the physicians' examination was 0.44. The content validity of DDST-II was verified by reviewing books and journals, and by specialists' opinions. All of the questions in DDST-II had appropriate content validity, and there was no need to change them. Test-retest and inter-rater methods were used in order to determine the reliability of the test, by Cronbach's α and Kuder-Richardson coefficients. The Kuder-Richardson coefficient for different developmental domains was between 61% and 74%, which is good. Cronbach's α coefficient and the Kappa measure of agreement were 92% and 87% for test-retest, and 90% and 76% for inter-rater reliability, respectively. This research showed that the Persian version of DDST-II has good validity and reliability, and can be used as a screening tool for developmental screening of children in Tehran city.
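
    For reference, the two internal-consistency statistics cited in this record have the following standard textbook definitions (general formulas, not values or notation taken from the study itself):

        KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^{2}}\right), \qquad
        \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right),

    where k is the number of items, p_i and q_i = 1 - p_i are the pass/fail proportions of dichotomous item i, \sigma_i^2 is the variance of item i, and \sigma_X^2 is the variance of the total score.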

  19. Validating firn compaction model with remote sensing data

    DEFF Research Database (Denmark)

    Simonsen, S. B.; Stenseng, Lars; Sørensen, Louise Sandberg

    A comprehensive understanding of firn processes is of utmost importance when estimating present and future changes of the Greenland Ice Sheet. Especially when remote sensing altimetry is used to assess the state of ice sheets and their contribution to global sea level rise, firn compaction models have been shown to be a key component. Now, remote sensing data can also be used to validate the firn models. Radar penetrating the upper part of the firn column in the interior part of Greenland shows a clear layering. The observed layers from the radar data can be used as an in-situ validation... correction relative to the changes in the elevation of the surface observed with remote sensing altimetry? What model time resolution is necessary to resolve the observed layering? What model refinements are necessary to give better estimates of the surface mass balance of the Greenland ice sheet from...

  20. Integrating Seasonal Oscillations into Basel II Behavioural Scoring Models

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-09-01

    The article introduces a new methodology of temporal influence measurement (seasonal oscillations, temporal patterns) for behavioural scoring development purposes. The paper shows how significant temporal variables can be recognised and then integrated into the behavioural scoring models in order to improve model performance. Behavioural scoring models are integral parts of the Basel II standard on Internal Ratings-Based Approaches (IRB). The IRB approach reflects the individual risk profile of a bank much more precisely. A solution to the problem of how to analyze and integrate macroeconomic and microeconomic factors represented in time series into behavioural scorecard models will be shown in the paper by using the REF II model.

  1. Asymmetric Gepner Models II. Heterotic Weight Lifting

    CERN Document Server

    Gato-Rivera, B

    2010-01-01

    A systematic study of "lifted" Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the $E_8$ factor by a modular isomorphic $N=0$ model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  2. Asymmetric Gepner models II. Heterotic weight lifting

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: t58@nikhef.n [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)

    2011-05-21

    A systematic study of 'lifted' Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the E_8 factor by a modular isomorphic N=0 model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  3. Validation of a Model for Ice Formation around Finned Tubes

    OpenAIRE

    Kamal A. R. Ismai; Fatima A. M. Lino

    2016-01-01

    Phase change materials are an attractive option for thermal storage applications, but their main drawback is a slow thermal response during charging and discharging processes due to their low thermal conductivity. The present study validates a model developed by the authors some years ago on radial fins as a method to ameliorate the thermal performance of PCM in a horizontal storage system. The developed model for the radial finned tube is based on pure conduction and the enthalpy approach and was di...

  4. Toward metrics and model validation in web-site QEM

    OpenAIRE

    Olsina Santos, Luis Antonio; Pons, Claudia; Rossi, Gustavo Héctor

    2000-01-01

    In this work, a conceptual framework and the associated strategies for metrics and model validation are analyzed with regard to website measurement and evaluation. In particular, we have conducted three case studies in different Web domains in order to evaluate and compare the quality of sites. To that end the quantitative, model-based methodology, the so-called Web-site QEM (Quality Evaluation Methodology), was utilized. In the assessment process of sites, definition of attributes and measurements...

  5. Validating firn compaction model with remote sensing data

    OpenAIRE

    2011-01-01

    A comprehensive understanding of firn processes is of utmost importance when estimating present and future changes of the Greenland Ice Sheet. Especially when remote sensing altimetry is used to assess the state of ice sheets and their contribution to global sea level rise, firn compaction models have been shown to be a key component. Now, remote sensing data can also be used to validate the firn models. Radar penetrating the upper part of the firn column in the interior part of Greenland ...

  6. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. [Pennsylvania U.; Guy, J. [LBL, Berkeley; Kessler, R. [Chicago U., KICP; Astier, P. [Paris U., VI-VII; Marriner, J. [Fermilab; Betoule, M. [Paris U., VI-VII; Sako, M. [Pennsylvania U.; El-Hage, P. [Paris U., VI-VII; Biswas, R. [Argonne; Pain, R. [Paris U., VI-VII; Kuhlmann, S. [Argonne; Regnault, N. [Paris U., VI-VII; Frieman, J. A. [Fermilab; Schneider, D. P. [Penn State U.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  7. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J.; Sako, M. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N. [LPNHE, CNRS/IN2P3, Université Pierre et Marie Curie Paris 6, Universié Denis Diderot Paris 7, 4 place Jussieu, F-75252 Paris Cedex 05 (France); Kessler, R.; Frieman, J. A. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Marriner, J. [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Biswas, R.; Kuhlmann, S. [Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States); Schneider, D. P., E-mail: kessler@kicp.chicago.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  8. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    CERN Document Server

    Mosher, J; Kessler, R; Astier, P; Marriner, J; Betoule, M; Sako, M; El-Hage, P; Biswas, R; Pain, R; Kuhlmann, S; Regnault, N; Frieman, J A; Schneider, D P

    2014-01-01

    We use simulated SN Ia samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and the bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: 120 low-redshift (z < 0.1) SNe Ia, 255 SDSS SNe Ia (z < 0.4), and 290 SNLS SNe Ia (z <= 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input - w_recovered) ranging from -0.005 +/- 0.012 to -0.024 +/- 0.010. These biases a...

  9. Validation of models with constant bias: an applied approach

    Directory of Open Access Journals (Sweden)

    Salvador Medina-Peralta

    2014-06-01

    Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese when a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure based on the presence of CB to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in the forecasting of the system. The confidence interval approach to validate a model is more informative than the hypothesis tests for the same purpose.
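
    As a rough illustration of the kind of check described above, the sketch below removes an estimated constant bias from a set of prediction errors and then computes a distribution-free upper confidence bound for a chosen quantile of the absolute errors. It is a generic order-statistics construction under assumed inputs, not the authors' exact Freese-based procedure; the function name and defaults are hypothetical.

        import numpy as np
        from scipy import stats

        def error_quantile_upper_bound(observed, predicted, q=0.95, conf=0.95):
            # Prediction errors and the constant-bias estimate (their mean)
            errors = np.asarray(observed, float) - np.asarray(predicted, float)
            bias = errors.mean()
            corrected = np.sort(np.abs(errors - bias))
            n = corrected.size
            # Smallest order statistic whose index covers the q-quantile with
            # confidence `conf` (distribution-free, based on the binomial CDF)
            k = int(stats.binom.ppf(conf, n, q)) + 1
            k = min(k, n)  # guard against running past the largest observation
            return bias, corrected[k - 1]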

  10. Cross-Cultural Validation of the Beck Depression Inventory-II across U.S. and Turkish Samples

    Science.gov (United States)

    Canel-Cinarbas, Deniz; Cui, Ying; Lauridsen, Erica

    2011-01-01

    The purpose of this study was to test the Beck Depression Inventory-II (BDI-II) for factorial invariance across Turkish and U.S. college student samples. The results indicated that (a) a two-factor model has an adequate fit for both samples, thus providing evidence of configural invariance, and (b) there is a metric invariance but "no" sufficient…

  11. Validity of the Bersohn–Zewail model beyond justification

    DEFF Research Database (Denmark)

    Petersen, Jakob; Henriksen, Niels Engholm; Møller, Klaus Braagaard

    2012-01-01

    The absorption of probe pulses in ultrafast pump–probe experiments can be determined from the Bersohn–Zewail (BZ) model. The model relies on classical mechanics to describe the dynamics of the nuclei in the excited electronic state prepared by the ultrashort pump pulse. The BZ model provides excellent agreement between the classical trajectory and the average position of the excited state wave packet. By investigating the approximations connecting the nuclear dynamics described by quantum mechanics and the BZ model, we conclude that this agreement goes far beyond the validity of the individual...

  12. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    Energy Technology Data Exchange (ETDEWEB)

    Smith, N. A. S., E-mail: nadia.smith@npl.co.uk, E-mail: maciej.rokosz@npl.co.uk, E-mail: tatiana.correia@npl.co.uk; Correia, T. M., E-mail: nadia.smith@npl.co.uk, E-mail: maciej.rokosz@npl.co.uk, E-mail: tatiana.correia@npl.co.uk [National Physical Laboratory, Hampton Road, TW11 0LW Middlesex (United Kingdom); Rokosz, M. K., E-mail: nadia.smith@npl.co.uk, E-mail: maciej.rokosz@npl.co.uk, E-mail: tatiana.correia@npl.co.uk [National Physical Laboratory, Hampton Road, TW11 0LW Middlesex (United Kingdom); Department of Materials, Imperial College London, London SW7 2AZ (United Kingdom)

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data, suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
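
    A two-dimensional transient conductive heat transfer model with a source term, of the kind this record describes, generally takes the following form (a generic statement of the governing equation; the paper's specific electrocaloric source term and boundary treatment are not reproduced here):

        \rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + \dot{q}_{EC},

    where \rho is the density, c_p the specific heat, k the thermal conductivity, and \dot{q}_{EC} the volumetric heat source representing the electrocaloric effect; convective and radiative losses enter through the boundary conditions.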

  13. Validation of a Model for Teaching Canine Fundoscopy.

    Science.gov (United States)

    Nibblett, Belle Marie D; Pereira, Mary Mauldin; Williamson, Julie A; Sithole, Fortune

    2015-01-01

    A validated teaching model for canine fundoscopic examination was developed to improve Day One fundoscopy skills while at the same time reducing use of teaching dogs. This novel eye model was created from a hollow plastic ball with a cutout for the pupil, a suspended 20-diopter lens, and paint and paper simulation of relevant eye structures. This eye model was mounted on a wooden stand with canine head landmarks useful in performing fundoscopy. Veterinary educators performed fundoscopy using this model and completed a survey to establish face and content validity. Subsequently, veterinary students were randomly assigned to pre-laboratory training with or without the use of this teaching model. After completion of an ophthalmology laboratory on teaching dogs, student outcome was assessed by measuring students' ability to see a symbol inserted on the simulated retina in the model. Students also completed a survey regarding their experience with the model and the laboratory. Overall, veterinary educators agreed that this eye model was well constructed and useful in teaching good fundoscopic technique. Student performance of fundoscopy was not negatively impacted by the use of the model. This novel canine model shows promise as a teaching and assessment tool for fundoscopy.

  14. FDA 2011 process validation guidance: lifecycle compliance model.

    Science.gov (United States)

    Campbell, Cliff

    2014-01-01

    This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  15. Validation of coastal oceanographic models at Forsmark. Site descriptive modelling SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, Anders (A och I Engqvist Konsult HB, Vaxholm (SE)); Andrejev, Oleg (Finnish Inst. of Marine Research, Helsinki (FI))

    2008-01-15

    few months; (ii) Both 3D-models miss some rapid up- and down-welling episodes that were clearly registered on all salinity and temperature meters near the northern interface; (iii) The velocity profiles measured at the interface between the two nested models display a low but mainly positive correlation; (iv) The salinity dynamics in the interior station is fully acceptably simulated, with improved correlation coefficients towards the surface; (v) The temperature profiles also generally display a high correlation between measurements and simulated data, certifying that the heat transfer through the surface is simulated well enough to render the salinity the dominating factor determining the density, but yet leaving room for further improvements. It seems safe to conclude that the validation of velocity components has confirmed what has been found in many instances previously, namely that this is a challenge that demands considerably more measuring effort than has been possible to muster in this study in order to average out sub-grid eddies that the model grid does not resolve. For the scalar fields, temperature is acceptably well captured by the models, but this is judged to be more an effect of the seasonal variation than an expression of the virtue of the actual models. The internal salinity dynamics is the strong point of the model. Its temporal development at the inner station is convincingly well reproduced by this model approach. This means that the overall computed water exchange of the Oeregrundsgrepen can continue to be invested with due confidence.

  16. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) core is in progress. Newly implemented fluid model for a PMR core is based on a subchannel approach which has been widely used in the analyses of light water reactor (LWR) cores. A hexagonal fuel (or graphite block) is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data in the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to only steady-state conditions of pin-in-hole fuel blocks. There exist no available experimental data regarding a heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study were performed for the heat transfer model of the AGREE code using the HENDEL experiment and the numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole reactor problem using the HTTR safety test data such as control rod withdrawal tests and loss-of-forced convection tests.

  17. Propeller aircraft interior noise model utilization study and validation

    Science.gov (United States)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  18. Dynamic validation of the Planck/LFI thermal model

    CERN Document Server

    Tomasi, M; Gregorio, A; Colombo, F; Lapolla, M; Terenzi, L; Morgante, G; Bersanelli, M; Butler, R C; Galeotta, S; Mandolesi, N; Maris, M; Mennella, A; Valenziano, L; Zacchei, A; 10.1088/1748-0221/5/01/T01002

    2010-01-01

    The Low Frequency Instrument (LFI) is an array of cryogenically cooled radiometers on board the Planck satellite, designed to measure the temperature and polarization anisotropies of the cosmic microwave background (CMB) at 30, 44 and 70 GHz. The thermal requirements of the LFI, and in particular the stringent limits to acceptable thermal fluctuations in the 20 K focal plane, are a critical element to achieve the instrument scientific performance. Thermal tests were carried out as part of the on-ground calibration campaign at various stages of instrument integration. In this paper we describe the results and analysis of the tests on the LFI flight model (FM) performed at Thales Laboratories in Milan (Italy) during 2006, with the purpose of experimentally sampling the thermal transfer functions and consequently validating the numerical thermal model describing the dynamic response of the LFI focal plane. This model has been used extensively to assess the ability of LFI to achieve its scientific goals: its valid...

  19. Validation of a finite element model of the human metacarpal.

    Science.gov (United States)

    Barker, D S; Netherway, D J; Krishnan, J; Hearn, T C

    2005-03-01

    Implant loosening and mechanical failure of components are frequently reported following metacarpophalangeal (MCP) joint replacement. Studies of the mechanical environment of the MCP implant-bone construct are rare. The objective of this study was to evaluate the predictive ability of a finite element model of the intact second human metacarpal to provide a validated baseline for further mechanical studies. A right index human metacarpal was subjected to torsion and combined axial/bending loading using strain gauge (SG) and 3D finite element (FE) analysis. Four different representations of bone material properties were considered. Regression analyses were performed comparing maximum and minimum principal surface strains taken from the SG and FE models. Regression slopes close to unity and high correlation coefficients were found when the diaphyseal cortical shell was modelled as anisotropic and cancellous bone properties were derived from quantitative computed tomography. The inclusion of anisotropy for cortical bone was strongly influential in producing high model validity whereas variation in methods of assigning stiffness to cancellous bone had only a minor influence. The validated FE model provides a tool for future investigations of current and novel MCP joint prostheses.

  1. Measuring avoidance of pain : validation of the Acceptance and Action Questionnaire II-pain version

    NARCIS (Netherlands)

    Reneman, Michiel F.; Kleen, Marco; Trompetter, Hester R.; Schiphorst Preuper, Henrica R.; Koeke, Albere; van Baalen, Bianca; Schreurs, Karlein M. G.

    2014-01-01

    Psychometric research on widely used questionnaires aimed at measuring experiential avoidance of chronic pain has led to inconclusive results. To test the structural validity, internal consistency, and construct validity of a recently developed short questionnaire: the Acceptance and Action Questionnaire II-pain version.

  2. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns.

    Directory of Open Access Journals (Sweden)

    Guillaume Chérel

    Models of emergent phenomena are designed to provide an explanation of global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model's predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic.

  3. Standard solar model. II - g-modes

    Science.gov (United States)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

    The paper presents the g-mode oscillation for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is predicted and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).
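
    For orientation, comparisons of computed and observed g-mode periods are often discussed in terms of the standard asymptotic relation for high-order g-modes (a textbook result quoted here for context, not a formula taken from the record):

        P_{n,\ell} \simeq \frac{P_0}{\sqrt{\ell(\ell+1)}}\left(n + \frac{\ell}{2} + \epsilon\right), \qquad
        P_0 = 2\pi^{2}\left(\int \frac{N}{r}\,dr\right)^{-1},

    where n is the radial order, \ell the spherical harmonic degree, N the Brunt-Väisälä frequency, and \epsilon a phase constant; the relation implies a nearly uniform period spacing between consecutive radial orders.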

  4. Predicting the ungauged basin: model validation and realism assessment

    Science.gov (United States)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2016-04-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment

  5. A verification and validation process for model-driven engineering

    Science.gov (United States)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model Driven Engineering practitioners already benefit from many well established verification tools, for Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  6. Aqueous Solution Vessel Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.
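
    For reference, the Boussinesq approximation mentioned in the record treats the fluid density as constant everywhere except in the buoyancy term of the momentum equation (a generic statement, with symbols chosen here for illustration):

        \rho \approx \rho_0\left[1 - \beta\,(T - T_0)\right], \qquad
        \mathbf{f}_{buoy} = -\rho_0\,\beta\,(T - T_0)\,\mathbf{g},

    where \rho_0 and T_0 are the reference density and temperature, \beta is the thermal expansion coefficient, and \mathbf{g} is the gravitational acceleration.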

  7. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  8. Validation of Advanced EM Models for UXO Discrimination

    CERN Document Server

    Weichman, Peter B

    2012-01-01

    The work reported here details basic validation of our advanced physics-based EMI forward and inverse models against data collected by the NRL TEMTADS system. The data was collected under laboratory-type conditions using both artificial spheroidal targets and real UXO. The artificial target models are essentially exact, and enable detailed comparison of theory and data in support of measurement platform characterization and target identification. Real UXO targets cannot be treated exactly, but it is demonstrated that quantitative comparisons of the data with the spheroid models nevertheless aids in extracting key target discrimination information, such as target geometry and hollow target shell thickness.

  9. Experimental validation of a solar-chimney power plant model

    Science.gov (United States)

    Fathi, Nima; Wayne, Patrick; Trueba Monje, Ignacio; Vorobieff, Peter

    2016-11-01

    In a solar chimney power plant system (SCPPS), the energy of buoyant hot air is converted to electrical energy. SCPPS includes a collector at ground level covered with a transparent roof. Solar radiation heats the air inside and the ground underneath. There is a tall chimney at the center of the collector, and a turbine located at the base of the chimney. Lack of detailed experimental data for validation is one of the important issues in modeling this type of power plant. We present a small-scale experimental prototype developed to perform validation analysis for modeling and simulation of SCPPS. Detailed velocity measurements are acquired using particle image velocimetry (PIV) at a prescribed Reynolds number. Convection is driven by a temperature-controlled hot plate at the bottom of the prototype. Velocity field data are used to perform validation analysis and measure any mismatch between the experimental results and the CFD data. CFD code verification is also performed, to assess the uncertainty of the numerical model with respect to our grid and the applied mathematical model. The dimensionless output power of the prototype is calculated and compared with a recent analytical solution and the experimental results.

  10. Development and validation of a building design waste reduction model.

    Science.gov (United States)

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings.

  11. Calibration of Predictor Models Using Multiple Validation Experiments

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
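
    The sketch below shows one simple way an interval predictor model of the kind described above can be fitted: a linear program that minimizes the average spread of a polynomial interval [lo(x), hi(x)] subject to the constraint that every observation is contained in it. This is an illustrative construction under an assumed polynomial basis, not the paper's own formulation; the function name and defaults are hypothetical.

        import numpy as np
        from scipy.optimize import linprog

        def fit_interval_predictor(x, y, degree=1):
            # Polynomial basis functions evaluated at the inputs
            phi = np.vander(np.asarray(x, float), degree + 1)
            y = np.asarray(y, float)
            n, k = phi.shape
            # Decision vector: [theta_lo (k), theta_hi (k)]; minimize mean spread
            c = np.concatenate([-phi.mean(axis=0), phi.mean(axis=0)])
            # Containment constraints: phi @ theta_lo <= y and y <= phi @ theta_hi
            A_ub = np.block([[phi, np.zeros((n, k))], [np.zeros((n, k)), -phi]])
            b_ub = np.concatenate([y, -y])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * (2 * k))
            return res.x[:k], res.x[k:]  # coefficients of lo(x) and hi(x)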

  12. Finite Element Model and Validation of Nasal Tip Deformation.

    Science.gov (United States)

    Manuel, Cyrus T; Harb, Rani; Badran, Alan; Ho, David; Wong, Brian J F

    2017-03-01

    Nasal tip mechanical stability is important for functional and cosmetic nasal airway surgery. Palpation of the nasal tip provides information on tip strength to the surgeon, though it is a purely subjective assessment. Providing a means to simulate nasal tip deformation with a validated model can offer a more objective approach in understanding the mechanics and nuances of the nasal tip support and eventual nasal mechanics as a whole. Herein we present validation of a finite element (FE) model of the nose using physical measurements recorded using an ABS plastic-silicone nasal phantom. Three-dimensional photogrammetry was used to capture the geometry of the phantom at rest and while under steady state load. The silicone used to make the phantom was mechanically tested and characterized using a linear elastic constitutive model. Surface point clouds of the silicone and FE model were compared for both the loaded and unloaded state. The average Hausdorff distance between actual measurements and FE simulations across the nose was 0.39 ± 1.04 mm and deviated up to 2 mm at the outermost boundaries of the model. FE simulation and measurements were in near complete agreement in the immediate vicinity of the nasal tip with millimeter accuracy. We have demonstrated validation of a two-component nasal FE model, which could be used to model more complex modes of deformation where direct measurement may be challenging. This is the first step in developing a nasal model to simulate nasal mechanics and ultimately the interaction between geometry and airflow.
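
    To make the point-cloud comparison concrete, the sketch below computes mean and maximum symmetric surface distances between two point clouds using a k-d tree. It is a generic stand-in for the Hausdorff-type comparison the abstract reports, with hypothetical array inputs and function names.

        import numpy as np
        from scipy.spatial import cKDTree

        def directed_distances(a, b):
            # Nearest-neighbour distances from each point in cloud a to cloud b
            return cKDTree(b).query(a)[0]

        def surface_distance_summary(a, b):
            # Mean and maximum symmetric surface distance between two clouds
            d_ab = directed_distances(a, b)
            d_ba = directed_distances(b, a)
            mean_d = (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
            max_d = max(d_ab.max(), d_ba.max())
            return mean_d, max_d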

  13. Estimating the predictive validity of diabetic animal models in rosiglitazone studies.

    Science.gov (United States)

    Varga, O E; Zsíros, N; Olsson, I A S

    2015-06-01

    For therapeutic studies, predictive validity of animal models - arguably the most important feature of animal models in terms of human relevance - can be calculated retrospectively by obtaining data on treatment efficacy from human and animal trials. Using rosiglitazone as a case study, we aim to determine the predictive validity of animal models of diabetes, by analysing which models perform most similarly to humans during rosiglitazone treatment in terms of changes in standard diabetes diagnosis parameters (glycosylated haemoglobin [HbA1c] and fasting glucose levels). A further objective of this paper was to explore the impact of four covariates on the predictive capacity: (i) diabetes induction method; (ii) drug administration route; (iii) sex of animals and (iv) diet during the experiments. Despite the variable consistency of animal species-based models with the human reference for glucose and HbA1c treatment effects, our results show that glucose and HbA1c treatment effects in rats agreed better with the expected values based on human data than in other species. Induction method was also found to be a substantial factor affecting animal model performance. The study concluded that regular reassessment of animal models can help to identify human relevance of each model and adapt research design for actual research goals.

  14. Validation of the Osteopenia Sheep Model for Orthopaedic Biomaterial Research

    DEFF Research Database (Denmark)

    Ding, Ming; Danielsen, C.C.; Cheng, L.

    2009-01-01

    Introduction: Currently, the majority of orthopaedic prosthesis and biomaterial research has been based on investigations in normal animals. In most clinical situations, most... resemble osteoporosis in humans. This study aimed to validate a glucocorticoid-induced osteopenia sheep model for orthopaedic implant and biomaterial research. We hypothesized that a 7-month GC treatment together with a restricted diet but without OVX would induce osteopenia. Materials and Methods: Eighteen...

  15. Seine estuary modelling and AirSWOT measurements validation

    Science.gov (United States)

    Chevalier, Laetitia; Lyard, Florent; Laignel, Benoit

    2013-04-01

    In the context of global climate change, knowing water fluxes and storage, from the global scale to the local scale, is a crucial issue. The future satellite SWOT (Surface Water and Ocean Topography) mission, dedicated to surface water observation, is proposed to meet this challenge. SWOT's main payload will be a Ka-band Radar Interferometer (KaRIn). To validate this new kind of measurement, preparatory airborne campaigns (called AirSWOT) are currently being designed. AirSWOT will carry an interferometer similar to KaRIn: Kaspar (Ka-band SWOT Phenomenology Airborne Radar). Some campaigns are planned in France in 2014. During these campaigns, the plane will fly over the Seine River basin, especially to observe its estuary, the upstream main river channel (to quantify river-aquifer exchange) and some wetlands. The objective of the present work is to validate the ability of AirSWOT and SWOT using Seine estuary hydrodynamic modelling. In this context, field measurements will be collected by different teams such as GIP (Public Interest Group) Seine Aval, the GPMR (Rouen Seaport), SHOM (Hydrographic and Oceanographic Service of the Navy), IFREMER (French Research Institute for Sea Exploitation), Mercator-Ocean, LEGOS (Laboratory of Space Study in Geophysics and Oceanography), ADES (Data Access Groundwater) ... . These datasets will be used first to locally validate AirSWOT measurements, and then to improve the hydrodynamic simulations (using tidal boundary conditions, river and groundwater inflows ...) for 2D validation of AirSWOT data. This modelling will also be used to estimate the benefit of the future SWOT mission for mid-latitude river hydrology. For this modelling, the TUGOm barotropic model (Toulouse Unstructured Grid Ocean model 2D) is used. Preliminary simulations have been performed by first modelling and then combining two different regions: first the Seine River and its estuarine area, and secondly the English Channel. These two simulations are currently being

  16. A community diagnostic tool for chemistry climate model validation

    Directory of Open Access Journals (Sweden)

    A. Gettelman

    2012-09-01

    This technical note presents an overview of the Chemistry-Climate Model Validation Diagnostic (CCMVal-Diag) tool for model evaluation. The CCMVal-Diag tool is a flexible and extensible open source package that facilitates the complex evaluation of global models. Models can be compared to other models, ensemble members (simulations with the same model), and/or many types of observations. The initial construction and application is to coupled chemistry-climate models (CCMs) participating in CCMVal, but the evaluation of climate models that submitted output to the Coupled Model Intercomparison Project (CMIP) is also possible. The package has been used to assist with analysis of simulations for the 2010 WMO/UNEP Scientific Ozone Assessment and the SPARC Report on the Evaluation of CCMs. The CCMVal-Diag tool is described and examples of how it functions are presented, along with links to detailed descriptions, instructions and source code. The CCMVal-Diag tool supports model development as well as quantifies model changes, both for different versions of individual models and for different generations of community-wide collections of models used in international assessments. The code allows further extensions by different users for different applications and types, e.g. to other components of the Earth system. User modifications are encouraged and easy to perform with minimum coding.

  17. Nyala and Bushbuck II: A Harvesting Model.

    Science.gov (United States)

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)

  19. Certified reduced basis model validation: A frequentistic uncertainty framework

    OpenAIRE

    Patera, A. T.; Huynh, Dinh Bao Phuong; Knezevic, David; Patera, Anthony T.

    2011-01-01

    We introduce a frequentistic validation framework for assessment — acceptance or rejection — of the consistency of a proposed parametrized partial differential equation model with respect to (noisy) experimental data from a physical system. Our method builds upon the Hotelling T^2 statistical hypothesis test for bias first introduced by Balci and Sargent in 1984 and subsequently extended by McFarland and Mahadevan (2008). Our approach introduces two new elements: a spectral repre...

  20. Trailing Edge Noise Model Validation and Application to Airfoil Optimization

    DEFF Research Database (Denmark)

    Bertagnolio, Franck; Aagaard Madsen, Helge; Bak, Christian

    2010-01-01

    The aim of this article is twofold. First, an existing trailing edge noise model is validated by comparing with airfoil surface pressure fluctuations and far field sound pressure levels measured in three different experiments. The agreement is satisfactory in one case but poor in two other cases...... across the boundary layer near the trailing edge and to a lesser extent by a smaller boundary layer displacement thickness. ©2010 American Society of Mechanical Engineers...

  1. Validation of coastal oceanographic models at Laxemar-Simpevarp. Site descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, Anders (A och I Engqvist Konsult HB, Vaxholm (SE)); Andrejev, Oleg (Finnish Inst. of Marine Research, Helsinki (FI))

    2008-12-15

    validation can be summarized in three points: (i) The Baltic CR-model reproduces the measured salinity and the temperature profiles of the three peripheral stations acceptably well, while the correlation levels of the velocities are on an acceptable level for only one component, the other being close to zero; (ii) For the interior station Si24, the FR-model reproduces the salinity and the temperature profiles with a yet improved level of correlation compared with the CR-model; (iii) The bottom current velocity measured at Djupesund corresponds to an internal strait within the CDB-model and yields a correlation level of nearly 50% for salinity and about 95% for temperature. The conclusion is that the present validation of velocity components of the peripheral stations between the CR- and FR-domains has mainly confirmed what was found in the corresponding validation study of the Forsmark area, namely that this represents a challenge that demands considerably more measuring effort than has been possible to muster presently in order to average out sub-grid eddies that the model cannot resolve. This applies even though the levels of the correlation analysis are considerably higher than was found for the parallel study of the waters off the Forsmark coast. This together with supporting current velocity transects in the vicinity of the measurement stations can be explained by a more horizontally homogeneous flow field. For the inner station (Si24) that was computed by the FR-model, the correlation levels are considerably improved. Also for the station (Si25) pertaining to the CDB-model good correlation levels are reproduced. All temperature profiles are also acceptably well captured by the models, but this is judged to be more an effect of the seasonal variation than an expression of the virtue of the actual models. As for the Forsmark validation program, the salinity dynamics of the interior FR-domain is the strong point of the model, but in the present study high levels of

  2. Multilayer piezoelectric transducer models combined with Field II

    DEFF Research Database (Denmark)

    Bæk, David; Willatzen, Morten; Jensen, Jørgen Arendt

    2012-01-01

    with a polymer ring, and submerged into water. The transducer models are developed to account for any external electrical loading impedance in the driving circuit. The models are adapted to calculate the surface acceleration needed by the Field II software in predicting pressure pulses at any location in front...

  3. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is however a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence their quality, plus the idea of quality itself is application dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
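    The polygon-level rules mentioned above (closure of the bounding linear ring and planarity within a tolerance) can be expressed compactly; the sketch below is a generic illustration of such checks, not the CityDoctor implementation, and the tolerance values are placeholders.

```python
import numpy as np

def ring_is_closed(ring, tol=1e-6):
    """A linear ring must end where it starts (within tolerance)."""
    pts = np.asarray(ring, dtype=float)
    return np.linalg.norm(pts[0] - pts[-1]) <= tol

def max_planarity_deviation(ring):
    """Distance of the vertex furthest from the least-squares best-fit plane."""
    pts = np.asarray(ring, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                      # direction of least variance = plane normal
    return float(np.max(np.abs(centered @ normal)))

# Hypothetical facade polygon with one vertex 5 mm out of plane (coordinates in metres)
poly = [(0, 0, 0), (10, 0, 0), (10, 0, 3), (0, 0.005, 3), (0, 0, 0)]
print(ring_is_closed(poly), max_planarity_deviation(poly) < 0.01)
```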

  4. A validated approach for modeling collapse of steel structures

    Science.gov (United States)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
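    Since the approach hinges on stress triaxiality, the sketch below shows how triaxiality is computed from a Cauchy stress tensor and how a fracture strain can be expressed as a function of it; the exponential (Johnson-Cook-style) form and its parameters are generic placeholders, not the calibrated models of this work.

```python
import numpy as np

def stress_triaxiality(sigma):
    """Triaxiality = hydrostatic stress / von Mises equivalent stress."""
    sigma = np.asarray(sigma, dtype=float)           # 3x3 Cauchy stress tensor
    hydro = np.trace(sigma) / 3.0
    dev = sigma - hydro * np.eye(3)                  # deviatoric part
    von_mises = np.sqrt(1.5 * np.tensordot(dev, dev))
    return hydro / von_mises

def fracture_strain(triaxiality, d1=0.05, d2=3.44, d3=-2.12):
    """Generic Johnson-Cook-style fracture strain (placeholder parameters)."""
    return d1 + d2 * np.exp(d3 * triaxiality)

# Uniaxial tension (400 MPa): triaxiality is 1/3
eta = stress_triaxiality(np.diag([400.0, 0.0, 0.0]))
print(eta, fracture_strain(eta))
```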

  5. Phase-II Clinical Validation of a Powered Exoskeleton for the Treatment of Elbow Spasticity

    Directory of Open Access Journals (Sweden)

    Simona Crea

    2017-05-01

    Introduction: Spasticity is a typical motor disorder in patients affected by stroke. Typically post-stroke rehabilitation consists of repetition of mobilization exercises on impaired limbs, aimed to reduce muscle hypertonia and mitigate spastic reflexes. It is currently strongly debated if the treatment's effectiveness improves with the timeliness of its adoption; in particular, starting intensive rehabilitation as close as possible to the stroke event may counteract the growth and postpone the onset of spasticity. In this paper we present a phase-II clinical validation of a robotic exoskeleton in treating subacute post-stroke patients. Methods: Seventeen post-stroke patients participated in 10 daily rehabilitation sessions using the NEUROExos Elbow Module exoskeleton, each one lasting 45 min: the exercises consisted of isokinetic passive mobilization of the elbow, with a torque threshold to detect excessive user's resistance to the movement. We investigated the safety by reporting possible adverse events, such as mechanical, electrical or software failures of the device or injuries or pain experienced by the patient. As regards the efficacy, the Modified Ashworth Scale was identified as the primary outcome measure and the NEEM metrics describing elbow joint resistance to passive extension (i.e., maximum extension torque and zero-torque angle) as secondary outcomes. Results: During the entire duration of the treatments no failures or adverse events for the patients were reported. No statistically significant differences were found in the Modified Ashworth Scale scores between pre-treatment and post-treatment and between post-treatment and follow-up sessions, indicating the absence of spasticity increase throughout (14 days) and after (3–4 months follow-up) the treatment. Exoskeleton metrics confirmed the absence of significant difference between pre- and post-treatment data, whereas intra-session data highlighted significant differences in the

  6. Phase-II Clinical Validation of a Powered Exoskeleton for the Treatment of Elbow Spasticity

    Science.gov (United States)

    Crea, Simona; Cempini, Marco; Mazzoleni, Stefano; Carrozza, Maria Chiara; Posteraro, Federico; Vitiello, Nicola

    2017-01-01

    Introduction: Spasticity is a typical motor disorder in patients affected by stroke. Typically post-stroke rehabilitation consists of repetition of mobilization exercises on impaired limbs, aimed to reduce muscle hypertonia and mitigate spastic reflexes. It is currently strongly debated if the treatment's effectiveness improves with the timeliness of its adoption; in particular, starting intensive rehabilitation as close as possible to the stroke event may counteract the growth and postpone the onset of spasticity. In this paper we present a phase-II clinical validation of a robotic exoskeleton in treating subacute post-stroke patients. Methods: Seventeen post-stroke patients participated in 10 daily rehabilitation sessions using the NEUROExos Elbow Module exoskeleton, each one lasting 45 min: the exercises consisted of isokinetic passive mobilization of the elbow, with torque threshold to detect excessive user's resistance to the movement. We investigated the safety by reporting possible adverse events, such as mechanical, electrical or software failures of the device or injuries or pain experienced by the patient. As regards the efficacy, the Modified Ashworth Scale was identified as the primary outcome measure and the NEEM metrics describing elbow joint resistance to passive extension (i.e., maximum extension torque and zero-torque angle) as secondary outcomes. Results: During the entire duration of the treatments no failures or adverse events for the patients were reported. No statistically significant differences were found in the Modified Ashworth Scale scores between pre-treatment and post-treatment and between post-treatment and follow-up sessions, indicating the absence of spasticity increase throughout (14 days) and after (3–4 months follow-up) the treatment. Exoskeleton metrics confirmed the absence of significant difference between pre- and post-treatment data, whereas intra-session data highlighted significant differences in the secondary outcomes

  8. Full-scale validation of a model of algal productivity.

    Science.gov (United States)

    Béchet, Quentin; Shilton, Andy; Guieysse, Benoit

    2014-12-02

    While modeling algal productivity outdoors is crucial to assess the economic and environmental performance of full-scale cultivation, most of the models hitherto developed for this purpose have not been validated under fully relevant conditions, especially with regard to temperature variations. The objective of this study was to independently validate a model of algal biomass productivity accounting for both light and temperature and constructed using parameters experimentally derived from short-term indoor experiments. To do this, the accuracy of a model developed for Chlorella vulgaris was assessed against data collected from photobioreactors operated outdoors (New Zealand) over different seasons, years, and operating conditions (temperature-control/no temperature-control, batch, and fed-batch regimes). The model accurately predicted experimental productivities under all conditions tested, yielding an overall accuracy of ±8.4% over 148 days of cultivation. For the purpose of assessing the feasibility of full-scale algal cultivation, the use of the productivity model was therefore shown to markedly reduce uncertainty in the cost of biofuel production while also eliminating uncertainties in water demand, a critical element of environmental impact assessments. Simulations at five climatic locations demonstrated that temperature-control in outdoor photobioreactors would require tremendous amounts of energy without a considerable increase in algal biomass. Prior assessments neglecting the impact of temperature variations on algal productivity in photobioreactors may therefore be erroneous.

  9. Mineral vein dynamics modelling (FRACS II)

    Energy Technology Data Exchange (ETDEWEB)

    Urai, J.; Virgo, S.; Arndt, M. [RWTH Aachen (Germany); and others

    2016-08-15

    The Mineral Vein Dynamics Modeling group "FRACS" started out as a team of 7 research groups in its first phase and continued with a team of 5 research groups at the Universities of Aachen, Tuebingen, Karlsruhe, Mainz and Glasgow during its second phase "FRACS II". The aim of the group was to develop an advanced understanding of the interplay between fracturing, fluid flow and fracture healing with a special emphasis on the comparison of field data and numerical models. Field areas comprised the Oman mountains in Oman (which were already studied in detail in the first phase), a siliciclastic sequence in the Internal Ligurian Units in Italy (close to Sestri Levante) and cores of Zechstein carbonates from a lean gas reservoir in Northern Germany. Numerical models of fracturing, sealing and interaction with fluid that were developed in phase I were expanded in phase II. They were used to model small-scale fracture healing by crystal growth and the resulting influence on flow, medium-scale fracture healing and its influence on successive fracturing and healing, as well as large-scale dynamic fluid flow through opening and closing fractures and channels as a function of fluid overpressure. The numerical models were compared with structures in the field and we were able to identify first proxies for mechanical vein-host rock properties and fluid overpressures versus tectonic stresses. Finally we propose a new classification of stylolites based on numerical models and observations in the Zechstein cores and continued to develop a new stress inversion tool that uses stylolites to estimate the depth of their formation.

  10. Validation of a Hertzian contact model with nonlinear damping

    Science.gov (United States)

    Sierakowski, Adam

    2015-11-01

    Due to limited spatial resolution, most disperse particle simulation methods rely on simplified models for incorporating short-range particle interactions. In this presentation, we introduce a contact model that combines the Hertz elastic restoring force with a nonlinear damping force, requiring only material properties and no tunable parameters. We have implemented the model in a resolved-particle flow solver that implements the Physalis method, which accurately captures hydrodynamic interactions by analytically enforcing the no-slip condition on the particle surface. We summarize the results of a few numerical studies that suggest the validity of the contact model over a range of particle interaction intensities (i.e., collision Stokes numbers) when compared with experimental data. This work was supported by the National Science Foundation under Grant Number CBET1335965 and the Johns Hopkins University Modeling Complex Systems IGERT program.
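    The abstract does not spell out the damping law, so the sketch below uses the widely cited Hunt-Crossley form, in which the damping term scales with the Hertz elastic force and a restitution estimate, so that only material properties (and that estimate) are needed; the parameter values in the example are illustrative, not those used in the presented solver.

```python
import math

def contact_force(delta, delta_dot, v_impact, E1, nu1, E2, nu2, R1, R2, e=0.9):
    """Normal force between two spheres: Hertz restoring term plus a
    Hunt-Crossley-type nonlinear damping term.
    delta: overlap [m]; delta_dot: overlap rate [m/s];
    v_impact: relative normal speed at first contact [m/s]; e: restitution."""
    if delta <= 0.0:
        return 0.0
    E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)  # effective modulus
    R_star = 1.0 / (1.0 / R1 + 1.0 / R2)                        # effective radius
    k = (4.0 / 3.0) * E_star * math.sqrt(R_star)                # Hertz stiffness
    damping = 1.5 * (1.0 - e) * delta_dot / v_impact            # nonlinear damping factor
    return k * delta**1.5 * (1.0 + damping)

# Two 1 mm glass spheres colliding at 0.1 m/s, evaluated at 1 micron overlap
print(contact_force(1e-6, 0.08, 0.1, 70e9, 0.22, 70e9, 0.22, 5e-4, 5e-4))
```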

  11. Statistical validation of high-dimensional models of growing networks

    CERN Document Server

    Medo, Matus

    2013-01-01

    The abundance of models of complex networks and the current insufficient validation standards make it difficult to judge which models are strongly supported by data and which are not. We focus here on likelihood maximization methods for models of growing networks with many parameters and compare their performance on artificial and real datasets. While high dimensionality of the parameter space harms the performance of direct likelihood maximization on artificial data, this can be improved by introducing a suitable penalization term. Likelihood maximization on real data shows that the presented approach is able to discriminate among available network models. To make large-scale datasets accessible to this kind of analysis, we propose a subset sampling technique and show that it yields substantial model evidence in a fraction of time necessary for the analysis of the complete data.
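    As a minimal sketch of what likelihood maximization looks like for a growing-network model, the example below fits a single additive attractiveness parameter of a preferential-attachment rule from recorded attachment events; the data format and the toy data are assumptions, and the penalized, many-parameter setting discussed in the paper goes well beyond this.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(a, attachment_events):
    """attachment_events: list of (degree_of_chosen_node, degrees_of_all_candidate_nodes),
    one entry per edge attached during network growth."""
    ll = 0.0
    for k_chosen, degrees in attachment_events:
        weights = degrees + a
        ll += np.log((k_chosen + a) / weights.sum())
    return -ll

# Hypothetical toy data: three attachment events in a small growing graph
events = [(2, np.array([2, 1, 1])),
          (3, np.array([3, 1, 1, 1])),
          (1, np.array([4, 1, 1, 1, 1]))]
fit = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 50.0),
                      args=(events,), method="bounded")
print(fit.x)  # maximum-likelihood attractiveness parameter
```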

  12. Model selection, identification and validation in anaerobic digestion: a review.

    Science.gov (United States)

    Donoso-Bravo, Andres; Mailier, Johan; Martin, Cristina; Rodríguez, Jorge; Aceves-Lara, César Arturo; Vande Wouwer, Alain

    2011-11-01

    Anaerobic digestion enables waste (water) treatment and energy production in the form of biogas. The successful implementation of this process has led to increasing interest worldwide. However, anaerobic digestion is a complex biological process, where hundreds of microbial populations are involved, and whose start-up and operation are delicate issues. In order to better understand the process dynamics and to optimize the operating conditions, the availability of dynamic models is of paramount importance. Such models have to be inferred from prior knowledge and experimental data collected from real plants. Modeling and parameter identification are vast subjects, offering a realm of approaches and methods, which can be difficult to fully understand by scientists and engineers dedicated to the plant operation and improvements. This review article discusses existing modeling frameworks and methodologies for parameter estimation and model validation in the field of anaerobic digestion processes. The point of view is pragmatic, intentionally focusing on simple but efficient methods.

  13. Validation of the WATEQ4 geochemical model for uranium

    Energy Technology Data Exchange (ETDEWEB)

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite (UO2(OH)2·H2O), UO2(OH)2, and rutherfordine (UO2CO3) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  15. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    Science.gov (United States)

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
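    The two validation metrics named above are simple to compute; the sketch below assumes the common convention of expressing prediction errors as percentages of the observed concentrations, and the serum levels are hypothetical.

```python
import numpy as np

def mdpe_mdape(observed, predicted):
    """Median Prediction Error (bias) and Median Absolute Prediction Error
    (precision), both as percentages of the observed values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    pe = (predicted - observed) / observed * 100.0
    return np.median(pe), np.median(np.abs(pe))

# Hypothetical gentamicin peak/trough levels (mg/L): measured vs. model-predicted
observed = [8.2, 1.1, 7.6, 0.9, 9.0]
predicted = [7.9, 1.2, 7.8, 0.8, 8.7]
print(mdpe_mdape(observed, predicted))
```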

  16. Experimental validation of flexible robot arm modeling and control

    Science.gov (United States)

    Ulsoy, A. Galip

    1989-01-01

    Flexibility is important for high speed, high precision operation of lightweight manipulators. Accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid body motions (i.e., no effect of flexible motion on rigid body motion). Little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible arm control. Researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions, and of axial shortening, and consider the control of flexible arms using only additional sensors.

  17. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    The actual power requirement of an active loudspeaker during playback of music has not received much attention in the literature. This is probably because no single and simple solution exists and because a complete system knowledge from input voltage to output sound pressure level is required....... There are however many advantages that could be harvested from such knowledge like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated and the model is expanded to include the closed and vented type enclosures...

  18. Validation of a Business Model for Cultural Heritage Institutions

    Directory of Open Access Journals (Sweden)

    Cristian CIUREA

    2015-01-01

    The paper proposes a business model for the efficiency optimization of the interaction between all actors involved in the cultural heritage sector, such as galleries, libraries, archives and museums (GLAM). The validation of the business model is the subject of analyses and implementations in a real environment carried out by different cultural institutions. The implementation of virtual exhibitions on mobile devices is described and analyzed as a key factor for increasing the visibility of cultural heritage. New perspectives on the development of virtual exhibitions for mobile devices are considered. A study on the number of visitors of cultural institutions is carried out and ways to increase the number of visitors are described.

  19. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S, part of DONG Energy A/S, Denmark. The objective of the tests was to investigate the combined influence of the pile...

  20. Multidimensional chemical modelling, II. Irradiated outflow walls

    CERN Document Server

    Bruderer, Simon; Doty, Steven D; van Dishoeck, Ewine F; Bourke, Tyler L

    2009-01-01

    Observations of the high-mass star forming region AFGL 2591 reveal a large abundance of CO+, a molecule known to be enhanced by far UV (FUV) and X-ray irradiation. In chemical models assuming a spherically symmetric envelope, the volume of gas irradiated by protostellar FUV radiation is very small due to the high extinction by dust. The abundance of CO+ is thus underpredicted by orders of magnitude. In a more realistic model, FUV photons can escape through an outflow region and irradiate gas at the border to the envelope. Thus, we introduce the first 2D axi-symmetric chemical model of the envelope of a high-mass star forming region to explain the CO+ observations as a prototypical FUV tracer. The model assumes an axi-symmetric power-law density structure with a cavity due to the outflow. The local FUV flux is calculated by a Monte Carlo radiative transfer code taking scattering on dust into account. A grid of precalculated chemical abundances, introduced in the first part of this series of papers, is used to ...

  1. Validating a spatially distributed hydrological model with soil morphology data

    Directory of Open Access Journals (Sweden)

    T. Doppler

    2013-10-01

    Spatially distributed hydrological models are popular tools in hydrology and they are claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation are often carried out only on discharge time-series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for the transport of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km² catchment in Switzerland with moderate topography. Around 40% of the catchment area is artificially drained. We measured weather data, discharge and groundwater levels in 11 piezometers for 1.5 yr. To broaden the spatially distributed data sets that can be used for model calibration and validation, we translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. We used redox-morphology signs for these estimates. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to the groundwater levels in the piezometers and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the accuracy of the groundwater level predictions was not high enough to be used for the prediction of saturated areas

  2. Organic acid modeling and model validation: Workshop summary. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  4. Packed bed heat storage: Continuum mechanics model and validation

    Science.gov (United States)

    Knödler, Philipp; Dreißigacker, Volker; Zunft, Stefan

    2016-05-01

    Thermal energy storage (TES) systems are key elements for various types of new power plant concepts. As a possible cost-effective storage inventory option, packed beds of miscellaneous materials come into consideration. However, high technical risks arise due to thermal expansion and shrinking of the packed bed's particles during cyclic thermal operation, possibly leading to material failure. Therefore, suitable tools for designing the heat storage system are mandatory. While particle-discrete models offer detailed simulation results, their computing time becomes prohibitive for large-scale applications. In contrast, continuous models offer time-efficient simulation results but require effective packed bed parameters. This work focuses on providing insight into some basic methods and tools on how to obtain such parameters and on how they are implemented into a continuum model. In this context, a particle-discrete model as well as a test rig for carrying out uniaxial compression tests (UCT) is introduced. Experimental validation tests indicate good agreement with simulated UCT results. In this process, effective parameters required for a continuous packed bed model were identified and used for continuum simulation. This approach is validated by comparing the simulated results with experimental data from another test rig. The presented method significantly simplifies subsequent design studies.

  5. Model calibration and validation of an impact test simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, F. M. (François M.); Wilson, A. C. (Amanda C.); Havrilla, G. N. (George N.)

    2001-01-01

    This paper illustrates the methodology being developed at Los Alamos National Laboratory for the validation of numerical simulations for engineering structural dynamics. The application involves the transmission of a shock wave through an assembly that consists of a steel cylinder and a layer of elastomeric (hyper-foam) material. The assembly is mounted on an impact table to generate the shock wave. The input acceleration and three output accelerations are measured. The main objective of the experiment is to develop a finite element representation of the system capable of reproducing the test data with acceptable accuracy. Foam layers of various thicknesses and several drop heights are considered during impact testing. Each experiment is replicated several times to estimate the experimental variability. Instead of focusing on the calibration of input parameters for a single configuration, the numerical model is validated for its ability to predict the response of three different configurations (various combinations of foam thickness and drop height). Design of Experiments is implemented to perform parametric and statistical variance studies. Surrogate models are developed to replace the computationally expensive numerical simulation. Variables of the finite element model are separated into calibration variables and control variables. The models are calibrated to provide numerical simulations that correctly reproduce the statistical variation of the test configurations. The calibration step also provides inference for the parameters of a high strain-rate dependent material model of the hyper-foam. After calibration, the validity of the numerical simulation is assessed through its ability to predict the response of a fourth test setup.
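    To illustrate the surrogate-model idea in the simplest terms, the sketch below fits a quadratic response surface over two design variables (foam thickness and drop height) to a handful of simulated peak responses and then evaluates it at an untested configuration; the data, the quadratic form and the variable ranges are assumptions for illustration only, not the surrogate used in the study.

```python
import numpy as np

def quadratic_design_matrix(x1, x2):
    """Full quadratic basis in two variables: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Hypothetical runs: foam thickness [mm], drop height [m] -> peak acceleration [g]
thickness = np.array([5.0, 5.0, 10.0, 10.0, 7.5, 7.5, 5.0, 10.0, 7.5])
height = np.array([0.2, 0.6, 0.2, 0.6, 0.4, 0.2, 0.4, 0.4, 0.6])
peak_g = np.array([210.0, 480.0, 150.0, 330.0, 260.0, 180.0, 300.0, 240.0, 370.0])

X = quadratic_design_matrix(thickness, height)
coeffs, *_ = np.linalg.lstsq(X, peak_g, rcond=None)   # least-squares fit

# Cheap surrogate prediction at an untested configuration
x_new = quadratic_design_matrix(np.array([8.0]), np.array([0.5]))
print(x_new @ coeffs)
```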

  6. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions for strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis as well as from analytical and numerical models by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
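    A minimal sketch of the descriptor idea follows, using a 2D Fourier decomposition as the example basis (the paper also uses Zernike moments); the strain maps, the number of retained descriptors and the discrepancy measure are assumptions for illustration.

```python
import numpy as np

def fourier_descriptors(field, n_keep=50):
    """Reduce a full-field map (e.g. a strain image) to its n_keep
    largest-magnitude 2D Fourier coefficients."""
    coeffs = np.fft.fft2(np.asarray(field, dtype=float)).ravel()
    idx = np.argsort(np.abs(coeffs))[::-1][:n_keep]
    return idx, coeffs[idx]

def descriptor_discrepancy(field_exp, field_model, n_keep=50):
    """Compare experiment and model in descriptor space instead of pixel space."""
    idx, d_exp = fourier_descriptors(field_exp, n_keep)
    d_model = np.fft.fft2(np.asarray(field_model, dtype=float)).ravel()[idx]
    return np.linalg.norm(d_exp - d_model) / np.linalg.norm(d_exp)

# Hypothetical 256x256 strain maps: the model misses a small localized feature
y, x = np.mgrid[0:256, 0:256]
experiment = np.sin(x / 20.0) + 0.2 * np.exp(-((x - 80)**2 + (y - 60)**2) / 200.0)
model = np.sin(x / 20.0)
print(descriptor_discrepancy(experiment, model))
```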

  7. Argonne Bubble Experiment Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  8. Calibration and validation of DRAINMOD to model bioretention hydrology

    Science.gov (United States)

    Brown, R. A.; Skaggs, R. W.; Hunt, W. F.

    2013-04-01

    Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated from field monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely-accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration
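    Since goodness-of-fit is reported as Nash-Sutcliffe coefficients, the sketch below shows that computation on hypothetical monthly drainage volumes; it is a generic illustration, not part of DRAINMOD.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model does no
    better than predicting the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

# Hypothetical monthly drainage volumes (mm) from a bioretention cell
observed = [12.0, 30.5, 8.2, 44.0, 19.7, 3.1]
simulated = [14.1, 28.0, 9.0, 40.2, 22.5, 2.4]
print(nash_sutcliffe(observed, simulated))  # values above 0.8 would match the reported fits
```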

  9. Validating and Verifying Biomathematical Models of Human Fatigue

    Science.gov (United States)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony give rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's levels of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (McCauley Model, Harvard Model, and the privately-sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and to circadian phase. We will calculate the sensitivity and specificity of each model prediction under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
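    For orientation, the sketch below shows the generic two-process structure (a homeostatic term that builds with time awake plus a 24 h circadian term) that this family of biomathematical models shares; it is not the McCauley, Harvard or SAFTE-FAST model, and every parameter value is a placeholder.

```python
import numpy as np

def predicted_alertness(hours_awake, clock_time, tau_wake=18.2, amp=0.3, phase=16.8):
    """Generic two-process alertness score: homeostatic sleep pressure grows
    exponentially with time awake; a 24 h circadian rhythm is added on top."""
    homeostatic = 1.0 - np.exp(-np.asarray(hours_awake, dtype=float) / tau_wake)  # 0 (rested) -> 1
    circadian = amp * np.cos(2.0 * np.pi * (np.asarray(clock_time, dtype=float) - phase) / 24.0)
    return 1.0 - homeostatic + circadian   # higher = more alert

# Alertness over 24 h of continuous wakefulness after waking at 07:00
hours_awake = np.arange(1, 25)
clock_time = (7 + hours_awake) % 24
print(np.round(predicted_alertness(hours_awake, clock_time), 2))
```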

  10. Multicomponent aerosol dynamics model UHMA: model development and validation

    Directory of Open Access Journals (Sweden)

    H. Korhonen

    2004-01-01

    A size-segregated aerosol dynamics model UHMA (University of Helsinki Multicomponent Aerosol model) was developed for studies of multicomponent tropospheric aerosol particles. The model includes major aerosol microphysical processes in the atmosphere with a focus on new particle formation and growth; thus it incorporates particle coagulation and multicomponent condensation, applying a revised treatment of condensation flux onto free molecular regime particles and the activation of nanosized clusters by organic vapours (Nano-Köhler theory), as well as recent parameterizations for binary H2SO4-H2O and ternary H2SO4-NH3-H2O homogeneous nucleation and dry deposition. The representation of particle size distribution can be chosen from three sectional methods: the hybrid method, the moving center method, and the retracking method in which moving sections are retracked to a fixed grid after a certain time interval. All these methods can treat particle emissions and atmospheric transport consistently, and are therefore suitable for use in large scale atmospheric models. In a test simulation against an accurate high resolution solution, all the methods showed reasonable treatment of new particle formation with 20 size sections although the hybrid and the retracking methods suffered from artificial widening of the distribution. The moving center approach, on the other hand, showed extra dents in the particle size distribution and failed to predict the onset of detectable particle formation. In a separate test simulation of an observed nucleation event, the model captured the key qualitative behaviour of the system well. Furthermore, its prediction of the organic volume fraction in newly formed particles, suggesting values as high as 0.5 for 3–4 nm particles and approximately 0.8 for 10 nm particles, agrees with recent indirect composition measurements.

  12. Multicomponent aerosol dynamics model UHMA: model development and validation

    Science.gov (United States)

    Korhonen, H.; Lehtinen, K. E. J.; Kulmala, M.

    2004-05-01

    A size-segregated aerosol dynamics model UHMA (University of Helsinki Multicomponent Aerosol model) was developed for studies of multicomponent tropospheric aerosol particles. The model includes major aerosol microphysical processes in the atmosphere with a focus on new particle formation and growth; thus it incorporates particle coagulation and multicomponent condensation, applying a revised treatment of condensation flux onto free molecular regime particles and the activation of nanosized clusters by organic vapours (Nano-Köhler theory), as well as recent parameterizations for binary H2SO4-H2O and ternary H2SO4-NH3-H2O homogeneous nucleation and dry deposition. The representation of particle size distribution can be chosen from three sectional methods: the hybrid method, the moving center method, and the retracking method in which moving sections are retracked to a fixed grid after a certain time interval. All these methods can treat particle emissions and atmospheric transport consistently, and are therefore suitable for use in large scale atmospheric models. In a test simulation against an accurate high resolution solution, all the methods showed reasonable treatment of new particle formation with 20 size sections although the hybrid and the retracking methods suffered from artificial widening of the distribution. The moving center approach, on the other hand, showed extra dents in the particle size distribution and failed to predict the onset of detectable particle formation. In a separate test simulation of an observed nucleation event, the model captured the key qualitative behaviour of the system well. Furthermore, its prediction of the organic volume fraction in newly formed particles, suggesting values as high as 0.5 for 3-4 nm particles and approximately 0.8 for 10 nm particles, agrees with recent indirect composition measurements.

  13. Solar Module Modeling, Simulation And Validation Under Matlab / Simulink

    Directory of Open Access Journals (Sweden)

    M.Diaw

    2016-09-01

    Solar modules are systems which convert sunlight into electricity using the physics of semiconductors. Mathematical modeling of these systems uses weather data such as irradiance and temperature as inputs. It provides the current, voltage or power as outputs, which allows plotting the characteristic curve giving the current I as a function of voltage V for photovoltaic cells. In this work, we have developed a one-diode model of a photovoltaic module in the Matlab/Simulink environment. From this model, we have plotted the characteristic I-V and P-V curves of the solar cell for different values of temperature and sunlight. The validation was carried out by comparing the experimental power curve of a 20 W HORONYA solar panel with the curve obtained from the model.
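    For reference, the sketch below solves the implicit single-diode equation that such Matlab/Simulink models are typically built around, here in Python; all panel parameters are illustrative and are not those of the HORONYA 20 W module.

```python
import numpy as np
from scipy.optimize import brentq

def iv_curve(v_points, i_ph=1.3, i_0=3.4e-8, n=1.3, n_cells=36,
             r_s=0.3, r_sh=300.0, t_cell=298.15):
    """Single-diode model I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh,
    solved implicitly for the current at each voltage point."""
    k_b, q = 1.380649e-23, 1.602176634e-19
    v_t = k_b * t_cell / q                              # thermal voltage per cell

    def residual(i, v):
        return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * n_cells * v_t)) - 1.0)
                - (v + i * r_s) / r_sh - i)

    return [brentq(residual, -1.0, i_ph + 1.0, args=(v,)) for v in v_points]

voltages = np.linspace(0.0, 21.0, 8)
print([round(i, 3) for i in iv_curve(voltages)])  # from ~short-circuit current down to ~0 near open circuit
```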

  14. Model development and validation of a solar cooling plant

    Energy Technology Data Exchange (ETDEWEB)

    Zambrano, Darine; Garcia-Gabin, Winston [Escuela de Ingenieria Electrica, Facultad de Ingenieria, Universidad de Los Andes, La Hechicera, Merida 5101 (Venezuela); Bordons, Carlos; Camacho, Eduardo F. [Departamento de Ingenieria de Sistemas y Automatica, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de Los Descubrimientos s/n, Sevilla 41092 (Spain)

    2008-03-15

    This paper describes the dynamic model of a solar cooling plant that has been built for demonstration purposes using market-available technology and has been successfully operational since 2001. The plant uses hot water coming from a field of solar flat collectors which feed a single-effect absorption chiller of 35 kW nominal cooling capacity. The work includes model development based on first principles and model validation with a set of experiments carried out on the real plant. The simulation model has been done in a modular way, and can be adapted to other solar cooling-plants since the main modules (solar field, absorption machine, accumulators and auxiliary heater) can be easily replaced. This simulator is a powerful tool for solar cooling systems both during the design phase, when it can be used for component selection, and also for the development and testing of control strategies. (author)

  15. Dynamic Modeling of Wind Turbine Gearboxes and Experimental Validation

    DEFF Research Database (Denmark)

    Pedersen, Rune

    is presented. The model takes into account the effects of load and applied grinding corrections. The results are verified by comparing to simulated and experimental results reported in the existing literature. Using gear data loosely based on a 1 MW wind turbine gearbox, the gear mesh stiffness is expanded...... analysis in relation to gear dynamics. A multibody model of two complete 2.3 MW wind turbine gearboxes mounted back-to-back in a test rig is built. The mean values of the proposed gear mesh stiffnesses are included. The model is validated by comparing with calculated and measured eigenfrequencies and mode...... shapes. The measured eigenfrequencies have been identified in accelerometer signals obtained during run-up tests. Since the calculated eigenfrequencies do not match the measured eigenfrequencies with sufficient accuracy, a model updating technique is applied to ensure a better match by adjusting...

  16. Strictly isospectral Bianchi type II cosmological models

    CERN Document Server

    Rosu, H C; Obregón, O

    1996-01-01

    We show that, in the Q=0 factor ordering, the Wheeler-DeWitt equation for the Bianchi type II model with the Ansatz Ψ = A e^{±Φ(q^μ)}, due to its one-dimensional character, may be approached by the strictly isospectral Darboux-Witten technique of standard supersymmetric quantum mechanics. One-parameter families of cosmological potentials and normalizable `wavefunctions of the universe' are exhibited. The isospectral method can be used to introduce normalizable wavefunctions in quantum cosmology.
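    For reference, a minimal sketch of the strictly isospectral (Darboux-type) construction the abstract alludes to, written in its generic one-dimensional quantum-mechanics form; the notation is generic and not tied to the Bianchi II minisuperspace variables.

```latex
% Given a nodeless, normalizable solution \psi_0 of
%   -\psi_0''(x) + V(x)\,\psi_0(x) = \epsilon\,\psi_0(x),
% the strictly isospectral one-parameter family of potentials is
\[
  V_\lambda(x) = V(x) - 2\,\frac{d^2}{dx^2}\ln\!\bigl[\lambda + \mathcal{I}(x)\bigr],
  \qquad
  \mathcal{I}(x) = \int_{-\infty}^{x}\psi_0^2(x')\,dx',
\]
\[
  \psi_\lambda(x) = \frac{\sqrt{\lambda(\lambda+1)}\,\psi_0(x)}{\lambda + \mathcal{I}(x)},
  \qquad \lambda \notin [-1,0],
\]
% which leaves the spectrum unchanged and supplies a normalizable mode for each \lambda.
```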

  17. Vibrations inside buildings due to subway railway traffic. Experimental validation of a comprehensive prediction model.

    Science.gov (United States)

    Lopes, Patrícia; Ruiz, Jésus Fernández; Alves Costa, Pedro; Medina Rodríguez, L; Cardoso, António Silva

    2016-10-15

    The present paper focuses on the experimental validation of a numerical approach previously proposed by the authors for the prediction of vibrations inside buildings due to railway traffic in tunnels. The numerical model is based on the concept of dynamic substructuring and is composed of three autonomous models that simulate the main parts of the problem: i) generation of vibrations (train-track interaction); ii) propagation of vibrations (track-tunnel-ground system); iii) reception of vibrations (building coupled to the ground). The experimental validation consists of the comparison between the results predicted by the proposed numerical model and the measurements performed inside a building due to railway traffic in a shallow tunnel located in Madrid. Apart from a brief description of the numerical model and of the case study, the main options and simplifications adopted in the numerical modeling strategy are discussed. The balance adopted between accuracy and simplicity of the numerical approach proved to be a path to follow in order to transfer knowledge to engineering practice. Finally, the comparison between numerical and experimental results showed good agreement between the two, which confirms the ability of the proposed modeling strategy to deal with real engineering problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Rapid target gene validation in complex cancer mouse models using re-derived embryonic stem cells.

    Science.gov (United States)

    Huijbers, Ivo J; Bin Ali, Rahmen; Pritchard, Colin; Cozijnsen, Miranda; Kwon, Min-Chul; Proost, Natalie; Song, Ji-Ying; de Vries, Hilda; Badhai, Jitendra; Sutherland, Kate; Krimpenfort, Paul; Michalak, Ewa M; Jonkers, Jos; Berns, Anton

    2014-02-01

    Human cancers modeled in Genetically Engineered Mouse Models (GEMMs) can provide important mechanistic insights into the molecular basis of tumor development and enable testing of new intervention strategies. The inherent complexity of these models, with often multiple modified tumor suppressor genes and oncogenes, has hampered their use as preclinical models for validating cancer genes and drug targets. In our newly developed approach for the fast generation of tumor cohorts we have overcome this obstacle, as exemplified for three GEMMs: two lung cancer models and one mesothelioma model. Three elements are central to this system: (i) the efficient derivation of authentic Embryonic Stem Cells (ESCs) from established GEMMs, (ii) the routine introduction of transgenes of choice in these GEMM-ESCs by Flp recombinase-mediated integration and (iii) the direct use of the chimeric animals in tumor cohorts. By applying stringent quality controls, the GEMM-ESC approach proves to be a reliable and effective method to speed up cancer gene assessment and target validation. As proof-of-principle, we demonstrate that MycL1 is a key driver gene in Small Cell Lung Cancer.

  19. Deviatoric constitutive model: domain of strain rate validity

    Energy Technology Data Exchange (ETDEWEB)

    Zocher, Marvin A [Los Alamos National Laboratory

    2009-01-01

    A case is made for using an enhanced methodology in determining the parameters that appear in a deviatoric constitutive model. Predictability rests on our ability to solve a properly posed initial boundary value problem (IBVP), which incorporates an accurate reflection of material constitutive behavior. That reflection is provided through the constitutive model. Moreover, the constitutive model is required for mathematical closure of the IBVP. Common practice in the shock physics community is to divide the Cauchy tensor into spherical and deviatoric parts, and to develop separate models for spherical and deviatoric constitutive response. Our focus shall be on the Cauchy deviator and deviatoric constitutive behavior. Discussions related to the spherical part of the Cauchy tensor are reserved for another time. A number of deviatoric constitutive models have been developed for utilization in the solution of IBVPs that are of interest to those working in the field of shock physics, e.g. All of these models are phenomenological and contain a number of parameters that must be determined in light of experimental data. The methodology employed in determining these parameters dictates the loading regime over which the model can be expected to be accurate. The focus of this paper is the methodology employed in determining model parameters and the consequences of that methodology as it relates to the domain of strain rate validity. We shall begin by describing the methodology that is typically employed. We shall discuss limitations imposed upon predictive capability by the typically employed methodology. We shall propose a modification to the typically employed methodology that significantly extends the domain of strain rate validity.
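
    For context, the spherical/deviatoric split of the Cauchy stress referred to above has the standard form shown below; the notation is generic and not specific to any of the deviatoric models discussed.

\[
\boldsymbol{\sigma} \;=\; \mathbf{s} \;+\; \tfrac{1}{3}\,\operatorname{tr}(\boldsymbol{\sigma})\,\mathbf{I},
\qquad
\mathbf{s} \;=\; \boldsymbol{\sigma} \;-\; \tfrac{1}{3}\,\operatorname{tr}(\boldsymbol{\sigma})\,\mathbf{I},
\]

    so the deviatoric model describes the shear (shape-change) response, while the spherical part is handled separately, typically through an equation of state.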

  20. Modeling, Robust Control, and Experimental Validation of a Supercavitating Vehicle

    Science.gov (United States)

    Escobar Sanabria, David

    This dissertation considers the mathematical modeling, control under uncertainty, and experimental validation of an underwater supercavitating vehicle. By traveling inside a gas cavity, a supercavitating vehicle reduces hydrodynamic drag, increases speed, and minimizes power consumption. The attainable speed and power efficiency make these vehicles attractive for undersea exploration, high-speed transportation, and defense. However, the benefits of traveling inside a cavity come with difficulties in controlling the vehicle dynamics. The main challenge is the nonlinear force that arises when the back-end of the vehicle pierces the cavity. This force, referred to as planing, leads to oscillatory motion and instability. Control technologies that are robust to planing and suited for practical implementation need to be developed. To enable these technologies, a low-order vehicle model that accounts for inaccuracy in the characterization of planing is required. Additionally, an experimental method to evaluate possible pitfalls in the models and controllers is necessary before undersea testing. The major contribution of this dissertation is a unified framework for mathematical modeling, robust control synthesis, and experimental validation of a supercavitating vehicle. First, we introduce affordable experimental methods for mathematical modeling and controller testing under planing and realistic flow conditions. Then, using experimental observations and physical principles, we create a low-order nonlinear model of the longitudinal vehicle motion. This model quantifies the planing uncertainty and is suitable for robust controller synthesis. Next, based on the vehicle model, we develop automated tools for synthesizing controllers that deliver a certificate of performance in the face of nonlinear and uncertain planing forces. We demonstrate theoretically and experimentally that the proposed controllers ensure higher performance when the uncertain planing dynamics are

  1. Validation of thermal models for a prototypical MEMS thermal actuator.

    Energy Technology Data Exchange (ETDEWEB)

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the

  2. Validation and Scenario Analysis of a Soil Organic Carbon Model

    Institute of Scientific and Technical Information of China (English)

    HUANG Yao; LIU Shi-liang; SHEN Qi-rong; ZONG Liang-gang; JIANG Ding-an; HUANG Hong-guang

    2002-01-01

    A model developed by the authors was validated against independent data sets. The data sets were obtained from field experiments of crop residue decomposition and a 7-year soil improvement in Yixing City, Jiangsu Province. Model validation indicated that soil organic carbon dynamics can be simulated from the weather variables of temperature, sunlight and precipitation, soil clay content and bulk density, grain yield of previous crops, and the qualities and quantities of the added organic matter. Model simulation in general agreed with the measurements. The comparison between computed and measured values resulted in correlation coefficient R² values of 0.9291*** (n = 48) and 0.6431** (n = 65) for the two experiments, respectively. Model prediction under three scenarios (no additional organic matter input, and annual incorporation of rice and wheat straw at rates of 6.75 t/ha and 9.0 t/ha) suggested that the soil organic carbon in Wanshi Township of Yixing City would change from an initial value of 7.85 g/kg in 1983 to 6.30 g/kg, 11.42 g/kg and 13 g/kg in 2014, respectively. Consequently, the total nitrogen content of the soil was predicted to be 0.49 g/kg, 0.89 g/kg and 1.01 g/kg, respectively, under the three scenarios.
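
    A minimal sketch of the kind of computed-versus-measured agreement statistic reported above; the arrays below are invented placeholder values, not the Yixing data, and the statistic shown is the ordinary coefficient of determination.

```python
import numpy as np

def r_squared(measured, computed):
    """Coefficient of determination between measured and model-computed values."""
    measured = np.asarray(measured, dtype=float)
    computed = np.asarray(computed, dtype=float)
    ss_res = np.sum((measured - computed) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical computed-vs-measured soil organic carbon series (g/kg)
measured = [7.9, 8.3, 9.1, 10.2, 11.0]
computed = [7.8, 8.5, 9.0, 10.4, 11.3]
print(f"R^2 = {r_squared(measured, computed):.4f}")
```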

  3. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Science.gov (United States)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-08-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), a thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of this current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.
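
    For reference, the maximum conversion efficiency of a thermoelectric generator is usually written in terms of the dimensionless figure of merit \(Z\bar{T}\); the textbook expression is quoted below only to make the optimization target concrete, and is not taken from the paper.

\[
\eta_{\max} = \frac{T_h - T_c}{T_h}\,
\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h},
\qquad
Z = \frac{S^2 \sigma}{\kappa},
\]

    with \(S\) the Seebeck coefficient, \(\sigma\) the electrical conductivity, \(\kappa\) the thermal conductivity, and \(\bar{T}\) the mean of the hot- and cold-side temperatures \(T_h\) and \(T_c\).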

  4. A new validation-assessment tool for health-economic decision models

    NARCIS (Netherlands)

    Mauskopf, J.; Vemer, P.; Voorn, van G.A.K.; Corro Ramos, I.

    2014-01-01

    A validation-assessment tool is being developed for decision makers to transparently and consistently evaluate the validation status of different health-economic decision models. It is designed as a list of validation techniques covering all relevant aspects of model validation to be filled in by

  5. The Danish Barriers Questionnaire-II: preliminary validation in cancer pain patients

    DEFF Research Database (Denmark)

    Jacobsen, Ramune; Møldrup, Claus; Christrup, Lona Louring;

    2009-01-01

    of three items addressing the fear of getting tolerant to analgesic effect of pain medicine. Items related to medication side effects were analyzed as separate units. The DBQ-II total had an internal consistency of 0.87. The DBQ-II total score was related to measures of pain relief and anxiety. CONCLUSIONS...... management facilities. Thirty-three patients responded to the DBQ-II, Hospital Anxiety and Depression Scale, and Brief Pain Inventory pain severity scale. RESULTS: A factor analysis of the DBQ-II resulted in six scales. Scale one, Fatalism, consisted of three items addressing fatalistic beliefs regarding...... cancer pain management. Scale two, Immune System, consisted of three items addressing the belief that pain medications harm the immune system. Scale three, Monitor, consisted of three items addressing the fear that pain medicine masks changes in one's body. Scale four, Communication, consisted of five...

  6. Modelling of asymmetric nebulae. II. Line profiles

    CERN Document Server

    Morisset, C

    2006-01-01

    We present a tool, VELNEB_3D, which can be applied to the results of 3D photoionization codes to generate emission line profiles, position-velocity maps and 3D maps in any emission line by assuming an arbitrary velocity field. We give a few examples, based on our pseudo-3D photoionization code NEBU_3D (Morisset, Stasinska and Pena, 2005) which show the potentiality and usefulness of our tool. One example shows how complex line profiles can be obtained even with a simple expansion law if the nebula is bipolar and the slit slightly off-center. Another example shows different ways to produce line profiles that could be attributed to a turbulent velocity field while there is no turbulence in the model. A third example shows how, in certain circumstances, it is possible to discriminate between two very different geometrical structures -- here a face-on blister and its ``spherical impostor'' -- when using appropriate high resolution spectra. Finally, we show how our tool is able to generate 3D maps, similar to the ...

  7. Validation of symptom validity tests using a "child-model" of adult cognitive impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P.E.J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children’s cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  8. A Qualitative Study on the Content Validity of the Social Capital Scales in the Copenhagen Psychosocial Questionnaire (COPSOQ II

    Directory of Open Access Journals (Sweden)

    Hanne Berthelsen

    2016-06-01

    Full Text Available The Copenhagen Psychosocial Questionnaire (COPSOQ II) includes scales for measuring 'workplace social capital'. The overall aim of this article is to evaluate the content validity of the following scales: horizontal trust, vertical trust and justice, based on data from cognitive interviews using a think-aloud procedure. Informants were selected to achieve variation in gender, age, region of residence, and occupation. A predetermined coding scheme was used to identify: (1) Perspective (reflection on behalf of oneself only or abstraction to a broader perspective), (2) Use of response options, (3) Contexts challenging the process of answering, and (4) Overall reflections included in the retrieval and judgement processes leading to an answer for each item. The results showed that (1) the intended shift from individual to a broader perspective worked for eight out of eleven items; (2) the response option balancing in the middle covered different meanings, and retrieval of information needed to answer constituted a problem in four out of eleven items; (3) three contextually challenging situations were identified; and (4) for most items the reflections corresponded well with the intention of the scales, though the items asking about withheld information caused more problems in answering and lower content validity compared to the other items of the scales. In general, the findings supported the content validity of the COPSOQ II measurement of workplace social capital as a group construct. The study opens up new insight into how concepts and questions are understood and answered among people coming from different occupations and organizational settings.

  9. MOLECULAR VALIDATED MODEL FOR ADSORPTION OF PROTONATED DYE ON LDH

    Directory of Open Access Journals (Sweden)

    B. M. Braga

    Full Text Available Hydrotalcite-like compounds are anionic clays of scientific and technological interest for their use as ion exchange materials, catalysts and modified electrodes. Surface phenomena are important for all these applications. Although conventional analytical methods have enabled progress in understanding the behavior of anionic clays in solution, an evaluation at the atomic scale of the dynamics of their ionic interactions has never been performed. Molecular simulation has become an extremely useful tool to provide this perspective. Our purpose is to validate a simplified model for the adsorption of 5-benzoyl-4-hydroxy-2-methoxy-benzenesulfonic acid (MBSA), a prototype molecule of anionic dyes, onto a hydrotalcite surface. Monte Carlo simulations were performed in the canonical ensemble with MBSA ions and a pore model of hydrotalcite using UFF and ClayFF force fields. The proposed molecular model has allowed us to reproduce experimental data of atomic force microscopy. Influences of protonation during the adsorption process are also presented.
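
    A minimal sketch of a canonical-ensemble Metropolis Monte Carlo step of the kind used in such adsorption studies; the toy one-dimensional wall potential and all parameters below are illustrative placeholders, not the UFF/ClayFF description of MBSA on hydrotalcite.

```python
import math
import random

# Minimal Metropolis Monte Carlo sketch in the canonical (NVT) ensemble.
# The 9-3 wall potential and its parameters are illustrative assumptions.
KB = 0.0019872  # Boltzmann constant in kcal/(mol*K), the units assumed here

def surface_potential(z, epsilon=5.0, sigma=3.0):
    """Toy 9-3 wall potential for an adsorbate at height z above a surface."""
    return epsilon * ((sigma / z) ** 9 - (sigma / z) ** 3)

def metropolis(n_steps=100000, temperature=298.0, max_step=0.3, z0=4.0):
    beta = 1.0 / (KB * temperature)
    z, energy = z0, surface_potential(z0)
    heights = []
    for _ in range(n_steps):
        z_new = z + random.uniform(-max_step, max_step)
        if z_new > 0.5:  # reject unphysical penetration of the surface
            e_new = surface_potential(z_new)
            # Metropolis acceptance criterion at fixed temperature
            if e_new <= energy or random.random() < math.exp(-beta * (e_new - energy)):
                z, energy = z_new, e_new
        heights.append(z)
    return heights

samples = metropolis()
print("mean adsorbate height:", sum(samples) / len(samples))
```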

  10. Experimental Validation of a Dynamic Model for Lightweight Robots

    Directory of Open Access Journals (Sweden)

    Alessandro Gasparetto

    2013-03-01

    Full Text Available Nowadays, one of the main topics in robotics research is dynamic performance improvement by lightening the overall system structure. Effective motion and control of these lightweight robotic systems requires suitable motion planning and control processes. To this end, model-based approaches can be adopted by exploiting accurate dynamic models that take into account the inertial and elastic terms that are usually neglected in a heavy rigid-link configuration. In this paper, an effective method for modelling spatial lightweight industrial robots based on an Equivalent Rigid Link System approach is considered from an experimental validation perspective. A dynamic simulator implementing the formulation is used and an experimental test-bench is set up. Experimental tests are carried out with a benchmark L-shape mechanism.

  11. Criteria of validity for animal models of psychiatric disorders: focus on anxiety disorders and depression

    Directory of Open Access Journals (Sweden)

    Belzung Catherine

    2011-11-01

    Full Text Available Animal models of psychiatric disorders are usually discussed with regard to three criteria first elaborated by Willner: face, predictive and construct validity. Here, we trace the history of these concepts and then try to redraw and refine these criteria, using the framework of the diathesis model of depression that has been proposed by several authors. We thus propose a set of five major criteria (with sub-categories for some of them): homological validity (including species validity and strain validity), pathogenic validity (including ontopathogenic validity and triggering validity), mechanistic validity, face validity (including ethological and biomarker validity) and predictive validity (including induction and remission validity). Homological validity requires that an adequate species and strain be chosen: considering species validity, primates will be considered to have a higher score than drosophila, and considering strains, a high stress reactivity in a strain scores higher than a low stress reactivity in another strain. Pathogenic validity corresponds to the fact that, in order to shape pathological characteristics, the organism has been manipulated both during the developmental period (for example, maternal separation: ontopathogenic validity) and during adulthood (for example, stress: triggering validity). Mechanistic validity corresponds to the fact that the cognitive (for example, cognitive bias) or biological mechanisms (such as dysfunction of the hormonal stress axis regulation) underlying the disorder are identical in both humans and animals. Face validity corresponds to the observable behavioral (ethological validity) or biological (biomarker validity) outcomes: for example, anhedonic behavior (ethological validity) or elevated corticosterone (biomarker validity). Finally, predictive validity corresponds to the identity of the relationship between the triggering factor and the outcome (induction validity) and between the effects of

  12. Dynamic validation of the Planck-LFI thermal model

    Energy Technology Data Exchange (ETDEWEB)

    Tomasi, M; Bersanelli, M; Mennella, A [Universita degli Studi di Milano, Via Celoria 16, 20133 Milano (Italy); Cappellini, B [INAF IASF Milano, Via Bassini, 15, 20133, Milano (Italy); Gregorio, A [University of Trieste, Department of Physics, via Valerio 2, 34127 Trieste (Italy); Colombo, F; Lapolla, M [Thales Alenia Space Italia S.p.A., IUEL - Scientific Instruments, S.S. Padana Superiore 290, 20090 Vimodrone (Mi) (Italy); Terenzi, L; Morgante, G; Butler, R C; Mandolesi, N; Valenziano, L [INAF IASF Bologna, via Gobetti 101, 40129 Bologna (Italy); Galeotta, S; Maris, M; Zacchei, A [LFI-DPC INAF-OATs, via Tiepolo 11, 34131 Trieste (Italy)

    2010-01-15

    The Low Frequency Instrument (LFI) is an array of cryogenically cooled radiometers on board the Planck satellite, designed to measure the temperature and polarization anisotropies of the cosmic microwave background (CMB) at 30, 44 and 70 GHz. The thermal requirements of the LFI, and in particular the stringent limits to acceptable thermal fluctuations in the 20 K focal plane, are a critical element to achieve the instrument scientific performance. Thermal tests were carried out as part of the on-ground calibration campaign at various stages of instrument integration. In this paper we describe the results and analysis of the tests on the LFI flight model (FM) performed at Thales Laboratories in Milan (Italy) during 2006, with the purpose of experimentally sampling the thermal transfer functions and consequently validating the numerical thermal model describing the dynamic response of the LFI focal plane. This model has been used extensively to assess the ability of LFI to achieve its scientific goals: its validation is therefore extremely important in the context of the Planck mission. Our analysis shows that the measured thermal properties of the instrument show a thermal damping level better than predicted, therefore further reducing the expected systematic effect induced in the LFI maps. We then propose an explanation of the increased damping in terms of non-ideal thermal contacts.

  13. The Atmospheric Radionuclide Transport Model (ARTM) - Validation of a long-term atmospheric dispersion model

    Science.gov (United States)

    Hettrich, Sebastian; Wildermuth, Hans; Strobl, Christopher; Wenig, Mark

    2016-04-01

    In the last couple of years, the Atmospheric Radionuclide Transport Model (ARTM) has been developed by the German Federal Office for Radiation Protection (BfS) and the Society for Plant and Reactor Security (GRS). ARTM is an atmospheric dispersion model for continuous long-term releases of radionuclides into the atmosphere, based on the Lagrangian particle model. This model, developed in the first place as a more realistic replacement for the outdated Gaussian plume models, is currently being optimised for further scientific purposes to study atmospheric dispersion in short-range scenarios. It includes a diagnostic wind field model, allows for the application of building structures and multiple sources (including linear, 2- and 3-dimensional source geometries), and considers orography and surface roughness. As an output it calculates the activity concentration and the dry and wet deposition, and can also model the radioactive decay of Rn-222. As such, ARTM needs to undergo an intensive validation process. While a few measurement data sets are available for validating short-term and short-range models, which were mainly developed for examining nuclear accidents or explosions, data sets for validating long-term models are very sparse and the existing ones mostly prove not to be applicable for validation. Here we present a strategy for the validation of long-term Lagrangian particle models based on the work with ARTM. In our validation study, the first part presented is a comprehensive analysis of the model sensitivities to different parameters (e.g. simulation grid resolution, starting random number, number of simulation particles). This study provides a good estimate of the uncertainties of the simulation results and consequently can be used to generate model outputs comparable to the available measurement data at various distances from the emission source. This comparison between measurement data from selected scenarios and simulation results

  14. Radiative transfer model for contaminated slabs: experimental validations

    Science.gov (United States)

    Andrieu, F.; Schmidt, F.; Schmitt, B.; Douté, S.; Brissaud, O.

    2015-09-01

    This article presents a set of spectro-goniometric measurements of different water ice samples and the comparison with an approximated radiative transfer model. The experiments were done using the spectro-radiogoniometer described in Brissaud et al. (2004). The radiative transfer model assumes an isotropization of the flux after the second interface and is fully described in Andrieu et al. (2015). Two kinds of experiments were conducted. First, the specular spot was closely investigated, at high angular resolution, at the wavelength of 1.5 μm, where ice behaves as a very absorbing medium. Second, the bidirectional reflectance was sampled at various geometries, including low phase angles, at 61 wavelengths ranging from 0.8 to 2.0 μm. In order to validate the model, we made qualitative tests to demonstrate the relative isotropization of the flux. We also conducted quantitative assessments by using a Bayesian inversion method in order to estimate the parameters (e.g., sample thickness, surface roughness) from the radiative measurements only. A simple comparison between the retrieved parameters and the direct independent measurements allowed us to validate the model. We developed an innovative Bayesian inversion approach to quantitatively estimate the uncertainties in the parameters, avoiding the usual slow Monte Carlo approach. First we built lookup tables, and then we searched for the best fits and calculated a posteriori probability density functions. The results show that the model is able to reproduce the geometrical energy distribution in the specular spot, as well as the spectral behavior of water ice slabs. In addition, the different parameters of the model are compatible with independent measurements.
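
    A minimal sketch of the lookup-table style of Bayesian inversion described above: a grid of forward-model spectra is precomputed once, and the posterior is evaluated directly on that grid instead of by Monte Carlo sampling. The forward model, parameter grids and noise level below are illustrative assumptions, not the radiative transfer model of the paper.

```python
import numpy as np

# Assumed wavelength range loosely matching the paper; grids are placeholders.
wavelengths = np.linspace(0.8, 2.0, 61)              # microns
thickness_grid = np.linspace(0.5, 5.0, 50)           # cm, assumed parameter grid
roughness_grid = np.linspace(0.0, 1.0, 40)           # assumed parameter grid

def forward_model(thickness, roughness):
    """Placeholder forward model returning a reflectance spectrum."""
    return np.exp(-thickness * (wavelengths - 0.8)) * (1.0 - 0.3 * roughness)

# Precompute the lookup table once; shape (thickness, roughness, wavelength).
table = np.array([[forward_model(t, r) for r in roughness_grid]
                  for t in thickness_grid])

def posterior(observed, sigma=0.01):
    """Gaussian likelihood on the grid with a flat prior, normalised to 1."""
    resid = table - observed                          # broadcast over the grid
    chi2 = np.sum((resid / sigma) ** 2, axis=-1)
    post = np.exp(-0.5 * (chi2 - chi2.min()))
    return post / post.sum()

observed = forward_model(2.0, 0.4) + np.random.normal(0, 0.01, wavelengths.size)
p = posterior(observed)
i, j = np.unravel_index(np.argmax(p), p.shape)
print("MAP thickness:", thickness_grid[i], "roughness:", roughness_grid[j])
```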

  15. A validation study of a stochastic model of human interaction

    Science.gov (United States)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $-N\int_{-\infty}^{\infty}\varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree

  16. Validation & verification of a Bayesian network model for aircraft vulnerability

    CSIR Research Space (South Africa)

    Schietekat, Sunelle

    2016-09-01

    Full Text Available

  17. Beyond standard model report of working group II

    CERN Document Server

    Joshipura, A S; Joshipura, Anjan S; Roy, Probir

    1995-01-01

    Working group II at WHEPP3 concentrated on issues related to the supersymmetric standard model as well as SUSY GUTs and neutrino properties. The projects identified by various working groups as well as progress made in them since WHEPP3 are briefly reviewed.

  18. LANL*V2.0: global modeling and validation

    Directory of Open Access Journals (Sweden)

    S. Zaharia

    2011-08-01

    Full Text Available We describe in this paper the new version of LANL*, an artificial neural network (ANN) for calculating the magnetic drift invariant L*. This quantity is used for modeling radiation belt dynamics and for space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit and the model can now be used for a much larger region. (2) The new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP type orbit. We find that the neural network performs very well for all these orbits with an error typically ΔL* ... The LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.
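
    A minimal sketch of the surrogate-model idea behind LANL*: a small neural network is trained to reproduce an expensive computation so that predictions become orders of magnitude faster. The target function, inputs and scikit-learn implementation below are stand-ins, not the TS05 field-line integration used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Generate training data from a placeholder "expensive" function.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 4))          # e.g. position plus activity inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2] * X[:, 3]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small multilayer perceptron acting as the fast surrogate.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("surrogate R^2 on held-out data:", ann.score(X_test, y_test))
```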

  19. Numerical simulation and experimental validation of aircraft ground deicing model

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2016-05-01

    Full Text Available Aircraft ground deicing plays an important role in guaranteeing aircraft safety. In practice, most airports use as much deicing fluid as possible to remove the ice, which wastes deicing fluid and pollutes the environment. Therefore, a model of aircraft ground deicing should be built to establish the foundation for subsequent research, such as the optimization of deicing fluid consumption. In this article, the heat balance of the deicing process is depicted, and a dynamic model of the deicing process is provided based on an analysis of the deicing mechanism. In the dynamic model, the surface temperature of the deicing fluid and the ice thickness are regarded as the state parameters, while the fluid flow rate, the initial temperature, and the injection time of the deicing fluid are treated as control parameters. Ignoring the heat exchange between the deicing fluid and the environment, a simplified model is obtained. The rationality of the simplified model is verified by numerical simulation, and the impacts of the flow rate, the initial temperature and the injection time on the deicing process are investigated. To verify the model, a semi-physical experiment system is established, consisting of a constant low-temperature test chamber, an ice simulation system, a deicing fluid heating and spraying system, a simulated wing, test sensors, and a computer measurement and control system. The actual test data verify the validity of the dynamic model and the accuracy of the simulation analysis.
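
    A toy sketch of the kind of lumped heat balance such a model rests on: hot fluid sprayed onto an ice layer melts it at a rate set by the fluid-to-ice heat flux. The lumped-parameter form, the constant heat transfer coefficient and all numbers below are illustrative assumptions, not the model identified in the article.

```python
# Toy heat-balance sketch of ice melting under a hot deicing fluid.
RHO_ICE = 917.0        # kg/m^3, density of ice
L_FUSION = 334e3       # J/kg, latent heat of fusion
H_CONV = 500.0         # W/(m^2 K), assumed fluid-to-ice heat transfer coefficient

def simulate(ice_thickness0=0.005, fluid_temp=60.0, dt=1.0, t_end=600.0):
    """Integrate the ice thickness (m) under a constant fluid temperature (deg C)."""
    h_ice, t = ice_thickness0, 0.0
    history = []
    while t < t_end and h_ice > 0.0:
        # Heat flux into the ice goes entirely into melting (toy assumption).
        melt_rate = H_CONV * (fluid_temp - 0.0) / (RHO_ICE * L_FUSION)  # m/s
        h_ice = max(0.0, h_ice - melt_rate * dt)
        t += dt
        history.append((t, h_ice))
    return history

t_final, h_final = simulate()[-1]
print(f"after {t_final:.0f} s the remaining ice thickness is {h_final*1000:.2f} mm")
```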

  20. Evaluation and cross-validation of Environmental Models

    Science.gov (United States)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to the other, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. The meter prototype, the standard unit of length, was determined on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken, to prevent wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the marketplace have been built consistently with the same system of units, or that they are based on identical definitions for the coordinate systems, etc... Therefore

  1. A Report on the Validation of Beryllium Strength Models

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    This report discusses work on validating beryllium strength models with flyer plate and Taylor rod experimental data. Strength models are calibrated with Hopkinson bar and quasi-static data. The Hopkinson bar data for beryllium provides strain rates up to about 4000 per second. A limitation of the Hopkinson bar data for beryllium is that it only provides information on strain up to about 0.15. The lack of high strain data at high strain rates makes it difficult to distinguish between various strength model settings. The PTW model has been calibrated many different times over the last 12 years. The lack of high strain data for high strain rates has resulted in these calibrated PTW models for beryllium exhibiting significantly different behavior when extrapolated to high strain. For beryllium, the α parameter of PTW has recently been calibrated to high precision shear modulus data. In the past the α value for beryllium was set based on expert judgment. The new α value for beryllium was used in a calibration of the beryllium PTW model by Sky Sjue. The calibration by Sjue used EOS table information to model the temperature dependence of the heat capacity. Also, the calibration by Sjue used EOS table information to model the density changes of the beryllium sample during the Hopkinson bar and quasi-static experiments. In this paper, the calibrated PTW model by Sjue is compared against experimental data and other strength models. The other strength models being considered are a PTW model calibrated by Shuh- Rong Chen and a Steinberg-Guinan type model by John Pedicini. The three strength models are used in a comparison against flyer plate and Taylor rod data. The results show that the Chen PTW model provides better agreement to this data. The Chen PTW model settings have been previously adjusted to provide a better fit to flyer plate data, whereas the Sjue PTW model has not been changed based on flyer plate data. However, the Sjue model provides a reasonable fit to

  2. Challenges in validating model results for first year ice

    Science.gov (United States)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates as it is advected inside the model domain. Ice that is younger than 365 days is classified as first year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which include "ice type", a representation of the separation of regions between those infested by first year ice, and those infested by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and from the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data, and the product's confidence level, which have a strong seasonal dependency.

  3. Experimental validation of a numerical model for subway induced vibrations

    Science.gov (United States)

    Gupta, S.; Degrande, G.; Lombaert, G.

    2009-04-01

    This paper presents the experimental validation of a coupled periodic finite element-boundary element model for the prediction of subway induced vibrations. The model fully accounts for the dynamic interaction between the train, the track, the tunnel and the soil. The periodicity or invariance of the tunnel and the soil in the longitudinal direction is exploited using the Floquet transformation, which allows for an efficient formulation in the frequency-wavenumber domain. A general analytical formulation is used to compute the response of three-dimensional invariant or periodic media that are excited by moving loads. The numerical model is validated by means of several experiments that have been performed at a site in Regent's Park on the Bakerloo line of London Underground. Vibration measurements have been performed on the axle boxes of the train, on the rail, the tunnel invert and the tunnel wall, and in the free field, both at the surface and at a depth of 15 m. Prior to these vibration measurements, the dynamic soil characteristics and the track characteristics have been determined. The Bakerloo line tunnel of London Underground has been modelled using the coupled periodic finite element-boundary element approach and free field vibrations due to the passage of a train at different speeds have been predicted and compared to the measurements. The correspondence between the predicted and measured response in the tunnel is reasonably good, although some differences are observed in the free field. The discrepancies are explained on the basis of various uncertainties involved in the problem. The variation in the response with train speed is similar for the measurements as well as the predictions. This study demonstrates the applicability of the coupled periodic finite element-boundary element model to make realistic predictions of the vibrations from underground railways.

  4. Mental health professionals' attitudes to partnership in medicine taking: a validation study of the Leeds Attitude to Concordance Scale II.

    Science.gov (United States)

    de Las Cuevas, Carlos; Rivero-Santana, Amado; Perestelo-Perez, Lilisbeth; Perez-Ramos, Jeanette; Gonzalez-Lorenzo, Marien; Serrano-Aguilar, Pedro; Sanz, Emilio J

    2012-02-01

    To explore psychiatrists' attitudes toward concordance by validating the Leeds Attitude to Concordance Scale II (LATCon II) in a Spanish sample. This was a cross-sectional survey. An opportunistic sample of 125 psychiatrists and 100 psychiatry registrars attending a national conference completed the LATCon II questionnaire and provided sociodemographic and professional data. A principal component analysis of the LATCon II items was performed. Associations with sociodemographic and mental health professional variables were calculated. Principal component analysis yielded three components labeled "communication/empathy," "shared control," and "eventual paternalistic style." Women obtained significantly lower scores than men on the second component. Mental health professional variables were not related to attitude to concordance. Psychiatrists show a favorable attitude toward involving patients in a process of reciprocal communication, where patients' preferences, values, and expectations are considered, but they are more cautious in their attitude to sharing decisions with patients. There is scope for different kinds of research in this area: studying sex-based differences in psychiatrists' attitudes to concordance and also exploring the gap in mental health care between patients' and professionals' views of shared decision making. Only in this way can the real partnership for shared decision making be fully understood. Copyright © 2011 John Wiley & Sons, Ltd.

  5. The hydrodynamical models of the cometary compact H II region

    CERN Document Server

    Zhu, Feng-Yao; Li, Juan; Zhang, Jiang-Shui; Wang, Jun-Zhi

    2015-01-01

    We have developed a full numerical method to study the gas dynamics of cometary ultra-compact (UC) H II regions, and associated photodissociation regions (PDRs). The bow-shock and champagne-flow models with a $40.9/21.9 M_\odot$ star are simulated. In the bow-shock models, the massive star is assumed to move through dense ($n=8000~cm^{-3}$) molecular material with a stellar velocity of $15~km~s^{-1}$. In the champagne-flow models, an exponential distribution of density with a scale height of 0.2 pc is assumed. The profiles of the [Ne II] 12.81 $\mu$m and $H_2~S(2)$ lines from the ionized regions and PDRs are compared for two sets of models. In champagne-flow models, emission lines from the ionized gas clearly show the effect of acceleration along the direction toward the tail due to the density gradient. The kinematics of the molecular gas inside the dense shell is mainly due to the expansion of the H II region. However, in bow-shock models the ionized gas mainly moves in the same direction as the stellar motion...

  6. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Full Text Available Background. Evidence rankings do not consider equally internal (IV), external (EV), and model validity (MV) for clinical studies including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to Jan 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for the evaluation of EV/MV quality that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can help produce and provide information that will help guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of their research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about "real-world" consequences of health interventions. It is hoped that this novel tool which considers IV, EV, and MV on equal footing will better guide clinical decision making.

  7. Low Altitude Validation of Geomagnetic Cutoff Models Using SAMPEX Data

    Science.gov (United States)

    Young, S. L.; Kress, B. T.

    2011-12-01

    Single event upsets (SEUs) caused by MeV protons are a concern for satellite operators so AFRL is working to create a tool that can specify and/or forecast SEU probabilities. An important component of the tool's SEU probability calculation will be the local energetic ion spectrum. The portion of that spectrum due to trapped energetic ion population is relatively stable and predictable; however it is more difficult to account for the transient solar energetic particles (SEPs). These particles, which can be ejected from the solar atmosphere during a solar flare or filament eruption or can be energized by coronal mass ejection (CME) driven shocks, can penetrate the Earth's magnetosphere into regions not normally populated by energetic protons. The magnetosphere will provide energy dependent shielding that also depends on its magnetic configuration. During magnetic storms that configuration is modified and the SEP cutoff latitude for a given particle energy can be suppressed up to ~15 degrees equatorward exposing normally shielded regions. As a first step to creating the satellite SEU prediction tool, we are comparing the Smart et al. (Advances in Space Research, 2006) and CISM-Dartmouth (Kress et al., Space Weather, 2010) geomagnetic cutoff tools. While they have provided some of their own validations in the noted papers, our validation will be done consistently between models allowing us to better compare the models.

  8. A geomagnetically induced current warning system: model development and validation

    Science.gov (United States)

    McKay, A.; Clarke, E.; Reay, S.; Thomson, A.

    Geomagnetically Induced Currents (GIC), which can flow in technological systems at the Earth's surface, are a consequence of magnetic storms and Space Weather. A well-documented practical problem for the power transmission industry is that GIC can affect the lifetime and performance of transformers within the power grid. Operational mitigation is widely considered to be one of the best strategies to manage the Space Weather and GIC risk. Therefore in the UK a magnetic storm warning and GIC monitoring and analysis programme has been under development by the British Geological Survey and Scottish Power plc (the power grid operator for Central Scotland) since 1999. Under the auspices of the European Space Agency's service development activities BGS is developing the capability to meet two key user needs that have been identified. These needs are, firstly, the development of a near real-time solar wind shock/ geomagnetic storm warning, based on L1 solar wind data and, secondly, the development of an integrated surface geo-electric field and power grid network model that should allow prediction of GIC throughout the power grid in near real time. While the final goal is a `seamless package', the components of the package utilise diverse scientific techniques. We review progress to date with particular regard to the validation of the individual components of the package. The Scottish power grid response to the October 2003 magnetic storms is also discussed and model and validation data are presented.
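
    For context, the site-specific engineering relation commonly used to couple a modelled horizontal geoelectric field to the GIC at a given point in a power network is the linear form below; the coefficients for the network discussed are not given in this record, so the expression is quoted only as the standard approach.

\[
\mathrm{GIC} \;=\; a\,E_x \;+\; b\,E_y,
\]

    where \(E_x\) and \(E_y\) are the northward and eastward geoelectric field components and \(a\), \(b\) are coefficients determined by the topology and resistances of the power grid.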

  9. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    Energy Technology Data Exchange (ETDEWEB)

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of the Type Ia supernovae. We take a look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
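
    For reference, the standard-candle relation underlying this kind of analysis is the usual SALT-II (Tripp-style) distance modulus estimator shown below; it is quoted here in its generic form, not copied from the report.

\[
\mu \;=\; m_B^{*} \;-\; M \;+\; \alpha\,x_1 \;-\; \beta\,c,
\]

    where \(m_B^{*}\) is the fitted rest-frame peak B-band magnitude, \(x_1\) the light-curve shape (stretch) parameter, \(c\) the color parameter, \(M\) the fiducial absolute magnitude, and \(\alpha\), \(\beta\) the coefficients whose determination is the subject of the study.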

  10. Physiologically Based Modelling of Dioxins. I. Validation of a rodent toxicokinetic model

    NARCIS (Netherlands)

    Zeilmaker MJ; Slob W

    1993-01-01

    In this report a rodent Physiologically Based PharmacoKinetic (PBPK) model for 2,3,7,8-tetrachlorodibenzodioxin is described. Validation studies, in which model simulations of TCDD disposition were compared with in vivo TCDD disposition in rodents exposed to TCDD, showed that the model adequately p

  11. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    Energy Technology Data Exchange (ETDEWEB)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  12. Validation of full cavitation model in cryogenic fluids

    Institute of Scientific and Technical Information of China (English)

    CAO XiaoLi; ZHANG XiaoBin; QIU LiMin; GAN ZhiHua

    2009-01-01

    Numerical simulation of cavitation in cryogenic fluids is important in improving the stable operation of the propulsion system in liquid-fuel rockets. It also represents a broader class of problems where the fluid is operating close to its critical point and the thermal effects of cavitation are pronounced. The present article focuses on simulating cryogenic cavitation by implementing the "full cavitation model", coupled with the energy equation, in conjunction with iterative update of the real fluid properties at local temperatures. Steady state computations are then conducted on a hydrofoil and an ogive in liquid nitrogen and hydrogen respectively, based on which we explore the mechanism of cavitation with thermal effects. Comprehensive comparisons between the simulation results and experimental data as well as previous computations by other researchers validate the full cavitation model in cryogenic fluids. The sensitivity of cavity length to cavitation number is also examined.

  13. Modelling and validation of multiple reflections for enhanced laser welding

    Science.gov (United States)

    Milewski, J.; Sklar, E.

    1996-05-01

    The effects of multiple internal reflections within a laser weld joint as functions of joint geometry and processing conditions have been characterized. A computer-based ray tracing model is used to predict the reflective propagation of laser beam energy focused into the narrow gap of a metal joint for the purpose of predicting the location of melting and coalescence to form a weld. Quantitative comparisons are made between simulation cases. Experimental results are provided for qualitative model validation. This method is proposed as a way to enhance process efficiency and design laser welds which display deep penetration and high depth-to-width aspect ratios without high powered systems or keyhole mode melting.
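
    A minimal sketch of the energy bookkeeping behind multiple internal reflections: with a constant wall absorptivity per bounce, the cumulative absorbed fraction grows geometrically with the number of reflections. The absorptivity value below is an illustrative assumption, not a measured property of the welds studied.

```python
# Cumulative fraction of beam energy absorbed after successive wall reflections
# in a narrow joint, assuming a constant absorptivity per bounce.

def absorbed_fraction(absorptivity, n_reflections):
    """Energy fraction deposited in the joint walls after n bounces."""
    reflectivity = 1.0 - absorptivity
    return 1.0 - reflectivity ** n_reflections

for n in (1, 2, 4, 8):
    print(n, "reflections ->", round(absorbed_fraction(0.3, n), 3), "absorbed")
```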

  14. Assessing uncertainty in pollutant wash-off modelling via model validation.

    Science.gov (United States)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2014-11-01

    Stormwater pollution is linked to stream ecosystem degradation. In predicting stormwater pollution, various types of modelling techniques are adopted. The accuracy of predictions provided by these models depends on the data quality, appropriate estimation of model parameters, and the validation undertaken. It is well understood that available water quality datasets in urban areas span only relatively short time scales unlike water quantity data, which limits the applicability of the developed models in engineering and ecological assessment of urban waterways. This paper presents the application of leave-one-out (LOO) and Monte Carlo cross validation (MCCV) procedures in a Monte Carlo framework for the validation and estimation of uncertainty associated with pollutant wash-off when models are developed using a limited dataset. It was found that the application of MCCV is likely to result in a more realistic measure of model coefficients than LOO. Most importantly, MCCV and LOO were found to be effective in model validation when dealing with a small sample size which hinders detailed model validation and can undermine the effectiveness of stormwater quality management strategies.
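
    A minimal sketch of the two validation schemes compared above, leave-one-out (LOO) and Monte Carlo cross validation (MCCV, repeated random train/test splits), applied to a small synthetic dataset; the data, the linear model and the scikit-learn implementation are illustrative assumptions, not the authors' wash-off model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

# Small synthetic "wash-off" dataset standing in for a limited field dataset.
rng = np.random.default_rng(1)
rainfall = rng.uniform(2, 60, size=20).reshape(-1, 1)        # mm
washoff = 0.8 * rainfall[:, 0] + rng.normal(0, 4, size=20)   # kg/ha, synthetic

model = LinearRegression()

# Leave-one-out: every observation is held out exactly once.
loo_scores = cross_val_score(model, rainfall, washoff, cv=LeaveOneOut(),
                             scoring="neg_mean_absolute_error")

# Monte Carlo cross validation: many random 70/30 splits.
mccv = ShuffleSplit(n_splits=200, test_size=0.3, random_state=1)
mccv_scores = cross_val_score(model, rainfall, washoff, cv=mccv,
                              scoring="neg_mean_absolute_error")

print("LOO  MAE: %.2f" % -loo_scores.mean())
print("MCCV MAE: %.2f +/- %.2f" % (-mccv_scores.mean(), mccv_scores.std()))
```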

  15. Satellite information of sea ice for model validation

    Science.gov (United States)

    Saheed, P. P.; Mitra, Ashis K.; Momin, Imranali M.; Mahapatra, Debasis K.; Rajagopal, E. N.

    2016-05-01

    The emergence of extensively large computational facilities has enabled the scientific world to use earth system models for understanding the prevailing dynamics of the earth's atmosphere, ocean and cryosphere and their interrelations. Sea ice in the Arctic and the Antarctic has been identified as one of the main proxies for studying climate change. The rapid sea-ice melting in the Arctic and the disappearance of multi-year sea ice have become a matter of concern. Earth system models couple the ocean, atmosphere and sea ice in order to bring out the possible interconnections between these three very important components and their role in the changing climate. The Indian monsoon appears to have undergone nonlinear changes in recent years, and the rapid melt of Arctic sea ice is apparently linked to changes in the weather and climate of the Indian subcontinent. Recent findings point to a relation between extreme events in the Indian subcontinent and Arctic sea-ice melt episodes. Coupled models are being used to study the depth of these relations; however, the models have to be validated extensively using measured parameters. Satellite measurements of sea ice date back to 1979, and many data sets have become available since then. In this study, an evaluation of the existing data sets is conducted. There are some uncertainties in these data sets, which could be associated with the absence of a single sensor over a long period of time and with the absence of accurate in-situ measurements with which to validate the satellite measurements.

  16. Development and validation of a liquid composite molding model

    Science.gov (United States)

    Bayldon, John Michael

    2007-12-01

    In composite manufacturing, Vacuum Assisted Resin Transfer Molding (VARTM) is becoming increasingly important as a cost effective manufacturing method of structural composites. In this process the dry preform (reinforcement) is placed on a rigid tool and covered by a flexible film to form an airtight vacuum bag. Liquid resin is drawn under vacuum through the preform inside the vacuum bag. Modeling of this process relies on a good understanding of closely coupled phenomena. The resin flow depends on the preform permeability, which in turn depends on the local fluid pressure and the preform compaction behavior. VARTM models for predicting the flow rate in this process do exist, however, they are not able to properly predict the flow for all classes of reinforcement material. In this thesis, the continuity equation used in VARTM models is reexamined and a modified form proposed. In addition, the compaction behavior of the preform in both saturated and dry states is studied in detail and new models are proposed for the compaction behavior. To assess the validity of the proposed models, the shadow moire method was adapted and used to perform full field measurement of the preform thickness during infusion, in addition to the usual measurements of flow front position. A new method was developed and evaluated for the analysis of the moire data related to the VARTM process, however, the method has wider applicability to other full field thickness measurements. The use of this measurement method demonstrated that although the new compaction models work well in the characterization tests, they do not properly describe all the preform features required for modeling the process. In particular the effect of varying saturation on the preform's behavior requires additional study. The flow models developed did, however, improve the prediction of the flow rate for the more compliant preform material tested, and the experimental techniques have shown where additional test methods

  17. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
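
    The automation described above relies on data files whose headers carry the key experimental parameters as metadata, read by scripts that assemble simulation input files. A minimal sketch of that pattern is given below; the header keys, file names and input-deck format are invented for illustration and do not reflect the actual HED or hydro code conventions.

```python
from pathlib import Path

def read_header(path):
    """Collect 'key = value' metadata from commented lines at the top of a data file."""
    meta = {}
    for line in Path(path).read_text().splitlines():
        if not line.startswith("#"):
            break                                  # header ends where the data columns begin
        key, sep, value = line.lstrip("#").partition("=")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

def write_input_deck(meta, out_path):
    """Emit a minimal simulation input file from the experiment metadata."""
    deck = (f"explosive  = {meta['explosive']}\n"
            f"density    = {meta['initial_density_g_cc']}\n"
            f"impact_vel = {meta['impactor_velocity_km_s']}\n"
            f"mesh_cells = 2000\n")
    Path(out_path).write_text(deck)

# A toy shock-initiation record with a commented metadata header.
Path("shot_042.dat").write_text(
    "# explosive = EXPLOSIVE-X\n"
    "# initial_density_g_cc = 1.890\n"
    "# impactor_velocity_km_s = 1.2\n"
    "0.00  0.000\n"
    "0.10  0.012\n"
)
write_input_deck(read_header("shot_042.dat"), "shot_042.in")
print(Path("shot_042.in").read_text())
```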

  18. The Internal Validation of Level II and Level III Respiratory Therapy Examinations. Final Report.

    Science.gov (United States)

    Jouett, Michael L.

    This project began with the delineation of the roles and functions of respiratory therapy personnel by the American Association for Respiratory Therapy. In Phase II, The Psychological Corporation used this delineation to develop six proficiency examinations, three at each of two levels. One exam at each level was designated for the purpose of the…

  19. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors

    Science.gov (United States)

    Glynn, Shawn M.; Brickman, Peggy; Armstrong, Norris; Taasoobshirazi, Gita

    2011-01-01

    From the perspective of social cognitive theory, the motivation of students to learn science in college courses was examined. The students--367 science majors and 313 nonscience majors--responded to the Science Motivation Questionnaire II, which assessed five motivation components: intrinsic motivation, self-determination, self-efficacy, career…

  20. External validation of Acute Physiology and Chronic Health Evaluation IV in Dutch intensive care units and comparison with Acute Physiology and Chronic Health Evaluation II and Simplified Acute Physiology Score II

    NARCIS (Netherlands)

    S. Brinkman; F. Bakhshi-Raiez; A. Abu-Hanna; E. de Jonge; R.J. Bosman; L. Peelen; N.F. de Keizer

    2011-01-01

    Purpose: The aim of this study was to validate and compare the performance of the Acute Physiology and Chronic Health Evaluation (APACHE) IV in the Dutch intensive care unit (ICU) population to the APACHE II and Simplified Acute Physiology Score (SAPS) II. Materials and Methods: This is a prospectiv

  1. Synthesis, Spectral Characterization, Molecular Modeling, and Antimicrobial Studies of Cu(II), Ni(II), Co(II), Mn(II), and Zn(II) Complexes of ONO Schiff Base

    Directory of Open Access Journals (Sweden)

    Padmaja Mendu

    2012-01-01

    A series of Cu(II), Ni(II), Co(II), Mn(II), and Zn(II) complexes have been synthesized from the Schiff base ligand L. The Schiff base ligand [(4-oxo-4H-chromen-3-yl)methylene]benzohydrazide (L) has been synthesized by the reaction between chromone-3-carbaldehyde and benzoyl hydrazine. The nature of bonding and the geometry of the transition metal complexes, as well as of the Schiff base ligand L, have been deduced from elemental analysis, FT-IR, UV-Vis, 1H NMR and ESR spectral studies, mass and thermal (TGA and DTA) analyses, magnetic susceptibility, and molar conductance measurements. The Cu(II), Ni(II), Co(II), and Mn(II) metal ions form 1:2 (M:L) complexes, while Zn(II) forms a 1:1 (M:L) complex. Based on elemental, conductance and spectral studies, a six-coordinate geometry was assigned to the Cu(II), Ni(II), Co(II), Mn(II), and Zn(II) complexes. The complexes are 1:2 electrolytes in DMSO, except the zinc complex, which is neutral in DMSO. The ligand L acts as a tridentate ligand and coordinates through the nitrogen atom of the azomethine group, the oxygen atom of the keto group of the γ-pyrone ring and the oxygen atom of the hydrazide group of benzoyl hydrazine. The 3D molecular modeling and energies of all the compounds are furnished. The biological activity of the ligand and its complexes has been studied on four bacteria (E. coli, Edwardella, Pseudomonas, and B. subtilis) and two fungi (Penicillium and Trichoderma) by the well disc and fusion method, and it was found that the metal chelates are more active than the free Schiff base ligand.

  2. Modeling for Optimal Control : A Validated Diesel-Electric Powertrain Model

    OpenAIRE

    Sivertsson, Martin; Eriksson, Lars

    2014-01-01

    An optimal control ready model of a diesel-electric powertrain is developed, validated and provided to the research community. The aim of the model is to facilitate studies of the transient control of diesel-electric powertrains and also to provide a model for developers of optimization tools. The resulting model is a four state three control mean value engine model that captures the significant nonlinearity of the diesel engine, while still being continuously differentiable.
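
    For readers unfamiliar with the term, a mean value engine model is a small system of smooth ordinary differential equations in which a few lumped states are driven by the control inputs. The sketch below shows only that general structure; the states, maps and coefficients are made up for illustration and this is not the published four-state powertrain model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic smooth sketch of a diesel-electric powertrain: two lumped states
# (engine speed, battery energy) driven by two controls.  All maps and
# coefficients are invented placeholders.
J, K_TQ, K_FRIC = 2.5, 120.0, 0.25           # inertia, torque gain, friction coefficient

def powertrain(t, x, controls):
    w_eng, e_batt = x                        # engine speed [rad/s], battery energy [J]
    u_fuel, p_gen = controls(t)              # fuelling command [-], generator power [W]
    p_load = 10.0e3                          # constant electrical load demand [W]
    tq_eng = K_TQ * np.tanh(3.0 * u_fuel)    # smooth (differentiable) torque map
    dw = (tq_eng - K_FRIC * w_eng - p_gen / max(w_eng, 50.0)) / J   # max() is only a low-speed guard
    de = p_gen - p_load                      # battery absorbs the generator/load mismatch
    return [dw, de]

controls = lambda t: (0.7, 8.0e3)            # constant controls for this demo
sol = solve_ivp(powertrain, (0.0, 20.0), [100.0, 1.0e6],
                args=(controls,), max_step=0.05)
print(f"engine speed after 20 s: {sol.y[0, -1]:.1f} rad/s")
```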

  3. Nonlinear dispersion effects in elastic plates: numerical modelling and validation

    Science.gov (United States)

    Kijanka, Piotr; Radecki, Rafal; Packo, Pawel; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.

    2017-04-01

    Nonlinear features of elastic wave propagation have attracted significant attention recently. The particular interest herein relates to complex wave-structure interactions, which provide potential new opportunities for feature discovery and identification in a variety of applications. Due to the significant complexity associated with wave propagation in nonlinear media, numerical modeling and simulations are employed to facilitate the design and development of new measurement, monitoring and characterization systems. However, since very high spatio-temporal accuracy of numerical models is required, it is critical to evaluate their spectral properties and tune discretization parameters for a compromise between accuracy and calculation time. Moreover, nonlinearities in structures give rise to various effects that are not present in linear systems, e.g. wave-wave interactions, higher harmonics generation, synchronism and, as recently reported, shifts in dispersion characteristics. This paper discusses a local computational model based on a new HYBRID approach for wave propagation in nonlinear media. The proposed approach combines advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE). The methods are investigated in the context of their accuracy for predicting nonlinear wavefields, in particular shifts in dispersion characteristics for finite amplitude waves and secondary wavefields. The results are validated against Finite Element (FE) calculations for guided waves in a copper plate. Critical modes, i.e. modes determining the accuracy of a model at a given excitation frequency, are identified and guidelines for numerical model parameters are proposed.

  4. Stable Eutectoid Transformation in Nodular Cast Iron: Modeling and Validation

    Science.gov (United States)

    Carazo, Fernando D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2016-10-01

    This paper presents a new microstructural model of the stable eutectoid transformation in a spheroidal cast iron. The model takes into account the nucleation and growth of ferrite grains and the growth of graphite spheroids. Different laws are assumed for the growth of both phases during and below the intercritical stable eutectoid. At a microstructural level, the initial conditions for the phase transformations are obtained from the microstructural simulation of solidification of the material, which considers the divorced eutectic and the subsequent growth of graphite spheroids up to the initiation of the stable eutectoid transformation. The temperature field is obtained by solving the energy equation by means of finite elements. The microstructural (phase change) and macrostructural (energy balance) models are coupled by a sequential multiscale procedure. Experimental validation of the model is achieved by comparison with measured values of fractions and radius of 2D view of ferrite grains. Agreement with such experiments indicates that the present model is capable of predicting ferrite phase fraction and grain size with reasonable accuracy.

  6. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    Science.gov (United States)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in the XRCF thermal vacuum tests, static load gravity deformations have been measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT and its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  7. Drilling forces model for lunar regolith exploration and experimental validation

    Science.gov (United States)

    Zhang, Tao; Ding, Xilun

    2017-02-01

    China's Chang'e lunar exploration project aims to sample and return lunar regolith samples at a minimum penetration depth of 2 m in 2017. Unlike such tasks on the Earth, automated drilling and sampling missions on the Moon are more complicated. Therefore, a delicately designed drill tool is required to minimize operational cost and enhance reliability. Penetration force and rotational torque are two critical parameters in designing the drill tool. In this paper, a novel numerical model for predicting penetration force and rotational torque in the drilling of lunar regolith is proposed. The model is based on quasi-static Mohr-Coulomb soil mechanics and explicitly describes the interaction between drill tool and lunar regolith. Geometric features of drill tool, mechanical properties of lunar regolith, and drilling parameters are taken into consideration in the model. Consequently, a drilling test bed was developed, and experimental penetration force and rotational torque were obtained in penetrating a lunar regolith simulant with different drilling parameters. Finally, theoretical and experimental results were compared to validate the proposed model. Experimental results indicated that the numerical model had good accuracy and was effective in predicting the penetration force and rotational torque in drilling the lunar regolith simulant.

  8. Validation of DWPF Melter Off-Gas Combustion Model

    Energy Technology Data Exchange (ETDEWEB)

    Choi, A.S.

    2000-08-23

    The empirical melter off-gas combustion model currently used in the DWPF safety basis calculations is valid at melter vapor space temperatures above 570 degrees C, as measured in the thermowell. This lower temperature bound coincides with that of the off-gas data used as the basis of the model. In this study, the applicability of the empirical model in a wider temperature range was assessed using the off-gas data collected during two small-scale research melter runs. The first data set came from the Small Cylindrical Melter-2 run in 1985 with the sludge feed coupled with the precipitate hydrolysis product. The second data set came from the 774-A melter run in 1996 with the sludge-only feed prepared with the modified acid addition strategy during the feed pretreatment step. The results of the assessment showed that the data from these two melter runs agreed well with the existing model, and further provided the basis for extending the lower temperature bound of the model to the measured melter vapor space temperature of 445 degrees C.

  9. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of ground water flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of

  10. First approximations in avalanche model validations using seismic information

    Science.gov (United States)

    Roig Lafon, Pere; Suriñach, Emma; Bartelt, Perry; Pérez-Guillén, Cristina; Tapia, Mar; Sovilla, Betty

    2017-04-01

    Avalanche dynamics modelling is an essential tool for snow hazard management. Scenario-based numerical modelling provides quantitative arguments for decision-making. The software tool RAMMS (WSL Institute for Snow and Avalanche Research SLF) is one such tool, often used by government authorities and geotechnical offices. As avalanche models improve, the quality of the numerical results will depend increasingly on user experience in the specification of input (e.g. release and entrainment volumes, secondary releases, snow temperature and quality). New model developments must continue to be validated using data from real phenomena, to improve performance and reliability. The avalanche group from the University of Barcelona (RISKNAT - UB) has studied the seismic signals generated by avalanches since 1994. Presently, the group manages the seismic installation at SLF's Vallée de la Sionne experimental site (VDLS). At VDLS the recorded seismic signals can be correlated with other avalanche measurement techniques, including both advanced remote sensing methods (radars, videogrammetry) and obstacle-based sensors (pressure, capacitance, optical sender-reflector barriers). This comparison between different measurement techniques allows the group to address the question of whether seismic analysis can be used alone, on additional avalanche tracks, to gain insight and validate numerical avalanche dynamics models in different terrain conditions. In this study, we aim to use the seismic data as an external record of the phenomena, able to validate RAMMS models. Seismic sensors are considerably easier and cheaper to install than other physical measuring tools, and are able to record data from the phenomena in all atmospheric conditions (e.g. bad weather, low light and freezing, which make photography and other kinds of sensors unusable). With seismic signals, we record the temporal evolution of the inner and denser parts of the avalanche. We are able to recognize the approximate position

  11. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    Science.gov (United States)

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the…

  13. Case study for model validation : assessing a model for thermal decomposition of polyurethane foam.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Leslie, Ian H. (New Mexico State University, Las Cruces, NM); Hobbs, Michael L.; Rutherford, Brian Milne; Hills, Richard Guy (New Mexico State University, Las Cruces, NM); Pilch, Martin M.

    2004-10-01

    A case study is reported to document the details of a validation process to assess the accuracy of a mathematical model to represent experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process. The process addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code to solve the models, and the code verification are presented. Experimental data from two activities are used to validate the mathematical models. The first experiment assesses the chemistry model alone and the second experiment assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results of both experimental activities are summarized and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experiment data and model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.

  14. GEOCHEMICAL RECOGNITION OF SPILLED SEDIMENTS USED IN NUMERICAL MODEL VALIDATION

    Institute of Scientific and Technical Information of China (English)

    Jens R.VALEUR; Steen LOMHOLT; Christian KNUDSEN

    2004-01-01

    A fixed link (tunnel and bridge, in total 16 km) was constructed between Sweden and Denmark during 1995-2000. As part of the work, approximately 16 million tonnes of seabed materials (limestone and clay till) were dredged, and about 0.6 million tonnes of these were spilled in the water. Modelling of the spreading and sedimentation of the spilled sediments took place as part of the environmental monitoring of the construction activities. In order to verify the results of the numerical modelling of sediment spreading and sedimentation, a new method was developed with the purpose of distinguishing between the spilled sediments and the naturally occurring sediments. Because the spilled sediments tend to accumulate on the seabed in areas with natural sediments of the same size, it is difficult to separate them based purely on physical properties. The new method is based on the geochemical differences between the natural sediment in the area and the spill. The basic properties used are the higher content of calcium carbonate material in the spill compared to the natural sediments and the higher Ca/Sr ratio in the spill compared to the shell fragments dominating the natural calcium carbonate deposition in the area. The reason for these differences is that carbonate derived from recent shell debris can be discriminated from Danien limestone, the material in which the majority of the dredging took place, on the basis of the Ca/Sr ratio, which is 488 in Danien limestone and 237 in shell debris. The geochemical recognition of the origin of the sediments proved useful in separating the spilled from the naturally occurring sediments. Without this separation, validation of the modelling of the accumulation of spilled sediments would not have been possible. The method has general validity and can be used in many situations where the origin of a given sediment is sought.
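
    The Ca/Sr discrimination described above amounts to a two-endmember mixing calculation. A minimal sketch is given below, using the two ratios quoted in the abstract and assuming, for illustration, that the mixing fraction is expressed per unit of calcium in the carbonate.

```python
# Two-endmember mixing on the Ca/Sr ratio, using the ratios quoted above.
# f is taken here as the fraction of the calcium contributed by the spilled
# Danien limestone; the remainder comes from recent shell debris.
CA_SR_LIMESTONE = 488.0
CA_SR_SHELL = 237.0

def mixed_ca_sr(f_limestone):
    """Ca/Sr of a mixture: Sr per unit Ca adds linearly, so invert the weighted sum."""
    sr_per_ca = f_limestone / CA_SR_LIMESTONE + (1.0 - f_limestone) / CA_SR_SHELL
    return 1.0 / sr_per_ca

def spill_fraction(ca_sr_measured):
    """Invert the mixing relation to estimate the spilled-limestone fraction."""
    sr_per_ca = 1.0 / ca_sr_measured
    return (1.0 / CA_SR_SHELL - sr_per_ca) / (1.0 / CA_SR_SHELL - 1.0 / CA_SR_LIMESTONE)

print(f"Ca/Sr of a 50/50 mixture: {mixed_ca_sr(0.5):.0f}")
print(f"Spill fraction for a measured Ca/Sr of 320: {spill_fraction(320.0):.2f}")
```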

  15. Development and validation of a realistic head model for EEG

    Science.gov (United States)

    Bangera, Nitin Bhalchandra

    The utility of extracranial electrical or magnetic field recordings (EEG or MEG) is greatly enhanced if the generators of the bioelectromagnetic fields can be determined accurately from the measured fields. This procedure, known as the 'inverse method,' depends critically on calculations of the projection from generators in the brain to the EEG and MEG sensors. Improving and validating this calculation, known as the 'forward solution,' is the focus of this dissertation. The improvements involve more accurate modeling of the structures of the brain and thus understanding how current flows within the brain as a result of addition of structures in the forward model. Validation compares calculations using different forward models to the experimental results obtained by stimulating with implanted dipole electrodes. The human brain tissue displays inhomogeneity in electrical conductivity and also displays anisotropy, notably in the skull and brain white matter. In this dissertation, a realistic head model has been implemented using the finite element method to calculate the effects of inhomogeneity and anisotropy in the human brain. Accurate segmentation of the brain tissue type is implemented using a semi-automatic method to segment multimodal imaging data from multi-spectral MRI scans (different flip angles) in conjunction with the regular T1-weighted scans and computed x-ray tomography images. The electrical conductivity in the anisotropic white matter tissue is quantified from diffusion tensor MRI. The finite element model is constructed using AMIRA, a commercial segmentation and visualization tool and solved using ABAQUS, a commercial finite element solver. The model is validated using experimental data collected from intracranial stimulation in medically intractable epileptic patients. Depth electrodes are implanted in medically intractable epileptic patients in order to direct surgical therapy when the foci cannot be localized with the scalp EEG. These patients

  16. Modelling and validation of spectral reflectance for the colon

    Science.gov (United States)

    Hidovic-Rowe, Dzena; Claridge, Ela

    2005-03-01

    The spectral reflectance of the colon is known to be affected by malignant and pre-malignant changes in the tissue. As part of long-term research on the derivation of diagnostically important parameters characterizing colon histology, we have investigated the effects of the normal histological variability on the remitted spectra. This paper presents a detailed optical model of the normal colon comprising mucosa, submucosa and the smooth muscle layer. Each layer is characterized by five variable histological parameters: the volume fraction of blood, the haemoglobin saturation, the size of the scattering particles, including collagen, the volume fraction of the scattering particles and the layer thickness, and three optical parameters: the anisotropy factor, the refractive index of the medium and the refractive index of the scattering particles. The paper specifies the parameter ranges corresponding to normal colon tissue, including some previously unpublished ones. Diffuse reflectance spectra were modelled using the Monte Carlo method. Validation of the model-generated spectra against measured spectra demonstrated that good correspondence was achieved between the two. The analysis of the effect of the individual histological parameters on the behaviour of the spectra has shown that the spectral variability originates mainly from changes in the mucosa. However, the submucosa and the muscle layer must be included in the model as they have a significant constant effect on the spectral reflectance above 600 nm. The nature of variations in the spectra also suggests that it may be possible to carry out model inversion and to recover parameters characterizing the colon from multi-spectral images. A preliminary study, in which the mucosal blood and collagen parameters were modified to reflect histopathological changes associated with colon cancer, has shown that the spectra predicted by our model resemble measured spectral reflectance of adenocarcinomas. This suggests that

  17. PIV validation of blood-heart valve leaflet interaction modelling.

    Science.gov (United States)

    Kaminsky, R; Dumont, K; Weber, H; Schroll, M; Verdonck, P

    2007-07-01

    The aim of this study was to validate the 2D computational fluid dynamics (CFD) results of a moving heart valve based on a fluid-structure interaction (FSI) algorithm with experimental measurements. Firstly, a pulsatile laminar flow through a monoleaflet valve model with a stiff leaflet was visualized by means of Particle Image Velocimetry (PIV). The inflow data sets were applied to a CFD simulation including blood-leaflet interaction. The measurement section with a fixed leaflet was enclosed in a standard mock loop in series with a Harvard Apparatus Pulsatile Blood Pump, a compliance chamber and a reservoir. Standard 2D PIV measurements were made at a frequency of 60 bpm. Average velocity magnitude results of 36 phase-locked measurements were evaluated at every 10 degrees of the pump cycle. For the CFD flow simulation, a commercially available package from Fluent Inc. was used in combination with in-house developed FSI code based on the Arbitrary Lagrangian-Eulerian (ALE) method. Then the CFD code was applied to the leaflet to quantify the shear stress on it. Generally, the CFD results are in agreement with the PIV-evaluated data in major flow regions, thereby validating the FSI simulation of a monoleaflet valve with a flexible leaflet. The applicability of the new CFD code for quantifying the shear stress on a flexible leaflet is thus demonstrated.

  18. Modeling simulation and experimental validation for mold filling process

    Institute of Scientific and Technical Information of China (English)

    HOU Hua; JU Dong-ying; MAO Hong-kui; D. SAITO

    2006-01-01

    Based on the continuity, momentum conservation and energy conservation equations, a numerical model of turbulent filling flow was introduced, and the 3-D free surface VOF method was improved. To establish whether the numerical simulation results are reasonable, corresponding experimental validation is needed. General experimental techniques for the casting fluid flow process include the thermocouple tracking location method, hydraulic simulation, the heat-resistant glass window method and X-ray observation. A hydraulic analogue experiment with the DPIV technique was arranged to visually validate the fluid flow program for low-pressure casting at 0.1×10⁵ Pa and 0.6×10⁵ Pa pressure. By comparing the flow head, liquid surface and flow velocity, it is found that the filling pressure strongly influences the flow state. With increasing filling pressure, the fluid flow becomes unstable, the flow head becomes higher, and the filling time is reduced. The simulated results agree approximately with the observed results, which further demonstrates the reasonability of our numerical program for the filling process.

  19. Validation of ice loads predicted from meteorological models

    Energy Technology Data Exchange (ETDEWEB)

    Veal, A.; Skea, A. [UK Met Office, Exeter, England (United Kingdom); Wareing, B. [Brian Wareing Tech Ltd., England (United Kingdom)

    2005-07-01

    Results of a field trial conducted on 2 Gerber PVM-100 instruments at Deadwater Fell test site in the United Kingdom were presented. The trials were conducted to assess whether the instruments were capable of measuring the liquid water content of the air, as well as to validate an ice model in terms of accretion rates on different sized conductors. Ambient air temperature, wind speed and direction were monitored at the Deadwater Fell weather station along with load cell values. Time lapse video recorders and a web camera system were used to view the performance of the conductors in varying weather conditions. All data was collected and stored at the site. It was anticipated that output from the instruments could be related to the conditions under which overhead line conductors suffer from ice loads, and help to revise weather maps which have proved to be incompatible with utility experience and the lifetimes achieved by overhead line designs. The data provided from the Deadwater work included logged data from the Gerbers, weather data and load data from a 10 mm diameter aluminium alloy conductor. When the combination of temperature, wind direction and Gerber output indicated icing conditions, they were confirmed by the conductor's load cell data. The tests confirmed the validity of the Gerber instruments to predict the occurrence of icing conditions, when combined with other meteorological data. It was concluded that the instruments may aid in optimized prediction methods for ice loads and icing events. 2 refs., 4 figs.

  20. Validation of elastic cross section models for space radiation applications

    Science.gov (United States)

    Werneth, C. M.; Xu, X.; Norman, R. B.; Ford, W. P.; Maung, K. M.

    2017-02-01

    The space radiation field is composed of energetic particles that pose both acute and long-term risks for astronauts in low earth orbit and beyond. In order to estimate radiation risk to crew members, the fluence of particles and biological response to the radiation must be known at tissue sites. Given that the spectral fluence at the boundary of the shielding material is characterized, radiation transport algorithms may be used to find the fluence of particles inside the shield and body, and the radio-biological response is estimated from experiments and models. The fidelity of the radiation spectrum inside the shield and body depends on radiation transport algorithms and the accuracy of the nuclear cross sections. In a recent study, self-consistent nuclear models based on multiple scattering theory that include the option to study relativistic kinematics were developed for the prediction of nuclear cross sections for space radiation applications. The aim of the current work is to use uncertainty quantification to ascertain the validity of the models as compared to a nuclear reaction database and to identify components of the models that can be improved in future efforts.

  1. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates the impacts of technology improvements on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across their ranges and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed for summing petroleum use and greenhouse gas emissions: the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data and matches key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use and manages the inputs, simulation, and results.
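
    The weighted-attribute logit idea can be illustrated with a toy calculation: utilities are a weighted sum of vehicle attributes and predicted market shares follow from the multinomial logit (softmax) of those utilities. The attribute values and weights below are invented for illustration and are not ADOPT's calibrated coefficients.

```python
import numpy as np

# Toy multinomial-logit share calculation over three hypothetical vehicles.
# Attribute columns: price [$1000], annual fuel cost [$100], 0-60 mph time [s],
# range [100 mi], usable volume [10 ft^3].  All values are invented.
attributes = np.array([
    [28.0, 12.0, 8.5, 4.0, 10.0],   # conventional
    [32.0,  9.0, 8.0, 5.0, 10.5],   # hybrid
    [38.0,  5.0, 7.0, 2.5,  9.5],   # battery electric
])
# Invented importance weights (negative means "less is better").
weights = np.array([-0.15, -0.20, -0.30, 0.25, 0.10])

utilities = attributes @ weights
shares = np.exp(utilities) / np.exp(utilities).sum()   # multinomial logit
for name, share in zip(["conventional", "hybrid", "battery electric"], shares):
    print(f"{name:17s} predicted share: {share:.1%}")
```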

  2. Development and validation of a railgun hydrogen pellet injector model

    Energy Technology Data Exchange (ETDEWEB)

    King, T.L. [Univ. of Houston, TX (United States). Dept. of Electrical and Computer Engineering; Zhang, J.; Kim, K. [Univ. of Illinois, Urbana, IL (United States). Dept. of Electrical and Computer Engineering

    1995-12-31

    A railgun hydrogen pellet injector model is presented and its predictions are compared with the experimental data. High-speed hydrogenic ice injection is the dominant refueling method for magnetically confined plasmas used in controlled thermonuclear fusion research. As experimental devices approach the scale of power-producing fusion reactors, the fueling requirements become increasingly more difficult to meet since, due to the large size and the high electron densities and temperatures of the plasma, hypervelocity pellets of a substantial size will need to be injected into the plasma continuously and at high repetition rates. Advanced technologies, such as the railgun pellet injector, are being developed to address this demand. Despite the apparent potential of electromagnetic launchers to produce hypervelocity projectiles, physical effects that were neither anticipated nor well understood have made it difficult to realize this potential. Therefore, it is essential to understand not only the theory behind railgun operation, but the primary loss mechanisms, as well. Analytic tools have been used by many researchers to design and optimize railguns and analyze their performance. This has led to a greater understanding of railgun behavior and opened the door for further improvement. A railgun hydrogen pellet injector model has been developed. The model is based upon a pellet equation of motion that accounts for the dominant loss mechanisms, inertial and viscous drag. The model has been validated using railgun pellet injectors developed by the Fusion Technology Research Laboratory at the University of Illinois at Urbana-Champaign.

  3. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  4. Development, validation and application of numerical space environment models

    Science.gov (United States)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction, four peer reviewed articles and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the

  5. A validation study of Lawler's expectancy model on low-level rural Black workers

    Directory of Open Access Journals (Sweden)

    N. A. Edwards

    1986-04-01

    The object of this research is to validate the expectancy theory model propagated by Lawler (1971; 1973) on low-level rural Black shop assistants in the Republic of Transkei. The criterion measure, performance, is measured by a performance appraisal instrument developed by the NIPR, and the expectancy theory components by means of a translated version of the instrument which appears in the Michigan Organizational Assessment Package Part II. For a sample of 183 shop assistants from 10 organizations, the instrument yielded reliability coefficients ranging from 0.72 to 0.84. Evidence of validity was obtained by means of correlational analysis; a multiple correlation coefficient of R² = 0.29 was obtained. Further evidence of validity was found by means of maximum likelihood path analytic procedures. Summary: The validity of Lawler's expectancy theory for rural Black shop assistants in the Republic of Transkei is examined in this study. The components of the expectancy theory were measured with a translated version of the Michigan Organizational Assessment Package (Part II), while the criterion, work performance, was rated by means of a performance appraisal questionnaire developed by the NIPR. With a sample of 183 shop assistants from 10 organizations, the motivation questionnaire yielded reliability coefficients of between 0.72 and 0.84. With the exception of the E→P component, all correlations with the criterion were significant at the 1% level and varied between 0.24 and 0.26, with a squared multiple correlation coefficient of 0.29. Sufficient positive evidence was obtained in the study to accept with reasonable certainty that the expectancy theory can be applied to low-level rural Black workers.

  6. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    Energy Technology Data Exchange (ETDEWEB)

    1983-01-01

    Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - "Double Box" Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes; Brine Content of Facility Interval Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  7. Theoretical models for Type I and Type II supernova

    Energy Technology Data Exchange (ETDEWEB)

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate ¹²C(α,γ)¹⁶O explained. Stars heavier than about 20 M☉ must explode by a "delayed" mechanism not directly related to the hydrodynamical core bounce, and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.

  8. Use of Synchronized Phasor Measurements for Model Validation in ERCOT

    Science.gov (United States)

    Nuthalapati, Sarma; Chen, Jian; Shrestha, Prakash; Huang, Shun-Hsien; Adams, John; Obadina, Diran; Mortensen, Tim; Blevins, Bill

    2013-05-01

    This paper discusses experiences in the use of synchronized phasor measurement technology in Electric Reliability Council of Texas (ERCOT) interconnection, USA. Implementation of synchronized phasor measurement technology in the region is a collaborative effort involving ERCOT, ONCOR, AEP, SHARYLAND, EPG, CCET, and UT-Arlington. As several phasor measurement units (PMU) have been installed in ERCOT grid in recent years, phasor data with the resolution of 30 samples per second is being used to monitor power system status and record system events. Post-event analyses using recorded phasor data have successfully verified ERCOT dynamic stability simulation studies. Real time monitoring software "RTDMS"® enables ERCOT to analyze small signal stability conditions by monitoring the phase angles and oscillations. The recorded phasor data enables ERCOT to validate the existing dynamic models of conventional and/or wind generator.

  9. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    Science.gov (United States)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine whether the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m × t) with m-values different from unity. This was not the case, and the reciprocity concept is therefore valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well-defined in the acceleration region studied. With the same hardware, tests were done on earth where responses occurred on clinostats. The results did not contradict the reciprocity rule, but scatter in the data was large.
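
    Under the reciprocity rule referred to above, the gravitropic dose is simply the product of the transverse acceleration and the stimulation time, so at threshold the required stimulation time scales inversely with the applied acceleration. The short sketch below illustrates that trade-off using the roughly 55 g s threshold quoted for one group of plants.

```python
# Reciprocity rule: dose = g * t.  At threshold, g * t = D_threshold, so the
# minimum stimulation time scales inversely with the applied acceleration.
D_THRESHOLD = 55.0   # g * s, approximate threshold dose quoted for one plant group

for g in (0.1, 0.3, 0.5, 1.0):
    t_needed = D_THRESHOLD / g
    print(f"{g:4.1f} g -> minimum stimulation time ~ {t_needed / 60.0:.1f} min")
```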

  10. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    Science.gov (United States)

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product.

  11. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  12. Validation of the dermal exposure model in ECETOC TRA.

    Science.gov (United States)

    Marquart, Hans; Franken, Remy; Goede, Henk; Fransman, Wouter; Schinkel, Jody

    2017-08-01

    The ECETOC TRA model (presently version 3.1) is often used to estimate worker inhalation and dermal exposure in regulatory risk assessment. The dermal model in ECETOC TRA has not yet been validated by comparison with independent measured exposure levels. This was the goal of the present study. Measured exposure levels and relevant contextual information were gathered via literature search, websites of relevant occupational health institutes and direct requests for data to industry. Exposure data were clustered in so-called exposure cases, which are sets of data from one data source that are expected to have the same values for input parameters in the ECETOC TRA dermal exposure model. For each exposure case, the 75th percentile of measured values was calculated, because the model intends to estimate these values. The input values for the parameters in ECETOC TRA were assigned by an expert elicitation and consensus building process, based on descriptions of relevant contextual information. From more than 35 data sources, 106 useful exposure cases were derived, which were used for direct comparison with the model estimates. The exposure cases covered a large part of the ECETOC TRA dermal exposure model. The model explained 37% of the variance in the 75th percentiles of measured values. In around 80% of the exposure cases, the model estimate was higher than the 75th percentile of measured values. In the remaining exposure cases, the model estimate may not be sufficiently conservative. The model was shown to have a clear bias towards (severe) overestimation of dermal exposure at low measured exposure values, while all cases of apparent underestimation by the ECETOC TRA dermal exposure model occurred at high measured exposure values. This can be partly explained by a built-in bias in the effect of concentration of substance in product used, duration of exposure and the use of protective gloves in the model. The effect of protective gloves was calculated to be on average a
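
    The core of this comparison can be sketched in a few lines: for each exposure case, the 75th percentile of the measured values is set against the model estimate, and the share of cases in which the model is at least as high (its conservatism) and the variance explained are computed. The sketch below uses NumPy with hypothetical exposure values; it is not the ECETOC TRA implementation.

        import numpy as np

        # Hypothetical measured dermal exposure values, one array per exposure case
        measured_cases = [
            np.array([0.02, 0.05, 0.11, 0.04]),
            np.array([1.3, 0.8, 2.1, 1.7, 0.9]),
            np.array([0.4, 0.6, 0.2]),
        ]
        # Hypothetical model estimates for the same cases
        model_estimates = np.array([0.30, 1.5, 0.9])

        # 75th percentile of the measurements per case -- the quantity the model intends to estimate
        p75 = np.array([np.percentile(m, 75) for m in measured_cases])

        # Share of cases in which the model estimate is at least the 75th percentile
        conservative_fraction = np.mean(model_estimates >= p75)

        # Variance in log-transformed 75th percentiles explained by the log-transformed estimates
        r = np.corrcoef(np.log10(model_estimates), np.log10(p75))[0, 1]
        print(f"model >= P75 in {conservative_fraction:.0%} of cases, R^2 = {r ** 2:.2f}")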

  13. Radiative transfer model for contaminated slabs : experimental validations

    CERN Document Server

    Andrieu, François; Schmitt, Bernard; Douté, Sylvain; Brissaud, Olivier

    2015-01-01

    This article presents a set of spectro-goniometric measurements of different water ice samples and their comparison with an approximate radiative transfer model. The experiments were done using the spectro-radiogoniometer described in Brissaud et al. (2004). The radiative transfer model assumes an isotropization of the flux after the second interface and is fully described in Andrieu et al. (2015). Two kinds of experiments were conducted. First, the specular spot was closely investigated, at high angular resolution, at the wavelength of $1.5\,\mu\mathrm{m}$, where ice behaves as a very absorbing medium. Second, the bidirectional reflectance was sampled at various geometries, including low phase angles, on 61 wavelengths ranging from $0.8\,\mu\mathrm{m}$ to $2.0\,\mu\mathrm{m}$. In order to validate the model, we made a qualitative test to demonstrate the relative isotropization of the flux. We also conducted quantitative assessments by using a Bayesian inversion method in order to estimate the parameters (e.g. sampl...
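
    A minimal sketch of this kind of Bayesian inversion, assuming a generic one-parameter forward reflectance model: the forward() function, the noise level and the grain-size parameter below are placeholders, not the model of Andrieu et al. (2015).

        import numpy as np

        wavelengths = np.linspace(0.8, 2.0, 61)      # micrometres, as sampled in the experiment

        def forward(grain_size_mm, wl):
            # Placeholder forward model: reflectance decreasing with grain size and wavelength
            return np.exp(-0.5 * grain_size_mm * (wl - 0.7))

        rng = np.random.default_rng(0)
        observed = forward(1.2, wavelengths) + rng.normal(0.0, 0.01, wavelengths.size)

        grain_grid = np.linspace(0.1, 3.0, 300)      # candidate grain sizes (mm)
        sigma = 0.01                                  # assumed measurement noise

        # Gaussian log-likelihood on a parameter grid with a flat prior
        loglike = np.array([
            -0.5 * np.sum((observed - forward(g, wavelengths)) ** 2) / sigma ** 2
            for g in grain_grid
        ])
        posterior = np.exp(loglike - loglike.max())
        posterior /= np.trapz(posterior, grain_grid)

        print(f"maximum a posteriori grain size: {grain_grid[np.argmax(posterior)]:.2f} mm")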

  14. Neuroinflammatory targets and treatments for epilepsy validated in experimental models.

    Science.gov (United States)

    Aronica, Eleonora; Bauer, Sebastian; Bozzi, Yuri; Caleo, Matteo; Dingledine, Raymond; Gorter, Jan A; Henshall, David C; Kaufer, Daniela; Koh, Sookyong; Löscher, Wolfgang; Louboutin, Jean-Pierre; Mishto, Michele; Norwood, Braxton A; Palma, Eleonora; Poulter, Michael O; Terrone, Gaetano; Vezzani, Annamaria; Kaminski, Rafal M

    2017-07-01

    A large body of evidence that has accumulated over the past decade strongly supports the role of inflammation in the pathophysiology of human epilepsy. Specific inflammatory molecules and pathways have been identified that influence various pathologic outcomes in different experimental models of epilepsy. Most importantly, the same inflammatory pathways have also been found in surgically resected brain tissue from patients with treatment-resistant epilepsy. New antiseizure therapies may be derived from these novel potential targets. An essential and crucial question is whether targeting these molecules and pathways may result in anti-ictogenesis, antiepileptogenesis, and/or disease-modification effects. Therefore, preclinical testing in models mimicking relevant aspects of epileptogenesis is needed to guide integrated experimental and clinical trial designs. We discuss the most recent preclinical proof-of-concept studies validating a number of therapeutic approaches against inflammatory mechanisms in animal models that could represent novel avenues for drug development in epilepsy. Finally, we suggest future directions to accelerate preclinical to clinical translation of these recent discoveries. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  15. Development and validation of mathematical modelling for pressurised combustion

    Energy Technology Data Exchange (ETDEWEB)

    Richter, S.; Knaus, H.; Risio, B.; Schnell, U.; Hein, K.R.G. [University of Stuttgart, Stuttgart (Germany). Inst. fuer Verfahrenstechnik und Dampfkesselwesen

    1998-12-31

    The advanced 3D-coal combustion code AIOLOS for quasi-stationary turbulent reacting flows is based on a conservative finite-volume procedure. Equations for the conservation of mass, momentum and scalar quantities are solved. In order to deal with pressurized combustion chambers which are usually of cylindrical shape, a first task within the project was the extension of the code to cylindrical co-ordinates, since the basic version of AIOLOS was only suitable for Cartesian grids. Furthermore, the domain decomposition method was extended to the new co-ordinate system. Its advantage is the possibility of introducing refined sub-grids, providing a better resolution of regions where high gradients occur (e.g. high velocity and temperature gradients near the burners). The accuracy of the code was proven by means of a small-scale test case. The results obtained with AIOLOS were compared with the predictions of the commercial CFD-code FLOW3D and validated against the velocity and temperature distributions measured at the test facility. The work during the second period focused mainly on the extension of the reaction model, as well as on the modelling of the optical properties of the flue gas. A modified submodel for char burnout was developed, considering the influence of pressure on diffusion mechanisms and on the chemical reaction at the char particle. The goal during the third project period was to improve the numerical description of turbulence effects and of the radiative heat transfer, in order to obtain an adequate modelling of the complex processes in pressurized coal combustion furnaces. Therefore, a differential Reynolds stress turbulence model (RSM) and a Discrete-Ordinates radiation model were implemented, respectively. 13 refs., 13 figs., 1 tab.

  16. Fitting the Two-Higgs-Doublet model of type II

    CERN Document Server

    Eberhardt, Otto

    2014-01-01

    We present the current status of the Two-Higgs-Doublet model of type II. Taking into account all available relevant information, we exclude at $95$% CL sizeable deviations from the so-called alignment limit, in which all couplings of the light CP-even Higgs boson $h$ are Standard-Model-like. While we can set a lower limit of $240$ GeV on the mass of the pseudoscalar Higgs boson at $95$% CL, the heavy CP-even Higgs boson $H$ can be even lighter than $200$ GeV. The strong constraints on the model parameters also set limits on the triple Higgs couplings: the $hhh$ coupling in the Two-Higgs-Doublet model of type II cannot be larger than in the Standard Model, while the $hhH$ coupling can maximally be $2.5$ times the size of the Standard Model $hhh$ coupling, assuming an $H$ mass below $1$ TeV. The selection of benchmark scenarios which maximize specific effects within the allowed regions for further collider studies is illustrated for the $H$ branching fraction to fermions and gauge bosons. As an exampl...

  17. Comparison and validation of combined GRACE/GOCE models of the Earth's gravity field

    Science.gov (United States)

    Hashemi Farahani, H.; Ditmar, P.

    2012-04-01

    Accurate global models of the Earth's gravity field are needed in various applications: in geodesy - to facilitate the production of a unified global height system; in oceanography - as a source of information about the reference equipotential surface (geoid); in geophysics - to draw conclusions about the structure and composition of the Earth's interiors, etc. A global and (nearly) homogeneous set of gravimetric measurements is being provided by the dedicated satellite mission Gravity Field and Steady-State Ocean Circulation Explorer (GOCE). In particular, Satellite Gravity Gradiometry (SGG) data acquired by this mission are characterized by an unprecedented accuracy/resolution: according to the mission objectives, they must ensure global geoid modeling with an accuracy of 1 - 2 cm at the spatial scale of 100 km (spherical harmonic degree 200). A number of new models of the Earth's gravity field have been compiled on the basis of GOCE data in the course of the last 1 - 2 years. The best of them take into account also the data from the satellite gravimetry mission Gravity Recovery And Climate Experiment (GRACE), which offers an unbeatable accuracy in the range of relatively low degrees. Such combined models contain state-of-the-art information about the Earth's gravity field up to degree 200 - 250. In the present study, we compare and validate such models, including GOCO02, EIGEN-6S, and a model compiled in-house. In addition, the EGM2008 model produced in the pre-GOCE era is considered as a reference. The validation is based on the ability of the models to: (i) predict GRACE K-Band Ranging (KBR) and GOCE SGG data (not used in the production of the models under consideration), and (ii) synthesize a mean dynamic topography model, which is compared with the CNES-CLS09 model derived from in situ oceanographic data. The results of the analysis demonstrate that the GOCE SGG data lead not only to significant improvements over continental areas with a poor coverage with

  18. Development and validation of a cisplatin dose-ototoxicity model.

    Science.gov (United States)

    Dille, Marilyn F; Wilmington, Debra; McMillan, Garnett P; Helt, Wendy; Fausti, Stephen A; Konrad-Martin, Dawn

    2012-01-01

    Cisplatin is effective in the treatment of several cancers but is a known ototoxin resulting in shifts to hearing sensitivity in up to 50-60% of patients. Cisplatin-induced hearing shifts tend to occur first within an octave of a patient's high frequency hearing limit, termed the sensitive range for ototoxicity (SRO), and progress to lower frequencies. While it is currently not possible to know which patients will experience ototoxicity without testing their hearing directly, monitoring the SRO provides an early indication of damage. A tool to help forecast susceptibility to ototoxic-induced changes in the SRO in advance of each chemotherapy treatment visit may prove useful for ototoxicity monitoring efforts, patient counseling, and therapeutic planning. This project was designed to (1) establish pretreatment risk curves that quantify the probability that a new patient will suffer hearing loss within the SRO during treatment with cisplatin and (2) evaluate the accuracy of these predictions in an independent sample of Veterans receiving cisplatin for the treatment of cancer. Two study samples were used. The Developmental sample contained 23 subjects while the Validation sample consisted of 12 subjects. Risk curve predictions for SRO threshold shifts following cisplatin exposure were developed using a Developmental sample comprised of data from a total of 155 treatment visits obtained in 45 ears of 23 Veterans. Pure-tone thresholds were obtained within each subject's SRO at each treatment visit and compared with baseline measures. The risk of incurring an SRO shift was statistically modeled as a function of factors related to chemotherapy treatment (cisplatin dose, radiation treatment, doublet medication) and patient status (age, pre-exposure hearing, cancer location and stage). The model was reduced so that only statistically significant variables were included. Receiver-operating characteristic (ROC) curve analyses were then used to determine the accuracy of the
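
    The workflow described above (fit a risk model on the Developmental sample, then check its discrimination on the independent Validation sample with ROC analysis) can be sketched as follows; the predictors, coefficients and synthetic data below are placeholders, not the study's actual variables or results.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)

        # Developmental sample: cumulative cisplatin dose (mg/m^2) and age (years), synthetic
        n = 155
        X_dev = np.column_stack([rng.uniform(50, 400, n), rng.uniform(30, 75, n)])
        # Synthetic outcome: probability of an SRO threshold shift rises with dose and age
        p_dev = 1 / (1 + np.exp(-(0.01 * X_dev[:, 0] + 0.02 * X_dev[:, 1] - 4)))
        y_dev = rng.binomial(1, p_dev)

        risk_model = LogisticRegression().fit(X_dev, y_dev)

        # Independent validation sample, generated the same way
        m = 60
        X_val = np.column_stack([rng.uniform(50, 400, m), rng.uniform(30, 75, m)])
        p_val = 1 / (1 + np.exp(-(0.01 * X_val[:, 0] + 0.02 * X_val[:, 1] - 4)))
        y_val = rng.binomial(1, p_val)

        auc = roc_auc_score(y_val, risk_model.predict_proba(X_val)[:, 1])
        print(f"validation ROC AUC: {auc:.2f}")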

  19. Validation model for Raman based skin carotenoid detection.

    Science.gov (United States)

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo.

  20. Validation of conducting wall models using magnetic measurements

    Science.gov (United States)

    Hanson, J. M.; Bialek, J.; Turco, F.; King, J.; Navratil, G. A.; Strait, E. J.; Turnbull, A.

    2016-10-01

    The impact of conducting wall eddy currents on perturbed magnetic field measurements is a key issue for understanding the measurement and control of long-wavelength MHD stability in tokamak devices. As plasma response models have grown in sophistication, the need to understand and resolve small changes in these measurements has become more important, motivating increased fidelity in simulations of externally applied fields and the wall eddy current response. In this manuscript, we describe thorough validation studies of the wall models in the mars-f and valen stability codes, using coil-sensor vacuum coupling measurements from the DIII-D tokamak (Luxon et al 2005 Fusion Sci. Technol. 48 807). The valen formulation treats conducting structures with arbitrary three-dimensional geometries, while mars-f uses an axisymmetric wall model and a spectral decomposition of the problem geometry with a fixed toroidal harmonic n. The vacuum coupling measurements have a strong sensitivity to wall eddy currents induced by time-changing coil currents, owing to the close proximities of both the sensors and coils to the wall. Measurements from individual coil and sensor channels are directly compared with valen predictions. It is found that straightforward improvements to the valen model, such as refining the wall mesh and simulating the vertical extent of the DIII-D poloidal field sensors, lead to good agreement with the experimental measurements. In addition, couplings to multi-coil, n = 1 toroidal mode perturbations are calculated from the measurements and compared with predictions from both codes. The toroidal mode comparisons favor the fully three-dimensional simulation approach, likely because this approach naturally treats n > 1 sidebands generated by the coils and wall eddy currents, as well as the n = 1 fundamental.

  1. Structure Modeling and Validation applied to Source Physics Experiments (SPEs)

    Science.gov (United States)

    Larmat, C. S.; Rowe, C. A.; Patton, H. J.

    2012-12-01

    The U. S. Department of Energy's Source Physics Experiments (SPEs) comprise a series of small chemical explosions used to develop a better understanding of seismic energy generation and wave propagation for low-yield explosions. In particular, we anticipate improved understanding of the processes through which shear waves are generated by the explosion source. Three tests, with yields of 100, 1000 and 1000 kg respectively, were detonated in the same emplacement hole and recorded on the same networks of ground motion sensors in the granites of Climax Stock at the Nevada National Security Site. We present results for the analysis and modeling of seismic waveforms recorded close-in on five linear geophone lines extending radially from ground zero, having offsets from 100 to 2000 m and station spacing of 100 m. These records exhibit azimuthal variations of P-wave arrival times, and phase velocity, spreading and attenuation properties of high-frequency Rg waves. We construct a 1D seismic body-wave model starting from a refraction analysis of P-waves and adjusting to address time-domain and frequency-domain dispersion measurements of Rg waves between 2 and 9 Hz. The shallowest part of the structure is addressed using the arrival times recorded by near-field accelerometers residing within 200 m of the shot hole. We additionally perform a 2D modeling study with the Spectral Element Method (SEM) to investigate which structural features are most responsible for the observed variations, in particular anomalously weak amplitude decay in some directions of this topographically complicated locality. We find that a near-surface, thin, weathered layer of varying thickness and low wave speeds plays a major role in shaping the observed waveforms. We anticipate performing full 3D modeling of the seismic near-field through analysis and validation of waveforms on the 5 radial receiver arrays.

  2. The Neurological Outcome Scale for Traumatic Brain Injury (NOS-TBI): II. Reliability and convergent validity.

    Science.gov (United States)

    McCauley, Stephen R; Wilde, Elisabeth A; Kelly, Tara M; Weyand, Annie M; Yallampalli, Ragini; Waldron, Eric J; Pedroza, Claudia; Schnelle, Kathleen P; Boake, Corwin; Levin, Harvey S; Moretti, Paolo

    2010-06-01

    A standardized measure of neurological dysfunction specifically designed for TBI currently does not exist and the lack of assessment of this domain represents a substantial gap. To address this, the Neurological Outcome Scale for Traumatic Brain Injury (NOS-TBI) was developed for TBI outcomes research through the addition to and modification of items specifically relevant to patients with TBI, based on the National Institutes of Health Stroke Scale. In a sample of 50 participants with TBI (mean age = 33.3 years, SD = 12.9), internal consistency of the NOS-TBI was high (Cronbach's alpha = 0.942). Test-retest reliability also was high (rho = 0.97), and agreement for the NOS-TBI total score was excellent (W = 0.995). Convergent validity was demonstrated through significant Spearman rank-order correlations between the NOS-TBI and the concurrently administered Disability Rating Scale (rho = 0.75). The NOS-TBI is a reliable and valid measure of neurological functioning in patients with moderate to severe TBI.

  3. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
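
    The specification test reduces to a containment check: the parametric transition-density estimate is retained only if it lies everywhere inside the nonparametric simultaneous confidence envelope. A schematic version is given below, with an illustrative envelope and candidate density rather than quantities estimated from data.

        import numpy as np
        from scipy.stats import norm

        grid = np.linspace(-3, 3, 200)

        # Hypothetical nonparametric simultaneous envelope for the transition density
        lower = norm.pdf(grid, loc=0.05, scale=1.0) - 0.03
        upper = norm.pdf(grid, loc=0.05, scale=1.0) + 0.03

        # Parametric candidate density (e.g. implied by a fitted diffusion or volatility model)
        parametric = norm.pdf(grid, loc=0.0, scale=1.1)

        inside = np.all((parametric >= lower) & (parametric <= upper))
        print("parametric model retained" if inside else "parametric model rejected")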

  4. Parental modelling of eating behaviours: observational validation of the Parental Modelling of Eating Behaviours scale (PARM).

    Science.gov (United States)

    Palfreyman, Zoe; Haycraft, Emma; Meyer, Caroline

    2015-03-01

    Parents are important role models for their children's eating behaviours. This study aimed to further validate the recently developed Parental Modelling of Eating Behaviours Scale (PARM) by examining the relationships between maternal self-reports on the PARM with the modelling practices exhibited by these mothers during three family mealtime observations. Relationships between observed maternal modelling and maternal reports of children's eating behaviours were also explored. Seventeen mothers with children aged between 2 and 6 years were video recorded at home on three separate occasions whilst eating a meal with their child. Mothers also completed the PARM, the Children's Eating Behaviour Questionnaire and provided demographic information about themselves and their child. Findings provided validation for all three PARM subscales, which were positively associated with their observed counterparts on the observational coding scheme (PARM-O). The results also indicate that habituation to observations did not change the feeding behaviours displayed by mothers. In addition, observed maternal modelling was significantly related to children's food responsiveness (i.e., their interest in and desire for foods), enjoyment of food, and food fussiness. This study makes three important contributions to the literature. It provides construct validation for the PARM measure and provides further observational support for maternal modelling being related to lower levels of food fussiness and higher levels of food enjoyment in their children. These findings also suggest that maternal feeding behaviours remain consistent across repeated observations of family mealtimes, providing validation for previous research which has used single observations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Checklist for the qualitative evaluation of clinical studies with particular focus on external validity and model validity

    Directory of Open Access Journals (Sweden)

    Vollmar Horst C

    2006-12-01

    Background: It is often stated that external validity is not sufficiently considered in the assessment of clinical studies. Although tools for its evaluation have been established, there is a lack of awareness of their significance and application. In this article, a comprehensive checklist is presented addressing these relevant criteria. Methods: The checklist was developed by listing the most commonly used assessment criteria for clinical studies. Additionally, specific lists for individual applications were included. The categories of biases of internal validity (selection, performance, attrition and detection bias) correspond to structural, treatment-related and observational differences between the test and control groups. Analogously, we have extended these categories to address external validity and model validity, regarding similarity between the study population/conditions and the general population/conditions related to structure, treatment and observation. Results: A checklist is presented, in which the evaluation criteria concerning external validity and model validity are systemised and transformed into a questionnaire format. Conclusion: The checklist presented in this article can be applied to both the planning and the evaluation of clinical studies. We encourage the prospective user to modify the checklists according to the respective application and research question. The higher expenditure needed for the evaluation of clinical studies in systematic reviews is justified, particularly in the light of the influential nature of their conclusions on therapeutic decisions and the creation of clinical guidelines.

  6. Prognostic models for locally advanced cervical cancer: external validation of the published models.

    Science.gov (United States)

    Lora, David; Gómez de la Cámara, Agustín; Fernández, Sara Pedraza; Enríquez de Salamanca, Rafael; Gómez, José Fermín Pérez Regadera

    2017-09-01

    To externally validate the prognostic models for predicting the time-dependent outcome in patients with locally advanced cervical cancer (LACC) who were treated with concurrent chemoradiotherapy in an independent cohort. A historical cohort of 297 women with LACC who were treated with radical concurrent chemoradiotherapy from 1999 to 2014 at the 12 de Octubre University Hospital (H12O), Madrid, Spain. The external validity of prognostic models was quantified regarding discrimination, calibration, measures of overall performance, and decision curve analyses. The review identified 8 studies containing 13 prognostic models. Cohorts that differed from the validation cohort (in International Federation of Gynecology and Obstetrics [FIGO] stages, parametrium involvement, hydronephrosis, location of positive nodes, and race) but were otherwise related (validation cohort: 5-year overall survival [OS]=70%; 5-year disease-free survival [DFS]=64%; average age of 50; and over 79% squamous cell) were evaluated. The following models exhibited good external validity in terms of discrimination and calibration but limited clinical utility: the OS model at 3 years from Kidd et al.'s study (area under the receiver operating characteristic curve [AUROC]=0.69; threshold of clinical utility [TCU] between 36% and 50%), the models of DFS at 1 year from Kidd et al.'s study (AUROC=0.64; TCU between 24% and 32%) and 2 years from Rose et al.'s study (AUROC=0.70; TCU between 19% and 58%) and the distant recurrence model at 5 years from Kang et al.'s study (AUROC=0.67; TCU between 12% and 36%). The external validation revealed the statistical and clinical usefulness of 4 prognostic models published in the literature.
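
    Discrimination and clinical utility in an external validation of this kind are commonly summarised with the AUROC and a decision-curve net benefit; a compact sketch with synthetic predicted risks and outcomes (not the H12O cohort data) follows.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        risk = rng.uniform(0, 1, 297)                 # published-model predicted risks (synthetic)
        event = rng.binomial(1, risk)                 # observed outcomes (synthetic)

        auroc = roc_auc_score(event, risk)

        def net_benefit(risk, event, threshold):
            # Decision-curve net benefit at one threshold probability (Vickers & Elkin formulation)
            treat = risk >= threshold
            tp = np.sum(treat & (event == 1))
            fp = np.sum(treat & (event == 0))
            n = len(event)
            return tp / n - fp / n * threshold / (1 - threshold)

        print(f"AUROC = {auroc:.2f}, net benefit at 30% = {net_benefit(risk, event, 0.30):.3f}")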

  7. Validation of population-based disease simulation models: a review of concepts and methods

    Directory of Open Access Journals (Sweden)

    Sharif Behnam

    2010-11-01

    Background: Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods: We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results: Evidence of model credibility derives from examining: (1) the process of model development, (2) the performance of a model, and (3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion: As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility.

  8. Development and Validation of a Needs Assessment Model Using Stakeholders Involved in a University Program.

    Science.gov (United States)

    Labrecque, Monique

    1999-01-01

    Developed a needs-assessment model and validated the model with five groups of stakeholders connected with an undergraduate university nursing program in Canada. Used focus groups, questionnaires, a hermeneutic approach, and the magnitude-estimation scaling model to validate the model. Results show that respondents must define need to clarify the…

  9. Integral Reactor Containment Condensation Model and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qiao [Oregon State Univ., Corvallis, OR (United States); Corradini, Michael [Univ. of Wisconsin, Madison, WI (United States)

    2016-05-02

    This NEUP funded project, NEUP 12-3630, is for experimental, numerical and analytical studies on high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). During the three-year investigation, following the original proposal, the planned tasks have been completed: (1) Performed a scaling study for the full pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high pressure containment for controllable condensation tests, and extended the operations at negative gage pressure conditions (OrSU). (2) Conducted a series of DBA and quasi-steady experiments using the full pressure test facility to provide a reliable high pressure condensation database (OrSU). (3) Analyzed experimental data and evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accident conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3 scaled test facility covers a large portion of laminar film flow, leading to a lower average heat transfer coefficient compared with the prototypic value. Although it is conservative in reactor safety analysis, the significant reduction of heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressure

  10. Alaska North Slope Tundra Travel Model and Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However, the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally, DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  11. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
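
    The idea of the technique can be illustrated with a toy one-dimensional transport balance: an additional diffusivity, parameterised by its values at the pedestal top and bottom, is adjusted until the predicted density profile matches a measured one. The transport relation, profile shapes and numbers below are simplified placeholders and do not reproduce the FACETS::Core/DAKOTA workflow.

        import numpy as np
        from scipy.optimize import minimize

        rho = np.linspace(0.0, 1.0, 51)          # normalised minor radius
        gamma = rho.copy()                        # prescribed (normalised) particle flux
        d_model = 0.2 + 0.05 * rho                # model diffusivity, e.g. a simplified estimate
        n_edge = 1.0                              # normalised edge density

        def predict_density(d_add_nodes):
            # Additional diffusivity acts only in the edge region (rho >= 0.85),
            # interpolated between its value at the pedestal top and bottom.
            d_add = np.where(rho >= 0.85,
                             np.interp(rho, [0.85, 1.0], np.clip(d_add_nodes, 0.0, None)),
                             0.0)
            dndrho = -gamma / (d_model + d_add)   # from Gamma = -D dn/drho
            n = np.empty_like(rho)
            n[-1] = n_edge
            for i in range(len(rho) - 2, -1, -1):  # integrate inward from the edge
                n[i] = n[i + 1] - dndrho[i + 1] * (rho[i + 1] - rho[i])
            return n

        n_measured = predict_density([0.05, 0.40])     # synthetic "experimental" profile

        def mismatch(d_add_nodes):
            return np.sum((predict_density(d_add_nodes) - n_measured) ** 2)

        result = minimize(mismatch, x0=[0.2, 0.2], method="Nelder-Mead")
        print("recovered additional diffusivity (pedestal top, bottom):", result.x)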

  12. Validating global hydrological models by ground and space gravimetry

    Institute of Scientific and Technical Information of China (English)

    ZHOU JiangCun; SUN HePing; XU JianQiao

    2009-01-01

    The long-term continuous gravity observations obtained by the superconducting gravimeters (SG) at seven globally-distributed stations are comprehensively analyzed. After removing the signals related to the Earth's tides and variations in the Earth's rotation, the gravity residuals are used to describe the seasonal fluctuations in the gravity field. Meanwhile, the gravity changes due to the air pressure loading are theoretically modeled from the measurements of the local air pressure, and those due to land water and nontidal ocean loading are also calculated according to the corresponding numerical models. The numerical results show that the gravity changes due to both the air pressure and land water loading are as large as 100×10⁻⁹ m s⁻² in magnitude, and about 10×10⁻⁹ m s⁻² for those due to the nontidal ocean loading in the coastal area. On the other hand, the monthly-averaged gravity variations over the area surrounding the stations are derived from the spherical harmonic coefficients of the GRACE-recovered gravity fields, by using a Gaussian smoothing technique in which the radius is set to be 600 km. Comparing the land-water-induced gravity variations, the SG observations (after removal of tides, polar motion effects, air pressure and nontidal ocean loading effects) and the GRACE-derived gravity variations with one another, it is inferred that both the ground- and space-based gravity observations can effectively detect the seasonal gravity variations with a magnitude of 100×10⁻⁹ m s⁻² induced by the land water loading. This implies that high precision gravimetry is an effective technique to validate the reliability of hydrological models.

  13. Active control strategy on a catenary-pantograph validated model

    Science.gov (United States)

    Sanchez-Rebollo, C.; Jimenez-Octavio, J. R.; Carnicero, A.

    2013-04-01

    Dynamic simulation methods have become essential in the design process and control of the catenary-pantograph system, especially since high-speed trains and interoperability criteria are becoming increasingly important. This paper presents an original hardware-in-the-loop (HIL) strategy aimed at integrating a multicriteria active control within the catenary-pantograph dynamic interaction. The relevance of HIL control systems applied in the frame of the pantograph is undoubtedly increasing due to the recent and more demanding requirements for high-speed railway systems. Since the loss of contact between the catenary and the pantograph leads to arcing and electrical wear, and too high contact forces cause mechanical wear of both the catenary wires and the strips of the pantograph, not only prescribed but also economic and performance criteria confirm this relevance. Different configurations of the proportional-integral-derivative (PID) controller are proposed and applied to two different plant systems. Since this paper is mainly focused on the control strategy, both plant systems are simulation models though the methodology is suitable for a laboratory bench. The control strategy involves a multicriteria optimisation of the contact force and the consumption of the energy supplied by the control force; a genetic algorithm has been applied for this purpose. Thus, the PID controller is fitted according to these conflicting objectives and tested within a nonlinear lumped model and a nonlinear finite element model, the latter being validated against the European Standard EN 50318. Finally, certain tests have been accomplished in order to analyse the robustness of the control strategy. In particular, the relevance of the plant simulation, the running speed and the instrumentation time delay are studied in this paper.

  14. Tsunami-HySEA model validation for tsunami current predictions

    Science.gov (United States)

    Macías, Jorge; Castro, Manuel J.; González-Vida, José Manuel; Ortega, Sergio

    2016-04-01

    The ability of a model to compute and predict tsunami flow velocities is of importance in risk assessment and hazard mitigation. Substantial damage can be produced by high velocity flows, particularly in harbors and bays, even when the wave height is small. Besides, an accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focused on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available to compare and to validate model results. After Tohoku 2011, many current-meter measurements were made, mainly in harbors and channels. In this work we present a part of the contribution made by the EDANYA group from the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We have selected three out of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbour (Hawaii) and the other at Tauranga Bay (New Zealand). The third one consists of laboratory experimental data for the inundation of Seaside City in Oregon. Acknowledgements: This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069) and the Spanish Government Research project DAIFLUID (MTM2012-38383-C02-01) and Universidad de Málaga, Campus de Excelencia Andalucía TECH. The GPU and multi-GPU computations were performed at the Unit of Numerical Methods (UNM) of the Research Support Central Services (SCAI) of the University of Malaga.

  15. Modeling short wave radiation and ground surface temperature: a validation experiment in the Western Alps

    Science.gov (United States)

    Pogliotti, P.; Cremonese, E.; Dallamico, M.; Gruber, S.; Migliavacca, M.; Morra di Cella, U.

    2009-12-01

    Permafrost distribution in high-mountain areas is influenced by topography (micro-climate) and high variability of ground cover conditions. Its monitoring is very difficult due to logistical problems like accessibility, costs, weather conditions and reliability of instrumentation. For these reasons physically-based modeling of surface rock/ground temperatures (GST) is fundamental for the study of mountain permafrost dynamics. With this awareness, a 1D version of the GEOtop model (www.geotop.org) is tested in several high-mountain sites and its accuracy in reproducing GST and incoming short wave radiation (SWin) is evaluated using independent field measurements. In order to describe the influence of topography, both flat and near-vertical sites with different aspects are considered. Since the validation of SWin is difficult on steep rock faces (due to the lack of direct measures) and validation of GST is difficult on flat sites (due to the presence of snow) the two parameters are validated as independent experiments: SWin only on flat morphologies, GST only on the steep ones. The main purpose is to investigate the effect of: (i) distance between driving meteo station location and simulation point location, (ii) cloudiness, (iii) simulation point aspect, (iv) winter/summer period. The temporal duration of model runs varies from 3 years for the SWin experiment to 8 years for the validation of GST. The model parameterization is constant and tuned for a common massive bedrock of crystalline rock like granite. Ground temperature profile is not initialized because rock temperature is measured at only 10 cm depth. A set of 9 performance measures is used for comparing model predictions and observations (including: fractional mean bias (FB), coefficient of residual mass (CMR), mean absolute error (MAE), modelling efficiency (ME), coefficient of determination (R2)). Results are very encouraging. For both experiments the distance (km) between location of the driving meteo
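
    Several of the performance measures listed above can be computed directly from paired observed and modelled series; the helper below uses common textbook definitions (the exact conventions adopted in the study may differ) and synthetic ground-surface-temperature values.

        import numpy as np

        obs = np.array([1.2, 0.8, -0.5, 2.4, 3.1, 4.0, 2.2])   # observed GST (degC), synthetic
        mod = np.array([1.0, 1.1, -0.2, 2.0, 3.4, 3.6, 2.5])   # modelled GST (degC), synthetic

        def fractional_bias(obs, mod):
            return 2.0 * (mod.mean() - obs.mean()) / (mod.mean() + obs.mean())

        def coefficient_of_residual_mass(obs, mod):
            return (obs.sum() - mod.sum()) / obs.sum()

        def mean_absolute_error(obs, mod):
            return np.abs(obs - mod).mean()

        def modelling_efficiency(obs, mod):
            # Nash-Sutcliffe style efficiency: 1 indicates a perfect match
            return 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def coefficient_of_determination(obs, mod):
            return np.corrcoef(obs, mod)[0, 1] ** 2

        for label, fn in [("FB", fractional_bias), ("CMR", coefficient_of_residual_mass),
                          ("MAE", mean_absolute_error), ("ME", modelling_efficiency),
                          ("R2", coefficient_of_determination)]:
            print(f"{label} = {fn(obs, mod):.3f}")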

  16. Mitigation of turbidity currents in reservoirs with passive retention systems: validation of CFD modeling

    Science.gov (United States)

    Ferreira, E.; Alves, E.; Ferreira, R. M. L.

    2012-04-01

    Sediment deposition by continuous turbidity currents may affect eco-environmental river dynamics in natural reservoirs and hinder the maneuverability of bottom discharge gates in dam reservoirs. In recent years, innovative techniques have been proposed to enforce the deposition of turbidity further upstream in the reservoir (and away from the dam), namely, the use of solid and permeable obstacles such as water jet screens, geotextile screens, etc. The main objective of this study is to validate a computational fluid dynamics (CFD) code applied to the simulation of the interaction between a turbidity current and a passive retention system, designed to induce sediment deposition. To accomplish the proposed objective, laboratory tests were conducted where a simple obstacle configuration was subjected to the passage of currents with different initial sediment concentrations. The experimental data was used to build benchmark cases to validate the 3D CFD software ANSYS-CFX. Sensitivity tests of mesh design, turbulence models and discretization requirements were performed. The validation consisted of comparing experimental and numerical results, involving instantaneous and time-averaged sediment concentrations and velocities. In general, a good agreement between the numerical and the experimental values is achieved when: (i) realistic outlet conditions are specified, (ii) channel roughness is properly calibrated, (iii) two-equation k-ε models are employed, and (iv) a fine mesh is employed near the bottom boundary. Acknowledgements This study was funded by the Portuguese Foundation for Science and Technology through the project PTDC/ECM/099485/2008. The first author thanks Professor Moitinho de Almeida from ICIST and all members of the project and of the Fluvial Hydraulics group of CEHIDRO for their assistance.

  17. A computational fluid dynamics model for wind simulation:model implementation and experimental validation

    Institute of Scientific and Technical Information of China (English)

    Zhuo-dong ZHANG; Ralf WIELAND; Matthias REICHE; Roger FUNK; Carsten HOFFMANN; Yong LI; Michael SOMMER

    2012-01-01

    To provide physically based wind modelling for wind erosion research at regional scale, a 3D computational fluid dynamics (CFD) wind model was developed. The model was programmed in C language based on the Navier-Stokes equations, and it is freely available as open source. Integrated with the spatial analysis and modelling tool (SAMT), the wind model has convenient input preparation and powerful output visualization. To validate the wind model, a series of experiments was conducted in a wind tunnel. A blocking inflow experiment was designed to test the performance of the model on simulation of basic fluid processes. A round obstacle experiment was designed to check if the model could simulate the influences of the obstacle on the wind field. Results show that measured and simulated wind fields have high correlations, and the wind model can simulate both the basic processes of the wind and the influences of the obstacle on the wind field. These results show the high reliability of the wind model. A digital elevation model (DEM) of an area (3800 m long and 1700 m wide) in the Xilingele grassland in Inner Mongolia (autonomous region, China) was applied to the model, and a 3D wind field has been successfully generated. The clear implementation of the model and the adequate validation by wind tunnel experiments laid a solid foundation for the prediction and assessment of wind erosion at regional scale.

  18. An independent verification and validation of the Future Theater Level Model conceptual model

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examinations of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not been previously reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  19. On the verification and validation of detonation models

    Science.gov (United States)

    Quirk, James

    2013-06-01

    This talk will consider the verification and validation of detonation models, such as Wescott-Stewart-Davis (Journal of Applied Physics, 2005), from the perspective of the American Institute of Aeronautics and Astronautics policy on numerical accuracy (AIAA J. Vol. 36, No. 1, 1998). A key aspect of the policy is that accepted documentation procedures must be used for journal articles with the aim of allowing the reported work to be reproduced by the interested reader. With the rise of electronic documents, since the policy was formulated, it is now possible to satisfy this mandate in its strictest sense: that is, it is now possible to run a computational verification study directly in a PDF, thereby allowing a technical author to report numerical subtleties that traditionally have been ignored. The motivation for this document-centric approach is discussed elsewhere (Quirk 2003, Adaptive Mesh Refinement Theory and Practice, Springer), leaving the talk to concentrate on specific detonation examples that should be of broad interest to the shock-compression community.

  20. Validation of body composition models for high school wrestlers.

    Science.gov (United States)

    Williford, H N; Smith, J F; Mansfield, E R; Conerly, M D; Bishop, P A

    1986-04-01

    This study investigates the utility of two equations for predicting minimum wrestling weight and three equations for predicting body density for the population of high school wrestlers. A sample of 54 wrestlers was assessed for body density by underwater weighing, residual volume by helium dilution, and selected anthropometric measures. The differences between observed and predicted responses were analyzed for the five models. Four statistical tests were used to validate the equations, including tests for the mean of differences, proportion of positive differences, equality of standard errors from regression, and equivalence of regression coefficients between original and second sample data. The Michael and Katch equation and two Forsyth and Sinning equations (FS1 and FS2) for body density did not predict as well as expected. The Michael and Katch equation tends to overpredict body density while FS1 underpredicts. The FS2 equation, consisting of a constant adjustment to FS1, predicts well near the mean but not at the ends of the sample range. The two Tcheng and Tipton equations produce estimates which slightly but consistently overpredict minimum wrestling weight, the long form equation by 2.5 pounds and the short form by 3.8 pounds. As a result, the proportion of positive differences is less than would be expected. But based on the tests for the standard errors and regression coefficients, the evidence does not uniformly reject these two equations.
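
    Two of the four validation tests mentioned above, the test on the mean of (observed - predicted) differences and the test on the proportion of positive differences, can be sketched as follows; the body-density values are synthetic placeholders rather than the wrestler sample.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        observed = rng.normal(1.07, 0.01, 54)                   # underwater-weighing body density
        predicted = observed + rng.normal(0.002, 0.008, 54)     # hypothetical equation estimates

        diff = observed - predicted

        # (1) Is the mean difference zero? (one-sample t-test)
        t_stat, p_mean = stats.ttest_1samp(diff, 0.0)

        # (2) Is the proportion of positive differences 0.5? (sign/binomial test, SciPy >= 1.7)
        n_pos = int(np.sum(diff > 0))
        p_sign = stats.binomtest(n_pos, n=len(diff), p=0.5).pvalue

        print(f"mean-difference test p = {p_mean:.3f}, sign test p = {p_sign:.3f}")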

  1. Non-residential water demand model validated with extensive measurements

    Directory of Open Access Journals (Sweden)

    E. J. Pieterse-Quirijns

    2012-08-01

    Existing guidelines related to the water demand of non-residential buildings are outdated and do not cover hot water demand for the appropriate selection of hot water devices. Moreover, they generally overestimate peak demand values required for the design of an efficient and reliable water system. Recently, a procedure was developed based on the end-use model SIMDEUM® to derive design rules for peak demand values of both cold and hot water during various time steps for several types and sizes of non-residential buildings, i.e. offices, hotels and nursing homes. In this paper, the design rules are validated with measurements of cold and hot water patterns on a per-second basis. The good correlation between the simulated patterns and the measured patterns indicates that the basis of the design rules, the SIMDEUM-simulated standardised buildings, is solid. Moreover, the SIMDEUM-based rules give a better prediction of the measured peak values for cold water flow than the existing guidelines. Furthermore, the new design rules can predict hot water use well. In this paper it is illustrated that the new design rules lead to reliable and improved designs of building installations and water heater capacity, resulting in more hygienic and economical installations.

  2. Geoid model computation and validation over Alaska/Yukon

    Science.gov (United States)

    Li, X.; Huang, J.; Roman, D. R.; Wang, Y.; Veronneau, M.

    2012-12-01

    The Alaska and Yukon area consists of very complex and dynamic geology. It features the two highest mountains in North America, Mount McKinley (20,320 ft) in Alaska, USA and Mount Logan (19,541 ft) in Yukon, Canada, along with the Alaska trench along the plate boundaries. On the one hand, this complex geology gives rise to large horizontal geoid gradients across this area. On the other hand, geoid time variation is much stronger than in most other areas of the world due to tectonic movement, post-glacial rebound and ice melting effects in this region. This type of geology poses great challenges for the determination of the North American geoid over this area, which demands proper gravity data coverage in both space and time on both the Alaska and Yukon sides. However, the coverage of the local gravity data is inhomogeneous in this area. The terrestrial gravity is sparse in Alaska, and spans a century in time. In contrast, the terrestrial gravity is relatively well-distributed in Yukon but with data gaps. In this paper, various new satellite models along with the newly acquired airborne data will be incorporated to augment the middle-to-long wavelength geoid components. Initial tests show clear geoid improvements at the local GPS benchmarks in the Yukon area after crustal motion is accounted for. Similar approaches will be employed on the Alaska side for a better validation to determine a continuous vertical datum across the US and Canada.

  3. Dimensional and hierarchical models of depression using the Beck Depression Inventory-II in an Arab college student sample

    Directory of Open Access Journals (Sweden)

    Ohaeri Jude U

    2010-07-01

    Background: An understanding of depressive symptomatology from the perspective of confirmatory factor analysis (CFA) could facilitate valid and interpretable comparisons across cultures. The objectives of the study were: (i) using the responses of a sample of Arab college students to the Beck Depression Inventory (BDI-II) in CFA, to compare the "goodness of fit" indices of the original dimensional three- and two-factor first-order models, and their modifications, with the corresponding hierarchical models (i.e., higher-order and bifactor models); (ii) to assess the psychometric characteristics of the BDI-II, including convergent/discriminant validity with the Hopkins Symptom Checklist (HSCL-25). Method: Participants (N = 624) were Kuwaiti national college students, who completed the questionnaires in class. CFA was done by AMOS, version 16. Eleven models were compared using eight "fit" indices. Results: In CFA, all the models met most "fit" criteria. While the higher-order model did not provide improved fit over the dimensional first-order factor models, the bifactor model (BFM) had the best fit indices (CMIN/DF = 1.73; GFI = 0.96; RMSEA = 0.034). All regression weights of the dimensional models were significantly different from zero. Conclusion: The broadly adequate fit of the various models indicates that they have some merit and implies that the relationship between the domains of depression probably contains hierarchical and dimensional elements. The bifactor model is emerging as the best way to account for the clinical heterogeneity of depression. The psychometric characteristics of the BDI-II lend support to our CFA results.
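
    Two of the fit indices quoted above, CMIN/DF and RMSEA, can be recovered from a model's chi-square statistic, degrees of freedom and sample size; the helper below uses the standard Steiger-Lind formula with illustrative numbers, not the reported BDI-II values.

        import math

        def cmin_df(chi2, df):
            return chi2 / df

        def rmsea(chi2, df, n):
            # Steiger-Lind RMSEA; 0 when chi-square does not exceed its degrees of freedom
            return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

        chi2, df, n = 320.4, 185, 624      # hypothetical model values for a sample of 624
        print(f"CMIN/DF = {cmin_df(chi2, df):.2f}, RMSEA = {rmsea(chi2, df, n):.3f}")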

  4. Integral Reactor Containment Condensation Model and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qiao [Oregon State Univ., Corvallis, OR (United States); Corradini, Michael [Univ. of Wisconsin, Madison, WI (United States)

    2016-05-02

    This NEUP funded project, NEUP 12-3630, is for experimental, numerical and analytical studies on high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). During the three-year investigation, following the original proposal, the planned tasks have been completed: (1) Performed a scaling study for the full pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high pressure containment for controllable condensation tests, and extended the operations at negative gage pressure conditions (OrSU). (2) Conducted a series of DBA and quasi-steady experiments using the full pressure test facility to provide a reliable high pressure condensation database (OrSU). (3) Analyzed experimental data and evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accident conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3 scaled test facility covers a large portion of laminar film flow, leading to a lower average heat transfer coefficient compared with the prototypic value. Although it is conservative in reactor safety analysis, the significant reduction of heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressure

  5. Importance of Sea Ice for Validating Global Climate Models

    Science.gov (United States)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is, which processes and observations are both robust and sensitive enough to be used for model validation, and furthermore, are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occurring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually spawn from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin, and therefore responds sensitively to changes in global climate.

  6. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    Energy Technology Data Exchange (ETDEWEB)

    Oboh, I., E-mail: innocentoboh@uniuyo.edu.ng [Department of Chemical and Petroleum Engineering, University of Uyo, Uyo (Nigeria); Aluyor, E.; Audu, T. [Department of Chemical Engineering, University of Uyo, BeninCity, BeninCity (Nigeria)

    2015-03-30

    The biosorption of Zinc (II) ions onto a biomaterial, Luffa cylindrica, has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution and scanning electron microscopy, and the biomaterial before and after sorption was characterized by a Fourier Transform Infrared (FTIR) spectrometer. The kinetic nonlinear models fitted were the pseudo-first order, pseudo-second order and intra-particle diffusion models. A comparison of non-linear regression methods for selecting the kinetic model was made. Four error functions, namely the coefficient of determination (R{sup 2}), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to estimate the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, and which occurs as a waste material, could be put into effective utilization as a biosorbent to address a crucial environmental problem.
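
    A minimal sketch of the fitting approach named in the record, assuming synthetic data: a pseudo-second-order kinetic model is fitted by non-linear regression and scored with the four error functions (R2, HYBRID, ARE, ERRSQ).

```python
# Hedged sketch: non-linear fit of a pseudo-second-order kinetic model and the
# error functions named in the record. The data points are synthetic examples.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    # qt = k2*qe^2*t / (1 + k2*qe*t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120.0])             # contact time, min
qt = np.array([4.1, 6.8, 9.7, 12.1, 13.0, 13.6, 13.9])   # uptake, mg/g (illustrative)

popt, _ = curve_fit(pseudo_second_order, t, qt, p0=[qt.max(), 0.01])
qe_fit, k2_fit = popt
q_calc = pseudo_second_order(t, *popt)

n, p = len(qt), len(popt)
residual = qt - q_calc
r2 = 1.0 - np.sum(residual**2) / np.sum((qt - qt.mean())**2)
errsq = np.sum(residual**2)                               # sum of squared errors
hybrid = 100.0 / (n - p) * np.sum(residual**2 / qt)       # hybrid fractional error
are = 100.0 / n * np.sum(np.abs(residual) / qt)           # average relative error

print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg min)")
print(f"R2 = {r2:.4f}, ERRSQ = {errsq:.3f}, HYBRID = {hybrid:.3f}, ARE = {are:.2f}%")
```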

  7. Experimental validation of Swy-2 clay standard's PHREEQC model

    Science.gov (United States)

    Szabó, Zsuzsanna; Hegyfalvi, Csaba; Freiler, Ágnes; Udvardi, Beatrix; Kónya, Péter; Székely, Edit; Falus, György

    2017-04-01

    One of the challenges of the present century is to limit greenhouse gas emissions for the mitigation of climate change, which is possible, for example, through a transitional technology, CCS (Carbon Capture and Storage), and, among others, by increasing the nuclear proportion in the energy mix. Clay minerals are considered to be responsible for the low permeability and sealing capacity of caprocks sealing off stored CO2, and they are also the main constituents of bentonite in high level radioactive waste disposal facilities. The understanding of clay behaviour in these deep geological environments is possible through laboratory batch experiments on well-known standards and coupled geochemical models. Such experimentally validated models are scarce, even though they allow deriving more precise long-term predictions of mineral reactions and rock and bentonite degradation underground, thereby ensuring the safety of the above technologies and increasing their public acceptance. This ongoing work aims to create a kinetic geochemical model of the Na-montmorillonite standard Swy-2 in the widely used PHREEQC code, supported by solution and mineral composition results from batch experiments. Several four-day experiments have been carried out at a 1:35 rock:water ratio at atmospheric conditions, and with an inert and a supercritical CO2 phase at 100 bar and 80 °C, conditions relevant for the potential Hungarian CO2 reservoir complex. Solution samples have been taken during and after the experiments and their compositions were measured by ICP-OES. The treated solid phase has been analysed by XRD and ATR-FTIR and compared to in-parallel measured references (dried Swy-2). Kinetic geochemical modelling of the experimental conditions has been performed with PHREEQC version 3 using equations and kinetic rate parameters from the USGS report of Palandri and Kharaka (2004). The visualization of the experimental and numerous modelling results has been automated with R. Experiments and models show very fast

  8. Real-time infrared signature model validation for hardware-in-the-loop simulations

    Science.gov (United States)

    Sanders, Jeffrey S.; Peters, Trina S.

    1997-07-01

    Techniques and tools for validation of real-time infrared target signature models are presented. The model validation techniques presented in this paper were developed for hardware-in-the-loop (HWIL) simulations at the U.S. Army Missile Command's Research, Development, and Engineering Center. Real-time target model validation is a required deliverable to the customer of a HWIL simulation facility and is a critical part of ensuring the fidelity of a HWIL simulation. There are two levels of real-time target model validation. The first level is comparison of the target model to some baseline or measured data, which answers the question "are the simulation inputs correct?". The second level of validation is a simulation validation, which answers the question "for a given target model input, is the simulation hardware and software generating the correct output?". This paper deals primarily with the first level of target model validation. IR target signature models have often been validated by subjective visual inspection or by objective, but limited, statistical comparisons. Subjective methods can be very satisfying to the simulation developer but offer little comfort to the simulation customer since subjective methods cannot be documented. Generic statistical methods offer a level of documentation, yet are often not robust enough to fully test the fidelity of an IR signature. Advances in infrared seeker and sensor technology have led to the necessity of system-specific target model validation. For any HWIL simulation it must be demonstrated that the sensor responds to the real-time signature model in a manner which is functionally equivalent to the sensor's response to a baseline model. Depending on the application, the baseline can be measured IR imagery or the output of a validated IR signature prediction code. Tools are described that generate validation data for HWIL simulations at MICOM and example real-time model validations are presented.

  9. Verification and Validation of the Coastal Modeling System. Report 3: CMS-Flow: Hydrodynamics

    Science.gov (United States)

    2011-12-01

    ERDC/CHL TR-11-10, December 2011. Verification and Validation of the Coastal Modeling System. Report 3, CMS-Flow: Hydrodynamics. Alejandro Sánchez, Weiming Wu ... one of four reports toward the Verification and Validation (V&V) of the Coastal Modeling System (CMS). The details of the V&V study specific to the

  10. New pediatric vision screener, part II: electronics, software, signal processing and validation.

    Science.gov (United States)

    Gramatikov, Boris I; Irsch, Kristina; Wu, Yi-Kai; Guyton, David L

    2016-02-04

    We have developed an improved pediatric vision screener (PVS) that can reliably detect central fixation, eye alignment and focus. The instrument identifies risk factors for amblyopia, namely eye misalignment and defocus. The device uses the birefringence of the human fovea (the most sensitive part of the retina). The optics have been reported in more detail previously. The present article focuses on the electronics and the analysis algorithms used. The objective of this study was to optimize the analog design, data acquisition, noise suppression techniques, the classification algorithms and the decision-making thresholds, as well as to validate the performance of the research instrument on an initial group of young test subjects: 18 patients with known vision abnormalities (eight male and 10 female), ages 4-25 (only one above 18), and 19 controls with proven lack of vision issues. Four statistical methods were used to derive decision-making thresholds that would best separate patients with abnormalities from controls. Sensitivity and specificity were calculated for each method, and the most suitable one was selected. Both the central fixation and the focus detection criteria worked robustly and allowed reliable separation between normal test subjects and symptomatic subjects. The sensitivity of the instrument was 100 % for both central fixation and focus detection. The specificity was 100 % for central fixation and 89.5 % for focus detection. The overall sensitivity was 100 % and the overall specificity was 94.7 %. Despite the relatively small initial sample size, we believe that the PVS instrument design, the analysis methods employed, and the device as a whole, will prove valuable for mass screening of children.
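
    As a hedged illustration of how such decision-making thresholds can be derived and evaluated (not the authors' four statistical methods), the sketch below sweeps a threshold over illustrative screener scores and reports sensitivity and specificity.

```python
# Hedged sketch: choosing a decision threshold and computing sensitivity and
# specificity for a screening instrument. Scores and labels are illustrative.
import numpy as np

def sens_spec(scores, labels, threshold):
    """labels: 1 = abnormal (patient), 0 = control; higher score = more abnormal."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2.0, 0.6, 18),   # 18 symptomatic subjects
                         rng.normal(0.5, 0.6, 19)])  # 19 controls
labels = np.concatenate([np.ones(18, dtype=int), np.zeros(19, dtype=int)])

# Sweep candidate thresholds and keep the one maximising sensitivity + specificity
thresholds = np.linspace(scores.min(), scores.max(), 200)
best = max(thresholds, key=lambda th: sum(sens_spec(scores, labels, th)))
se, sp = sens_spec(scores, labels, best)
print(f"threshold = {best:.2f}, sensitivity = {se:.2%}, specificity = {sp:.2%}")
```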

  11. Counterparty risk analysis using Merton's structural model under Solvency II

    Directory of Open Access Journals (Sweden)

    Luis Otero González

    2014-12-01

    Full Text Available The new solvency regulation in the European insurance sector, denominated Solvency II, will completely transform the system of capital requirements estimation. The latest quantitative impact study (QIS5) has recently been introduced, which provides the calculation method in the internal model for the determination of capital requirements. The aim of this paper is to analyze the adequacy of the calibration of counterparty credit risk by the models proposed in the recent quantitative impact reports (fourth and fifth). To do this we compare the capital requirements obtained by the two alternatives against those that result from applying a simulation model based on the structural approach. The results show that the use of probabilities based on the methodology of Merton, which can be used in an internal model, compared to those based on ratings (standard model), results in substantially higher capital requirements. In addition, the model proposed in QIS4, based on the Vasicek distribution, is not appropriate when the number of counterparties is reduced, a common situation in the European insurance sector. Moreover, the new proposal (QIS5, or Ter Berg model) is more versatile and suitable than its predecessor, but requires further research in order to improve the calibration hypothesis and, thus, to better approximate estimates to the risk actually assumed.
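
    A minimal sketch of the Merton-style structural calculation underlying the comparison, assuming illustrative firm parameters: default occurs at the horizon if the asset value falls below the debt face value, giving a default probability that could feed an internal model.

```python
# Minimal sketch of a Merton-style default probability for a counterparty:
# the firm defaults at horizon T if asset value falls below the debt face value.
# All inputs below are illustrative assumptions, not QIS calibrations.
from math import log, sqrt
from scipy.stats import norm

def merton_pd(V0, D, mu, sigma, T):
    """Real-world default probability under geometric Brownian asset dynamics."""
    d2 = (log(V0 / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm.cdf(-d2)

pd_1y = merton_pd(V0=120.0, D=100.0, mu=0.05, sigma=0.25, T=1.0)
print(f"1-year default probability: {pd_1y:.2%}")
```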

  12. ADVISHE: A new tool to report validation of health-economic decision models

    NARCIS (Netherlands)

    Vemer, P.; Corro Ramos, I.; Van Voorn, G.; Al, M.J.; Feenstra, T.L.

    2014-01-01

    Background: Modelers and reimbursement decision makers could both profit from a more systematic reporting of the efforts to validate health-economic (HE) models. Objectives: Development of a tool to systematically report validation efforts of HE decision models and their outcomes. Methods: A gross

  13. Distributed hydrological modelling of the Senegal river basin - model construction and validation

    DEFF Research Database (Denmark)

    Andersen, J.; Refsgaard, J.C.; Jensen, Karsten Høgh

    2001-01-01

    A modified version of the physically-based distributed MIKE SHE model code was applied to the 375,000 km(2) Senegal River Basin. On the basis of conventional data from meteorological stations and readily accessible databases on topography, soil types, vegetation type, etc., three models ... Further calibration against additional discharge stations improved the performance levels of the validation for the different subcatchments. Although there may be good reasons to believe that the model operating on a model grid of 4 x 4 km(2) to a large extent reflects field conditions at a scale smaller...

  14. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    Science.gov (United States)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
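
    A short sketch of the dichotomous scores used in the record (POD, FAR and the Hanssen-Kuipers discriminant HK), computed from a 2x2 rain / no-rain contingency table; the counts below are placeholders, not NIMROD statistics.

```python
# Hedged sketch: dichotomous verification scores (POD, FAR, Hanssen-Kuipers)
# from a 2x2 rain / no-rain contingency table. The counts are examples only.
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    pod = hits / (hits + misses)                      # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    hk = pod - pofd                                   # Hanssen-Kuipers discriminant
    return pod, far, hk

pod, far, hk = categorical_scores(hits=8200, false_alarms=4300,
                                  misses=1900, correct_negatives=6800)
print(f"POD = {pod:.2f}, FAR = {far:.2f}, HK = {hk:.2f}")
```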

  15. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties for each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology.

  16. Atomic Data and Spectral Models for FeII

    CERN Document Server

    Bautista, Manuel A; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R

    2015-01-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the $3d^7$, $3d^6\,4s$, and $3d^5\,4s^2$ configurations. Computations of $A$-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree--Fock, Thomas--Fermi--Dirac potential, and Dirac--Fock methods; while the $R$-matrix plus intermediate coupling frame transformation, Breit--Pauli $R$-matrix and Dirac $R$-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe...

  17. Hydrodynamical models of Type II-Plateau Supernovae

    CERN Document Server

    Bersten, Melina C; Hamuy, Mario

    2011-01-01

    We present bolometric light curves of Type II-plateau supernovae (SNe II-P) obtained using a newly developed, one-dimensional Lagrangian hydrodynamic code with flux-limited radiation diffusion. Using our code, we calculate the bolometric light curve and photospheric velocities of SN1999em, obtaining remarkably good agreement with observations despite the simplifications used in our calculation. The physical parameters used in our calculation are E=1.25 foe, M=19 M_\odot, R=800 R_\odot and M_{Ni}=0.056 M_\odot. We find that extensive mixing of 56Ni is needed in order to reproduce a plateau as flat as that shown by the observations. We also study the possibility of fitting the observations with lower values of the initial mass, consistent with upper limits that have been inferred from pre-supernova imaging of SN1999em in connection with stellar evolution models. We cannot find a set of physical parameters that reproduces the observations well for models with pre-supernova mass of \leq 12 M_\odot, although mode...

  18. Optical Observations of Meteors Generating Infrasound - II: Weak Shock Theory and Validation

    CERN Document Server

    Silber, Elizabeth A; Krzeminski, Zbigniew

    2014-01-01

    We have recorded a dataset of 24 centimeter-sized meteoroids detected simultaneously by video and infrasound to critically examine the ReVelle [1974] weak shock meteor infrasound model. We find that the effect of gravity wave perturbations to the wind field and updated absorption coefficients in the linear regime on the initial value of the blast radius (R0), which is the strongly non-linear zone of shock propagation near the body and corresponds to energy deposition per path length, is relatively small. Using optical photometry for ground-truth for energy deposition, we find that the ReVelle model accurately predicts blast radii from infrasound periods ({\\tau}), but systematically under-predicts R0 using pressure amplitude. If the weak shock to linear propagation distortion distance is adjusted as part of the modelling process we are able to self-consistently fit a single blast radius value for amplitude and period. In this case, the distortion distance is always much less (usually just a few percent) than t...

  19. Eagle II: A prototype for multi-resolution combat modeling

    Energy Technology Data Exchange (ETDEWEB)

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

    Eagle II is a prototype analytic model derived from the integration of the low resolution Eagle model with the high resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high and low fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the world-wide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and low overall cost.

  20. The ability model of emotional intelligence: Searching for valid measures

    OpenAIRE

    Fiori, M.; Antonakis, J.

    2011-01-01

    Current measures of ability emotional intelligence (EI)--including the well-known Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT)--suffer from several limitations, including low discriminant validity and questionable construct and incremental validity. We show that the MSCEIT is largely predicted by personality dimensions, general intelligence, and demographics having multiple R's with the MSCEIT branches up to .66; for the general EI factor this relation was even stronger (Multiple...

  1. Empirical Validation of Building Simulation Software : Modeling of Double Facades

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA) Task 34/43: Testing and Validation of Building Energy Simulation Tools experts group.

  2. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    Science.gov (United States)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) calibration of the seismic intensity attenuation model by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the

  3. Models of TCP in high-BDP environments and their experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vardoyan, G. [University of Massachusetts; Rao, Nageswara S [ORNL; Towlsey, D. [University of Massachusetts

    2016-01-01

    In recent years, the computer networking community has seen a steady growth in bandwidth-delay products (BDPs). Several TCP variants were created to combat the shortcomings of legacy TCP when it comes to operation in high-BDP environments. These variants, among which are CUBIC, STCP, and H-TCP, have been extensively studied in some empirical contexts, and some analytical models exist for CUBIC and STCP. However, since these studies have been conducted, BDPs have risen even more, and new bulk data transfer tools have emerged that utilize multiple parallel TCP streams. In view of these new developments, it is imperative to revisit the question: Which congestion control algorithms are best adapted to current networking environments? In order to help resolve this question, we contribute the following: (i) using first principles, we develop a general throughput-prediction framework that takes into account buffer sizes and maximum window constraints; (ii) we validate the models using measurements and achieve low prediction errors; (iii) we note differences in TCP dynamics between two experimental configurations and find one of them to be significantly more deterministic than the other; we also find that CUBIC and H-TCP outperform STCP, especially when multiple streams are used; and (iv) we present preliminary results for modelling multiple TCP streams for CUBIC and STCP.
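
    The record's CUBIC/STCP/H-TCP models are not reproduced here; as a hedged baseline of the same kind of throughput prediction, the sketch below evaluates the classical square-root steady-state TCP throughput estimate with a maximum-window cap, using illustrative path parameters.

```python
# Hedged baseline, not the record's CUBIC/STCP/H-TCP models: the classical
# square-root steady-state TCP throughput estimate, capped by the maximum
# window (a constraint such frameworks also account for). Values are examples.
def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, max_window_bytes=None):
    rate = (mss_bytes * 8) / rtt_s * (1.5 ** 0.5) / (loss_rate ** 0.5)
    if max_window_bytes is not None:
        rate = min(rate, max_window_bytes * 8 / rtt_s)   # window-limited regime
    return rate

# 10 Gbps-class path: 1460 B segments, 50 ms RTT, 1e-7 loss, 64 MB max window
rate = tcp_throughput_bps(1460, 0.05, 1e-7, max_window_bytes=64 * 2**20)
print(f"predicted throughput ~ {rate / 1e9:.2f} Gbit/s")
```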

  4. Transfer matrix modeling and experimental validation of cellular porous material with resonant inclusions.

    Science.gov (United States)

    Doutres, Olivier; Atalla, Noureddine; Osman, Haisam

    2015-06-01

    Porous materials are widely used for improving sound absorption and sound transmission loss of vibrating structures. However, their efficiency is limited to medium and high frequencies of sound. A solution for improving their low frequency behavior while keeping an acceptable thickness is to embed resonant structures such as Helmholtz resonators (HRs). This work investigates the absorption and transmission acoustic performances of a cellular porous material with a two-dimensional periodic arrangement of HR inclusions. A low frequency model of a resonant periodic unit cell based on the parallel transfer matrix method is presented. The model is validated by comparison with impedance tube measurements and simulations based on both the finite element method and a homogenization based model. At the HR resonance frequency (i) the transmission loss is greatly improved and (ii) the sound absorption of the foam can be either decreased or improved depending on the HR tuning frequency and on the thickness and properties of the host foam. Finally, the diffuse field sound absorption and diffuse field sound transmission loss performance of a 2.6 m(2) resonant cellular material are measured. It is shown that the improvements observed at the Helmholtz resonant frequency on a single cell are confirmed at a larger scale.
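
    A minimal sketch of the basic building block of such a transfer matrix calculation, under the assumption that the characteristic impedance and complex wavenumber of the host foam are already known from an equivalent-fluid model: the normal-incidence absorption coefficient of a rigid-backed layer.

```python
# Hedged sketch of a basic transfer-matrix building block: normal-incidence
# absorption of a rigid-backed equivalent-fluid (porous) layer of thickness d.
# The characteristic impedance Zc and wavenumber kc are assumed inputs here
# (they would come from an equivalent-fluid model of the foam).
import numpy as np

def absorption_rigid_backed(Zc, kc, d, rho0=1.21, c0=343.0):
    Z0 = rho0 * c0                         # characteristic impedance of air
    Zs = -1j * Zc / np.tan(kc * d)         # surface impedance, rigid backing
    R = (Zs - Z0) / (Zs + Z0)              # pressure reflection coefficient
    return 1.0 - np.abs(R) ** 2            # absorption coefficient

# Illustrative lossy layer at 1 kHz, d = 50 mm; Zc and kc are assumed values
f = 1000.0
omega = 2 * np.pi * f
Zc = 1.21 * 343.0 * (1.4 - 0.6j)           # assumed characteristic impedance
kc = omega / 343.0 * (1.6 - 0.3j)          # assumed complex wavenumber
print(f"alpha(1 kHz) = {absorption_rigid_backed(Zc, kc, 0.05):.2f}")
```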

  5. Quantitative endoscopic imaging elastic scattering spectroscopy: model system/tissue phantom validation

    Science.gov (United States)

    Lindsley, E. H.; Farkas, D. L.

    2008-02-01

    We have designed and built an imaging elastic scattering spectroscopy endoscopic instrument for the purpose of detecting cancer in vivo. As part of our testing and validation of the system, known targets representing potential disease states of interest were constructed using polystyrene beads of known average diameter and TiO2 crystals embedded in a two-layer agarose gel. Final construction geometry was verified using a dissection microscope. The phantoms were then imaged using the endoscopic probe at a known incident angle, and the results compared to model predictions. The mathematical model that was used combines classic ray-tracing optics with Mie scattering to predict the images that would be observed by the probe at a given physical distance from a Mie-regime scattering medium. This model was used to generate the expected observed response for a broad range of parameter values, and these results were then used as a library to fit the observed data from the phantoms. Compared against the theoretical library, the best matching signal correlated well with known phantom material dimensions. These results lead us to believe that imaging elastic scattering can be useful in detection/diagnosis, but further refinement of the device will be necessary to detect the weak signals in a real clinical setting.

  6. Deriving remote sensing reflectance from turbid Case II waters using green-shortwave infrared bands based model

    Science.gov (United States)

    Chen, Jun; Yin, Shoujing; Xiao, Rulin; Xu, Qianxiang; Lin, Changsong

    2014-04-01

    The objectives of this study are to validate the applicability of a shortwave infrared atmospheric correction model (SWIR-based model) in deriving remote sensing reflectance in turbid Case II waters, and to improve that model using a proposed green-shortwave infrared model (GSWIR-based model). In a GSWIR-based model, the aerosol type is determined by a SWIR-based model and the reflectance due to aerosol scattering is calculated using spectral slope technology. In this study, field measurements collected from three independent cruises from two different Case II waters were used to compare models. The results indicate that both SWIR- and GSWIR-based models can be used to derive the remote sensing reflectance at visible wavelengths in turbid Case II waters, but GSWIR-based models are superior to SWIR-based models. Using the GSWIR-based model decreases uncertainty in remote sensing reflectance retrievals in turbid Case II waters by 2.6-12.1%. In addition, GSWIR-based model’s sensitivity to user-supplied parameters was determined using the numerical method, which indicated that the GSWIR-based model is more sensitive to the uncertainty of spectral slope technology than to that of aerosol type retrieval methodology. Due to much lower noise tolerance of GSWIR-based model in the blue and near-infrared regions, the GSWIR-based model performs poorly in determining remote sensing reflectance at these wavelengths, which is consistent with the GSWIR-based model’s accuracy evaluation results.

  7. Validation of the Anhysteretic Magnetization Model for Soft Magnetic Materials with Perpendicular Anisotropy

    Directory of Open Access Journals (Sweden)

    Roman Szewczyk

    2014-07-01

    Full Text Available The paper presents results of validation of the anhysteretic magnetization model for a soft amorphous alloy with significant perpendicular anisotropy. The validation was carried out for the Jiles-Atherton model with Ramesh extension considering anisotropy. Due to the fact that it is difficult to measure anhysteretic magnetization directly, the soft magnetic core with negligible hysteresis was used. The results of validation indicate that the Jiles-Atherton model with Ramesh extension should be corrected to allow accurate modeling of the anhysteretic magnetization. The corrected model may be applied for modeling the cores of current transformers operating in a wide range of measured currents.
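
    As a hedged illustration of the anhysteretic curve being modeled (without the Ramesh anisotropy extension discussed in the record), the sketch below evaluates the isotropic Jiles-Atherton anhysteretic magnetization with the Langevin function, solved by fixed-point iteration; the material parameters are assumptions.

```python
# Hedged sketch: isotropic Jiles-Atherton anhysteretic magnetization
# M_an = Ms * L((H + alpha*M_an)/a), with L the Langevin function, solved by
# fixed-point iteration. Parameters are illustrative; the Ramesh anisotropy
# extension discussed in the record is not included.
import numpy as np

def langevin(x):
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)           # avoid division by zero in the large-x branch
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def anhysteretic(H, Ms=1.6e6, a=1100.0, alpha=1.6e-3, iterations=200):
    M = np.zeros_like(H, dtype=float)
    for _ in range(iterations):             # fixed-point iteration on M_an
        M = Ms * langevin((H + alpha * M) / a)
    return M

H = np.linspace(0.0, 5000.0, 6)             # applied field, A/m
for h, m in zip(H, anhysteretic(H)):
    print(f"H = {h:6.0f} A/m  ->  M_an = {m:.3e} A/m")
```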

  8. Comprehensive and Macrospin-Based Magnetic Tunnel Junction Spin Torque Oscillator Model- Part II: Verilog-A Model Implementation

    Science.gov (United States)

    Chen, Tingsu; Eklund, Anders; Iacocca, Ezio; Rodriguez, Saul; Malm, B. Gunnar; Akerman, Johan; Rusu, Ana

    2015-03-01

    The rapid development of the magnetic tunnel junction (MTJ) spin torque oscillator (STO) technology demands an analytical model to enable building MTJ STO-based circuits and systems so as to evaluate and utilize MTJ STOs in various applications. In Part I of this paper, an analytical model based on the macrospin approximation, has been introduced and verified by comparing it with the measurements of three different MTJ STOs. In Part II, the full Verilog-A implementation of the proposed model is presented. To achieve a reliable model, an approach to reproduce the phase noise generated by the MTJ STO has been proposed and successfully employed. The implemented model yields a time domain signal, which retains the characteristics of operating frequency, linewidth, oscillation amplitude and DC operating point, with respect to the magnetic field and applied DC current. The Verilog-A implementation is verified against the analytical model, providing equivalent device characteristics for the full range of biasing conditions. Furthermore, a system that includes an MTJ STO and CMOS RF circuits is simulated to validate the proposed model for system- and circuit-level designs. The simulation results demonstrate that the proposed model opens the possibility to explore STO technology in a wide range of applications.

  9. Validation Hydrodynamic Models of Three Topological Models of Secondary Facultative Ponds

    Directory of Open Access Journals (Sweden)

    Aponte-Reyes Alxander

    2014-10-01

    Full Text Available A methodology was developed to analyze the boundary conditions, mesh size and turbulence model of a CFD mathematical model that could explain the hydrodynamic behavior of facultative stabilization ponds (FSP) built at pilot scale: a conventional pond, CP, a baffled pond, BP, and a baffled-mesh pond, BMP. Dispersion studies were performed in the field for validation, taking samples at the inlet and outlet of the FSP; this information was used to carry out CFD simulations of the three topologies. Evaluated mesh sizes ranged from 500,000 to 2,000,000 elements. The free-slip wall boundary condition showed good qualitative behavior and the low-Reynolds-number κ–ε turbulence model yielded good results. The biomass contained in the FSP interferes with dispersion studies and should be taken into account in assessing the CFD modeling; the tracer injection times, its concentration at the inlet, the effect of wind, and the flow models adopted as a basis for modeling are parameters to be taken into account for CFD model validation and calibration.

  10. Validation of effective momentum and heat flux models for stratification and mixing in a water pool

    Energy Technology Data Exchange (ETDEWEB)

    Hua Li; Villanueva, W.; Kudinov, P. [Royal Institute of Technology (KTH), Div. of Nuclear Power Safety, Stockholm (Sweden)

    2013-06-15

    The pressure suppression pool is the most important feature of the pressure suppression system in a Boiling Water Reactor (BWR) that acts primarily as a passive heat sink during a loss of coolant accident (LOCA) or when the reactor is isolated from the main heat sink. The steam injection into the pool through the blowdown pipes can lead to short term dynamic phenomena and long term thermal transient in the pool. The development of thermal stratification or mixing in the pool is a transient phenomenon that can influence the pool's pressure suppression capacity. Different condensation regimes depending on the pool's bulk temperature and steam flow rates determine the onset of thermal stratification or erosion of stratified layers. Previously, we have proposed to model the effect of steam injection on the mixing and stratification with the Effective Heat Source (EHS) and the Effective Momentum Source (EMS) models. The EHS model is used to provide thermal effect of steam injection on the pool, preserving heat and mass balance. The EMS model is used to simulate momentum induced by steam injection in different flow regimes. The EMS model is based on the combination of (i) synthetic jet theory, which predicts effective momentum if amplitude and frequency of flow oscillations in the pipe are given, and (ii) model proposed by Aya and Nariai for prediction of the amplitude and frequency of oscillations at a given pool temperature and steam mass flux. The complete EHS/EMS models only require the steam mass flux, initial pool bulk temperature, and design-specific parameters, to predict thermal stratification and mixing in a pressure suppression pool. In this work we use EHS/EMS models implemented in containment thermal hydraulic code GOTHIC. The PPOOLEX experiments (Lappeenranta University of Technology, Finland) are utilized to (a) quantify errors due to GOTHIC's physical models and numerical schemes, (b) propose necessary improvements in GOTHIC sub-grid scale

  11. A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs

    Science.gov (United States)

    1988-12-01

    ARI Research Note 88-107: A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs. Authors: Crafts, Jennifer L.; Szenas, Philip L.; Chia, Wel... as well as ability. Project A Validity Results: Campbell (1986) and McHenry, Hough, Toquam, Hanson, and Ashworth (1987) have conducted extensive

  12. Effects of noise suppression on intelligibility. II: An attempt to validate physical metrics.

    Science.gov (United States)

    Hilkhuysen, Gaston; Gaubitch, Nikolay; Brookes, Mike; Huckvale, Mark

    2014-01-01

    Using the data presented in the accompanying paper [Hilkhuysen et al., J. Acoust. Soc. Am. 131, 531-539 (2012)], the ability of six metrics to predict intelligibility of speech in noise before and after noise suppression was studied. The metrics considered were the Speech Intelligibility Index (SII), the fractional Articulation Index (fAI), the coherence intelligibility index based on the mid-levels in speech (CSIImid), an extension of the Normalized Coherence Metric (NCM+), a part of the speech-based envelope power model (pre-sEPSM), and the Short Term Objective Intelligibility measure (STOI). Three of the measures, SII, CSIImid, and NCM+, overpredicted intelligibility after noise reduction, whereas fAI underpredicted these intelligibilities. The pre-sEPSM metric worked well for speech in babble but failed with car noise. STOI gave the best predictions, but overall the size of intelligibility prediction errors were greater than the change in intelligibility caused by noise suppression. Suggestions for improvements of the metrics are discussed.

  13. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  14. Modeling fluid dynamics on type II quantum computers

    Science.gov (United States)

    Scoville, James; Weeks, David; Yepez, Jeffrey

    2006-03-01

    A quantum algorithm is presented for modeling the time evolution of density and flow fields governed by classical equations, such as the diffusion equation, the nonlinear Burgers equation, and the damped wave equation. The algorithm is intended to run on a type-II quantum computer, a parallel quantum computer consisting of a lattice of small type I quantum computers undergoing unitary evolution and interacting via information interchanges represented by orthogonal matrices. Information is effectively transferred between adjacent quantum computers over classical communications channels because of controlled state demolition following local quantum mechanical qubit-qubit interactions within each quantum computer. The type-II quantum algorithm presented in this paper describes a methodology for generating quantum logic operations as a generalization of classical operations associated with finite-point group symmetries. The quantum mechanical evolution of multiple qubits within each node is described. Presented is a proof that the parallel quantum system obeys a finite-difference quantum Boltzmann equation at the mesoscopic scale, leading in turn to various classical linear and nonlinear effective field theories at the macroscopic scale depending on the details of the local qubit-qubit interactions.

  15. Community-wide model validation studies for systematic assessment of ionosphere-thermosphere models

    Science.gov (United States)

    Shim, Ja Soon; Kuznetsova, Maria; Rastätter, Lutz

    2016-07-01

    As an unbiased agent, the Community Coordinated Modeling Center (CCMC) has been leading community-wide model validation efforts; GEM, CEDAR and GEM-CEDAR Modeling Challenges since 2009. The CEDAR ETI (Electrodynamics Thermosphere Ionosphere) Challenge focused on the ability of ionosphere-thermosphere (IT) models to reproduce basic IT system parameters, such as electron and neutral densities, NmF2, hmF2, and Total Electron Content (TEC). Model-data time series comparisons were performed for a set of selected events with different levels of geomagnetic activity (quiet, moderate, storms). The follow-on CEDAR-GEM Challenge aims to quantify geomagnetic storm impacts on the IT system. On-going studies include quantifying the storm energy input, such as increase in auroral precipitation and Joule heating, and quantifying the storm-time variations of neutral density and TEC. In this paper, we will present lessons learned from the Modeling Challenges led by the CCMC.

  16. Modeling sleep alterations in Parkinson's disease: How close are we to valid translational animal models?

    Science.gov (United States)

    Fifel, Karim; Piggins, Hugh; Deboer, Tom

    2016-02-01

    Parkinson disease is one of the neurodegenerative diseases that benefited the most from the use of non-human models. Consequently, significant advances have been made in the symptomatic treatments of the motor aspects of the disease. Unfortunately, this translational success has been tempered by the recognition of the debilitating aspect of multiple non-motor symptoms of the illness. Alterations of the sleep/wakefulness behavior experienced as insomnia, excessive daytime sleepiness, sleep/wake cycle fragmentation and REM sleep behavior disorder are among the non-motor symptoms that predate motor alterations and inevitably worsen over disease progression. The absence of adequate humanized animal models with the perfect phenocopy of these sleep alterations contribute undoubtedly to the lack of efficient therapies for these non-motor complications. In the context of developing efficient translational therapies, we provide an overview of the strengths and limitations of the various currently available models to replicate sleep alterations of Parkinson's disease. Our investigation reveals that although these models replicate dopaminergic deficiency and related parkinsonism, they rarely display a combination of sleep fragmentation and excessive daytime sleepiness and never REM sleep behavior disorder. In this light, we critically discuss the construct, face and predictive validities of both rodent and non-human primate animals to model the main sleep abnormalities experienced by patients with PD. We conclude by highlighting the need of integrating a network-based perspective in our modeling approach of such complex syndrome in order to celebrate valid translational models.

  17. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    Science.gov (United States)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2017-03-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multimodel predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2 m-temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method which includes the bias parameter as one of the unknown parameters to be estimated from the data.
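
    A simplified, non-Bayesian illustration of the comparison at stake, assuming synthetic ensemble values: leave-one-model-out cross-validation of the constant bias (delta change) and constant relation (scaling) assumptions.

```python
# Hedged, simplified illustration (not the Bayesian framework of the record):
# leave-one-model-out cross-validation comparing a "constant bias" (delta change)
# and a "constant relation" (scaling) assumption for climate projections.
# The ensemble values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_models = 19
ctrl = 10.0 + rng.normal(0.0, 1.5, n_models)          # control-period means per model
scen = ctrl + 3.0 + rng.normal(0.0, 0.8, n_models)    # scenario-period means per model

def cv_rmse(kind):
    errors = []
    for i in range(n_models):                         # model i plays the role of "observations"
        others = np.delete(np.arange(n_models), i)
        if kind == "constant_bias":                   # delta change added to "observed" control
            pred = ctrl[i] + np.mean(scen[others] - ctrl[others])
        else:                                         # constant relation: multiplicative scaling
            pred = ctrl[i] * np.mean(scen[others] / ctrl[others])
        errors.append(pred - scen[i])
    return np.sqrt(np.mean(np.square(errors)))

for kind in ("constant_bias", "constant_relation"):
    print(f"{kind:18s} RMSE = {cv_rmse(kind):.3f} K")
```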

  18. The black hole challenge in Randall-Sundrum II model

    CERN Document Server

    Pappas, Nikolaos D

    2014-01-01

    Models postulating the existence of additional spacelike dimensions of macroscopic or even infinite size, while viewing our observable universe as merely a 3-brane living in a higher-dimensional bulk were a major breakthrough when proposed some 15 years ago. The most interesting among them both in terms of elegance of the setup and of the richness of the emerging phenomenology is the Randall-Sundrum II model where one infinite extra spacelike dimension is considered with an AdS topology, characterized by the warping effect caused by the presence of a negative cosmological constant in the bulk. A major drawback of this model is that despite numerous efforts no line element has ever been found that could describe a stable, regular, realistic black hole. Finding a smoothly behaved such solution supported by the presence of some more or less conventional fields either in the bulk and/or on the brane is the core of the black hole challenge. After a comprehensive presentation of the details of the model and the ana...

  19. Type II Supernovae: Model Light Curves and Standard Candle Relationships

    Science.gov (United States)

    Kasen, Daniel; Woosley, S. E.

    2009-10-01

    A survey of Type II supernovae explosion models has been carried out to determine how their light curves and spectra vary with their mass, metallicity, and explosion energy. The presupernova models are taken from a recent survey of massive stellar evolution at solar metallicity supplemented by new calculations at subsolar metallicity. Explosions are simulated by the motion of a piston near the edge of the iron core and the resulting light curves and spectra are calculated using full multi-wavelength radiation transport. Formulae are developed that describe approximately how the model observables (light curve luminosity and duration) scale with the progenitor mass, explosion energy, and radioactive nucleosynthesis. Comparison with observational data shows that the explosion energy of typical supernovae (as measured by kinetic energy at infinity) varies by nearly an order of magnitude—from 0.5 to 4.0 × 10^51 ergs, with a typical value of ~0.9 × 10^51 ergs. Despite the large variation, the models exhibit a tight relationship between luminosity and expansion velocity, similar to that previously employed empirically to make SNe IIP standardized candles. This relation is explained by the simple behavior of hydrogen recombination in the supernova envelope, but we find a sensitivity to progenitor metallicity and mass that could lead to systematic errors. Additional correlations between light curve luminosity, duration, and color might enable the use of SNe IIP to obtain distances accurate to ~20% using only photometric data.
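
    A hedged sketch of the standardized-candle idea referred to here: a power-law relation between plateau luminosity and photospheric expansion velocity is fitted to synthetic data points; the coefficients are not those of the paper.

```python
# Hedged sketch of the SNe IIP standardized-candle idea: fit a power-law relation
# log10(L) = a + b*log10(v / 5000 km/s) between plateau luminosity and expansion
# velocity. The data points and coefficients are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
v = rng.uniform(2500.0, 6500.0, 30)                       # km/s near day 50
logL = 42.0 + 3.0 * np.log10(v / 5000.0) + rng.normal(0.0, 0.05, 30)

b, a = np.polyfit(np.log10(v / 5000.0), logL, 1)          # least-squares fit
print(f"log10(L/erg s^-1) = {a:.2f} + {b:.2f} * log10(v / 5000 km/s)")

# Using the fitted relation as a "standard candle": predict L from a measured velocity
v_obs = 4200.0
print(f"predicted log10 L at v = {v_obs:.0f} km/s: {a + b * np.log10(v_obs / 5000.0):.2f}")
```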

  20. Large-eddy simulation of flow past urban-like surfaces: A model validation study

    Science.gov (United States)

    Cheng, Wai Chi; Porté-Agel, Fernando

    2013-04-01

    Accurate prediction of atmospheric boundary layer (ABL) flow and its interaction with urban surfaces is critical for understanding the transport of momentum and scalars within and above cities. This, in turn, is essential for predicting the local climate and pollutant dispersion patterns in urban areas. Large-eddy simulation (LES) explicitly resolves the large-scale turbulent eddy motions and, therefore, can potentially provide improved understanding and prediction of flows inside and above urban canopies. This study focuses on developing and validating an LES framework to simulate flow past urban-like surfaces. In particular, large-eddy simulations were performed of flow past an infinitely long two-dimensional (2D) building and an array of 3D cubic buildings. An immersed boundary (IB) method was employed to simulate both 2D and 3D buildings. Four subgrid-scale (SGS) models, including (i) the traditional Smagorinsky model, (ii) the Lagrangian dynamic model, (iii) the Lagrangian scale-dependent dynamic model, and (iv) the modulated gradient model, were evaluated using the 2D building case. The simulated velocity streamlines and the vertical profiles of the mean velocities and variances were compared with experimental results. The modulated gradient model shows the best overall agreement with the experimental results among the four SGS models. In particular, the flow recirculation, the reattachment position and the vertical profiles are accurately reproduced with a grid resolution of Nx × Ny × Nz = 160 × 40 × 160 (nx × nz = 13 × 16 covering the block). After validating the LES framework with the 2D building case, it was further applied to simulate a boundary-layer flow past a 3D building array. A regular aligned building array with seven rows of cubic buildings was simulated. The building spacings in the streamwise and spanwise directions were both equal to the building height. A developed turbulent boundary-layer flow was used as the incoming flow. The results were
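
    A minimal sketch of the first of the four SGS models listed, the traditional Smagorinsky model, assuming a synthetic resolved velocity field on a uniform grid: the eddy viscosity is (Cs*Delta)^2 times the magnitude of the resolved strain rate.

```python
# Hedged sketch of the traditional Smagorinsky SGS model: eddy viscosity
# nu_t = (Cs*Delta)^2 * |S| computed from a resolved velocity field on a
# uniform grid. The velocity field below is a synthetic placeholder.
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, Cs=0.16):
    dudx, dudy, dudz = np.gradient(u, dx, dx, dx)
    dvdx, dvdy, dvdz = np.gradient(v, dx, dx, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx, dx, dx)
    # Resolved strain-rate tensor S_ij = 0.5*(du_i/dx_j + du_j/dx_i)
    S11, S22, S33 = dudx, dvdy, dwdz
    S12 = 0.5 * (dudy + dvdx)
    S13 = 0.5 * (dudz + dwdx)
    S23 = 0.5 * (dvdz + dwdy)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + S33**2
                           + 2.0 * (S12**2 + S13**2 + S23**2)))
    return (Cs * dx) ** 2 * S_mag          # filter width Delta taken equal to dx

n, dx = 32, 1.0 / 32
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.sin(2 * np.pi * Y)                  # simple synthetic shear flow
v = np.zeros_like(u)
w = np.zeros_like(u)
nu_t = smagorinsky_nu_t(u, v, w, dx)
print(f"mean nu_t = {nu_t.mean():.3e}, max nu_t = {nu_t.max():.3e}")
```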

  1. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  2. Towards validation of the NanoDUFLOW nanoparticle fate model for the river Dommel, The Netherlands

    NARCIS (Netherlands)

    Klein, de J.J.M.; Quik, J.T.K.; Bauerlein, P.; Koelmans, A.A.

    2016-01-01

    It is generally acknowledged that fate models for engineered nanoparticles (ENPs) hardly can be validated, given present limitations in analytical methods available for ENPs. Here we report on progress towards validation of the spatially resolved hydrological ENP fate model NanoDUFLOW, by comparing

  3. Validation of a model to estimate personalised screening frequency to monitor diabetic retinopathy

    NARCIS (Netherlands)

    Heijden, A.A. van der; Walraven, I.; Riet, E. van 't; Aspelund, T.; Lund, S.H.; Elders, P.; Polak, B.C.P.; Moll, A.C.; Keunen, J.E.E.; Dekker, J.M.; Nijpels, G.

    2014-01-01

    AIMS/HYPOTHESIS: Our study aimed to validate a model to determine a personalised screening frequency for diabetic retinopathy. METHODS: A model calculating a personalised screening interval for monitoring retinopathy based on patients' risk profile was validated using the data of 3,319 type 2 diabet

  4. Efficient three-dimensional global models for climate studies - Models I and II

    Science.gov (United States)

    Russel, G.; Rind, D.; Lacis, A.; Travis, L.; Stone, P.; Lebedeff, S.; Ruedy, R.; Hansen, J.

    1983-01-01

    Climate modeling based on numerical solution of the fundamental equations for atmospheric structure and motion permits the explicit modeling of physical processes in the climate system and the natural treatment of interactions and feedbacks among parts of the system. The main difficulty concerning this approach is related to the computational requirements. The present investigation is concerned with the development of a grid-point model which is programmed so that both horizontal and vertical resolutions can easily be changed. Attention is given to a description of Model I, the performance of sensitivity experiments by varying parameters, the definition of an improved Model II, and a study of the dependence of climate simulation on resolution with Model II. It is shown that the major features of global climate can be simulated reasonably well with a horizontal resolution as coarse as 1000 km. Such a resolution allows the possibility of long-range climate studies with moderate computer resources.

  5. Photoionization models of the CALIFA H II regions. I. Hybrid models

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.; Sánchez, S. F.; Galbany, L.; García-Benito, R.; Husemann, B.; Marino, R. A.; Mast, D.; Roth, M. M.

    2016-10-01

    Photoionization models of H ii regions require as input a description of the ionizing spectral energy distribution (SED) and of the gas distribution, in terms of ionization parameter U and chemical abundances (e.g., O/H and N/O). A strong degeneracy exists between the hardness of the SED and U, which in turn leads to high uncertainties in the determination of the other parameters, including abundances. One way to resolve the degeneracy is to fix one of the parameters using additional information. For each of the ~20 000 sources of the CALIFA H ii regions catalog, a grid of photoionization models is computed assuming the ionizing SED to be described by the underlying stellar population obtained from spectral synthesis modeling. The ionizing SED is then defined as the sum of various stellar bursts of different ages and metallicities. This solves the degeneracy between the shape of the ionizing SED and U. The nebular metallicity (associated with O/H) is defined using the classical strong line method O3N2 (which gives our models the status of "hybrids"). The remaining free parameters are the abundance ratio N/O and the ionization parameter U, which are determined by looking for the model fitting [N ii]/Hα and [O iii]/Hβ. The models are also selected to fit [O ii]/Hβ. This process leads to a set of ~3200 models that reproduce the three observations simultaneously. We find that the regions associated with young stellar bursts (i.e., ionized by OB stars) are affected by leaking of ionizing photons, the proportion of escaping photons having a median of 80%. The set of photoionization models satisfactorily reproduces the electron temperature derived from the [O iii]λ4363/5007 line ratio. We determine new relations between the nebular parameters, like the ionization parameter U and the [O ii]/[O iii] or [S ii]/[S iii] line ratios. A new relation between N/O and O/H is obtained, mostly compatible with previous empirical determinations (and not with previous results obtained
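
    A hedged sketch of the model-selection logic described above: from a precomputed grid of photoionization models, keep those that simultaneously reproduce the observed [N ii]/Hα, [O iii]/Hβ and [O ii]/Hβ ratios within a tolerance. The grid structure, key names and tolerance are hypothetical, and the snippet only illustrates the fitting step, not the CALIFA pipeline.

```python
import numpy as np

def select_models(grid, obs, tol=0.05):
    """Keep grid models whose predicted line ratios match the observed ones.

    grid : dict of 1-D arrays, one entry per model, with hypothetical keys
           'n2_ha', 'o3_hb', 'o2_hb' (log line ratios) plus parameters
           such as 'logU' and 'NO'.
    obs  : dict holding the same three observed log line ratios.
    tol  : allowed absolute difference (dex) for each ratio.
    """
    ok = np.ones_like(grid['n2_ha'], dtype=bool)
    for key in ('n2_ha', 'o3_hb', 'o2_hb'):
        ok &= np.abs(grid[key] - obs[key]) < tol
    return {k: v[ok] for k, v in grid.items()}
```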

  6. Determination of validation threshold for coordinate measuring methods using a metrological compatibility model

    Science.gov (United States)

    Gromczak, Kamila; Gąska, Adam; Kowalski, Marek; Ostrowska, Ksenia; Sładek, Jerzy; Gruza, Maciej; Gąska, Piotr

    2017-01-01

    The following paper presents a practical approach to the validation process of coordinate measuring methods at an accredited laboratory, using a statistical model of metrological compatibility. The statistical analysis of measurement results obtained using a highly accurate system was intended to determine the permissible validation threshold values. The threshold value constitutes the primary criterion for the acceptance or rejection of the validated method, and depends on both the differences between measurement results with corresponding uncertainties and the individual correlation coefficient. The article specifies and explains the types of measuring methods that were subject to validation and defines the criterion value governing their acceptance or rejection in the validation process.
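
    The acceptance criterion behind such a threshold can be sketched with the standard notion of metrological compatibility: two results x1 ± u1 and x2 ± u2 with correlation coefficient r are compatible when their difference does not exceed a chosen multiple k of the standard uncertainty of that difference. The function below is a generic illustration with assumed names and an assumed k; the paper's actual threshold values come from its own statistical model.

```python
from math import sqrt

def metrologically_compatible(x1, u1, x2, u2, r=0.0, k=2.0):
    """Check |x1 - x2| <= k * u(x1 - x2) for two correlated measurement results.

    u1, u2 : standard uncertainties of the two results
    r      : correlation coefficient between the results
    k      : coverage factor defining the acceptance threshold (assumed)
    """
    u_diff = sqrt(u1**2 + u2**2 - 2.0 * r * u1 * u2)
    return abs(x1 - x2) <= k * u_diff

# Example: candidate method vs. reference measurement (hypothetical values, mm)
print(metrologically_compatible(10.003, 0.002, 10.000, 0.001, r=0.3))
```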

  7. Equilibrium modeling of mono and binary sorption of Cu(II) and Zn(II) onto chitosan gel beads

    Directory of Open Access Journals (Sweden)

    Nastaj Józef

    2016-12-01

    Full Text Available The objectives of the work are in-depth experimental studies of Cu(II) and Zn(II) ion removal on chitosan gel beads from both one- and two-component water solutions at the temperature of 303 K. The optimal process conditions, such as pH value, dose of sorbent and contact time, were determined. Based on the optimal process conditions, equilibrium and kinetic studies were carried out. The maximum sorption capacities equaled 191.25 mg/g and 142.88 mg/g for Cu(II) and Zn(II) ions respectively, when the sorbent dose was 10 g/L and the pH of the solution was 5.0 for both heavy metal ions. One-component sorption equilibrium data were successfully described by six of the most useful three-parameter equilibrium models: Langmuir-Freundlich, Redlich-Peterson, Sips, Koble-Corrigan, Hill and Toth. Extended forms of the Langmuir-Freundlich, Koble-Corrigan and Sips models were also well fitted to the two-component equilibrium data obtained for different ratios of concentrations of Cu(II) and Zn(II) ions (1:1, 1:2, 2:1). Experimental sorption data were described by two kinetic models of the pseudo-first and pseudo-second order. Furthermore, an attempt to explain the mechanisms of the divalent metal ion sorption process on chitosan gel beads was undertaken.
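
    The three-parameter isotherms listed above are typically fitted to one-component equilibrium data by nonlinear least squares. The sketch below fits the Langmuir-Freundlich (Sips) form q = qm*(K*C)^n / (1 + (K*C)^n) with SciPy; the data points and starting values are hypothetical and only illustrate the fitting step, not the study's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_freundlich(c, qm, k, n):
    """Langmuir-Freundlich (Sips) isotherm: q = qm*(k*c)**n / (1 + (k*c)**n)."""
    return qm * (k * c) ** n / (1.0 + (k * c) ** n)

# Hypothetical equilibrium data: liquid-phase concentration (mg/L) vs. uptake (mg/g)
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_eq = np.array([35.0, 62.0, 110.0, 150.0, 178.0, 189.0])

popt, pcov = curve_fit(langmuir_freundlich, c_eq, q_eq, p0=[190.0, 0.02, 1.0])
qm_fit, k_fit, n_fit = popt
print(f"qm = {qm_fit:.1f} mg/g, K = {k_fit:.3g} L/mg, n = {n_fit:.2f}")
```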

  8. An Experimentally Validated SOA Model for High-Bit Rate System Applications

    Institute of Scientific and Technical Information of China (English)

    Hasan I. Saleheen

    2003-01-01

    A comprehensive model of the Semiconductor Optical Amplifier, together with experimental validation results, is presented. The model accounts for the various physical behaviors of the device that must be captured for high bit-rate system applications.

  9. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Science.gov (United States)

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-03-13

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
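
    A minimal sketch of the calibrate-then-validate workflow for a Gompertz growth law, assuming the deterministic form V(t) = V0*exp((A/B)*(1 - exp(-B*t))): synthetic noisy measurements are fitted by nonlinear least squares and the calibrated model is checked against held-out time points. This is only an illustration; it is not the Bayesian treatment developed in the tutorial, and all values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, a, b):
    """Deterministic Gompertz growth: V(t) = v0 * exp((a/b) * (1 - exp(-b*t)))."""
    return v0 * np.exp((a / b) * (1.0 - np.exp(-b * t)))

rng = np.random.default_rng(1)
t_all = np.linspace(0.0, 20.0, 21)                      # days (hypothetical)
v_true = gompertz(t_all, v0=0.05, a=0.6, b=0.15)        # mm^3 (hypothetical)
v_obs = v_true * (1.0 + 0.05 * rng.standard_normal(t_all.size))  # 5 % noise

# Calibrate on the first 14 time points, validate on the remaining ones
train, test = slice(0, 14), slice(14, None)
popt, _ = curve_fit(gompertz, t_all[train], v_obs[train], p0=[0.1, 0.5, 0.1])

pred = gompertz(t_all[test], *popt)
rel_err = np.abs(pred - v_obs[test]) / v_obs[test]
print("max relative error on held-out points:", rel_err.max())
```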

  10. Social Validity of a Positive Behavior Interventions and Support Model

    Science.gov (United States)

    Miramontes, Nancy Y.; Marchant, Michelle; Heath, Melissa Allen; Fischer, Lane

    2011-01-01

    As more schools turn to positive behavior interventions and support (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were…

  11. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available 5147, 4 February 1994, pp. 641-646, Elsevier. [6] D.E. Thompson, Verification, Validation, and Solution Quality in Computational Physics: CFD Methods applied to Ice Sheet Physics, NASA Ames Research Center, Mail Stop 269-1, Moffett Field, CA 94035-1000. [7] I...

  12. Infrared ship signature prediction, model validation and sky radiance

    NARCIS (Netherlands)

    Neele, F.P.

    2005-01-01

    The increased interest during the last decade in the infrared signature of (new) ships results in a clear need of validated infrared signature prediction codes. This paper presents the results of comparing an in-house developed signature prediction code with measurements made in the 3-5 μm band in b

  13. Reliability and validation of a behavioral model of clinical behavioral formulation

    Directory of Open Access Journals (Sweden)

    Amanda M Muñoz-Martínez

    2011-05-01

    Full Text Available The aim of this study was to determine the reliability and the content and predictive validity of a clinical case formulation developed from a behavioral perspective. A mixed design integrating levels of descriptive analysis and an A-B case study with follow-up was used. The study established the reliability of the following descriptive and explanatory categories: (a) problem description, (b) predisposing factors, (c) precipitating factors, (d) acquisition and (e) inferred mechanism (maintenance). The analysis was performed on cases from 2005 to 2008 formulated with the model derived from the current study. With regard to validity, expert judges considered that the model had content validity. The predictive validity was established through application of the model to three case studies. The discussion shows the importance of extending the investigation with the model to other populations and of establishing the clinical and concurrent validity of the model.

  14. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    Science.gov (United States)

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included.

  15. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; Ciesielski, Peter; Nimlos, Mark R.; Robichaud, David J.

    2017-01-16

    In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and in the RTD mean and spread. For the simulated cases, it was found that, for accurate RTD prediction, the Johnson and Jackson partial-slip solids boundary condition was required for all models, and that a sub-grid model is useful because it avoids the need for very computationally intensive ultra-high-resolution grids. We discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
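
    The RTD mean and spread mentioned above are the first two moments of the exit-age distribution. The sketch below computes them from an exit tracer (or tagged-particle) concentration curve; the time grid and concentrations are hypothetical, and the snippet only illustrates the post-processing step, not the two-fluid simulation itself.

```python
import numpy as np

def rtd_moments(t, c):
    """Mean residence time and spread from an exit tracer curve c(t).

    E(t) = c / integral(c dt); mean = integral(t*E dt);
    spread = sqrt(integral((t - mean)^2 * E dt)).
    """
    area = np.trapz(c, t)
    e = c / area                        # normalized exit-age distribution
    mean = np.trapz(t * e, t)
    var = np.trapz((t - mean) ** 2 * e, t)
    return mean, np.sqrt(var)

# Hypothetical tracer response sampled at the riser outlet
t = np.linspace(0.0, 20.0, 201)         # s
c = t * np.exp(-t / 2.0)                # arbitrary gamma-like pulse response
print(rtd_moments(t, c))
```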

  16. Prevalence of depression and validation of the Beck Depression Inventory-II and the Children's Depression Inventory-Short amongst HIV-positive adolescents in Malawi

    Directory of Open Access Journals (Sweden)

    Maria H Kim

    2014-07-01

    Full Text Available Introduction: There is a remarkable dearth of evidence on mental illness in adolescents living with HIV/AIDS, particularly in the African setting. Furthermore, there are few studies in sub-Saharan Africa validating the psychometric properties of diagnostic and screening tools for depression amongst adolescents. The primary aim of this cross-sectional study was to estimate the prevalence of depression amongst a sample of HIV-positive adolescents in Malawi. The secondary aim was to develop culturally adapted Chichewa versions of the Beck Depression Inventory-II (BDI-II) and Children's Depression Inventory-II-Short (CDI-II-S) and conduct a psychometric evaluation of these measures by evaluating their performance against a structured depression assessment using the Children's Rating Scale, Revised (CDRS-R). Study design: Cross-sectional study. Methods: We enrolled 562 adolescents, 12–18 years of age, from two large public HIV clinics in central and southern Malawi. Participants completed two self-reports, the BDI-II and CDI-II-S, followed by administration of the CDRS-R by trained clinicians. Sensitivity, specificity and positive and negative predictive values for various BDI-II and CDI-II-S cut-off scores were calculated with receiver operating characteristics analysis. The area under the curve (AUC) was also calculated. Internal consistency was measured by the standardized Cronbach's alpha coefficient, and correlation between the self-reports and the CDRS-R by Spearman's correlation. Results: Prevalence of depression as measured by the CDRS-R was 18.9%. Suicidal ideation was expressed by 7.1% (40) using the BDI-II. The AUC for the BDI-II was 0.82 (95% CI 0.78–0.89) and for the CDI-II-S was 0.75 (95% CI 0.70–0.80). A score of ≥13 on the BDI-II achieved sensitivity of >80%, and a score of ≥17 had a specificity of >80%. The Cronbach's alpha was 0.80 (BDI-II) and 0.66 (CDI-II-S). The correlation between the BDI-II and CDRS-R was 0.42 (p<0.001) and between the CDI-II
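
    A brief sketch of the cut-off analysis described above: sensitivity and specificity are computed for each candidate BDI-II cut-off against a reference diagnosis, which is the basis of the receiver operating characteristics analysis. The scores and diagnoses below are synthetic and the names are hypothetical; the snippet only illustrates the calculation, not the study's data.

```python
import numpy as np

def sens_spec(scores, diagnosed, cutoff):
    """Sensitivity and specificity of 'score >= cutoff' against a reference diagnosis."""
    positive = scores >= cutoff
    tp = np.sum(positive & diagnosed)
    fn = np.sum(~positive & diagnosed)
    tn = np.sum(~positive & ~diagnosed)
    fp = np.sum(positive & ~diagnosed)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
diagnosed = rng.random(500) < 0.19                     # ~19 % prevalence (synthetic)
scores = np.where(diagnosed,
                  rng.normal(20, 6, 500),              # scores in the depressed group
                  rng.normal(8, 5, 500)).round().clip(0, 63)

for cutoff in (13, 17, 21):
    se, sp = sens_spec(scores, diagnosed, cutoff)
    print(f"cutoff >= {cutoff}: sensitivity {se:.2f}, specificity {sp:.2f}")
```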

  17. Validation of the point kinetic neutronic model of the PBMR / Deon Marais

    OpenAIRE

    Marais, Deon

    2007-01-01

    This study introduces a new method for the validation of the point kinetic neutronic model of the PBMR. In this study the diffusion equation solution, as implemented in the TINTE PBMR 268 MW reactor model, replaces the point kinetic model, as implemented in the Flownex V502 PBMR plant model. An indirect coupling method is devised and implemented in an external program called Flownex-Tinte-Interface (FTI) to facilitate the data exchange between these two codes. The validation...

  18. Extending modal testing technology for model validation of engineering structures with sparse nonlinearities: A first case study

    Science.gov (United States)

    delli Carri, Arnaldo; Weekes, B.; Di Maio, Dario; Ewins, D. J.

    2017-02-01

    Modal testing is widely used today as a means of validating theoretical (Finite Element) models for the dynamic analysis of engineering structures, prior to these models being used for optimisation of product design. Current model validation methodology is confined to linear models and is primarily concerned with (i) correcting inaccurate model parameters and (ii) ensuring that sufficient elements are included for these cases, using measured data. Basic experience is that this works quite well, largely because the weaknesses in the models are relatively sparse and, as a result, are usually identifiable and correctable. The current state-of-the-art in linear model validation has contributed to an awareness that residual errors in FE models are increasingly the consequence of some unrepresented nonlinearity in the structure. In these cases, additional, higher order parameters are required to improve the model so that it can represent the nonlinear behaviour. This is opposed to the current practice of simply refining the mesh. Again, these nonlinear features are generally localised, and are often associated with joints. We seek to provide a procedure for extending existing modal testing to enable these nonlinear elements to be addressed using current nonlinear identification methods directed at detection, characterisation, location and then quantification - in order to enhance the elements in an FE model as necessary to describe nonlinear dynamic behaviour. Emphasis is placed on the outcome of these extended methods to relate specifically to the physical behaviour of the relevant components of the structure, rather than to the nonlinear response characteristics that are the result of their presence.

  19. High-speed AMB machining spindle model updating and model validation

    Science.gov (United States)

    Wroblewski, Adam C.; Sawicki, Jerzy T.; Pesch, Alexander H.

    2011-04-01

    High-Speed Machining (HSM) spindles equipped with Active Magnetic Bearings (AMBs) have been envisioned to be capable of automated self-identification and self-optimization in efforts to accurately calculate parameters for stable high-speed machining operation. With this in mind, this work presents rotor model development accompanied by automated model-updating methodology followed by updated model validation. The model updating methodology is developed to address the dynamic inaccuracies of the nominal open-loop plant model when compared with experimental open-loop transfer function data obtained by the built in AMB sensors. The nominal open-loop model is altered by utilizing an unconstrained optimization algorithm to adjust only parameters that are a result of engineering assumptions and simplifications, in this case Young's modulus of selected finite elements. Minimizing the error of both resonance and anti-resonance frequencies simultaneously (between model and experimental data) takes into account rotor natural frequencies and mode shape information. To verify the predictive ability of the updated rotor model, its performance is assessed at the tool location which is independent of the experimental transfer function data used in model updating procedures. Verification of the updated model is carried out with complementary temporal and spatial response comparisons substantiating that the updating methodology is effective for derivation of open-loop models for predictive use.
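
    The updating step described above can be sketched as a small optimization: stiffness-like parameters (standing in for the Young's modulus of selected finite elements) are adjusted so that the model's resonance and anti-resonance frequencies match measured values in a least-squares sense. The two-degree-of-freedom model, target frequencies and parameter names below are hypothetical; the snippet illustrates the updating logic only, not the actual spindle model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "measured" frequencies (Hz): two resonances and one anti-resonance
f_meas = np.array([180.0, 450.0, 310.0])

def model_frequencies(e_scale):
    """Resonance and anti-resonance frequencies of a toy 2-DOF chain model.

    The two stiffnesses are scaled by e_scale, mimicking adjustment of the
    Young's modulus of selected elements.
    """
    k1, k2 = 2.0e7 * e_scale[0], 3.5e7 * e_scale[1]     # N/m, scaled stiffnesses
    m1, m2 = 4.0, 2.5                                    # kg, fixed masses
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag([m1, m2])
    omega2 = np.linalg.eigvals(np.linalg.solve(M, K))
    f_res = np.sort(np.sqrt(np.abs(omega2))) / (2 * np.pi)
    f_anti = np.array([np.sqrt(k2 / m2) / (2 * np.pi)])  # drive-point anti-resonance at mass 1
    return np.concatenate([f_res, f_anti])

def objective(e_scale):
    """Relative least-squares error between model and measured frequencies."""
    return np.sum(((model_frequencies(e_scale) - f_meas) / f_meas) ** 2)

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("updated stiffness scale factors:", result.x)
```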

  20. TIME-IGGCAS model validation:Comparisons with empirical models and observations

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The TIME-IGGCAS (Theoretical Ionospheric Model of the Earth in Institute of Geology and Geophysics, Chinese Academy of Sciences) has been developed recently on the basis of previous works. To test its validity, we have made comparisons of model results with other typical empirical ionospheric models (IRI, NeQuick-ITUR, and Titheridge temperature models) and multi-observations (GPS, Ionosondes, Topex, DMSP, FORMOSAT, and CHAMP) in this paper. Several conclusions are obtained from our comparisons. The modeled electron density and electron and ion temperatures are quantitatively in good agreement with those of empirical models and observations. TIME-IGGCAS can model the electron density variations versus several factors such as local time, latitude, and season very well and can reproduce most anomalistic features of the ionosphere, including the equatorial anomaly, winter anomaly, and semiannual anomaly. These results imply a good basis for the development of an ionospheric data assimilation model in the future. TIME-IGGCAS underestimates electron temperature and overestimates ion temperature in comparison with either empirical models or observations. The model results have relatively large deviations near sunrise time and sunset time and at low altitudes. These results give us a reference to improve the model and enhance its performance in the future.

  1. Validations of Computational Weld Models: Comparison of Residual Stresses

    Science.gov (United States)

    2010-08-01

    to validate the ability of the computer model to calculate the residual stresses in structures repaired by this type of overlay welding... computer model. The hardness measurements of the plate suggest that this property varies greatly in space, i.e., that it is

  2. Linear Logic Validation and Hierarchical Modeling for Interactive Storytelling Control

    OpenAIRE

    Dang, Kim Dung; Pham, Phuong Thao; Champagnat, Ronan; Rabah, Mourad

    2013-01-01

    International audience; Games are typical interactive applications where the system has to react to user actions and behavior with respect to predefined rules established by the designer. Storytelling allows the interactive system to unfold the scenario of the game story according to these inputs and constraints. In order to improve the system's behavior, the scenario should be structured and the system's control should be validated. In this paper, we deal with these two issues. We f...

  3. Are the binary typology models of alcoholism valid in polydrug abusers?

    Directory of Open Access Journals (Sweden)

    Samuel Pombo

    2015-03-01

    Full Text Available Objective: To evaluate the dichotomy of type I/II and type A/B alcoholism typologies in opiate-dependent patients with a comorbid alcohol dependence problem (ODP-AP). Methods: The validity assessment process comprised the information regarding the history of alcohol use (internal validity), cognitive-behavioral variables regarding substance use (external validity), and indicators of treatment during 6-month follow-up (predictive validity). Results: ODP-AP subjects classified as type II/B presented an early and much more severe drinking problem and a worse clinical prognosis when considering opiate treatment variables as compared with ODP-AP subjects defined as type I/A. Furthermore, type II/B patients endorse more general positive beliefs and expectancies related to the effect of alcohol and tend to drink heavily across several intra- and interpersonal situations as compared with type I/A patients. Conclusions: These findings confirm two different forms of alcohol dependence, recognized as a low-severity/vulnerability subgroup and a high-severity/vulnerability subgroup, in an opiate-dependent population with a lifetime diagnosis of alcohol dependence.

  4. Some guidance on preparing validation plans for the DART Full System Models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy (Sandia National Laboratories, Albuquerque, NM)

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  5. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--7.

    Science.gov (United States)

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well the model reproduces reality). This report describes recommendations for achieving transparency and validation developed by a taskforce appointed by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. Recommendations were developed iteratively by the authors. A nontechnical description--including model type, intended applications, funding sources, structure, intended uses, inputs, outputs, other components that determine function, and their relationships, data sources, validation methods, results, and limitations--should be made available to anyone. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing the same problem), external validity (comparing model results with real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this article contains a number of recommendations that were iterated among the authors, as well as among the wider modeling taskforce, jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making.

  6. A framework for computer model validation with a stochastic traffic microsimulator as test-bed

    Energy Technology Data Exchange (ETDEWEB)

    Sacks, J.; Rouphail, N. M.; Park, B. B.

    2001-07-01

    The validation of computer (simulation) models is a crucial element in assessing their utility for science and for policy-making. Often discussed and sometimes practiced informally, the process is straightforward conceptually: data are collected that represent both the inputs and the outputs of the model, the model is run at those inputs, and the output is compared to field data. In reality, complications abound: field data may be expensive, scarce or noisy; the model may be so complex that only a few runs are possible; and uncertainty enters the process at every turn. Even though it is inherently a statistical issue, model validation lacks a unifying statistical framework. The need to develop such a framework is compelling, even urgent. The use of computer models by scientists and planners is growing, the costs of poor decisions are escalating, and increasing computing power, for both computation and data collection, is magnifying the scale of the issues. Building a framework for validation of computer models requires an assortment of procedures and considerations and recognition of the multiple stages in the development and use of the models. Verification, which encompasses procedures to assure that the computer code is bug-free, is often seen as a predecessor of validation whereas, in fact, it may be enmeshed with validation. Feedback from outcomes of steps in the validation process can impact model development through detection of flaws or gaps; the result is an intertwining of validation with development. We will focus on the five essential characteristics of a validation: context, data, uncertainty, feedback, and prediction, and use a traffic microsimulator model applied to the planning of traffic signal timing as a test-bed. Our goal is to draw attention to the many complexities that need to be considered in order to achieve a successful validation. (Author) 3 refs.

  7. Radiation-hydrodynamical modelling of underluminous type II plateau Supernovae

    CERN Document Server

    Pumo, M L; Spiro, S; Pastorello, A; Benetti, S; Cappellaro, E; Manicò, G; Turatto, M

    2016-01-01

    With the aim of improving our knowledge about the nature of the progenitors of low-luminosity Type II plateau supernovae (LL SNe IIP), we made radiation-hydrodynamical models of the well-sampled LL SNe IIP 2003Z, 2008bk and 2009md. For these three SNe we infer explosion energies of 0.16-0.18 foe, radii at explosion of 1.8-3.5 × 10^13 cm, and ejected masses of 10-11.3 M⊙. The estimated progenitor mass on the main sequence is in the range ~13.2-15.1 M⊙ for SN 2003Z and ~11.4-12.9 M⊙ for SNe 2008bk and 2009md, in agreement with estimates from observations of the progenitors. These results, together with those for other LL SNe IIP modelled in the same way, enable us also to conduct a comparative study on this SN sub-group. The results suggest that: a) the progenitors of faint SNe IIP are slightly less massive and have less energetic explosions than those of intermediate-luminosity SNe IIP, b) both faint and intermediate-luminosity SNe IIP originate from low-energy explo...

  8. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7.

    Science.gov (United States)

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well it reproduces reality). This report describes recommendations for achieving transparency and validation, developed by a task force appointed by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM). Recommendations were developed iteratively by the authors. A nontechnical description should be made available to anyone-including model type and intended applications; funding sources; structure; inputs, outputs, other components that determine function, and their relationships; data sources; validation methods and results; and limitations. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing same problem), external validity (comparing model results to real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this paper contains a number of recommendations that were iterated among the authors, as well as the wider modeling task force jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making.

  9. Rorschach score validation as a model for 21st-century personality assessment.

    Science.gov (United States)

    Bornstein, Robert F

    2012-01-01

    Recent conceptual and methodological innovations have led to new strategies for documenting the construct validity of test scores, including performance-based test scores. These strategies have the potential to generate more definitive evidence regarding the validity of scores derived from the Rorschach Inkblot Method (RIM) and help resolve some long-standing controversies regarding the clinical utility of the Rorschach. After discussing the unique challenges in studying the Rorschach and why research in this area is important given current trends in scientific and applied psychology, I offer 3 overarching principles to maximize the construct validity of RIM scores, arguing that (a) the method that provides RIM validation measures plays a key role in generating outcome predictions; (b) RIM variables should be linked with findings from neighboring subfields; and (c) rigorous RIM score validation includes both process-focused and outcome-focused assessments. I describe a 4-step strategy for optimal RIM score derivation (formulating hypotheses, delineating process links, generating outcome predictions, and establishing limiting conditions); and a 4-component template for RIM score validation (establishing basic psychometrics, documenting outcome-focused validity, assessing process-focused validity, and integrating outcome- and process-focused validity data). The proposed framework not only has the potential to enhance the validity and utility of the RIM, but might ultimately enable the RIM to become a model of test score validation for 21st-century personality assessment.

  10. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    Energy Technology Data Exchange (ETDEWEB)

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  11. Assessing Inter-Model Continuity Between the Section II and Section III Conceptualizations of Borderline Personality Disorder in DSM-5.

    Science.gov (United States)

    Evans, Chloe M; Simms, Leonard J

    2017-03-02

    DSM-5 includes 2 competing models of borderline personality disorder (BPD) in Sections II and III. Empirical comparisons between these models are required to understand and improve intermodel continuity. We compared Section III BPD traits to Section II BPD criteria assessed via semistructured interviews in 455 current/recent psychiatric patients using correlation and regression analyses, and also evaluated the incremental predictive power of other Section III traits. In addition, we tested the hypothesis that self-harm would incrementally predict BPD Criterion 5 over the Section III traits. Results supported Section III BPD traits as an adequate representation of traditional BPD symptomatology, although modifications that would increase intermodel continuity were identified. Finally, we found support for the incremental validity of suspiciousness, anhedonia, perceptual dysregulation, and self-harm, suggesting possible gaps in the Section III PD trait definitions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose

    Science.gov (United States)

    Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.

    2014-01-01

    This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applicatio