WorldWideScience

Sample records for validate analytical models

  1. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    International Nuclear Information System (INIS)

    Lin, E.I.

    1997-01-01

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which resulted in a doubling of the radiative heat transfer from the heat source support assemblies to the end domes and brought the end dome and flange temperature predictions to within 2 °C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before.

  2. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed here; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank candidate approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
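
    The proposed test can be paraphrased as: compare the approximation's deviation from the original model against the original model's own stochastic spread. A minimal sketch of such a score, with hypothetical replicate data and not the paper's exact statistic:

```python
import statistics

def approximation_score(original_runs, approx_value):
    """Deviation of the approximation from the original model's mean,
    measured in units of the original model's stochastic standard deviation."""
    mu = statistics.mean(original_runs)
    sigma = statistics.stdev(original_runs)
    return abs(approx_value - mu) / sigma

# Hypothetical replicate runs of a stochastic individual-based model
runs = [102.0, 98.5, 101.2, 99.8, 100.5, 97.9, 100.1, 101.6]
score = approximation_score(runs, 100.4)
# A score well below 1 means the approximation error is small
# relative to the original model's own variability.
print(score)
```

    A score much larger than 1 would flag the approximation as deviating beyond what the original model's stochasticity can explain.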

  3. Modeling Run Test Validity: A Meta-Analytic Approach

    National Research Council Canada - National Science Library

    Vickers, Ross

    2002-01-01

    .... This study utilized data from 166 samples (N = 5,757) to test the general hypothesis that differences in testing methods could account for the cross-situational variation in validity. Only runs >2 km...

  4. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
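
    Analytical hardware models of this kind often start from a roofline-style bound: predicted time is the larger of the compute time and the memory-traffic time. A generic sketch with illustrative peak numbers, not the actual KNL parameters used in the SKOPE framework:

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound execution time: a kernel is limited either by
    compute throughput or by memory bandwidth, whichever is slower."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical kernel: 2e9 flops and 8e9 bytes of DRAM traffic,
# on a machine with 3e12 flop/s peak and 400e9 B/s bandwidth.
t = roofline_time(2e9, 8e9, 3e12, 400e9)
bound = "memory" if 8e9 / 400e9 > 2e9 / 3e12 else "compute"
print(t, bound)  # memory-bound: 8e9 / 400e9 = 0.02 s dominates
```

    Comparing such predicted times against measured runtimes yields the kind of 10-20% prediction errors the abstract reports, and identifying the binding resource is what drives tuning recommendations.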

  5. Validated analytical modeling of diesel engine regulated exhaust CO emission rate

    Directory of Open Access Journals (Sweden)

    Waleed F Faris

    2016-06-01

    Although vehicle analytical models are often favored for the explainable mathematical trends they provide, no analytical model of the regulated diesel exhaust CO emission rate for trucks has been developed yet. This research develops and validates, for the first time, an analytical model of the steady-speed regulated diesel exhaust CO emission rate for trucks. It has been found that the steady-speed CO exhaust emission rate derives from (1) CO2 dissociation, (2) the water–gas shift reaction, and (3) the incomplete combustion of hydrocarbon. It has been found as well that the steady-speed CO exhaust emission rate based on CO2 dissociation is considerably less than the rate based on the water–gas shift reaction. It has also been found that the steady-speed CO exhaust emission rate based on the water–gas shift reaction is the dominant source of CO exhaust emission. The study shows that the average percentage of deviation of the steady-speed simulated results from the corresponding field data is 1.7% for all freeway cycles, with a 99% coefficient of determination at the 95% confidence level. This deviation of the simulated results from field data outperforms its counterpart in widely recognized models such as the comprehensive modal emissions model and VT-Micro for all freeway cycles.
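
    The agreement statistics reported above (average percentage deviation and coefficient of determination) can be computed from paired simulated and field values as follows; the numbers here are hypothetical, not the paper's freeway-cycle data:

```python
def mean_abs_pct_dev(simulated, observed):
    """Average absolute percentage deviation of simulation from observation."""
    return 100.0 * sum(abs(s - o) / o for s, o in zip(simulated, observed)) / len(observed)

def r_squared(simulated, observed):
    """Coefficient of determination of simulated vs. observed values."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for s, o in zip(simulated, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [1.00, 1.20, 1.50, 1.80, 2.10]   # hypothetical CO rates, g/s
sim = [1.02, 1.18, 1.53, 1.77, 2.13]
mad = mean_abs_pct_dev(sim, obs)
r2 = r_squared(sim, obs)
print(mad, r2)
```

    A small average deviation together with an R² near 1 is exactly the kind of agreement the abstract claims for the steady-speed model.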

  6. An analytic solution for numerical modeling validation in electromagnetics: the resistive sphere

    Science.gov (United States)

    Swidinsky, Andrei; Liu, Lifei

    2017-11-01

    We derive the electromagnetic response of a resistive sphere to an electric dipole source buried in a conductive whole space. The solution consists of an infinite series of spherical Bessel functions and associated Legendre polynomials, and follows the well-studied problem of a conductive sphere buried in a resistive whole space in the presence of a magnetic dipole. Our result is particularly useful for controlled-source electromagnetic problems using a grounded electric dipole transmitter and can be used to check numerical methods of calculating the response of resistive targets (such as finite difference, finite volume, finite element and integral equation). While we elect to focus on the resistive sphere in our examples, the expressions in this paper are completely general and allow for arbitrary source frequency, sphere radius, transmitter position, receiver position and sphere/host conductivity contrast so that conductive target responses can also be checked. Commonly used mesh validation techniques consist of comparisons against other numerical codes, but such solutions may not always be reliable or readily available. Alternatively, the response of simple 1-D models can be tested against well-known whole space, half-space and layered earth solutions, but such an approach is inadequate for validating models with curved surfaces. We demonstrate that our theoretical results can be used as a complementary validation tool by comparing analytic electric fields to those calculated through a finite-element analysis; the software implementation of this infinite series solution is made available for direct and immediate application.
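
    Series solutions of this kind are assembled from spherical Bessel functions and associated Legendre polynomials, both available in scipy. A quick sanity check of those building blocks (not the paper's full series; the coefficient below is hypothetical):

```python
import numpy as np
from scipy.special import spherical_jn, lpmv

# Spherical Bessel function of the first kind: j0(x) = sin(x)/x
x = 2.5
assert np.isclose(spherical_jn(0, x), np.sin(x) / x)

# Associated Legendre function P_1^1(cos theta) = -sin(theta)
# (scipy's lpmv includes the Condon-Shortley phase)
theta = 0.7
assert np.isclose(lpmv(1, 1, np.cos(theta)), -np.sin(theta))

# One term of an expansion of the form a_n * j_n(kr) * P_n^m(cos theta),
# as appears in such series solutions (a_n chosen arbitrarily)
term = 0.3 * spherical_jn(2, 1.2) * lpmv(0, 2, np.cos(theta))
print(term)
```

    Verifying the special functions against their closed forms is a useful first step before summing a truncated series and comparing it with a finite-element field.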

  7. A Validated Analytical Model for Availability Prediction of IPTV Services in VANETs

    Directory of Open Access Journals (Sweden)

    Bernd E. Wolfinger

    2014-12-01

    In vehicular ad hoc networks (VANETs), besides the original applications typically related to traffic safety, we nowadays can observe an increasing trend toward infotainment applications, such as IPTV services. Quality of experience (QoE), as observed by the end users of IPTV, is highly important to guarantee adequate user acceptance for the service. In IPTV, QoE is mainly determined by the availability of TV channels for the users. This paper presents an efficient and rather generally applicable analytical model that allows one to predict the blocking probability of TV channels, both for channel-switching-induced, as well as for handover-induced blocking events. We present the successful validation of the model by means of simulation, and we introduce a new measure for QoE. Numerous case studies illustrate how the analytical model and our new QoE measure can be applied successfully for the dimensioning of IPTV systems, taking into account the QoE requirements of the IPTV service users in strongly diverse traffic scenarios.
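
    The paper's blocking model is specific to VANET channel switching and handover, but the underlying notion of channel blocking probability can be illustrated with the classical Erlang-B recursion (a standard teletraffic result, not the authors' model):

```python
def erlang_b(offered_load, channels):
    """Probability that a request finds all channels busy, computed
    with the numerically stable Erlang-B recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# 2 Erlangs of offered traffic on 2 channels -> 40% blocking
print(erlang_b(2.0, 2))  # ≈ 0.4
```

    Dimensioning then amounts to choosing enough channels that the blocking probability stays below the QoE target.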

  8. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2016-08-15

    Purpose: A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today’s modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data.

  9. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    Science.gov (United States)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.
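
    The distinction between the two load concepts parallels the classical maximum-power-transfer result, where delivered power peaks at the conjugate-matched load. A sketch with a hypothetical source impedance, not the transducer model from the paper:

```python
def delivered_power(v_source, z_source, z_load):
    """Average power delivered to z_load by a source with internal
    impedance z_source (phasor amplitude v_source)."""
    current = v_source / (z_source + z_load)
    return 0.5 * abs(current) ** 2 * z_load.real

zs = 50 + 30j          # hypothetical source impedance, ohms
p_matched = delivered_power(1.0, zs, zs.conjugate())
# Any other load delivers less power than the conjugate match
for zl in (50 + 0j, 75 - 30j, 30 - 30j, 50 - 10j):
    assert delivered_power(1.0, zs, zl) <= p_matched
print(p_matched)  # = |V|^2 / (8 * Re(Zs)) = 0.0025
```

    In a lossy receiver the load that maximizes delivered power and the load that nulls the reflected acoustic wave generally differ, which is the effect the paper quantifies.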

  10. SU-E-T-479: Development and Validation of Analytical Models Predicting Secondary Neutron Radiation in Proton Therapy Applications

    International Nuclear Information System (INIS)

    Farah, J; Bonfrate, A; Donadille, L; Martinetti, F; Trompier, F; Clairand, I; De Olivera, A; Delacroix, S; Herault, J; Piau, S; Vabre, I

    2014-01-01

    Purpose: Test and validation of analytical models predicting leakage neutron exposure in passively scattered proton therapy. Methods: Taking inspiration from the literature, this work attempts to build an analytical model predicting neutron ambient dose equivalents, H*(10), within the local 75 MeV ocular proton therapy facility. MC simulations were first used to model H*(10) in the beam axis plane while considering a closed final collimator and pristine Bragg peak delivery. Next, the MC-based analytical model was tested against simulation results and experimental measurements. The model was also extended in the vertical direction to enable a full 3D mapping of H*(10) inside the treatment room. Finally, the work focused on upgrading the literature model to clinically relevant configurations considering modulated beams, open collimators, patient-induced neutron fluctuations, etc. Results: The MC-based analytical model efficiently reproduced simulated H*(10) values with a maximum difference below 10%. In addition, it succeeded in predicting measured H*(10) values with differences <40%. The highest differences were registered at the closest and farthest positions from isocenter, where the analytical model failed to faithfully reproduce the high neutron fluence and energy variations. The differences remain acceptable, however, taking into account the high measurement/simulation uncertainties and the end use of this model, i.e. radiation protection. Moreover, the model was successfully (differences < 20% on simulations and < 45% on measurements) extended to predict neutrons in the vertical direction with respect to the beam line, as patients are in the upright seated position during ocular treatments. Accounting for the impact of beam modulation, collimation and the presence of a patient in the beam path is far more challenging, and conversion coefficients are currently being defined to predict stray neutrons in clinically representative treatment configurations. Conclusion

  11. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    Science.gov (United States)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.

  12. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    Science.gov (United States)

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first and second generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first generation or second generation metabolite) was most sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations reproduced closely the published experimental data, confirming model performance. The parent drug (ASA) was the analyte most sensitive to the decrease in pharmaceutical quality, with the highest decrease in Cmax and AUC ratio between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Validating and Determining the Weight of Items Used for Evaluating Clinical Governance Implementation Based on Analytic Hierarchy Process Model

    Directory of Open Access Journals (Sweden)

    Elaheh Hooshmand

    2015-10-01

    Background: The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. Methods: The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. Results: The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients’ non-medical needs, complaints and patients’ participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients’ non-medical needs, patients’ participation in the treatment process and research and development. Conclusion: The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety.
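
    In the AHP, item weights are conventionally obtained as the normalized principal eigenvector of the pairwise-comparison matrix. A minimal sketch using a hypothetical, perfectly consistent 3-item comparison, not the study's actual judgments:

```python
import numpy as np

def ahp_weights(pairwise):
    """Normalized principal eigenvector of an AHP pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# A consistent comparison matrix built from weights (0.5, 0.3, 0.2):
# entry [i][j] = w_i / w_j, so the eigenvector recovers the weights.
w = np.array([0.5, 0.3, 0.2])
A = w[:, None] / w[None, :]
ws = ahp_weights(A)
print(ws)  # ≈ [0.5, 0.3, 0.2]
```

    With real expert judgments the matrix is only approximately consistent, and the principal eigenvalue's excess over the matrix size is used to compute a consistency ratio before the weights are accepted.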

  14. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    Science.gov (United States)

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.

  15. An Analytical Model of Leakage Neutron Equivalent Dose for Passively-Scattered Proton Radiotherapy and Validation with Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Christopher; Newhauser, Wayne, E-mail: newhauser@lsu.edu [Department of Physics and Astronomy, Louisiana State University and Agricultural and Mechanical College, 202 Nicholson Hall, Baton Rouge, LA 70803 (United States); Mary Bird Perkins Cancer Center, 4950 Essen Lane, Baton Rouge, LA 70809 (United States); Farah, Jad [Institut de Radioprotection et de Sûreté Nucléaire, Service de Dosimétrie Externe, BP-17, 92262 Fontenay-aux-Roses (France)

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  16. An Analytical Model of Leakage Neutron Equivalent Dose for Passively-Scattered Proton Radiotherapy and Validation with Measurements

    International Nuclear Information System (INIS)

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-01-01

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  17. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    Science.gov (United States)

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  18. SU-E-J-145: Validation of An Analytical Model for in Vivo Range Verification Using GATE Monte Carlo Simulation in Proton Therapy

    International Nuclear Information System (INIS)

    Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K

    2015-01-01

    Purpose: Predicted PET images based on an analytical filtering approach for proton range verification have been successfully developed and validated using FLUKA Monte Carlo (MC) codes and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification against the GATE/GEANT4 Monte Carlo simulation codes. Methods: We performed two experiments to validate the β+-isotope yields predicted by the analytical model against GATE/GEANT4 simulations. The first experiment evaluates the accuracy of predicting β+-yields as a function of irradiated proton energies. In the second experiment, we simulate homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The filtered β+-yield distributions from the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off ranges. Results: First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range difference between filtered β+-yields and MC-simulated β+-yields at the distal fall-off region is within 1.5 mm for all materials used. These findings validate the usefulness of the analytical filtering model for range verification of proton therapy with GATE Monte Carlo simulations. In addition, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields using the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the large discrepancies observed between the MC-simulated and predicted β+-yield distributions, the study proves the effectiveness of the analytical filtering model for proton range verification using GATE Monte Carlo simulations.

  19. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of the rotating welding tool on the parent material, resulting in a monolithic joint (weld). At the contact between the welding tool and the parent material, significant stirring and deformation of the parent material appear, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and parent material, so the proposed analytical model for the estimation of the amount of generated heat can be verified via temperature: the analytically determined heat is used for a numerical estimation of the temperature of the parent material, and this temperature is compared with the experimentally determined temperature. The numerical solution is obtained using the finite difference method (explicit scheme with adaptive grid), considering the influence of temperature on the material's conductivity, contact conditions between the welding tool and parent material, material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power delivered to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperatures of about 10%.
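
    The explicit finite-difference scheme mentioned above can be sketched in one dimension: the update is stable for r = α·Δt/Δx² ≤ 0.5, and with fixed end temperatures the bar relaxes to a linear steady-state profile. A toy version with a uniform grid and constant conductivity (the paper's solver is adaptive with temperature-dependent properties):

```python
def heat_explicit(t_left, t_right, nx, r, steps):
    """1-D explicit (FTCS) heat conduction with fixed end temperatures.
    r = alpha * dt / dx**2 must be <= 0.5 for stability."""
    assert r <= 0.5
    temp = [0.0] * nx
    temp[0], temp[-1] = t_left, t_right
    for _ in range(steps):
        new = temp[:]
        for i in range(1, nx - 1):
            new[i] = temp[i] + r * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
        temp = new
    return temp

# Ends held at 25 C and 500 C: the steady state is a linear profile,
# so the midpoint approaches (25 + 500) / 2 = 262.5 C
profile = heat_explicit(25.0, 500.0, 11, 0.4, 2000)
print(profile[5])  # ≈ 262.5
```

    In the actual validation, the source term computed from the analytical heat model is injected at the tool contact and the resulting temperature field is compared against thermocouple measurements.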

  20. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    Science.gov (United States)

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  1. The German cervical cancer screening model: development and validation of a decision-analytic model for cervical cancer screening in Germany.

    Science.gov (United States)

    Siebert, Uwe; Sroczynski, Gaby; Hillemanns, Peter; Engel, Jutta; Stabenow, Roland; Stegmaier, Christa; Voigt, Kerstin; Gibis, Bernhard; Hölzel, Dieter; Goldie, Sue J

    2006-04-01

    We sought to develop and validate a decision-analytic model for the natural history of cervical cancer for the German health care context and to apply it to cervical cancer screening. We developed a Markov model for the natural history of cervical cancer and cervical cancer screening in the German health care context. The model reflects current German practice standards for screening, diagnostic follow-up and treatment regarding cervical cancer and its precursors. Data for disease progression and cervical cancer survival were obtained from the literature and German cancer registries. Accuracy of Papanicolaou (Pap) testing was based on meta-analyses. We performed internal and external model validation using observed epidemiological data for unscreened women from different German cancer registries. The model predicts life expectancy, incidence of detected cervical cancer cases, lifetime cervical cancer risks and mortality. The model predicted a lifetime cervical cancer risk of 3.0% and a lifetime cervical cancer mortality of 1.0%, with a peak cancer incidence of 84/100,000 at age 51 years. These results were similar to observed data from German cancer registries, German literature data and results from other international models. Based on our model, annual Pap screening could prevent 98.7% of diagnosed cancer cases and 99.6% of deaths due to cervical cancer in women completely adherent to screening and compliant to treatment. Extending the screening interval from 1 year to 2, 3 or 5 years resulted in reduced screening effectiveness. This model provides a tool for evaluating the long-term effectiveness of different cervical cancer screening tests and strategies.
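A Markov cohort model of the kind described advances a cohort through health states with an annual transition matrix. The sketch below illustrates the mechanics only; the states and transition probabilities are invented placeholders, not the calibrated values of the German model:

```python
import numpy as np

# Hypothetical annual transition matrix over simplified health states:
# Well -> CIN (precursor) -> Cancer -> Dead, with regression from CIN to Well.
states = ["Well", "CIN", "Cancer", "Dead"]
P = np.array([
    [0.990, 0.009, 0.000, 0.001],  # Well
    [0.150, 0.830, 0.019, 0.001],  # CIN: regression, persistence, progression
    [0.000, 0.000, 0.950, 0.050],  # Cancer
    [0.000, 0.000, 0.000, 1.000],  # Dead (absorbing)
])

cohort = np.array([1.0, 0.0, 0.0, 0.0])  # everyone starts in "Well"
cancer_incidence = []
for year in range(50):
    cancer_incidence.append(cohort[1] * P[1, 2])  # CIN -> Cancer this cycle
    cohort = cohort @ P                           # advance one annual cycle

lifetime_risk = sum(cancer_incidence)
print(f"Modelled cancer risk over 50 annual cycles: {lifetime_risk:.3%}")
```

Calibration would then adjust the transition probabilities until outputs such as age-specific incidence match registry data, which is the internal/external validation step the abstract describes.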

  2. Proposal and experimental validation of analytical models for seismic and vibration isolation devices in nuclear and non-nuclear facilities

    International Nuclear Information System (INIS)

    Serino, G.; Bonacina, G.; Bettinali, F.

    1993-01-01

Two analytical-experimental models of HDLRBs with different levels of approximation are presented. Comparison with available experimental data shows that a non-linear hysteretic model, defined by only three rubber parameters, allows a very good, complete simulation of the dynamic behavior of the isolation devices. A simpler equivalent linear viscous model reproduces the experimental behavior less exactly, but permits a good prediction of peak response values in the earthquake analysis of an isolated structure, provided that the bearing stiffness and damping parameters are properly selected. The models have been used in the preliminary design and subsequent checking of the isolation systems of two different types of Gas-Insulated Electric Substations (GIS), in view of the possible future installation of isolated GISes in areas of high seismic risk. (author)

  3. Heat transfer analytical models for the rapid determination of cooling time in crystalline thermoplastic injection molding and experimental validation

    Science.gov (United States)

    Didier, Delaunay; Baptiste, Pignon; Nicolas, Boyard; Vincent, Sobotka

    2018-05-01

Heat transfer during the cooling of an injected thermoplastic part directly affects the solidification of the polymer and consequently the quality of the part in terms of mechanical properties, geometric tolerance and surface aspect. This paper proposes to mold designers a methodology based on analytical models that quickly provides the time to reach the ejection temperature, depending on the temperature and position of the cooling channels. The resulting cooling time is the first step of the thermal design of the mold. The presented methodology is dedicated to determining the solidification time of a semi-crystalline polymer slab. It allows the calculation of the crystallization time of the part and is based on the analytical solution of the Stefan problem in a semi-infinite medium. Crystallization is treated as a phase change with an effective crystallization temperature, which is obtained from Fast Scanning Calorimetry (FSC) results. The crystallization time is then corrected to take the finite thickness of the part into account. To check the accuracy of this approach, the solidification time is also calculated by solving the heat conduction equation coupled to the crystallization kinetics of the polymer. The impact of the nature of the contact between the polymer and the mold is evaluated: the thermal contact resistance (TCR) appears as a significant parameter that needs to be taken into account in the cooling time calculation. The results of the simplified model, with and without TCR, are compared with experiments carried out in an instrumented mold for a polypropylene (PP). The methodology is then applied to a part made of PolyEtherEtherKetone (PEEK).
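The analytical route in this abstract rests on the classical one-phase Stefan solution, in which the solidification front advances as X(t) = 2λ√(αt) and λ solves a transcendental equation in the Stefan number. A minimal numerical sketch follows, with hypothetical material values (not the PP or PEEK data of the study):

```python
import math

# One-phase Stefan problem in a semi-infinite medium: the front position is
# X(t) = 2*lam*sqrt(alpha*t), where lam solves
#   lam * exp(lam**2) * erf(lam) = Ste / sqrt(pi)
# with Stefan number Ste = cp * (Tc - Tw) / L.
# All material values below are hypothetical, for illustration only.
cp = 2000.0      # J/(kg.K), specific heat
L = 100e3        # J/kg, latent heat of crystallization
alpha = 1e-7     # m^2/s, thermal diffusivity
Tc = 130.0       # degC, effective crystallization temperature
Tw = 40.0        # degC, mold-wall temperature

Ste = cp * (Tc - Tw) / L
f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - Ste / math.sqrt(math.pi)

# Simple bisection for the transcendental equation (f is increasing for lam > 0).
lo, hi = 1e-6, 5.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
lam = 0.5 * (lo + hi)

half_thickness = 1e-3  # m, half the part thickness (semi-infinite approximation)
t_solid = half_thickness**2 / (4 * lam**2 * alpha)
print(f"lambda = {lam:.4f}, solidification time ~ {t_solid:.1f} s")
```

A finite-thickness correction and the thermal contact resistance, as the abstract notes, would then adjust this semi-infinite estimate.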

  4. Improved steamflood analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Chandra, S.; Mamora, D.D. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas A and M Univ., TX (United States)

    2005-11-01

    Predicting the performance of steam flooding can help in the proper execution of enhanced oil recovery (EOR) processes. The Jones model is often used for analytical steam flooding performance prediction, but it does not accurately predict oil production peaks. In this study, an improved steam flood model was developed by modifying 2 of the 3 components of the capture factor in the Jones model. The modifications were based on simulation results from a Society of Petroleum Engineers (SPE) comparative project case model. The production performance of a 5-spot steamflood pattern unit was simulated and compared with results obtained from the Jones model. Three reservoir types were simulated through the use of 3-D Cartesian black oil models. In order to correlate the simulation and the Jones analytical model results for the start and height of the production peak, the dimensionless steam zone size was modified to account for a decrease in oil viscosity during steam flooding and its dependence on the steam injection rate. In addition, the dimensionless volume of displaced oil produced was modified from its square-root format to an exponential form. The modified model improved results for production performance by up to 20 years of simulated steam flooding, compared to the Jones model. Results agreed with simulation results for 13 different cases, including 3 different sets of reservoir and fluid properties. Reservoir engineers will benefit from the improved accuracy of the model. Oil displacement calculations were based on methods proposed in earlier research, in which the oil displacement rate is a function of cumulative oil steam ratio. The cumulative oil steam ratio is a function of overall thermal efficiency. Capture factor component formulae were presented, as well as charts of oil production rates and cumulative oil-steam ratios for various reservoirs. 13 refs., 4 tabs., 29 figs.

  5. A closed-form analytical model for predicting 3D boundary layer displacement thickness for the validation of viscous flow solvers

    Science.gov (United States)

    Kumar, V. R. Sanal; Sankar, Vigneshwaran; Chandrasekaran, Nichith; Saravanan, Vignesh; Natarajan, Vishnu; Padmanabhan, Sathyan; Sukumaran, Ajith; Mani, Sivabalan; Rameshkumar, Tharikaa; Nagaraju Doddi, Hema Sai; Vysaprasad, Krithika; Sharan, Sharad; Murugesh, Pavithra; Shankar, S. Ganesh; Nejaamtheen, Mohammed Niyasdeen; Baskaran, Roshan Vignesh; Rahman Mohamed Rafic, Sulthan Ariff; Harisrinivasan, Ukeshkumar; Srinivasan, Vivek

    2018-02-01

A closed-form analytical model is developed for estimating the 3D boundary-layer displacement thickness of an internal flow system at the Sanal flow choking condition, for adiabatic flows obeying the physics of compressible viscous fluids. At this unique condition, the boundary-layer-blockage-induced fluid-throat choking and the adiabatic wall-friction-induced flow choking occur at a single sonic-fluid-throat location. The novelty of this model is that, without missing the flow physics, it predicts the exact boundary-layer blockage of both 2D and 3D cases at the sonic fluid throat from the known values of the inlet Mach number, the adiabatic index of the gas and the inlet port diameter of the internal flow system. We found that the 3D blockage factor is 47.33% lower than the 2D blockage factor with air as the working fluid. We concluded that the exact prediction of the boundary-layer displacement thickness at the sonic fluid throat provides a means to correctly pinpoint the causes of errors in viscous flow solvers. The methodology presented herein will play a pivotal role in future physical and biological sciences in the credible verification, calibration and validation of viscous flow solvers for high-fidelity 2D/3D numerical simulations of real-world flows. Furthermore, our closed-form analytical model will be useful to solid and hybrid rocket designers for the grain-port-geometry optimization of new-generation single-stage-to-orbit dual-thrust motors with the highest promising propellant loading density within the given envelope, without manifestation of the Sanal flow choking leading to possible shock waves causing catastrophic failures.

  6. A closed-form analytical model for predicting 3D boundary layer displacement thickness for the validation of viscous flow solvers

    Directory of Open Access Journals (Sweden)

    V. R. Sanal Kumar

    2018-02-01

Full Text Available A closed-form analytical model is developed for estimating the 3D boundary-layer displacement thickness of an internal flow system at the Sanal flow choking condition, for adiabatic flows obeying the physics of compressible viscous fluids. At this unique condition, the boundary-layer-blockage-induced fluid-throat choking and the adiabatic wall-friction-induced flow choking occur at a single sonic-fluid-throat location. The novelty of this model is that, without missing the flow physics, it predicts the exact boundary-layer blockage of both 2D and 3D cases at the sonic fluid throat from the known values of the inlet Mach number, the adiabatic index of the gas and the inlet port diameter of the internal flow system. We found that the 3D blockage factor is 47.33% lower than the 2D blockage factor with air as the working fluid. We concluded that the exact prediction of the boundary-layer displacement thickness at the sonic fluid throat provides a means to correctly pinpoint the causes of errors in viscous flow solvers. The methodology presented herein will play a pivotal role in future physical and biological sciences in the credible verification, calibration and validation of viscous flow solvers for high-fidelity 2D/3D numerical simulations of real-world flows. Furthermore, our closed-form analytical model will be useful to solid and hybrid rocket designers for the grain-port-geometry optimization of new-generation single-stage-to-orbit dual-thrust motors with the highest promising propellant loading density within the given envelope, without manifestation of the Sanal flow choking leading to possible shock waves causing catastrophic failures.

  7. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 2, structural implementation and validation

    Science.gov (United States)

    Milani, G.; Bertolesi, E.

    2017-07-01

The simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry presented in Part 1 is here implemented at the structural level and validated. For the implementation, a Rigid Body and Spring Mass model (RBSM) is adopted, relying on a numerical model constituted by rigid elements interconnected by homogenized inelastic normal and shear springs placed at the interfaces between adjoining elements. This approach is also known as HRBSM. The inherent advantage is that it is not necessary to solve a homogenization problem at each load step in each Gauss point, and a direct implementation into a commercial software package by means of an external user-supplied subroutine is straightforward. In order to gain insight into the capabilities of the present approach to reproduce masonry behavior at a structural level, non-linear static analyses are conducted on a shear wall for which experimental and numerical data are available in the technical literature. Quite accurate results are obtained with a very limited computational effort.

  8. Analytic Modeling of Insurgencies

    Science.gov (United States)

    2014-08-01

Keywords: Counterinsurgency, Situational Awareness, Civilians, Lanchester. 1. Introduction. Combat modeling is one of the oldest areas of operations research, dating... Army. The ground-breaking work of Lanchester in 1916 [1] marks the beginning of formal models of conflicts, where mathematical formulas and, later... Warfare model [3], which is a Lanchester-based mathematical model (see more details about this model later on), and McCormick's Magic Diamond model [4

  9. Valid, legally defensible data from your analytical laboratories

    International Nuclear Information System (INIS)

    Gay, D.D.; Allen, V.C.

    1989-01-01

    This paper discusses the definition of valid, legally defensible data. The authors describe the expectations of project managers and what should be gleaned from the laboratory in regard to analytical data

  10. Maritime Analytics Prototype: Phase 3 Validation

    Science.gov (United States)

    2014-01-01

different so we need a flexible analysis set hierarchy encoded as directories or groups – like a recipe [C.3.1.4n]. Improve the GUI: provide more... Problems zooming and panning on the timeline [C.1.2.1c, C.1.2.4e, C.1.3.1c, C.1.1.4c, C.1.1.4b]. Selected the wrong year and then the vessel... Scholtz_VAMetrics_2006.pdf] [21] J. Thomas and K. Cook, Illuminating the Path, the Research and Development Agenda for Visual Analytics: IEEE, 2005. [22

  11. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  12. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  13. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  14. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

In order to further study the promising free-piston Stirling engine architecture, there is a need for an analytical thermodynamic model which can be used in a dynamical analysis for preliminary design. To aim at more realistic values, the model has to take into account the heat losses and irreversibilities of the engine. An analytical model which encompasses the critical flaws of the regenerator and, furthermore, the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  15. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  16. Consistency of FMEA used in the validation of analytical procedures

    DEFF Research Database (Denmark)

    Oldenhof, M.T.; van Leeuwen, J.F.; Nauta, Maarten

    2011-01-01

In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection... is always carried out under the supervision of an experienced FMEA-facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating...

  17. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    Science.gov (United States)

    MacNeil, James D

    2012-08-01

The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  18. Development and Validation of Analytical Method for Losartan ...

    African Journals Online (AJOL)

    Development and Validation of Analytical Method for Losartan-Copper Complex Using UV-Vis Spectrophotometry. ... Tropical Journal of Pharmaceutical Research ... Purpose: To develop a new spectrophotometric method for the analysis of losartan potassium in pharmaceutical formulations by making its complex with ...

  19. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    Science.gov (United States)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation which describes the first-mode damping ratio of a clamp-free cantilever beam under harmonic base excitation, by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was found to be correct for cases in which the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.

  20. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  1. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a); model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate measures for evaluating model realizations. The use of validation
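The question of whether a sufficient number of stochastic realizations conform with validation data can be illustrated with a toy acceptance test. The single conformity measure and threshold below are invented for illustration; they are not the five metrics or decision tree of the cited approach:

```python
import random

random.seed(1)

# Hypothetical validation data (e.g., observed heads at monitoring wells).
observed = [10.2, 11.8, 9.5, 12.1]
tolerance = 1.0  # acceptance band around each observation

def realization():
    # Stand-in for one stochastic model realization's predictions.
    return [o + random.gauss(0, 0.8) for o in observed]

def acceptable(pred):
    # A realization "conforms" if every prediction falls within the tolerance.
    return all(abs(p - o) <= tolerance for p, o in zip(pred, observed))

n_real = 200
n_ok = sum(acceptable(realization()) for _ in range(n_real))
fraction_ok = n_ok / n_real

# Toy decision rule: require a minimum fraction of conforming realizations.
threshold = 0.4
decision = "sufficient" if fraction_ok >= threshold else "insufficient"
print(f"{n_ok}/{n_real} realizations acceptable -> {decision}")
```

In practice each metric would be evaluated against both calibration ("old") and validation ("new") data before a decision is reached.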

  2. Development and validation of analytical methods for dietary supplements

    International Nuclear Information System (INIS)

    Sullivan, Darryl; Crowley, Richard

    2006-01-01

The expanding use of innovative botanical ingredients in dietary supplements and foods has resulted in a flurry of research aimed at the development and validation of analytical methods for accurate measurement of active ingredients. The pressing need for these methods is being met through an expansive collaborative initiative involving industry, government, and analytical organizations. This effort has resulted in the validation of several important assays, as well as important advances in method engineering procedures that have improved the efficiency of the process. The initiative has also allowed researchers to overcome many of the barriers that have hindered accurate analysis, such as the lack of reference standards and comparative data. As the availability of nutraceutical products continues to increase, these methods will provide consumers and regulators with the scientific information needed to assure safety and dependable labeling

  3. An analytical model for the prediction of fluid-elastic forces in a rod bundle subjected to axial flow: theory, experimental validation and application to PWR fuel assemblies

    International Nuclear Information System (INIS)

    Beaud, F.

    1997-01-01

A model predicting the fluid-elastic forces in a bundle of circular cylinders subjected to axial flow is presented in this paper. Whereas previously published models were limited to circular flow channels, the present one allows a rectangular external flow boundary to be taken into account. For that purpose, an original approach is derived from the standard method of images. This model will eventually be used to predict the fluid-structure coupling between the flow of primary coolant and the fuel assemblies in PWR nuclear reactors. It is of major importance, since the flow is shown to induce quite high damping and could therefore mitigate the effect of an external load, such as a seismic excitation, on the dynamics of the assemblies. The proposed model is validated on two cases from the literature, but still needs further comparison with the experiments currently being carried out on the EDF set-up. The flow has been shown to induce approximately 12% damping on a PWR fuel assembly at nominal reactor conditions. The possible grid effect on the fluid-structure coupling has been neglected so far, but will soon be investigated at EDF. (author)

  4. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  5. Risk analysis of analytical validations by probabilistic modification of FMEA

    DEFF Research Database (Denmark)

    Barends, D.M.; Oldenhof, M.T.; Vredenbregt, M.J.

    2012-01-01

Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling not only detecting technical risks, but also risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequency and maintaining the categorical scoring of severity. In an example, the results of traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeited tablets are re-interpreted by this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence

  6. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  7. Risk analysis by FMEA as an element of analytical validation.

    Science.gov (United States)

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs on authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated by Risk Priority Numbers (RPNs)=O x D x S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
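    The RPN ranking described above is simple to sketch in code. The failure modes and scores below are hypothetical illustrations, not data from the study; only the scheme (ordinal 1-10 scores, RPN = O x D x S, correct the highest-RPN modes first) follows the abstract:

```python
# Illustrative FMEA ranking sketch (hypothetical failure modes and scores).
# Each failure mode gets ordinal scores 1-10 for occurrence (O),
# detection (D), and severity (S); the Risk Priority Number is O * D * S.

failure_modes = {
    "sample mislabelled by operator": (6, 7, 8),   # human error
    "NIR lamp intensity drift":       (3, 4, 6),
    "wrong reference spectrum used":  (2, 8, 9),
    "cuvette contamination":          (4, 3, 5),
}

rpns = {name: o * d * s for name, (o, d, s) in failure_modes.items()}

# Select the highest-RPN modes for corrective action (here: the top 2).
priority = sorted(rpns, key=rpns.get, reverse=True)[:2]
for name in priority:
    print(f"{name}: RPN = {rpns[name]}")
```

After corrective actions, the FMEA is repeated and the drop in RPN scores gives the improvement index reported in the abstract.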

  8. Risk analysis of analytical validations by probabilistic modification of FMEA.

    Science.gov (United States)

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling the detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are re-interpreted using this probabilistic modification. With this probabilistic modification of FMEA, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
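    The probabilistic modification can be illustrated with a minimal sketch. The failure modes and numbers below are hypothetical, and the independence assumption used to aggregate over modes is ours, not stated in the abstract:

```python
# Sketch of the probabilistic FMEA modification (hypothetical numbers).
# Occurrence and detection are replaced by estimated relative frequencies;
# severity would keep its categorical score (unused in this calculation).

modes = {
    # name: (occurrence freq. per run, prob. the failure escapes detection, severity 1-10)
    "baseline drift":                (20 / 1000, 0.10, 6),
    "operator transcription error":  (5 / 1000, 0.50, 8),
}

# Frequency of an undetected failure, per analytical run, for each mode.
undetected = {name: f_occ * p_miss for name, (f_occ, p_miss, _) in modes.items()}

# For the full procedure, assuming independent and rare failure modes, the
# rate of at least one undetected failure is approximately the sum of rates.
total = sum(undetected.values())
print(undetected, total)
```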

  9. Consistency of FMEA used in the validation of analytical procedures.

    Science.gov (United States)

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define its own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified failure modes above the 90th percentile of RPN values as needing urgent corrective action, and failure modes falling between the 75th and 90th percentiles as needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action, respectively, with two identified in common. Of the failure modes needing necessary corrective action, about a third were identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that FMEA always be carried out under the supervision of an experienced FMEA facilitator and that the FMEA team have at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Analytical dynamic modeling of fast trilayer polypyrrole bending actuators

    International Nuclear Information System (INIS)

    Amiri Moghadam, Amir Ali; Moavenian, Majid; Tahani, Masoud; Torabi, Keivan

    2011-01-01

    Analytical modeling of conjugated polymer actuators with complicated electro-chemo-mechanical dynamics is an interesting area for research, due to a wide range of applications including biomimetic robots and biomedical devices. Although there have been extensive reports on modeling the electrochemical dynamics of polypyrrole (PPy) bending actuators, modeling of their mechanical dynamics remains unexplored. PPy actuators can operate at low voltage while producing large displacements; in comparison with conventional robotic joints they have no friction or backlash, but they suffer from disadvantages such as creep and hysteresis. In this paper, a complete analytical dynamic model for fast trilayer polypyrrole bending actuators is proposed, named the analytical multi-domain dynamic actuator (AMDDA) model. First an electrical admittance model of the actuator is obtained based on a distributed RC line; subsequently a proper mechanical dynamic model is derived based on Hamilton's principle. The proposed modeling approach is validated against recently published experimental results

  11. Analytic processor model for fast design-space exploration

    NARCIS (Netherlands)

    Jongerius, R.; Mariani, G.; Anghel, A.; Dittmann, G.; Vermij, E.; Corporaal, H.

    2015-01-01

    In this paper, we propose an analytic model that takes as inputs a) a parametric microarchitecture-independent characterization of the target workload, and b) a hardware configuration of the core and the memory hierarchy, and returns as output an estimation of processor-core performance. To validate

  12. Development and Validation of a Simple Analytical Model of the Proton Exchange Membrane Fuel Cell (Pemfc) in a Fork-Lift Truck Power System

    DEFF Research Database (Denmark)

    Hosseinzadeh, Elham; Rokni, Masoud

    2013-01-01

    In this study, a general proton exchange membrane fuel cell (PEMFC) model has been developed in order to investigate the balance of plant of a fork-lift truck thermodynamically. The model takes into account the effects of pressure losses, water crossovers, humidity aspects, and voltage overpotent......In this study, a general proton exchange membrane fuel cell (PEMFC) model has been developed in order to investigate the balance of plant of a fork-lift truck thermodynamically. The model takes into account the effects of pressure losses, water crossovers, humidity aspects, and voltage...

  13. Analytical Model for Sensor Placement on Microprocessors

    National Research Council Canada - National Science Library

    Lee, Kyeong-Jae; Skadron, Kevin; Huang, Wei

    2005-01-01

    .... In this paper, we present an analytical model that describes the maximum temperature differential between a hot spot and a region of interest based on their distance and processor packaging information...

  14. An analytical model of iceberg drift

    Science.gov (United States)

    Eisenman, I.; Wagner, T. J. W.; Dell, R.

    2017-12-01

    Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
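    The empirical rule discussed above is easy to state in code. This is a sketch of the rule-of-thumb itself, with illustrative velocities, not the full nonlinear momentum-balance solution derived in the presentation; per the abstract, the 2% coefficient applies in the small-iceberg (or strong-wind) limit and drops toward zero for very large icebergs:

```python
import numpy as np

def iceberg_velocity(v_ocean, v_wind, gamma=0.02):
    """Empirical drift rule: ocean surface current plus a fraction gamma
    of the surface wind. gamma ~ 0.02 in the small-iceberg limit; for
    large icebergs the wind term vanishes and the berg follows the current."""
    return np.asarray(v_ocean) + gamma * np.asarray(v_wind)

# Illustrative values (m/s), eastward and northward components.
v = iceberg_velocity(v_ocean=[0.1, 0.0], v_wind=[10.0, 5.0])
print(v)  # ~[0.3, 0.1]
```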

  15. Oxcarbazepine: validation and application of an analytical method

    Directory of Open Access Journals (Sweden)

    Paula Cristina Rezende Enéas

    2010-06-01

    Full Text Available Oxcarbazepine (OXC) is an important anticonvulsant and mood-stabilizing drug. A pharmacopoeial monograph for OXC is not yet available, and therefore the development and validation of a new analytical method for quantification of this drug is essential. In the present study, a UV spectrophotometric method for the determination of OXC was developed. The various parameters, such as linearity, precision, accuracy and specificity, were studied according to International Conference on Harmonisation guidelines. Batches of 150 mg OXC capsules were prepared and analyzed using the validated UV method. The formulations were also evaluated for parameters including drug-excipient compatibility, flowability, uniformity of weight, disintegration time, assay, uniformity of content and the amount of drug dissolved during the first hour.
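    A linearity check of the kind required by such validations can be sketched as follows. The concentration and absorbance values are synthetic illustrations, not the paper's data, and real acceptance criteria come from the applicable guideline:

```python
import numpy as np

# Hedged sketch of a linearity assessment for a UV spectrophotometric
# method: fit a calibration line and compute the coefficient of
# determination. Data below are invented for illustration.

conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0])          # standards, ug/mL
absorbance = np.array([0.102, 0.201, 0.305, 0.398, 0.503])

slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
ss_res = np.sum((absorbance - pred) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r_squared:.5f}")
```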

  16. Analytical model for screening potential CO2 repositories

    Science.gov (United States)

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  17. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.

  18. Analytic nearest neighbour model for FCC metals

    International Nuclear Information System (INIS)

    Idiodi, J.O.A.; Garba, E.J.D.; Akinlade, O.

    1991-06-01

    A recently proposed analytic nearest-neighbour model for fcc metals is criticised and two alternative nearest-neighbour models derived from the separable potential method (SPM) are recommended. Results for copper and aluminium illustrate the utility of the recommended models. (author). 20 refs, 5 tabs

  19. Analytical eigenstates for the quantum Rabi model

    International Nuclear Information System (INIS)

    Zhong, Honghua; Xie, Qiongtao; Lee, Chaohong; Batchelor, Murray T

    2013-01-01

    We develop a method to find analytical solutions for the eigenstates of the quantum Rabi model. These include symmetric, anti-symmetric and asymmetric analytic solutions given in terms of the confluent Heun functions. Both regular and exceptional solutions are given in a unified form. In addition, the analytic conditions for determining the energy spectrum are obtained. Our results show that conditions proposed by Braak (2011 Phys. Rev. Lett. 107 100401) are a type of sufficiency condition for determining the regular solutions. The well-known Judd isolated exact solutions appear naturally as truncations of the confluent Heun functions. (paper)

  20. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    In this paper, a review is presented of the various methods which ... to make a direct and objective comparison of specific dynamic properties, measured ..... stiffness matrix is available from the analytical model, is that of reducing or condensing.

  1. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated inputs often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method for the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
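    The effect of input correlation on output uncertainty can be seen analytically in the simplest case of a linear model Y = a·X1 + b·X2 (our illustration, not the paper's HIV model), where Var(Y) = a²σ₁² + b²σ₂² + 2abρσ₁σ₂, with the cross term vanishing under the usual independence assumption:

```python
import numpy as np

# Analytic variance of Y = a*X1 + b*X2 with correlated inputs,
# cross-checked by Monte Carlo sampling. Parameter values are illustrative.

a, b = 2.0, 3.0
s1, s2, rho = 1.0, 0.5, 0.8

var_analytic = a**2 * s1**2 + b**2 * s2**2 + 2 * a * b * rho * s1 * s2

# Monte Carlo cross-check with correlated Gaussian inputs.
rng = np.random.default_rng(0)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
var_mc = np.var(a * x1 + b * x2)

print(var_analytic, var_mc)
```

Setting rho = 0 recovers the independent-input result, which here understates the true output variance by the 2abρσ₁σ₂ term.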

  2. An analytic uranium sources model

    International Nuclear Information System (INIS)

    Singer, C.E.

    2001-01-01

    This document presents a method for estimating uranium resources as a continuous function of extraction costs and describing the uncertainty in the resulting fit. The estimated functions provide convenient extrapolations of currently available data on uranium extraction cost and can be used to predict the effect of resource depletion on future uranium supply costs. As such, they are a useful input for economic models of the nuclear energy sector. The method described here pays careful attention to minimizing built-in biases in the fitting procedure and defines ways to describe the uncertainty in the resulting fits in order to render the procedure and its results useful to the widest possible variety of potential users. (author)

  3. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    Full Text Available In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified, and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  4. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  5. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    Science.gov (United States)

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  6. Analytical Modelling and Simulation of Photovoltaic Panels and Arrays

    Directory of Open Access Journals (Sweden)

    H. Bourdoucen

    2007-12-01

    Full Text Available In this paper, an analytical model for PV panels and arrays based on extracted physical parameters of solar cells is developed. The proposed model has the advantage of simplifying the mathematical modelling of different configurations of cells and panels without losing efficiency of PV system operation. The effects of external parameters, mainly temperature and solar irradiance, have been considered in the modelling. Because of their critical effects on the operation of the panel, the effects of series and shunt resistances were also studied. The developed analytical model has been easily implemented, simulated and validated using both Spice and Matlab packages for different series and parallel configurations of cells and panels. The results obtained with these two programs are in total agreement, which makes the proposed model very useful to researchers and designers for quick and accurate sizing of PV panels and arrays.
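    Analytical PV models of this kind are commonly built on the single-diode cell equation, which relates terminal current and voltage through the photocurrent, the diode term, and the series and shunt resistances the abstract highlights. The following is a hedged sketch with illustrative parameter values, not the specific model of the paper:

```python
import numpy as np

# Single-diode PV cell model:
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# Parameter values below are illustrative, not extracted from a real cell.

Iph, I0 = 5.0, 1e-9        # photocurrent and diode saturation current (A)
Rs, Rsh = 0.01, 100.0      # series and shunt resistances (ohm)
n, Vt = 1.3, 0.0259        # ideality factor, thermal voltage at ~300 K

def cell_current(V, iters=50):
    """Solve the implicit I-V equation by fixed-point iteration."""
    I = Iph
    for _ in range(iters):
        I = Iph - I0 * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
    return I

V = np.linspace(0.0, 0.6, 61)
I = np.array([cell_current(v) for v in V])
P = V * I
print(f"short-circuit current ~ {cell_current(0.0):.3f} A, "
      f"peak power on this grid ~ {P.max():.3f} W")
```

Temperature and irradiance enter through Iph, I0 and Vt; series cells add their voltages and parallel strings add their currents, which is how such a cell model scales up to panels and arrays.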

  7. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Since such examination configurations are especially well suited to analytical modelling, a physical model was developed to run on portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect and the examination parameters. Moreover, the model makes it possible to compare real signals with simulated signals

  8. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions that are often difficult to interpret, analytical...

  9. Determining passive cooling limits in CPV using an analytical thermal model

    Science.gov (United States)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.

  10. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.

  11. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of the measurement system. A statistical model is developed from data generated by the analysis of control standards. Control standards are samples which are made up at precisely known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object of statistical modeling is to describe real process samples in terms of their bias and precision, and to verify that a measurement system is operating satisfactorily
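    The core of such a statistical model, estimating systematic bias and precision from blind control standards, can be sketched in a few lines. The certified value and measurements below are synthetic illustrations, not RAL data:

```python
import numpy as np

# Estimate systematic bias and precision of a measurement system from
# repeated analyses of a control standard with a certified value.

known_value = 10.00                      # certified level of the control standard
measured = np.array([10.12, 10.08, 9.95, 10.15, 10.02, 10.10])

bias = measured.mean() - known_value     # estimate of systematic bias
precision = measured.std(ddof=1)         # repeatability (sample std. dev.)

print(f"bias = {bias:+.3f}, precision = {precision:.3f}")
```

In a full quality control program, bias and precision estimated this way at several known levels become the equations of the statistical model, and control charts on new control-standard results verify that the system stays within them.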

  12. Validating Dart Model

    Directory of Open Access Journals (Sweden)

    Mazur Jolanta

    2014-12-01

    Full Text Available The primary objective of the study was to quantitatively test the DART model which, despite being one of the most popular representations of the co-creation concept, has so far been studied almost solely with qualitative methods. To this end, the researchers developed a multiple-measurement scale and employed it in interviewing managers. The statistical evidence for the adequacy of the model was obtained through CFA with AMOS software. The findings suggest that the DART model may not be an accurate representation of co-creation practices in companies. From the data analysis it was evident that the building blocks of DART had too much conceptual overlap to form an effective framework for quantitative analysis. It was also implied that the phenomenon of co-creation is so rich and multifaceted that it may be more adequately captured by a measurement model in which co-creation is conceived as a third-level factor with two layers of intermediate latent variables.

  13. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts subjected to Gigabit network traffic. Under heavy network traffic, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock, as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, the stability condition, CPU utilization for interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems subjected to light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
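    The receive-livelock behaviour such models capture can be illustrated with a back-of-the-envelope calculation. The per-packet costs below are invented, and this two-line budget is far simpler than the paper's closed-form queueing solutions; it only shows why throughput collapses once interrupt handling saturates the CPU:

```python
# Simplified interrupt-overhead model: each arriving packet costs t_int
# seconds of interrupt handling; only the leftover CPU share is available
# for protocol processing, which costs t_proto seconds per packet.

def throughput(arrival_rate, t_int=5e-6, t_proto=20e-6):
    """Delivered packets/s for a given arrival rate (packets/s)."""
    cpu_for_interrupts = min(1.0, arrival_rate * t_int)
    cpu_left = 1.0 - cpu_for_interrupts
    return min(arrival_rate, cpu_left / t_proto)

# Throughput rises with load, peaks, then collapses toward zero (livelock).
for rate in (10_000, 40_000, 100_000, 200_000):
    print(rate, round(throughput(rate)))
```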

  14. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  15. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high- and intermediate-level nuclear waste are close to realization. A number of studies demonstrate the potential barrier function of the geosphere, but also show that there are many unresolved issues. A key to these problems is the possibility of gaining knowledge through model testing with experiments and of increasing confidence in the models used for prediction. The sessions cover conclusions from the INTRAVAL project, experiences from integrated experimental programmes and underground research laboratories, as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media are addressed. (J.S.)

  16. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  17. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  18. Perpetual Model Validation

    Science.gov (United States)

    2017-03-01

    25]. This inference process is carried out by a tool referred to as Hynger (Hybrid iNvariant GEneratoR), overviewed in Figure 4, which is a MATLAB ...initially on memory access patterns. A monitoring module will check, at runtime, that the observed memory access pattern matches the pattern the software is...necessary. By using the developed approach, a model may be derived from initial tests or simulations, which will then be formally checked at runtime

  19. SIMMER-III analytic thermophysical property model

    International Nuclear Information System (INIS)

    Morita, K.; Tobita, Y.; Kondo, Sa.; Fischer, E.A.

    1999-05-01

    An analytic thermophysical property model using general function forms is developed for a reactor safety analysis code, SIMMER-III. The function forms are designed to represent correct behavior of properties of reactor-core materials over wide temperature ranges, especially for the thermal conductivity and the viscosity near the critical point. The most up-to-date and reliable sources for uranium dioxide, mixed-oxide fuel, stainless steel, and sodium available at present are used to determine parameters in the proposed functions. This model is also designed to be consistent with a SIMMER-III model on thermodynamic properties and equations of state for reactor-core materials. (author)

  20. On the validity and practical applicability of derivative analyticity relations

    International Nuclear Information System (INIS)

    Kolar, P.; Fischer, J.

    1983-09-01

    We examine derivative analyticity relations (DAR), which were originally proposed by Bronzan as an alternative to dispersion relations and in which the dispersion integral is replaced by a tangent series of derivatives. We characterize the class of functions satisfying DAR, and show that outside this class the dispersion integral represents a Borel-like sum of tangent series. We point out difficulties connected with the application of DAR. (author)

  1. Organizational Models for Big Data and Analytics

    Directory of Open Access Journals (Sweden)

    Robert L. Grossman

    2014-04-01

    In this article, we introduce a framework for determining how analytics capability should be distributed within an organization. Our framework stresses the importance of building a critical mass of analytics staff, centralizing or decentralizing the analytics staff to support business processes, and establishing an analytics governance structure to ensure that analytics processes are supported by the organization as a whole.

  2. An analytical model for annular flow boiling heat transfer in microchannel heat sinks

    International Nuclear Information System (INIS)

    Megahed, A.; Hassan, I.

    2009-01-01

    An analytical model has been developed to predict the flow boiling heat transfer coefficient in microchannel heat sinks. The new model predicts the two-phase heat transfer coefficient in the annular flow regime and is based on the separated-flow model. In contrast to the majority of annular flow heat transfer models, it rests on fundamental conservation principles, considers the characteristics of the microchannel heat sink during annular flow, and avoids empirical closure relations. Comparison with the limited experimental data available validates the usefulness of this analytical model: it predicts the experimental data with a mean absolute error of 8%. (author)

  3. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking, and operating systems. It is not a training manual for becoming an expert performance analyst; rather, the objective is to help the reader construct simple models for analyzing and understanding the systems they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.
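    The canonical example of this kind of model is the M/M/1 queue, whose performance metrics follow from a few lines of algebra. The sketch below is an illustrative instance (not taken from the book): it computes utilization, mean population, and mean response time, which are mutually consistent via Little's law.

```python
def mm1_metrics(lam, mu):
    """Exact steady-state metrics of an M/M/1 queue
    (arrival rate lam, service rate mu, both in jobs per second)."""
    if lam >= mu:
        raise ValueError("unstable queue: need lam < mu")
    rho = lam / mu            # server utilization
    n = rho / (1.0 - rho)     # mean number of jobs in the system
    r = 1.0 / (mu - lam)      # mean response time; Little's law gives n = lam * r
    return rho, n, r

rho, n, r = mm1_metrics(lam=8.0, mu=10.0)
# rho = 0.8, n = 4.0 jobs, r = 0.5 s
```

    Note how response time blows up as lam approaches mu — the kind of qualitative insight such simple equations deliver.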

  4. An analytical model for interactive failures

    International Nuclear Information System (INIS)

    Sun Yong; Ma Lin; Mathew, Joseph; Zhang Sheng

    2006-01-01

    In some systems, failures of certain components can interact with each other and accelerate the failure rates of those components. Such failures are defined as interactive failures. Interactive failure is a prevalent cause of failure in complex systems, particularly mechanical systems. The failure risk of an asset will be underestimated if the interactive effect is ignored, so interactive failures need to be considered when failure risk is assessed. However, the literature is silent on previous research work in this field. This paper introduces the concept of interactive failure, develops an analytical model to analyse this type of failure quantitatively, and verifies the model using case studies and experiments
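    The interactive-failure idea can be sketched as a linear coupling between component failure rates: each component's effective rate is its independent rate plus contributions from the components it interacts with. The two-component solver below is a hypothetical illustration of that structure (the coupling coefficients and rates are invented, and the linear form is an assumption, not necessarily the paper's formulation).

```python
def interactive_failure_rates(lam_ind, theta):
    """Solve lam = lam_ind + theta * lam for two components,
    i.e. lam = (I - theta)^-1 * lam_ind, by direct 2x2 elimination.
    theta[i][j] is the assumed influence of component j's failures on component i."""
    a11, a12 = 1.0 - theta[0][0], -theta[0][1]
    a21, a22 = -theta[1][0], 1.0 - theta[1][1]
    det = a11 * a22 - a12 * a21
    l1 = (lam_ind[0] * a22 - a12 * lam_ind[1]) / det
    l2 = (a11 * lam_ind[1] - a21 * lam_ind[0]) / det
    return [l1, l2]

# independent rates 0.01 and 0.02 per year; mutual acceleration 30% and 20%
lam = interactive_failure_rates([0.01, 0.02], [[0.0, 0.3], [0.2, 0.0]])
# both effective rates exceed the independent ones:
# ignoring the interaction underestimates the failure risk, as the abstract warns
```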

  5. Analytical model of the optical vortex microscope.

    Science.gov (United States)

    Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2016-04-20

    This paper presents an analytical model of the optical vortex scanning microscope. In this microscope a Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution for the whole path of the beam in the system (within the paraxial approximation), from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step the expression for the complex amplitude of the light has the same form, with only four coefficients modified. We also derive a simple expression for the vortex trajectory at small vortex displacements.

  6. Building analytical three-field cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.R.L. [Universidade de Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Moraes, P.H.R.S. [ITA-Instituto Tecnologico de Aeronautica, Sao Jose dos Campos, SP (Brazil); Ferreira, D.A. [Universidade de Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Universidade Federal da Paraiba, Departamento de Fisica, Joao Pessoa, PB (Brazil); Neta, D.C.V. [Universidade de Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Universidade Estadual da Paraiba, Departamento de Fisica, Campina Grande, PB (Brazil)

    2018-02-15

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters. (orig.)

  7. Analytical modeling of worldwide medical radiation use

    International Nuclear Information System (INIS)

    Mettler, F.A. Jr.; Davis, M.; Kelsey, C.A.; Rosenberg, R.; Williams, A.

    1987-01-01

    An analytical model was developed to estimate the availability and frequency of medical radiation use on a worldwide basis. This model includes medical and dental x-ray, nuclear medicine, and radiation therapy. The development of an analytical model is necessary as the first step in estimating the radiation dose to the world's population from this source. Since there is no data about the frequency of medical radiation use in more than half the countries in the world and only fragmentary data in an additional one-fourth of the world's countries, such a model can be used to predict the uses of medical radiation in these countries. The model indicates that there are approximately 400,000 medical x-ray machines worldwide and that approximately 1.2 billion diagnostic medical x-ray examinations are performed annually. Dental x-ray examinations are estimated at 315 million annually and approximately 22 million in-vivo diagnostic nuclear medicine examinations. Approximately 4 million radiation therapy procedures or courses of treatment are undertaken annually

  8. Analytic models of plausible gravitational lens potentials

    International Nuclear Information System (INIS)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2009-01-01

    Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite-mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily calculable higher-order derivatives of the lens potential when studying catastrophes in strong lenses
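    The quoted falloff is easy to verify numerically. The sketch below implements a smoothly truncated NFW density of the generic form rho_NFW(r) × [r_t² / (r² + r_t²)]ⁿ (a plausible reading of the profile family, normalisation omitted; radii are in scale-radius units) and measures its logarithmic slope far outside the tidal radius, where it should approach −5 for n = 1 and −7 for n = 2.

```python
import math

def rho_tnfw(r, rs=1.0, rt=20.0, n=1):
    """Smoothly truncated NFW density, normalisation omitted."""
    x = r / rs
    nfw = 1.0 / (x * (1.0 + x) ** 2)              # slopes: r^-1 inner, r^-3 outer
    trunc = (rt ** 2 / (r ** 2 + rt ** 2)) ** n   # steepens the outer slope by 2n
    return nfw * trunc

def log_slope(r, **kw):
    """Numerical d(log rho) / d(log r) by centred differencing."""
    eps = 1e-5
    hi = rho_tnfw(r * (1 + eps), **kw)
    lo = rho_tnfw(r * (1 - eps), **kw)
    return (math.log(hi) - math.log(lo)) / (math.log(1 + eps) - math.log(1 - eps))

slope5 = log_slope(1e4, n=1)   # close to -5
slope7 = log_slope(1e4, n=2)   # close to -7
```

    Because both slopes are steeper than −3, the enclosed mass converges — the non-divergent total mass the abstract emphasises.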

  9. An analytical model of flagellate hydrodynamics

    DEFF Research Database (Denmark)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders Peter

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left–right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming trajectory.

  10. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  11. Analytical model of internally coupled ears

    DEFF Research Database (Denmark)

    Vossen, Christine; Christensen-Dalsgaard, Jakob; Leo van Hemmen, J

    2010-01-01

    Lizards and many birds possess a specialized hearing mechanism: internally coupled ears, where the tympanic membranes connect through a large mouth cavity so that the vibrations of the tympanic membranes influence each other. This coupling enhances the phase differences and creates amplitude differences. The model additionally provides the opportunity to incorporate the effect of the asymmetrically attached columella, which leads to the activation of higher membrane vibration modes. Incorporating this effect, the analytical model can explain measurements taken from the tympanic membrane of a living lizard.

  12. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies below 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  13. Fission product release from nuclear fuel II. Validation of ASTEC/ELSA on analytical and large scale experiments

    International Nuclear Information System (INIS)

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

    Highlights: • A wide range of experiments is presented for the ASTEC/ELSA code validation. • Analytical tests such as AECL, ORNL and VERCORS are considered. • A large-scale experiment, PHEBUS FPT1, is considered. • The good agreement with measurements shows the efficiency of the ASTEC modelling. • Improvements concern the FP release modelling from MOX and high burn-up UO2 fuels. - Abstract: This article is the second of two articles dedicated to the mechanisms of fission product release from a degraded core. The models of fission product release from nuclear fuel in the ASTEC code have been described in detail in the first part of this work (Brillant et al., this issue). In this contribution, the validation of ELSA, the module of ASTEC that deals with fission product and structural material release from a degraded core, is presented. A large range of experimental tests, with various temperatures and conditions for the fuel surrounding atmosphere (oxidising and reducing), is thus simulated with the ASTEC code. The validation database includes several analytical experiments with both bare fuel (e.g. MCE1 experiments) and cladded fuel (e.g. HCE3, VERCORS). Furthermore, the PHEBUS large-scale experiments are used for the validation of ASTEC. The rather satisfactory comparison between ELSA calculations and experimental measurements demonstrates the efficiency of the analytical models to describe fission product release in severe accident conditions

  14. An analytic parton shower. Algorithms, implementation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Sebastian

    2012-06-15

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. Collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We give a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  15. An analytic parton shower. Algorithms, implementation and validation

    International Nuclear Information System (INIS)

    Schmidt, Sebastian

    2012-06-01

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. Collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We give a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  16. Analytical modelling of hydrogen transport in reactor containments

    International Nuclear Information System (INIS)

    Manno, V.P.

    1983-09-01

    A versatile computational model of hydrogen transport in nuclear plant containment buildings is developed. The background and significance of hydrogen-related nuclear safety issues are discussed. A computer program is constructed that embodies the analytical models. The thermofluid dynamic formulation spans a wide applicability range, from rapid two-phase blowdown transients to slow incompressible hydrogen injection. Detailed ancillary models of molecular and turbulent diffusion, mixture transport properties, multi-phase multicomponent thermodynamics, and heat sinks are addressed. The numerical solution of the continuum equations emphasizes both accuracy and efficiency in the employment of relatively coarse discretization and long time steps, and attention is given to reducing undesirable numerical diffusion. Problem geometry options include lumped parameter zones, one dimensional meshes, two dimensional Cartesian or axisymmetric coordinate systems, and three dimensional Cartesian or cylindrical regions. An efficient lumped nodal model is included for simulation of events in which spatial resolution is not significant. Several validation calculations are reported
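    The lumped nodal option amounts to a set of coupled inter-zone mixing ODEs. The two-zone example below is a generic illustration of that approach (the volumes, flows, and explicit-Euler scheme are invented for the sketch, not taken from the program): dC_i/dt = Σ_j (Q_ji C_j − Q_ij C_i) / V_i, which conserves mass while hydrogen concentrations equilibrate between zones.

```python
def mix_zones(volumes, flows, conc0, dt, steps):
    """Explicit-Euler lumped-parameter mixing. flows[i][j] is the volumetric
    flow from zone i to zone j; conc is, e.g., a hydrogen mole fraction."""
    conc = list(conc0)
    n = len(conc)
    for _ in range(steps):
        dcdt = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i != j:
                    dcdt[i] += flows[j][i] * conc[j] - flows[i][j] * conc[i]
        conc = [conc[i] + dt * dcdt[i] / volumes[i] for i in range(n)]
    return conc

# two 100 m^3 zones exchanging 1 m^3/s: an 8% hydrogen pocket mixes toward 4%
c = mix_zones([100.0, 100.0], [[0.0, 1.0], [1.0, 0.0]],
              [0.08, 0.0], dt=0.5, steps=2000)
```

    Real containment codes add sources, buoyancy, and heat-sink coupling; the sketch only shows the nodal bookkeeping.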

  17. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    Science.gov (United States)

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
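    The analytic approach can be illustrated with the Bragg-Kleeman range-energy rule, a standard relation of the form R = αE^p. The sketch below is not the authors' TPS: the water parameters (α ≈ 0.0022 cm·MeV⁻ᵖ, p ≈ 1.77) are commonly quoted values, and the depth-dose shape ignores straggling, nuclear interactions, and lateral spreading.

```python
def proton_range_cm(e_mev, alpha=0.0022, p=1.77):
    """Bragg-Kleeman rule: continuous-slowing-down range of protons in water."""
    return alpha * e_mev ** p

def stopping_power(z_cm, e_mev, alpha=0.0022, p=1.77):
    """Depth-dose shape |dE/dz| implied by the range-energy relation."""
    r = proton_range_cm(e_mev, alpha, p)
    if z_cm >= r:
        return 0.0
    # exponent 1/p - 1 is negative, so this grows toward the end of range
    return (r - z_cm) ** (1.0 / p - 1.0) / (p * alpha ** (1.0 / p))

r24 = proton_range_cm(24.0)   # roughly 0.6 cm: a 24 MeV beam suits millimetric targets
```

    The rising (R − z)^(1/p−1) factor reproduces the Bragg peak near the end of range, which is why millimetric targeting accuracy matters at these energies.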

  18. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case and has a residual character, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model, and it can also guide the design of subsequent experiments. Three steps can be clearly distinguished. First, sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) or MCSA (Monte-Carlo sensitivity analysis). Second, the search for the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find them. Third, residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation of a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT (Spain). (Author) 17 refs
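    The MCSA (Monte-Carlo sensitivity analysis) step can be illustrated with a toy model: sample the inputs, run the model, and rank each parameter by its correlation with the output. The sketch below is generic; the parameter names and the toy model are invented for the illustration.

```python
import random

def mc_sensitivity(model, samplers, n=2000, seed=1):
    """Monte-Carlo sensitivity analysis: correlation of each input with the output."""
    rng = random.Random(seed)
    names = list(samplers)
    xs = {k: [] for k in names}
    ys = []
    for _ in range(n):
        point = {k: samplers[k](rng) for k in names}
        for k in names:
            xs[k].append(point[k])
        ys.append(model(**point))
    def corr(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        var_u = sum((a - mu) ** 2 for a in u)
        var_v = sum((b - mv) ** 2 for b in v)
        return cov / (var_u * var_v) ** 0.5
    return {k: corr(xs[k], ys) for k in names}

# toy "simulation model", strongly driven by 'gain' and weakly by 'leak'
sens = mc_sensitivity(lambda gain, leak: 3.0 * gain + 0.1 * leak,
                      {"gain": lambda r: r.uniform(0.0, 1.0),
                       "leak": lambda r: r.uniform(0.0, 1.0)})
```

    A parameter with near-zero correlation is a candidate for fixing at a nominal value, narrowing the optimal input domains explored in the second step.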

  19. Vibration Based Diagnosis for Planetary Gearboxes Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Liu Hong

    2016-01-01

    The application of conventional vibration based diagnostic techniques to planetary gearboxes is a challenge because of the complexity of frequency components in the measured spectrum, which results from the relative motion between the rotating planets and the fixed accelerometer. In practice, since the fault signatures are usually contaminated by noise and by vibrations from other mechanical components of the gearbox, the diagnostic efficacy may deteriorate further. Thus, it is essential to develop a novel vibration based scheme to diagnose gear failures in planetary gearboxes. Following a brief literature review, the paper begins with the introduction of an analytical model of planetary gear-sets developed by the authors in previous works, which can predict the distinct behaviors of fault-introduced sidebands. This analytical model is easy to implement because the only prerequisite information is the basic geometry of the planetary gear-set. Afterwards, an automated diagnostic scheme is proposed to cope with the challenges associated with the characteristic configuration of planetary gearboxes. The proposed vibration based scheme integrates the analytical model, a denoising algorithm, and frequency domain indicators into one synergistic system for the detection and identification of damaged gear teeth in planetary gearboxes. Its performance is validated with dynamic simulations and experimental data from a planetary gearbox test rig.
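    The "basic geometry" prerequisite boils down to a handful of kinematic frequencies. The sketch below lists standard textbook relations for a fixed-ring planetary gear-set (not the authors' full dynamic model; the tooth counts and sun speed are example values, and three planets are assumed).

```python
def planetary_frequencies(f_sun_hz, z_sun, z_ring, n_planets=3):
    """Characteristic frequencies of a planetary gear-set with a stationary ring gear."""
    f_carrier = f_sun_hz * z_sun / (z_sun + z_ring)   # carrier (output) rotation
    f_mesh = z_ring * f_carrier                       # equals z_sun * (f_sun - f_carrier)
    return {
        "carrier": f_carrier,
        "mesh": f_mesh,                                    # sidebands cluster around this
        "sun_fault": n_planets * (f_sun_hz - f_carrier),   # faulty sun tooth meeting planets
        "planet_pass": n_planets * f_carrier,              # planets sweeping the fixed sensor
    }

fr = planetary_frequencies(f_sun_hz=20.0, z_sun=20, z_ring=80)
# carrier 4 Hz, mesh 320 Hz, sun-fault 48 Hz, planet-pass 12 Hz
```

    The planet-pass modulation is exactly the accelerometer-relative motion the abstract identifies as the source of the complicated sideband structure.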

  20. AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES

    International Nuclear Information System (INIS)

    Robinson, Tyler D.; Catling, David C.

    2012-01-01

    We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric-pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
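    A textbook special case conveys the closed-form flavor of such a model: gray, two-stream radiative equilibrium plus a power-law pressure-optical-depth scaling. This sketch is a simplified illustration only, not the paper's full model (which also treats sunlight attenuation and convective regions).

```python
def gray_temperature(tau, t_eff):
    """Gray radiative-equilibrium profile: T^4 = (3/4) * T_eff^4 * (tau + 2/3)."""
    return (0.75 * t_eff ** 4 * (tau + 2.0 / 3.0)) ** 0.25

def optical_depth(p, p0, tau0, n=2.0):
    """Power-law scaling between pressure and gray thermal optical depth."""
    return tau0 * (p / p0) ** n

# the skin temperature (tau -> 0) is T_eff / 2^(1/4); T equals T_eff at tau = 2/3
t_skin = gray_temperature(0.0, 100.0)
```

    Composing the two functions gives an analytic T(p) profile; a convective adjustment would replace it below the level where its lapse rate exceeds the adiabat.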

  1. An analytical model on thermal performance evaluation of counter flow wet cooling tower

    Directory of Open Access Journals (Sweden)

    Wang Qian

    2017-01-01

    This paper proposes an analytical model for the simultaneous heat and mass transfer processes in a counter flow wet cooling tower, under the assumption that the enthalpy of the saturated air is a linear function of the water surface temperature. The performance of the proposed analytical model is validated in some typical cases. The validation reveals that, when the cooling range is in a certain interval, the proposed model is not only comparable in accuracy with the exact model but also reduces computational complexity. In addition, the thermal performance of counter flow wet cooling towers in power plants is calculated with the proposed analytical model. The results show that the model can be applied to evaluate and predict the thermal performance of counter flow wet cooling towers.
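    Under the key assumption above — saturated-air enthalpy linear in water temperature, h_s(T) ≈ a + bT — the tower reduces to a counter flow heat exchanger in an enthalpy potential, so an effectiveness-NTU solution applies. The sketch below is an illustrative reconstruction of that idea; the linearization constants and operating numbers are made-up examples, not values from the paper.

```python
import math

def outlet_water_temp(t_w_in, h_a_in, m_w, m_a, ntu, a=-72.0, b=6.0, c_w=4.186):
    """Counter flow effectiveness-NTU solution in the enthalpy potential.
    h_s(T) = a + b*T (kJ/kg, T in deg C) is an assumed linearization; the
    water-side 'capacity rate' becomes m_w*c_w/b and the air side m_a (kg/s)."""
    c_air, c_wat = m_a, m_w * c_w / b
    c_min, c_max = min(c_air, c_wat), max(c_air, c_wat)
    ratio = c_min / c_max
    eff = (1.0 - math.exp(-ntu * (1.0 - ratio))) / \
          (1.0 - ratio * math.exp(-ntu * (1.0 - ratio)))
    q = eff * c_min * (a + b * t_w_in - h_a_in)   # duty from the enthalpy driving force
    return t_w_in - q / (m_w * c_w)

t_out = outlet_water_temp(t_w_in=40.0, h_a_in=60.0, m_w=1.0, m_a=1.0, ntu=1.5)
# water is cooled below its 40 deg C inlet; more fill (higher NTU) cools it further
```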

  2. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  3. Analytical model of a burst assembly algorithm for the VBR in the OBS networks

    International Nuclear Information System (INIS)

    Shargabi, M.A.A.; Mellah, H.; Abid, A.

    2008-01-01

    This paper presents an analytical model for the number of bursts assembled in a period of time in OBS networks. The model considers the case of VBR traffic with two different sending rates, the SCR and the PCR. The model is validated using extensive simulations, whose results are in total agreement with those obtained from the proposed model. (author)
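    The flavor of such a model can be conveyed with a hybrid timer/threshold assembler fed by an on-off VBR source that alternates between its PCR and SCR. This is an illustrative reconstruction; the paper's actual assembly algorithm, notation, and numbers may differ.

```python
def bursts_in_phase(duration_s, timer_s, threshold_bits, rate_bps):
    """Approximate burst count at a constant input rate: a burst departs when the
    size threshold fills or the assembly timer fires, whichever happens first."""
    fill_time = threshold_bits / rate_bps
    inter_burst = min(fill_time, timer_s)
    return round(duration_s / inter_burst)

def vbr_bursts(t_pcr, t_scr, timer_s, threshold_bits, pcr_bps, scr_bps):
    """Total bursts over one PCR phase plus one SCR phase of the VBR source."""
    return (bursts_in_phase(t_pcr, timer_s, threshold_bits, pcr_bps)
            + bursts_in_phase(t_scr, timer_s, threshold_bits, scr_bps))

# at the PCR the threshold fills before the timer; at the SCR the timer fires first
n = vbr_bursts(t_pcr=1.0, t_scr=1.0, timer_s=1e-3,
               threshold_bits=1e6, pcr_bps=2e9, scr_bps=5e8)
```

    The regime switch (threshold-limited at PCR, timer-limited at SCR) is what makes a two-rate analysis necessary rather than a single average-rate formula.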

  4. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
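    The consistency metric described here, model uncertainty ranges overlapping study confidence intervals, reduces to a simple interval-overlap check. A sketch using the cumulative-risk figures quoted in the abstract (function names are illustrative):

```python
def ranges_overlap(model_range, study_ci):
    """True when the model's uncertainty range and the study's
    confidence interval overlap (the consistency metric above)."""
    m_lo, m_hi = model_range
    s_lo, s_hi = study_ci
    return m_lo <= s_hi and s_lo <= m_hi

def evaluate(outcomes):
    """outcomes: {name: (model_range, study_ci)} -> {name: consistent?}"""
    return {k: ranges_overlap(m, s) for k, (m, s) in outcomes.items()}

# The 30-year cumulative invasive-cancer risks reported in the abstract:
results = evaluate({
    "inadequate_treatment": ((30.9, 49.7), (28.4, 48.3)),
    "appropriate_treatment": ((0.7, 1.3), (0.4, 3.3)),
})
```

    Both outcomes overlap, matching the abstract's conclusion that the microsimulation is consistent with the study.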

  5. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling; O' Brien, James

    2016-11-01

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of a wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the effort to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories’ original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code, and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided by a reduced-order model obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions at the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity at the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data and
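    For the adiabatic expansion step, a nozzle fed by high-pressure steam runs choked, so the mass flow rate follows the standard isentropic sonic-throat relation. The sketch below treats steam as an ideal gas with an assumed γ ≈ 1.3 and R ≈ 461.5 J/(kg·K); it is a textbook compressible-flow formula for illustration, not the RELAP-7 implementation.

```python
import math

def choked_mass_flow(p0, t0, a_throat, gamma=1.3, r_gas=461.5, cd=0.95):
    """Ideal-gas choked (sonic-throat) mass flow through a converging nozzle.

    m_dot = Cd * A* * p0 * sqrt(gamma/(R*T0))
            * (2/(gamma+1)) ** ((gamma+1)/(2*(gamma-1)))

    p0 [Pa] and t0 [K] are stagnation conditions, a_throat [m^2] the throat
    area, cd a discharge coefficient. Steam is approximated as an ideal gas;
    all default values are illustrative assumptions.
    """
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * a_throat * p0 * math.sqrt(gamma / (r_gas * t0)) * term
```

    The choked flow rate scales linearly with stagnation pressure, which is why the turbine admission can be throttled by upstream pressure alone.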

  6. Approximate analytical modeling of leptospirosis infection

    Science.gov (United States)

    Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani

    2017-11-01

    Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
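    The qualitative dynamics of such a human–rodent transmission model can be illustrated with a toy integer-order compartmental model integrated by forward Euler. This is explicitly not the fractional-order model or the multi-step Laplace Adomian decomposition method of the paper; compartments, rates, and all parameter values below are invented for illustration.

```python
def simulate(beta_h=0.0005, gamma=0.1, r_growth=0.001, days=200, dt=0.1):
    """Toy human-rodent leptospirosis dynamics, forward-Euler integration.

    s_h: susceptible humans, i_h: infected humans, i_r: infected rodents.
    Humans are infected at a rate proportional to the infected-rodent
    population (direct contact plus contaminated water lumped together);
    rodent infection grows logistically toward a carrying capacity.
    """
    s_h, i_h, i_r = 1000.0, 0.0, 50.0
    k_r = 200.0                                   # rodent carrying capacity
    for _ in range(int(days / dt)):
        new_inf = beta_h * s_h * i_r * dt         # new human infections
        s_h -= new_inf
        i_h += new_inf - gamma * i_h * dt         # recovery/removal of humans
        i_r += r_growth * i_r * (1.0 - i_r / k_r) * dt
    return s_h, i_h, i_r
```

    Even this crude sketch reproduces the expected qualitative behaviour: the susceptible pool is depleted while rodent infection creeps toward its carrying capacity.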

  7. Simple Analytic Models of Gravitational Collapse

    Energy Technology Data Exchange (ETDEWEB)

    Adler, R.

    2005-02-09

    Most general relativity textbooks devote considerable space to the simplest example of a black hole containing a singularity, the Schwarzschild geometry. However only a few discuss the dynamical process of gravitational collapse, by which black holes and singularities form. We present here two types of analytic models for this process, which we believe are the simplest available; the first involves collapsing spherical shells of light, analyzed mainly in Eddington-Finkelstein coordinates; the second involves collapsing spheres filled with a perfect fluid, analyzed mainly in Painleve-Gullstrand coordinates. Our main goal is pedagogical simplicity and algebraic completeness, but we also present some results that we believe are new, such as the collapse of a light shell in Kruskal-Szekeres coordinates.

  8. An analytical model of flagellate hydrodynamics

    International Nuclear Information System (INIS)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left–right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming trajectory. We find that the longitudinal flagellum is responsible for the average translational motion whereas the transversal flagellum governs the rotational motion. Finally, we show that the transversal flagellum can lead to strong feeding currents to localized capture sites on the cell surface. (paper)

  9. Core monitoring with analytical model adaption

    International Nuclear Information System (INIS)

    Linford, R.B.; Martin, C.L.; Parkos, G.R.; Rahnema, F.; Williams, R.D.

    1992-01-01

    The monitoring of BWR cores has evolved rapidly due to more capable computer systems, improved analytical models and new types of core instrumentation. Coupling of first-principles diffusion theory models, such as those applied to design, to the core instrumentation has been achieved by GE with an adaptive methodology in the 3D Monicore system. The adaptive methods allow definition of 'leakage parameters' which are incorporated directly into the diffusion models to enhance monitoring accuracy and predictions. These improved models for core monitoring allow substitution of traversing in-core probe (TIP) and local power range monitor (LPRM) signals with calculations, so that monitoring continues with no loss of accuracy or reduction of thermal limits. Experience in small BWR cores has shown that with one out of three TIP machines failed there was no operating limitation or impact from the substitute calculations. Other capabilities exist in 3D Monicore to align TIPs more accurately and accommodate other types of system measurements or anomalies. 3D Monicore also includes an accurate predictive capability which uses the adaptive results from previous monitoring calculations and is used to plan and optimize reactor maneuvers/operations to improve operating efficiency and reduce support requirements.

  10. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm^-3; total [citrate^3-] = 10 mmol dm^-3), in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)
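    At the core of equilibrium speciation codes such as PHREEQE is the mass-action law. A minimal sketch, assuming a single 1:1 complexation equilibrium M + L ⇌ ML with a conditional stability constant at fixed pH; the constant and concentrations below are illustrative, not the Fe–citrate values determined in the paper.

```python
def complexed_fraction(k_cond, ligand_free):
    """Fraction of total metal present as the 1:1 complex ML.

    For M + L <-> ML with conditional constant k_cond = [ML]/([M][L])
    at a fixed free-ligand concentration (mol/dm^3):
        [ML]/([M] + [ML]) = K[L] / (1 + K[L])
    """
    x = k_cond * ligand_free
    return x / (1.0 + x)
```

    Full speciation codes solve many such equilibria simultaneously, coupled through mass balances on each component, but each individual species obeys exactly this kind of relation.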

  11. Dosimetric validation of the anisotropic analytical algorithm for photon dose calculation: fundamental characterization in water

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Vanetti, Eugenio; Clivio, Alessandro; Cozzi, Luca

    2006-01-01

    In July 2005 a new algorithm was released by Varian Medical Systems for the Eclipse planning system and installed in our institute. It is the anisotropic analytical algorithm (AAA) for photon dose calculations, a convolution/superposition model implemented for the first time in a Varian planning system. It was therefore necessary to perform validation studies at different levels with a wide investigation approach. To validate the basic performance of the AAA, a detailed analysis of data computed by the AAA configuration algorithm was carried out and data were compared against measurements. To better appraise the performance of AAA and the capability of its configuration to tailor machine-specific characteristics, data obtained from the pencil beam convolution (PBC) algorithm implemented in Eclipse were also added in the comparison. Since the purpose of the paper is to address the basic performance of the AAA and of its configuration procedures, only data relative to measurements in water will be reported. Validation was carried out for three beams: 6 MV and 15 MV from a Clinac 2100C/D and 6 MV from a Clinac 6EX. Generally, AAA calculations reproduced measured data very well, and small deviations were observed, on average, for all the quantities investigated for open and wedged fields. In particular, percentage depth-dose curves showed on average differences between calculation and measurement smaller than 1% or 1 mm, and computed profiles in the flattened region matched measurements with deviations smaller than 1% for all beams, field sizes, depths and wedges. Percentage differences in output factors were observed as small as 1% on average (with a range smaller than ±2%) for all conditions. Additional tests were carried out for enhanced dynamic wedges with results comparable to previous results. The basic dosimetric validation of the AAA was therefore considered satisfactory
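    The "smaller than 1% or 1 mm" acceptance criterion for depth-dose curves can be checked point by point: a point passes if either the local dose difference is within tolerance or a measured point within the distance tolerance matches the calculated dose. A simplified sketch follows (this is not the clinical gamma-index algorithm, and all names are illustrative):

```python
def pdd_within_tolerance(depths, calc, meas, dose_tol=1.0, dist_tol=1.0):
    """Point-by-point 1%/1 mm-style check of a percentage depth-dose curve.

    depths [mm], calc/meas [% dose] are parallel lists. A point passes if
    the local dose difference is within dose_tol (%) OR some measured point
    within dist_tol (mm) agrees with the calculated dose. Returns a list
    of booleans, one per point.
    """
    results = []
    for i, d in enumerate(depths):
        dose_ok = abs(calc[i] - meas[i]) <= dose_tol
        dist_ok = any(
            abs(depths[j] - d) <= dist_tol and abs(calc[i] - meas[j]) <= dose_tol
            for j in range(len(depths))
        )
        results.append(dose_ok or dist_ok)
    return results
```

    The clinical gamma index combines the two tolerances into a single normalized metric; the binary either/or test above is a coarser but common screening check.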

  12. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)
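    Computed polarization and power curves of the kind compared here are often summarized by an empirical Kim-type correlation capturing activation, ohmic, and mass-transport losses. A hedged sketch follows; this is a generic correlation with illustrative parameters, not the comprehensive model of the paper.

```python
import math

def cell_voltage(i, e0=1.0, b=0.05, r=2.0e-4, m=3.0e-5, n=8.0e-3):
    """Empirical PEMFC polarization curve (Kim-type correlation).

    V(i) = E0 - b*ln(i) - R*i - m*exp(n*i), with current density i in
    mA/cm^2. The logarithmic term models activation loss, the linear term
    ohmic loss, and the exponential term mass-transport loss. Parameter
    values are illustrative, not fitted to the cells in the paper.
    """
    return e0 - b * math.log(i) - r * i - m * math.exp(n * i)

def power_density(i):
    """Areal power density [mW/cm^2] at current density i [mA/cm^2]."""
    return i * cell_voltage(i)
```

    Fitting the five parameters to a measured polarization curve is a common first step before validating a physics-based model against the same data.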

  13. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    Science.gov (United States)

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  14. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    Science.gov (United States)

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

    Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT.
The results provide the analytical evidence to support the implementation of the novel multi-marker test as
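    Precision studies of the kind required by CLIA/CAP typically report the coefficient of variation (%CV) across replicate measurements and compare it to a prespecified limit. A minimal sketch, assuming a %CV acceptance limit of 15% (a common convention for ligand-binding assays, not a figure from this paper):

```python
from statistics import mean, stdev

def precision_cv(replicates):
    """Within-run precision as %CV from replicate measurements."""
    return 100.0 * stdev(replicates) / mean(replicates)

def passes_precision(replicates, max_cv=15.0):
    """Acceptance rule: replicate %CV must not exceed the limit.

    The 15% default is an illustrative convention, not this assay's
    validated specification.
    """
    return precision_cv(replicates) <= max_cv
```

    The same pattern extends to between-run and between-lot precision by pooling variance components across runs.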

  15. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
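    The β-content tolerance interval criterion can be sketched concretely: accept the method if a two-sided tolerance interval containing a given proportion of future results, at a given confidence, lies within the total-error acceptance limits. The sketch below uses Howe's approximation for the tolerance factor and a Wilson–Hilferty approximation for the chi-square quantile (to stay within the standard library); function names and the worked numbers are illustrative, not the paper's simulation settings.

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1.0 - 2.0 / (9.0 * df) + z * math.sqrt(2.0 / (9.0 * df))) ** 3

def beta_content_k(n, content=0.9, conf=0.9):
    """Two-sided beta-content tolerance factor (Howe's approximation).

    k such that mean +/- k*s covers `content` of the population with
    confidence `conf`, for a normal sample of size n.
    """
    z = NormalDist().inv_cdf((1.0 + content) / 2.0)
    chi2 = chi2_quantile(1.0 - conf, n - 1)
    return z * math.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)

def method_valid(mean_bias, sd, n, limit, content=0.9, conf=0.9):
    """Total-error acceptance: the beta-content tolerance interval for
    (result - true value) must lie within +/- limit."""
    k = beta_content_k(n, content, conf)
    return abs(mean_bias) + k * sd <= limit
```

    For n = 30 this yields k ≈ 2.03, close to tabulated exact values; the generalized pivotal quantity approach of the paper replaces this closed-form bound with simulation-based pivotal inference.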

  16. Analytic modeling of axisymmetric disruption halo currents

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Kellman, A.G.

    1999-01-01

    Currents which can flow in plasma facing components during disruptions pose a challenge to the design of next generation tokamaks. Induced toroidal eddy currents and both induced and conducted poloidal 'halo' currents can produce design-limiting electromagnetic loads. While induction of toroidal and poloidal currents in passive structures is a well-understood phenomenon, the driving terms and scalings for poloidal currents flowing on open field lines during disruptions are less well established. A model of halo current evolution is presented in which the current is induced in the halo by decay of the plasma current and change in enclosed toroidal flux while being convected into the halo from the core by plasma motion. Fundamental physical processes and scalings are described in a simplified analytic version of the model. The peak axisymmetric halo current is found to depend on halo and core plasma characteristics during the current quench, including machine and plasma dimensions, resistivities, safety factor, and vertical stability growth rate. Two extreme regimes in poloidal halo current amplitude are identified depending on the minimum halo safety factor reached during the disruption. A 'type I' disruption is characterized by a minimum safety factor that remains relatively high (typically 2 - 3, comparable to the predisruption safety factor), and a relatively low poloidal halo current. A 'type II' disruption is characterized by a minimum safety factor comparable to unity and a relatively high poloidal halo current. Model predictions for these two regimes are found to agree well with halo current measurements from vertical displacement event disruptions in DIII-D [T. S. Taylor, K. H. Burrell, D. R. Baker, G. L. Jackson, R. J. La Haye, M. A. Mahdavi, R. Prater, T. C. Simonen, and A. D. Turnbull, "Results from the DIII-D Scientific Research Program," in Proceedings of the 17th IAEA Fusion Energy Conference, Yokohama, 1998, to be published in

  17. Analytic Ballistic Performance Model of Whipple Shields

    Science.gov (United States)

    Miller, J. E.; Bjorkman, M. D.; Christiansen, E. L.; Ryan, S. J.

    2015-01-01

    The dual-wall Whipple shield is the shield of choice for lightweight, long-duration flight. The shield uses an initial sacrificial wall to initiate fragmentation and melting of an impacting threat, which expands over a void before hitting the subsequent shield wall protecting a critical component. The key parameters for this type of shield are the mass of the rear wall, which stops the debris; the minimum shock wave strength generated when the threat particle impacts the sacrificial wall; and the amount of room available for expansion. Ensuring the shock wave strength is sufficiently high to achieve large-scale fragmentation/melting of the threat particle enables the expansion of the threat and reduces the momentum flux of the debris on the rear wall. Three key factors in the shock wave strength achieved are the thickness of the sacrificial wall relative to the characteristic dimension of the impacting particle, the density and material-cohesion contrast of the sacrificial wall relative to the threat particle, and the impact speed. The masses of the rear wall and the sacrificial wall are desirable to minimize for launch costs, making it important to understand the effects of density contrast and impact speed. An analytic model is developed here to describe the influence of these three key factors. In addition, this paper develops a description of a fourth key parameter related to fragmentation and its role in establishing the onset of projectile expansion.
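    The shock strength driving fragmentation and melt can be estimated with the planar-impact approximation and a linear Us–up Hugoniot. A sketch for the simplest case of like materials (symmetric impact, so the interface particle velocity is half the impact speed); the aluminum-alloy Hugoniot constants below are typical handbook values used for illustration, not the paper's model.

```python
def impact_shock_pressure(v_impact, rho0=2780.0, c0=5330.0, s=1.34):
    """Peak shock pressure [Pa] for a symmetric planar impact.

    Uses the linear shock Hugoniot Us = c0 + s*up with the Rankine-Hugoniot
    momentum jump P = rho0 * Us * up. For like materials the interface
    particle velocity is up = v_impact/2. Defaults approximate an aluminum
    alloy (rho0 [kg/m^3], c0 [m/s], s dimensionless); values illustrative.
    """
    up = 0.5 * v_impact            # interface particle velocity [m/s]
    us = c0 + s * up               # shock velocity [m/s]
    return rho0 * us * up
```

    At a typical orbital-debris impact speed of 7 km/s this gives on the order of 100 GPa, well above the shock pressures needed for incipient melt of aluminum, which is why the sacrificial wall is so effective at that speed.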

  18. Principles of Single-Laboratory Validation of Analytical Methods for Testing the Chemical Composition of Pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Underlying theoretical and practical approaches towards pesticide formulation analysis are discussed, i.e. general principles, performance characteristics, applicability of validation data, verification of method performance, and adaptation of validated methods by other laboratories. The principles of single laboratory validation of analytical methods for testing the chemical composition of pesticides are outlined. Also the theoretical background is described for performing pesticide formulation analysis as outlined in ISO, CIPAC/AOAC and IUPAC guidelines, including methodological characteristics such as specificity, selectivity, linearity, accuracy, trueness, precision and bias. Appendices I–III hereof give practical and elaborated examples on how to use the Horwitz approach and formulae for estimating the target standard deviation towards acceptable analytical repeatability. The estimation of trueness and the establishment of typical within-laboratory reproducibility are treated in greater detail by means of worked-out examples. (author)
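    The Horwitz approach mentioned above predicts the acceptable reproducibility relative standard deviation from the analyte mass fraction alone, RSD_R(%) = 2^(1 − 0.5·log10 C). A short sketch (the factor used to derive a repeatability target from the Horwitz RSD varies between guidelines; the one-half-to-two-thirds range noted below is a common convention, not a figure from this record):

```python
import math

def horwitz_rsd(c):
    """Horwitz predicted reproducibility RSD (%) at mass fraction c.

    RSD_R(%) = 2 ** (1 - 0.5 * log10(c)); e.g. c = 1 (pure substance)
    gives 2%, c = 1e-6 (1 mg/kg) gives 16%.
    """
    return 2.0 ** (1.0 - 0.5 * math.log10(c))

def target_sd(c):
    """Target standard deviation (as a mass fraction) at concentration c,
    taking the Horwitz RSD directly as the ceiling."""
    return c * horwitz_rsd(c) / 100.0
```

    A laboratory's observed repeatability is then judged by its HorRat (observed RSD divided by the Horwitz prediction), with values near or below 1 considered acceptable.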

  19. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
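    The ARR idea can be sketched in a few lines: each redundancy relation is a residual that should be near zero when the sensors it involves are healthy, and a faulty sensor is one that logically explains every violated relation. The single-fault isolation logic below is a simplified illustration of the concept, not the algorithm developed in this work; all names are invented.

```python
def diagnose(readings, relations, tol=1e-3):
    """Single-fault sensor isolation from analytical redundancy relations.

    relations: list of (residual_fn, sensors_involved) pairs, where
    residual_fn(readings) should be ~0 for healthy sensors. A sensor is
    suspected when it appears in every violated relation and in no
    satisfied one (simplified single-fault, full-detectability logic).
    """
    violated = [set(sensors) for fn, sensors in relations
                if abs(fn(readings)) > tol]
    satisfied = [set(sensors) for fn, sensors in relations
                 if abs(fn(readings)) <= tol]
    if not violated:
        return set()                        # all ARRs consistent: no fault
    suspects = set.intersection(*violated)  # must explain every violation
    for s in satisfied:
        suspects -= s                       # exonerated by a satisfied ARR
    return suspects

# Example: three sensors a, b, c with redundancy a + b = c and a = b = c/2.
relations = [
    (lambda r: r["a"] + r["b"] - r["c"], ("a", "b", "c")),
    (lambda r: r["a"] - r["b"], ("a", "b")),
    (lambda r: r["a"] - r["c"] / 2.0, ("a", "c")),
]
```

    With readings a=1.0, b=1.5, c=2.0 the first two relations are violated and the third is satisfied, so only sensor b is left as the logically inferred fault.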

  20. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid for design purposes. These models depend on the geometry of the moving plunger and the properties of the surrounding fluid … to compare and validate different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data are compared with simulation data relying solely on analytic formulations. The general dynamics of the plunger is validated…

  1. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
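    The statistical sampling methods mentioned for propagating variability and randomness reduce, in the simplest case, to Monte Carlo sampling of the inputs and summarizing the spread of the outputs. A minimal sketch (the model and distributions are illustrative placeholders):

```python
import random
from statistics import mean, stdev

def propagate(model, input_dists, n_samples=20000, seed=0):
    """Propagate input variability through a model by Monte Carlo sampling.

    model: callable taking keyword arguments; input_dists maps each input
    name to (mu, sigma) of an assumed Gaussian. Returns (mean, std) of the
    model prediction, i.e. the propagated variability.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    outputs = []
    for _ in range(n_samples):
        sample = {k: rng.gauss(mu, sigma)
                  for k, (mu, sigma) in input_dists.items()}
        outputs.append(model(**sample))
    return mean(outputs), stdev(outputs)
```

    For a linear model y = 2x + z with independent Gaussian inputs the propagated standard deviation should approach sqrt((2·σ_x)² + σ_z²), which gives a quick sanity check of the sampler.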

  2. Analytical support of plant specific SAMG development validation of SAMG using MELCOR 1.8.5

    International Nuclear Information System (INIS)

    Duspiva, Jiri

    2006-01-01

    There are two NPPs in operation in the Czech Republic. Both have already implemented EOPs, developed in collaboration with WESE. The project on SAMG development has started and follows the previous one for EOPs, also with WESE as the leading organization. Plant-specific SAMGs for the Temelin and Dukovany NPPs are based on the WOG generic SAMGs. The analytical support of plant-specific SAMG development is performed by NRI Rez within the validation process. The basic conditions, and how NRI Rez fulfils them, concern the analysts, the analytical tools and their applications. A more detailed description is given of the approach to applying the MELCOR code to the evaluation of hydrogen risk, the validation of the recent set of hydrogen passive autocatalytic recombiners, and the definition of proposals to amend the hydrogen removal system. Such parametric calculations will require a very wide set of runs. This is not feasible with the whole-plant model; decoupling the calculation by storing the mass and energy sources into the containment is the only practical way. An example of this decoupling for a LOCA scenario is shown. It includes seven sources: heat losses from the primary and secondary circuits, fluid blowdown through the cold leg break, fission products blowdown through the cold leg break, fluid blowdown through a break in the reactor pressure vessel bottom head, fission products through a break in the reactor pressure vessel bottom head, melt ejection from the reactor pressure vessel to the cavity, and gas masses and heat losses from corium in the cavity. The stand-alone containment analysis was tested in two configurations, with and without taking fission products into account. Testing showed very good agreement of all calculations until lower head failure and acceptable agreement after that. Some problematic features also appeared.
The stand-alone test with fission products was possible only after the changes in source code

  3. Analytic Models of High-Temperature Hohlraums

    International Nuclear Information System (INIS)

    Stygar, W.A.; Olson, R.E.; Spielman, R.B.; Leeper, R.J.

    2000-01-01

    A unified set of high-temperature-hohlraum models has been developed. For a simple hohlraum, P_S = (A_S + (1-α_W)A_W + A_H)σT_R^4 + (4Vσ/c)(dT_R^4/dt), where P_S is the total power radiated by the source, A_S is the source area, A_W is the area of the cavity wall excluding the source and holes in the wall, A_H is the area of the holes, σ is the Stefan-Boltzmann constant, T_R is the radiation brightness temperature, V is the hohlraum volume, and c is the speed of light. The wall albedo α_W ≡ (T_W/T_R)^4, where T_W is the brightness temperature of area A_W. The net power radiated by the source is P_N = P_S - A_S σT_R^4, which suggests that for laser-driven hohlraums the conversion efficiency η_CE be defined as P_N/P_LASER. The characteristic time required to change T_R^4 in response to a change in P_N is 4V/{c[(1-α_W)A_W + A_H]}. Using this model, T_R, α_W, and η_CE can be expressed in terms of quantities directly measurable in a hohlraum experiment. For a steady-state hohlraum that encloses a convex capsule, P_N = {(1-α_W)A_W + A_H + (1-α_C)(A_S + A_W α_W)A_C/A_T}σT_RC^4, where α_C is the capsule albedo, A_C is the capsule area, A_T ≡ A_S + A_W + A_H, and T_RC is the brightness temperature of the radiation that drives the capsule. According to this relation, the capsule-coupling efficiency of the baseline National-Ignition-Facility (NIF) hohlraum is 15% higher than predicted by previous analytic expressions. A model of a hohlraum that encloses a z pinch is also presented
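    In steady state the time-derivative term drops and the simple-hohlraum power balance can be inverted for the radiation brightness temperature. A sketch (function and argument names are illustrative; the physics is the abstract's power balance):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brightness_temperature(p_source, a_source, a_wall, a_holes, albedo_w):
    """Steady-state radiation brightness temperature T_R [K] from the
    simple-hohlraum power balance

        P_S = (A_S + (1 - alpha_W) A_W + A_H) * sigma * T_R^4

    with the transient (4V sigma/c) dT_R^4/dt term dropped.
    Areas in m^2, source power in W.
    """
    a_eff = a_source + (1.0 - albedo_w) * a_wall + a_holes
    return (p_source / (SIGMA * a_eff)) ** 0.25
```

    As the wall albedo rises during the drive pulse, the effective loss area shrinks and T_R climbs for the same source power, which is the behaviour the time-dependent model tracks.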

  4. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop it, (2) provide rationale for the selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached WP. Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high-pH, actinide-laden source term with resident water, which lowers the pH sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce the mass of accumulation

  5. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Science.gov (United States)

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior, even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  6. Modeling and analytical simulation of a smouldering carbonaceous ...

    African Journals Online (AJOL)

    Modeling and analytical simulation of a smouldering carbonaceous rod. A.A. Mohammed, R.O. Olayiwola, M Eseyin, A.A. Wachin. Abstract. Modeling of pyrolysis and combustion in a smouldering fuel bed requires the solution of flow, heat and mass transfer through porous media. This paper presents an analytical method ...

  7. Finite element and analytical models for twisted and coiled actuator

    Science.gov (United States)

    Tang, Xintian; Liu, Yingxiang; Li, Kai; Chen, Weishan; Zhao, Jianguo

    2018-01-01

    The twisted and coiled actuator (TCA) is a class of recently discovered artificial muscle, usually made by twisting and coiling polymer fibers into spring-like structures. It has been widely studied since its discovery due to its impressive output characteristics and bright prospects. However, mathematical models describing its actuation in response to temperature are still not fully developed. It is known that the large tensile stroke results from the untwisting of the twisted fiber when heated; the recovered torque during untwisting is therefore a key parameter in the mathematical model. This paper presents a simplified model for the recovered torque of a TCA. The finite element method is used to evaluate the thermal stress of the twisted fiber. Based on the results of the finite element analyses, the constitutive equations of twisted fibers are simplified to develop an analytic model of the recovered torque. Finally, the model is used to predict the deformation of a TCA under varying temperatures and is validated against experimental results. This work enhances our understanding of the deformation mechanism of TCAs, paving the way for closed-loop position control.

  8. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    Science.gov (United States)

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and the two codes showed good agreement. Finally, the validity of the developed procedure was confirmed by a proficiency test in which the activities of various radionuclides were calculated. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Validation of an analytical methodology for the quantitative analysis of petroleum hydrocarbons in marine sediment samples

    Directory of Open Access Journals (Sweden)

    Eloy Yordad Companioni Damas

    2009-01-01

    Full Text Available This work describes the validation of an analytical procedure for the analysis of petroleum hydrocarbons in marine sediment samples. The proposed protocol is able to measure n-alkanes and polycyclic aromatic hydrocarbons (PAH) in samples at concentrations as low as 30 ng/g, with a precision better than 15% for most analytes. The extraction efficiency of fortified sediments varied from 65.1 to 105.6% and from 59.7 to 97.8% for n-alkanes and PAH in the ranges C16 - C32 and fluoranthene - benzo(a)pyrene, respectively. The analytical protocol was applied to determine petroleum hydrocarbons in sediments collected from a marine coastal zone.
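The figures of merit quoted above (recovery of fortified samples, precision as %CV) reduce to simple statistics over replicates. A minimal sketch with hypothetical replicate data, not the record's own measurements:

```python
from statistics import mean, stdev

def recovery_percent(measured_ng_g, spiked_ng_g):
    """Extraction efficiency of a fortified (spiked) sediment sample, in %."""
    return 100.0 * measured_ng_g / spiked_ng_g

def cv_percent(replicates):
    """Relative standard deviation; the protocol requires better than 15 %."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical replicate results (ng/g) for a sediment fortified at 100 ng/g
replicates = [92.0, 88.5, 95.1, 90.3, 93.7]
rec = recovery_percent(mean(replicates), 100.0)  # mean recovery, ~91.9 %
cv = cv_percent(replicates)                      # precision, ~2.9 %CV
```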

  10. Validation of the analytical method for sodium dichloroisocyanurate aimed at drinking water disinfection

    International Nuclear Information System (INIS)

    Martinez Alvarez, Luis Octavio; Alejo Cisneros, Pedro; Garcia Pereira, Reynaldo; Campos Valdez, Doraily

    2014-01-01

    Cuba has developed the first effervescent 3.5 mg sodium dichloroisocyanurate tablets as a non-therapeutic active principle. This ingredient releases certain amount of chlorine when dissolved into a litre of water and it can cause adequate disinfection of drinking water ready to be taken after 30 min. Developing and validating an analytical iodometric method applicable to the quality control of effervescent 3.5 mg sodium dichloroisocyanurate tablets

  11. Analytical validation of Gentian NGAL particle-enhanced turbidimetric immunoassay (PETIA)

    Directory of Open Access Journals (Sweden)

    Gian Luca Salvagno

    2017-08-01

    Full Text Available Objectives: This study was designed to validate the analytical performance of the new Gentian particle-enhanced turbidimetric immunoassay (PETIA) for measuring neutrophil gelatinase-associated lipocalin (NGAL) in serum samples. Design and methods: Analytical validation of the Gentian NGAL assay was carried out on a Roche Cobas c501 and was based on assessment of the limit of blank (LOB), limit of detection (LOD), functional sensitivity, imprecision, linearity and concordance with the BioPorto NGAL test. Results: The LOB and LOD of Gentian NGAL were found to be 3.8 ng/mL and 6.3 ng/mL, respectively. An analytical coefficient of variation (CV) of 20% corresponded to an NGAL value of 10 ng/mL. The intra-assay and inter-assay imprecision (CV) ranged from 0.4 to 5.2% and from 0.6 to 7.1%, respectively, and the total imprecision (CV) was 3.7%. Linearity was optimal at NGAL concentrations between 37 and 1420 ng/mL (r=1.00; p<0.001). An excellent correlation was observed between values measured with Gentian NGAL and BioPorto NGAL in 74 routine serum samples (r=0.993). The mean percentage bias of the Gentian assay versus the BioPorto assay was +3.1% (95% CI, +1.6% to +4.5%). Conclusions: These results show that Gentian NGAL may be a viable alternative to other commercial immunoassays for both routine and urgent assessment of serum NGAL. Keywords: Neutrophil gelatinase-associated lipocalin, NGAL, Analytical validation, Acute kidney injury
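The record reports LOB and LOD figures without stating the estimator. A common parametric approach in the style of CLSI EP17 is sketched below; this is an assumption for illustration, not necessarily the procedure used for the Gentian assay, and the replicate data are hypothetical:

```python
from statistics import mean, stdev

def limit_of_blank(blanks):
    """Parametric CLSI EP17-style estimate: LoB = mean(blank) + 1.645 * SD(blank)."""
    return mean(blanks) + 1.645 * stdev(blanks)

def limit_of_detection(lob, low_samples):
    """LoD = LoB + 1.645 * SD of replicates of a low-concentration sample."""
    return lob + 1.645 * stdev(low_samples)

# Hypothetical replicate measurements (ng/mL)
blanks = [1.1, 0.8, 1.4, 0.9, 1.2, 1.0]
lows = [6.0, 7.1, 6.4, 6.8, 5.9, 6.6]
lob = limit_of_blank(blanks)
lod = limit_of_detection(lob, lows)
```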

  12. Wetting boundary condition for the color-gradient lattice Boltzmann method: Validation with analytical and experimental data

    Science.gov (United States)

    Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.

    2018-06-01

    In the color-gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experiments in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
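The constant-contact-angle benchmark mentioned above follows the Lucas-Washburn law. A sketch of that analytical solution, with gravity neglected and illustrative fluid properties (not the parameters of the cited experiments):

```python
import math

def washburn_height(t_s, gamma_n_m, radius_m, theta_deg, mu_pa_s):
    """Lucas-Washburn imbibition length, gravity neglected:
    h(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))."""
    cos_theta = math.cos(math.radians(theta_deg))
    return math.sqrt(gamma_n_m * radius_m * cos_theta * t_s / (2.0 * mu_pa_s))

# Illustrative values: water (gamma = 0.072 N/m, mu = 1e-3 Pa s) in a
# 0.1 mm radius capillary with a constant 30 degree contact angle
h_1s = washburn_height(1.0, 0.072, 1.0e-4, 30.0, 1.0e-3)  # ~5.6 cm after 1 s
```

The square-root-of-time scaling is the signature to check a simulation against; a flow-rate-dependent contact angle breaks this scaling, which is why the dynamic-angle treatment is needed to match experiments.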

  13. A hidden analytic structure of the Rabi model

    International Nuclear Information System (INIS)

    Moroz, Alexander

    2014-01-01

    The Rabi model describes the simplest interaction between a cavity mode with frequency ω_c and a two-level system with resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in the normalized energy ε, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials φ_k(ε) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of φ_N(ε) of at least degree N = n + n_t. The value of n_t > 0, which increases slowly with n, depends on the required precision; for instance, n_t ≃ 26 for n = 1000 and dimensionless interaction constant κ = 0.2 if double precision is required. Given that the sequence of the l-th zeros x_nl of the φ_n(ε) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1. -- Highlights: •A significantly simplified analytic solution of the Rabi model. •The spectrum is the lattice of discrete

  14. Analytical models for low-power rectenna design

    NARCIS (Netherlands)

    Akkermans, J.A.G.; Beurden, van M.C.; Doodeman, G.J.N.; Visser, H.J.

    2005-01-01

    The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed.

  15. Analytically solvable models of reaction-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Zemskov, E P; Kassner, K [Institut fuer Theoretische Physik, Otto-von-Guericke-Universitaet, Universitaetsplatz 2, 39106 Magdeburg (Germany)

    2004-05-01

    We consider a class of analytically solvable models of reaction-diffusion systems. An analytical treatment is possible because the nonlinear reaction term is approximated by a piecewise linear function. As particular examples we choose front and pulse solutions to illustrate the matching procedure in the one-dimensional case.

  16. Evaluation and analytical validation of a handheld digital refractometer for urine specific gravity measurement

    Directory of Open Access Journals (Sweden)

    Sara P. Wyness

    2016-08-01

    Full Text Available Objectives: Refractometers are commonly used to determine urine specific gravity (SG in the assessment of hydration status and urine specimen validity testing. Few comprehensive performance evaluations are available demonstrating refractometer capability from a clinical laboratory perspective. The objective of this study was therefore to conduct an analytical validation of a handheld digital refractometer used for human urine SG testing. Design and methods: A MISCO Palm Abbe™ refractometer was used for all experiments, including device familiarization, carryover, precision, accuracy, linearity, analytical sensitivity, evaluation of potential substances which contribute to SG (i.e. “interference”, and reference interval evaluation. A manual refractometer, urine osmometer, and a solute score (sum of urine chloride, creatinine, glucose, potassium, sodium, total protein, and urea nitrogen; all in mg/dL were used as comparative methods for accuracy assessment. Results: Significant carryover was not observed. A wash step was still included as good laboratory practice. Low imprecision (%CV, <0.01 was demonstrated using low and high QC material. Accuracy studies showed strong correlation to manual refractometry. Linear correlation was also demonstrated between SG, osmolality, and solute score. Linearity of Palm Abbe performance was verified with observed error of ≤0.1%. Increases in SG were observed with increasing concentrations of albumin, creatinine, glucose, hemoglobin, sodium chloride, and urea. Transference of a previously published urine SG reference interval of 1.0020–1.0300 was validated. Conclusions: The Palm Abbe digital refractometer was a fast, simple, and accurate way to measure urine SG. Analytical validity was confirmed by the present experiments. Keywords: Specific gravity, Osmolality, Digital refractometry, Hydration, Sports medicine, Urine drug testing, Urine adulteration

  17. Analytical modeling of masonry infilled steel frames

    International Nuclear Information System (INIS)

    Flanagan, R.D.; Jones, W.D.; Bennett, R.M.

    1991-01-01

    A comprehensive program is underway at the Oak Ridge Y-12 Plant to evaluate the seismic capacity of unreinforced hollow clay tile infilled steel frames. This program has three major parts. First, preliminary numerical analyses are conducted to predict behavior, initial cracking loads, ultimate capacity loads, and to identify important parameters. Second, in-situ and laboratory tests are performed to obtain constitutive parameters and confirm predicted behavior. Finally, the analytical techniques are refined based on experimental results. This paper summarizes the findings of the preliminary numerical analyses. A review of current analytical methods was conducted and a subset of these methods was applied to known experimental results. Parametric studies were used to find the sensitivity of the behavior to various parameters. Both in-plane and out-of-plane loads were examined. Two types of out-of-plane behavior were examined, the inertial forces resulting from the mass of the infill panel and the out-of-plane forces resulting from interstory drift. Cracking loads were estimated using linear elastic analysis and an elliptical failure criterion. Calculated natural frequencies were correlated with low amplitude vibration testing. Ultimate behavior under inertial loads was estimated using a modified yield line procedure accounting for membrane stresses. The initial stiffness and ultimate capacity under in-plane loadings were predicted using finite element analyses. Results were compared to experimental data and to failure loads obtained using plastic collapse theory

  18. Assessment model validity document FARF31

    International Nuclear Information System (INIS)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria

    2004-08-01

    -fractures with flowing water and rock with porosity accessible only by diffusion. The approach furthermore assumes that the properties within the two porosity domains are averaged, as is the transfer between the two domains. It is an important validation issue to verify that effective averaging of parameters can be performed and that suitable values can be derived. It can be shown that matrix interaction properties along a flow path can be integrated into an effective value, and if the matrix depth can be considered infinite, effective values may also be derived for the diffusion and sorption parameters. Thus, it is possible to derive effective parameters for sorbing radionuclides incorporating the total matrix effects along a flow path. This is strictly valid only for cases with no dispersion, but gives a good approximation as long as dispersion does not dominate the transport. FARF31 has been tested and compared with analytical solutions and other models and was found to correspond well within a wide range of input parameters. Support and documentation on how to use FARF31 are two important components for avoiding calculation mistakes and obtaining trustworthy results. The documentation describes handling and updates of the code. Test cases have been constructed which can be used to check updates and serve as templates. The development of the code is kept under source code control to fulfil quality assurance. The model is deemed to be well suited for performance assessments within the SKB framework

  19. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can degrade an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process that supports better decision making, providing a suitable tool to assist decision makers and managers in making the right decisions and selecting the most suitable vendor. The paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The model's five-step framework was designed after a thorough literature study. The proposed hybrid model is applied to a real-life case study to assess its effectiveness, and what-if analysis is used for model validation.
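The AHP half of the proposed hybrid can be sketched as Saaty's principal-eigenvector weighting of a pairwise comparison matrix. The criteria and judgment values below are hypothetical, not taken from the paper's case study:

```python
import numpy as np

# Saaty's Random Index for matrices of order n (used in the consistency ratio)
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio CR = CI / RI."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = int(np.argmax(eigvals.real))          # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

# Hypothetical 3-criterion judgment matrix (e.g. cost vs quality vs delivery)
m = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, cr = ahp_priorities(m)  # CR < 0.1 indicates acceptably consistent judgments
```

In the hybrid model the SEM side would supply or confirm the criteria; AHP then ranks the candidate vendors against those weighted criteria.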

  20. Analytical techniques and method validation for the measurement of selected semivolatile and nonvolatile organofluorochemicals in air.

    Science.gov (United States)

    Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M

    2004-09-01

    The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and of total fluorine in ambient air samples. Because most of these fluorochemicals can be biphasic when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion-selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute for Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06-6 microg for each of 12 compounds analyzed by LC/MS and 0.3-30 microg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001-0.1 mg/m3 in general for LC/MS analytes and 0.005-0.5 mg/m3 for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m3 for many analytes, one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m3). 
Total fluorine results may be used
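As a check on the quantitation ranges quoted above, the mass spiked per tube converts to an airborne concentration by dividing by the sampled air volume (1 µg/L is identical to 1 mg/m³):

```python
def air_concentration_mg_m3(mass_ug, air_volume_l):
    """Concentration from mass collected on the sampler and sampled air volume.
    1 ug/L == 1 mg/m^3, so no extra unit factor is needed."""
    return mass_ug / air_volume_l

# Spike range 0.06-6 ug per tube with a 60 L air sample (LC/MS analytes)
low = air_concentration_mg_m3(0.06, 60.0)   # -> 0.001 mg/m^3
high = air_concentration_mg_m3(6.0, 60.0)   # -> 0.1 mg/m^3, i.e. the EG level
```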

  1. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving the absorption coefficients of optically active constituents in turbid coastal waters, and to further improve on it with a proposed semi-analytical model (SAA). Unlike the QAA procedure, in which ap(531) and ag(531) are derived semi-analytically from empirical retrievals of a(531) and a(551), the SAA derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model outperforms the QAA model in absorption retrieval: using the SAA model to retrieve the absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating the absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  2. Experimental Validation of a Permeability Model for Enrichment Membranes

    International Nuclear Information System (INIS)

    Orellano, Pablo; Brasnarof, Daniel; Florido Pablo

    2003-01-01

    An experimental loop with a real-scale diffuser, in a single enrichment-stage configuration, was operated with air at different process conditions in order to characterize the membrane permeability. Using these experimental data, an analytical model based on geometry and morphology was validated. It is concluded that a new set of independent measurements, i.e. enrichment, is necessary in order to fully characterize diffusers, because their internal parameters are not univocally determined by permeability data alone

  3. INCAS: an analytical model to describe displacement cascades

    Energy Technology Data Exchange (ETDEWEB)

    Jumel, Stephanie E-mail: stephanie.jumel@edf.fr; Claude Van-Duysen, Jean E-mail: jean-claude.van-duysen@edf.fr

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are consistent with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  4. INCAS: an analytical model to describe displacement cascades

    Science.gov (United States)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are consistent with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  5. INCAS: an analytical model to describe displacement cascades

    International Nuclear Information System (INIS)

    Jumel, Stephanie; Claude Van-Duysen, Jean

    2004-01-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are consistent with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory

  6. Validation of analytical method for quality control of B12 Vitamin-10 000 injection

    International Nuclear Information System (INIS)

    Botet Garcia, Martha; Garcia Penna, Caridad Margarita; Troche Concepcion, Yenilen; Cannizares Arencibia, Yanara; Moreno Correoso, Barbara

    2009-01-01

    The analytical method reported by the US Pharmacopeia was validated for quality control of injectable B12 vitamin (10 000 U) by UV spectrophotometry, a simpler, low-cost method that allows quality control of the finished product. The calibration curve was graphed over the 60 to 140% interval, where it was linear with a correlation coefficient of 0.9999; the statistical tests for intercept and slope were non-significant. Recovery was 99.7% over the studied concentration interval, where the Cochran (G) and Student (t) tests were likewise non-significant. The variation coefficient in the repeatability study was 0.59% for the 6 assayed replicates, whereas in the intermediate precision analysis the Fisher and Student tests were not significant. The analytical method was linear, precise, specific and exact in the studied concentration interval
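The linearity check described above amounts to an ordinary least-squares fit of instrument response versus concentration over 60-140% of nominal, with the correlation coefficient as the acceptance metric. A sketch with hypothetical absorbance readings (the method's actual responses are not given in the record):

```python
def linear_fit(x, y):
    """Ordinary least-squares y = a + b*x plus Pearson r, for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return intercept, slope, r

# Hypothetical absorbance readings at 60-140 % of the nominal concentration
conc = [60.0, 80.0, 100.0, 120.0, 140.0]
absorbance = [0.301, 0.400, 0.502, 0.599, 0.701]
a, b, r = linear_fit(conc, absorbance)  # r close to 1 indicates acceptable linearity
```

The significance tests mentioned in the abstract then check that the intercept is statistically indistinguishable from zero and the slope from its nominal value.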

  7. Analytical method validation for quality control and the study of the 50 mg Propylthiouracil stability

    International Nuclear Information System (INIS)

    Valdes Bendoyro, Maria Olga; Garcia Penna, Caridad Margarita; Fernandez, Juan Lugones; Garcia Borges, Lisandra; Martinez Espinosa, Vivian

    2010-01-01

    A high-performance liquid chromatography analytical method was developed and validated for the quality control and stability studies of 50 mg Propylthiouracil tablets. The method is based on separation of the active principle on a Lichrospher 100 RP-18 (5 μm, 250 x 4 mm) column with UV detection at 272 nm, using a mobile phase composed of a degassed mixture of 0.025 M monobasic potassium phosphate buffer (pH 4.6) and acetonitrile in an 80:20 ratio at a flow rate of 0.5 mL/min. The analytical method was linear, precise, specific and exact in the studied concentration interval

  8. Analytical and numerical modeling of sandbanks dynamics

    NARCIS (Netherlands)

    Idier, Deborah; Astruc, Dominique

    2003-01-01

    Linear and nonlinear behavior of large-scale underwater bedform patterns like sandbanks are studied using linear stability analysis and numerical modeling. The model is based on depth-integrated hydrodynamics equations with a quadratic bottom friction law and a bed load sediment transport model

  9. An analytical model for beaconing in VANETs

    NARCIS (Netherlands)

    van Eenennaam, Martijn; Remke, Anne Katharina Ingrid; Heijenk, Geert

    2012-01-01

    IEEE 802.11 CSMA/CA is generally considered to be well-understood, and many detailed models are available. However, most models focus on Unicast in small-scale W-LAN scenarios. When modelling beaconing in VANETs, the Broadcast nature and the (potentially) large number of nodes cause phenomena

  10. Analytic Models for Sunlight Charging of a Rapidly Spinning Satellite

    National Research Council Canada - National Science Library

    Tautz, Maurice

    2003-01-01

    ... photoelectrons can be blocked by local potential barriers. In this report, we discuss two analytic models for sunlight charging of a rapidly spinning spherical satellite, both of which are based on blocked photoelectron currents...

  11. Analytical validation of operator actions in case of primary to secondary leakage for VVER-1000/V320

    Energy Technology Data Exchange (ETDEWEB)

    Andreeva, M., E-mail: m_andreeva@inrne.bas.bg; Groudev, P., E-mail: pavlinpg@inrne.bas.bg; Pavlova, M., E-mail: pavlova@inrne.bas.bg

    2015-12-15

    Highlights: • We validate operator actions in case of primary-to-secondary leakage. • We perform four scenarios related to the SGTR accident for VVER-1000/V320. • The reference power plant for the analyses is Unit 6 at Kozloduy NPP. • The RELAP5/MOD3.2 computer code is used in performing the analyses. • The analyses confirm the effectiveness of operator actions during PRISE. - Abstract: This paper presents the results of analytical validation of operator actions in case of “Steam Generator Tube Rupture” (SGTR) for VVER-1000/V320 units at Kozloduy Nuclear Power Plant (KNPP), performed during the development of Symptom Based Emergency Operating Procedures (SB EOPs) for this plant. The purpose of the analyses is to demonstrate the ability to terminate primary-to-secondary leakage and to indicate an effective strategy for preventing secondary leakage to the environment, thereby preventing radiological release. Following depressurization and cooldown of the reactor coolant system (RCS) with isolation of the affected steam generator (SG), these analyses validate options for post-SGTR cooldown by: • backup filling of the ruptured SG; • using the letdown system in the affected SG; and • opening the Fast Acting Isolation Valve (FAIV) and using the Steam Dump Facility to the Condenser (BRU-K). The results of the thermal-hydraulic analyses have been used to assist KNPP specialists in the analytical validation of EOPs. The RELAP5/MOD3.2 computer code has been used for the analyses with a VVER-1000 Nuclear Power Plant (NPP) model. A model of VVER-1000 based on Unit 6 of Kozloduy NPP has been developed for the thermal-hydraulics code RELAP5/MOD3.2 at the Institute for Nuclear Research and Nuclear Energy – Bulgarian Academy of Sciences (INRNE-BAS). This paper was made possible through the participation of leading specialists from KNPP.

  12. Analytical Models Development of Compact Monopole Vortex Flows

    Directory of Open Access Journals (Sweden)

    Pavlo V. Lukianov

    2017-09-01

    Conclusions. The article contains a series of recent analytical models that describe both the laminar and turbulent dynamics of monopole vortex flows, which have not previously been covered in traditional publications. Further research should be directed toward analytical models of coherent vortical structures in flows of viscous fluids, particularly near curved surfaces, where the “wall law” known in hydromechanics breaks down and heat- and mass-transfer anomalies occur.

  13. Analytical solution of dispersion relations for the nuclear optical model

    Energy Technology Data Exchange (ETDEWEB)

    VanderKam, J.M. [Center for Communications Research, Thanet Road, Princeton, NJ 08540 (United States); Weisel, G.J. [Triangle Universities Nuclear Laboratory, and Duke University, Box 90308, Durham, NC 27708-0308 (United States); Penn State Altoona, 3000 Ivyside Park, Altoona, PA 16601-3760 (United States); Tornow, W. [Triangle Universities Nuclear Laboratory, and Duke University, Box 90308, Durham, NC 27708-0308 (United States)

    2000-12-01

    Analytical solutions of dispersion integral relations, linking the real and imaginary parts of the nuclear optical model, have been derived. These are displayed for some widely used forms of the volume- and surface-absorptive nuclear potentials. When the analytical solutions are incorporated into the optical-model search code GENOA, replacing a numerical integration, the code runs three and a half to seven times faster, greatly aiding the analysis of direct-reaction, elastic scattering data. (author)

  14. Analytical model for local scour prediction around hydrokinetic turbine foundations

    Science.gov (United States)

    Musa, M.; Heisel, M.; Hill, C.; Guala, M.

    2017-12-01

    Marine and Hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents, mostly from tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi

  15. Analytical validation of a new point-of-care assay for serum amyloid A in horses.

    Science.gov (United States)

    Schwartz, D; Pusterla, N; Jacobsen, S; Christopher, M M

    2018-01-17

    Serum amyloid A (SAA) is a major acute phase protein in horses. A new point-of-care (POC) test for SAA (Stablelab) is available, but studies evaluating its analytical accuracy are lacking. To evaluate the analytical performance of the SAA POC test by 1) determining linearity and precision, 2) comparing results in whole blood with those in serum or plasma, and 3) comparing POC results with those obtained using a previously validated turbidimetric immunoassay (TIA). Assay validation. Analytical validation of the POC test was done in accordance with American Society of Veterinary Clinical Pathology guidelines using residual equine serum/plasma and whole blood samples from the Clinical Pathology Laboratory at the University of California-Davis. A TIA was used as the reference method. We also evaluated the effect of haematocrit (HCT). The POC test was linear for SAA concentrations of up to at least 1000 μg/mL (r = 0.991). Intra-assay CVs were 13, 18 and 15% at high (782 μg/mL), intermediate (116 μg/mL) and low (64 μg/mL) concentrations. Inter-assay (inter-batch) CVs were 45, 14 and 15% at high (1372 μg/mL), intermediate (140 μg/mL) and low (56 μg/mL) concentrations. SAA results in whole blood were significantly lower than those in serum/plasma (P = 0.0002), but were positively correlated (r = 0.908) and not affected by HCT (P = 0.261); proportional negative bias was observed in samples with SAA>500 μg/mL. The difference between methods exceeded the 95% confidence interval of the combined imprecision of both methods (15%). Analytical validation could not be performed in whole blood, the sample most likely to be used stall side. The POC test has acceptable accuracy and precision in equine serum/plasma with SAA concentrations of up to at least 1000 μg/mL. Low inter-batch precision at high concentrations may affect serial measurements, and the use of the same test batch and sample type (serum/plasma or whole blood) is recommended. Comparison of results between the
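The method-comparison logic described in this record (bias of the point-of-care test against the reference TIA, checked against the stated combined imprecision of both methods) can be sketched as follows; all paired values are invented for illustration:

```python
import statistics

# Illustrative paired SAA results (ug/mL): POC device vs. reference TIA.
poc = [60.0, 118.0, 410.0, 690.0, 950.0]
tia = [64.0, 116.0, 430.0, 760.0, 1050.0]

# Mean percent difference (bias) of the POC test relative to the reference
pct_diff = [(p - t) / t * 100 for p, t in zip(poc, tia)]
bias = statistics.mean(pct_diff)

# Acceptance check of the kind used in the abstract: is the bias within
# the combined imprecision of both methods (stated there as 15%)?
combined_imprecision = 15.0
acceptable = abs(bias) <= combined_imprecision
print(f"bias = {bias:.1f}%, within limit: {acceptable}")
```

With these invented numbers the negative bias mirrors the proportional negative bias the study reports for whole blood; the real study found the between-method difference exceeded the 15% limit.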

  16. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune

    1995-01-01

    An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations...... are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods...... for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...

  17. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, S.; Brincker, Rune

    An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load displacement-curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside...... the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-lineary on local elongation corresponding to a linear softening relation for the fictitious crack. For different beam size results from the analytical model...... is compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load...

  18. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book offers a modeling technique based on Lagrange's energy method and includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  19. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    Science.gov (United States)

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validate FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. The results demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF. It also demonstrates that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs, and represents distinctive groundwork able to sustain future refinements.

  20. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading
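As a toy illustration of an aggregate multivariate validation metric of the kind this record describes, one can compare a model's predicted response vector against repeated experimental observations using the squared Mahalanobis distance with a chi-square acceptance threshold. The data, model output and threshold below are all invented, not taken from the paper:

```python
import numpy as np

# Toy multivariate validation: compare a model's predicted response vector
# with repeated experimental observations via the Mahalanobis distance.
rng = np.random.default_rng(0)
obs = rng.normal(loc=[1.0, 2.0, 3.0], scale=0.1, size=(30, 3))  # experiments
pred = np.array([1.02, 1.97, 3.05])                             # model output

mean = obs.mean(axis=0)
cov = np.cov(obs, rowvar=False)
d = pred - mean
d2 = float(d @ np.linalg.inv(cov) @ d)   # squared Mahalanobis distance

# Under the hypothesis that the model is valid, d2 is roughly chi-square
# with 3 degrees of freedom; compare against the 95th percentile (~7.81).
valid = d2 < 7.81
print(f"d^2 = {d2:.2f}, model accepted: {valid}")
```

A single aggregate distance like this avoids the conflicting inferences that multiple univariate comparisons can produce, which is the motivation the abstract gives.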

  1. Optimization of turning process through the analytic flank wear modelling

    Science.gov (United States)

    Del Prete, A.; Franchi, R.; De Lorenzis, D.

    2018-05-01

    In the present work, the approach used to optimize the process capabilities for machining Oil&Gas components is described. These components are machined by turning stainless steel castings. For this purpose, a proper Design Of Experiments (DOE) plan was designed and executed; as output of the experimentation, data about tool wear were collected. The DOE was designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut was held constant. Wear data were obtained by observing the tool flank wear under an optical microscope, with data acquisition carried out at regular intervals of working time. Through statistical and regression analysis, analytical models of the flank wear and the tool life were obtained. The optimization approach used is a multi-objective optimization, which minimizes the production time and the number of cutting tools used, under a constraint on a defined flank wear level. The technique used to solve the optimization problem is Multi-Objective Particle Swarm Optimization (MOPS). The optimization results, validated by a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.
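The record does not give the fitted wear model itself; a common choice for analytic tool-life modeling of this kind is the Taylor equation V·T^n = C, fitted by linear regression in log space. A hedged sketch with invented data:

```python
import math

# Hypothetical tool-life data: cutting speed V (m/min) vs. measured tool
# life T (min) at fixed feed and depth of cut (values are illustrative).
V = [150.0, 200.0, 250.0, 300.0]
T = [40.0, 18.0, 10.0, 6.0]

# Fit the Taylor model V * T**n = C by regression in log space:
#   log V = log C - n * log T
x = [math.log(t) for t in T]
y = [math.log(v) for v in V]
n_pts = len(x)
mx = sum(x) / n_pts
my = sum(y) / n_pts
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
n_exp = -slope                      # Taylor exponent n
C = math.exp(my + n_exp * mx)       # Taylor constant C

def tool_life(v):
    """Predicted tool life (min) at cutting speed v (m/min)."""
    return (C / v) ** (1.0 / n_exp)

print(f"n = {n_exp:.3f}, C = {C:.1f}, T(220) = {tool_life(220.0):.1f} min")
```

A model like this is what a multi-objective optimizer such as MOPS would query when trading production time against the number of tools consumed under a flank-wear constraint.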

  2. Analytic model of heat deposition in spallation neutron target

    International Nuclear Information System (INIS)

    Findlay, D.J.S.

    2015-01-01

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.
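The paper's own model is not reproduced in this record, but a spreadsheet-style "sanity check" of this flavor can be as simple as exponential attenuation of the primary beam across target slabs. All numbers below are illustrative assumptions, not values from the paper:

```python
import math

# Toy heat-deposition estimate: power removed from the primary proton beam
# per slab of target, assuming simple exponential attenuation with an
# assumed nuclear interaction length lam_cm (all numbers illustrative).
beam_power_kw = 160.0   # incident beam power
lam_cm = 20.0           # assumed interaction length in the target material
depth_cm = 60.0
n_slabs = 6

dz = depth_cm / n_slabs
deposited = []
for i in range(n_slabs):
    z0, z1 = i * dz, (i + 1) * dz
    # power removed from the primary beam in the slab [z0, z1]
    dp = beam_power_kw * (math.exp(-z0 / lam_cm) - math.exp(-z1 / lam_cm))
    deposited.append(dp)

total = sum(deposited)
print([round(p, 1) for p in deposited], round(total, 1))
```

The slab totals sum exactly to the power attenuated over the full depth, which is the kind of closed-form cross-check such a model offers against a Monte Carlo result.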

  3. Analytic model of heat deposition in spallation neutron target

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, D.J.S.

    2015-12-11

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  4. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to longer slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right-cylinder tank and a right cylinder with a single ring baffle.

  5. Meta-analytic structural equation modelling

    CERN Document Server

    Jak, Suzanne

    2015-01-01

    This book explains how to employ MASEM, the combination of meta-analysis (MA) and structural equation modelling (SEM). It shows how, by using MASEM, a single model can be tested to explain the relationships between a set of variables in several studies. This book gives an introduction to MASEM, with a focus on the state-of-the-art approach: the two-stage approach of Cheung and Cheung & Chan. Both the fixed and the random approach to MASEM are illustrated with two applications to real data. All steps that have to be taken to perform the analyses are discussed extensively. All data and syntax files are available online, so that readers can replicate all analyses. By using SEM for meta-analysis, this book shows how to benefit from all available information from all available studies, even if few or none of the studies report about all relationships that feature in the full model of interest.

  6. Analytical modeling of inverted annular film boiling

    International Nuclear Information System (INIS)

    Analytis, G.T.; Yadigaroglu, G.

    1987-01-01

    By employing a two-fluid formulation similar to the one used in the most recent LWR accident analysis codes, a model for the Inverted Annular Film Boiling region is developed. The conservation equations, together with appropriate closure relations are solved numerically. Successful comparisons are made between model predictions and heat transfer coefficient distributions measured in a series of single-tube reflooding experiments. Generally, the model predicts correctly the dependence of the heat transfer coefficient on liquid subcooling and flow rate; for some cases, however, heat transfer is still under-predicted, and an enhancement of the heat exchange from the liquid-vapour interface to the bulk of the liquid is required. The importance of the initial conditions at the quench front is also discussed. (orig.)

  7. Analytical modeling of inverted annular film boiling

    International Nuclear Information System (INIS)

    Analytis, G.T.; Yadigaroglu, G.

    1985-01-01

    By employing a two-fluid formulation similar to the one used in the most recent LWR accident analysis codes, a model for the Inverted Annular Film Boiling region is developed. The conservation equations, together with appropriate constitutive relations, are solved numerically, and successful comparisons are made between model predictions and heat transfer coefficient distributions measured in a series of single-tube reflooding experiments. The model generally predicts correctly the dependence of the heat transfer coefficient on liquid subcooling and flow rate, though for some cases heat transfer is still under-predicted, and an enhancement of the heat exchange from the liquid-vapour interface to the bulk of the liquid is required

  8. Analytical study of anisotropic compact star models

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, B.V. [Bulgarian Academy of Science, Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria)

    2017-11-15

    A simple classification is given of the anisotropic relativistic star models, resembling the one of charged isotropic solutions. On the ground of this database, and taking into account the conditions for physically realistic star models, a method is proposed for generating all such solutions. It is based on the energy density and the radial pressure as seeding functions. Numerous relations between the realistic conditions are found and the need for a graphic proof is reduced just to one pair of inequalities. This general formalism is illustrated with an example of a class of solutions with linear equation of state and simple energy density. It is found that the solutions depend on three free constants and concrete examples are given. Some other popular models are studied with the same method. (orig.)

  9. Analytical validation of an ultra low-cost mobile phone microplate reader for infectious disease testing.

    Science.gov (United States)

    Wang, Li-Ju; Naudé, Nicole; Demissie, Misganaw; Crivaro, Anne; Kamoun, Malek; Wang, Ping; Li, Lei

    2018-07-01

    Most mobile health (mHealth) diagnostic devices for laboratory tests only analyze one sample at a time, which is not suitable for large-volume serology testing, especially in low-resource settings with a shortage of health professionals. In this study, we developed an ultra-low-cost, clinically accurate mobile phone microplate reader (mReader), and clinically validated this optical device for 12 infectious disease tests. The mReader optically reads 96 samples on a microplate at one time. 771 de-identified patient samples were tested in 12 serology assays for bacterial/viral infections. The mReader and the clinical instrument blindly read and analyzed all tests in parallel. The analytical accuracy and the diagnostic performance of the mReader were evaluated across the clinical reportable categories by comparison with clinical laboratory testing results. The mReader exhibited 97.59-99.90% analytical accuracy. We envision that the mReader can benefit underserved areas/populations and low-resource settings in rural clinics/hospitals at a low cost (~$50 USD) with clinical-level analytical quality. It has the potential to improve health access, speed up healthcare delivery, and reduce health disparities and education disparities by providing access to a low-cost spectrophotometer. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Analytical model for cable tray fires

    International Nuclear Information System (INIS)

    Clarke, R.K.

    1975-09-01

    A model for cable tray fires based on buoyant plume theory is presented. Using the model in conjunction with empirical data on size of natural fires and burning rate of cellulosic materials, estimates are made of the heat flux as a function of vertical and horizontal distance from a tray fire. Both local fires and fires extending along a significant length of tray are considered. For the particular set of fire parameters assumed in the calculations, the current tray separation criteria of five feet vertical and three feet horizontal are found to be marginal for local fires and too small to prevent fire spread for extended tray fires. 8 references. (auth)

  11. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  12. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Model Driven Engineering (MDE) is an emerging approach in software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation is sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation itself (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  13. Analytical model of impedance in elliptical beam pipes

    CERN Document Server

    Pesah, Arthur Chalom

    2017-01-01

    Beam instabilities are among the main limitations in building higher-intensity accelerators. A good impedance model for every accelerator is necessary in order to build components that minimize the probability of instabilities caused by beam-environment interactions, and to understand which piece to change when the intensity is increased. Most accelerator components have their impedance simulated with the finite element method (using software such as CST Studio), but simple components such as circular or flat pipes are modeled analytically, with decreased computation time and increased precision compared to their simulated models. Elliptical beam pipes, while being a simple component present in some accelerators, still lack a good analytical model valid over the whole range of velocities and frequencies. In this report, we present a general framework for studying the impedance of elliptical pipes analytically. We developed a model for both longitudinal and transverse impedance, first in the case of...

  14. Unjamming in models with analytic pairwise potentials

    NARCIS (Netherlands)

    Kooij, S.; Lerner, E.

    Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the

  15. Environmental concentrations of engineered nanomaterials: Review of modeling and analytical studies

    International Nuclear Information System (INIS)

    Gottschalk, Fadri; Sun, TianYin; Nowack, Bernd

    2013-01-01

    Scientific consensus predicts that the worldwide use of engineered nanomaterials (ENM) leads to their release into the environment. We reviewed the available literature concerning environmental concentrations of six ENMs (TiO2, ZnO, Ag, fullerenes, CNT and CeO2) in surface waters, wastewater treatment plant effluents, biosolids, sediments, soils and air. Presently, a dozen modeling studies provide environmental concentrations for ENM, and a handful of analytical works can be used as a basis for a preliminary validation. There are still major knowledge gaps (e.g. on ENM production, application and release) that affect the modeled values, but overall an agreement on the order of magnitude of the environmental concentrations can be reached. True validation of the modeled values is difficult because trace analytical methods that are specific for ENM detection and quantification are not available. The modeled and measured results are not always comparable due to the different forms and sizes of particles that these two approaches target. -- Highlights: • Modeled environmental concentrations of engineered nanomaterials are reviewed. • Measured environmental concentrations of engineered nanomaterials are reviewed. • Possible validation of modeled data by measurements is critically evaluated. • Different approaches in modeling and measurement methods complicate validation. -- Modeled and measured environmental concentrations of engineered nanomaterials are reviewed and critically discussed

  16. Analytical and numerical modeling for flexible pipes

    Science.gov (United States)

    Wang, Wei; Chen, Geng

    2011-12-01

    The unbonded flexible pipe of eight layers, in which all the layers except the carcass layer are assumed to have isotropic properties, has been analyzed. The carcass layer, by contrast, shows orthotropic characteristics. The effective elastic moduli of the carcass layer have been developed, accounting for the influence of deformation on stiffness. With these effective elastic moduli, the structure can be properly analyzed. The relative movements of tendons and relative displacements of wires in the helical armour layer have also been investigated. A three-dimensional nonlinear finite element model has been presented to predict the response of flexible pipes under axial force and torque. Further, interlayer friction and contact have been considered. A comparison between the finite element model and experimental results obtained in the literature has been given and discussed, which may provide practical and technical support for the application of unbonded flexible pipes.

  17. Haskell financial data modeling and predictive analytics

    CERN Document Server

    Ryzhov, Pavel

    2013-01-01

    This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner.This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high frequency finance is essential.

  18. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. 16 refs

  19. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include establishing the validity of the postulates embodied in the model and testing the assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  20. Unjamming in models with analytic pairwise potentials

    Science.gov (United States)

    Kooij, Stefan; Lerner, Edan

    2017-06-01

    Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the coordination number z, which has an unambiguous definition in these cases. Pairwise potentials without a sharp cutoff in the interaction range have not been studied in this context, but should in fact be considered to understand the relevance of the unjamming phenomenology in systems where such a cutoff is not present. In this work we explore two systems with such interactions: an inverse power law and an exponentially decaying pairwise potential, with the control parameters being the exponent (of the inverse power law) for the former and the number density for the latter. Both systems are shown to exhibit the characteristic features of the unjamming transition, among which are the vanishing of the shear-to-bulk modulus ratio and the emergence of an excess of low-frequency vibrational modes. We establish a relation between the pressure-to-bulk modulus ratio and the distance to unjamming in each of our model systems. This allows us to predict the dependence of other key observables on the distance to unjamming. Our results provide the means for a quantitative estimation of the proximity of generic glass-forming models to the unjamming transition in the absence of a clear-cut definition of the coordination number and highlight the general irrelevance of nonaffine contributions to the bulk modulus.

  1. Analytical model for fast-shock ignition

    International Nuclear Information System (INIS)

    Ghasemi, S. A.; Farahbod, A. H.; Sobhanian, S.

    2014-01-01

    A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, and considerations of fast electron penetration into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ∼4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate the performance of the FSI approach more realistically, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations for different values of fast ignitor laser wavelength and coupling efficiency. The general advantage of fast-shock ignition in comparison with shock ignition can be estimated to be better than 1.3, and the best results are obtained for a fuel mass around 1.5 mg, a fast ignitor laser wavelength of ∼0.3 micron and a shock ignitor energy weight factor of about 0.25

  2. Short-Term Predictive Validity of Cluster Analytic and Dimensional Classification of Child Behavioral Adjustment in School

    Science.gov (United States)

    Kim, Sangwon; Kamphaus, Randy W.; Baker, Jean A.

    2006-01-01

    A constructive debate over the classification of child psychopathology can be stimulated by investigating the validity of different classification approaches. We examined and compared the short-term predictive validity of cluster analytic and dimensional classifications of child behavioral adjustment in school using the Behavior Assessment System…

  3. Validation of multivariate classification methods using analytical fingerprints – concept and case study on organic feed for laying hens

    NARCIS (Netherlands)

    Alewijn, Martin; van der Voet, Hilko; van Ruth, Saskia

    2016-01-01

    Multivariate classification methods based on analytical fingerprints have found many applications in the food and feed area, but practical applications are still scarce due to a lack of a generally accepted validation procedure. This paper proposes a new approach for validation of this type of

  4. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F., E-mail: fabien.formosa@univ-savoie.f [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France)

    2011-05-15

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA-RE1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on nonlinear dissipative phenomena such as pressure drop losses within heat exchangers, which are dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of the FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.

  5. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    International Nuclear Information System (INIS)

    Formosa, F.

    2011-01-01

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA-RE1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on nonlinear dissipative phenomena such as pressure drop losses within heat exchangers, which are dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of the FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.
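    The coupled iterative strategy described above can be illustrated with a minimal fixed-point sketch: a thermodynamic submodel and a dynamic submodel exchange a thermal variable and a kinematic variable until both agree. The two linear surrogate functions and all numbers below are invented for illustration and have no relation to the RE-1000 data.

```python
# A minimal sketch of a coupled iterative strategy: a thermodynamic submodel
# maps the piston stroke to a hot-space temperature, and a dynamic submodel
# maps that temperature back to a steady-state stroke. Both surrogates are
# invented linear stand-ins chosen so the fixed-point iteration converges.

def thermo_model(stroke_mm):
    return 600.0 - 4.0 * stroke_mm      # mean hot-space temperature [K] (invented)

def dynamic_model(temp_k):
    return 5.0 + 0.02 * temp_k          # steady-state stroke [mm] (invented)

stroke = 10.0                           # initial guess
for _ in range(50):
    new_stroke = dynamic_model(thermo_model(stroke))
    if abs(new_stroke - stroke) < 1e-9: # both submodels agree: global solution
        break
    stroke = new_stroke

print(round(stroke, 3))  # → 15.741
```

    Because the composed map is a contraction here, the iteration settles on the self-consistent operating point after a handful of passes; a real FPSE simulation iterates in the same spirit but with full nonlinear submodels.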

  6. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    Science.gov (United States)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
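    As a rough illustration of Method 2, the Gaussian calculation behind the NORMINV approach can be sketched with Python's standard-library `statistics.NormalDist` (the counterpart of Excel's NORMINV/NORMSDIST). The ±1.96 SD reference limits and the normalization of bias and imprecision to biological SD units are simplifying assumptions of this sketch, not the paper's exact formulae.

```python
from statistics import NormalDist

def fraction_outside(bias, imprecision, z=1.96):
    """Fraction of a Gaussian reference population falling outside common
    reference limits at mean +/- z biological SDs, when the assay adds a
    bias (in biological SD units) and imprecision (ratio of analytical SD
    to biological SD). Illustrative assumptions, not the paper's formulae."""
    total_sd = (1.0 + imprecision ** 2) ** 0.5   # combined biological + analytical SD
    nd = NormalDist()
    above = 1.0 - nd.cdf((z - bias) / total_sd)  # fraction above the upper limit
    below = nd.cdf((-z - bias) / total_sd)       # fraction below the lower limit
    return above + below

# A perfect assay leaves the nominal ~5% outside mean +/- 1.96 SD:
print(round(fraction_outside(0.0, 0.0), 3))  # → 0.05
# Any bias or added imprecision pushes more results outside the limits:
print(fraction_outside(0.25, 0.5) > 0.05)    # → True
```

    Scanning combinations of bias and imprecision for which this fraction stays at a fixed target is what produces the trade-off curves of maximum allowable bias versus imprecision.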

  7. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  8. Development and Validation Dissolution Analytical Method of Nimesulide beta-Cyclodextrin 400 mg Tablet

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Carvalho Pereira

    2016-10-01

    Nimesulide (N-(4-nitro-2-phenoxyphenyl)methanesulfonamide) belongs to the class of non-steroidal anti-inflammatory drugs (NSAIDs) and to category II of the Biopharmaceutical Classification System. The complexation of nimesulide with beta-cyclodextrin is a pharmacological strategy to increase the solubility of the drug. The objective of this study was to develop and validate an analytical methodology for the dissolution of the nimesulide beta-cyclodextrin 400 mg tablet that meets the ANVISA guidelines for drug registration purposes. Once developed, the dissolution methodology was validated according to the parameters of RE no. 899/2003. During method development it was found that the appropriate duration of the dissolution test was 60 minutes, and that the most suitable dissolution medium and volume was 900 mL of an aqueous solution of sodium lauryl sulfate 1% (w/v). A rotation of 100 rpm with the paddle apparatus was the most appropriate to evaluate the dissolution of the drug. A spectrophotometric methodology was used to quantify the percentage of dissolved drug, with quantification at a wavelength of 390 nm. In the validation of the methodology, the parameters system suitability, specificity/selectivity, linearity, precision, accuracy and robustness were satisfactory and proved that the developed dissolution methodology is fit for its intended purpose. DOI: http://dx.doi.org/10.17807/orbital.v8i5.827

  9. Validation of analytical methods for the stability studies of naproxen suppositories for infant and adult use

    International Nuclear Information System (INIS)

    Rodriguez Hernandez, Yaslenis; Suarez Perez, Yania; Garcia Pulpeiro, Oscar

    2011-01-01

    Analytical validation studies were performed in this paper, with a view to using the methods in stability studies of future formulations of naproxen suppositories for children and adults. The most influential factors on naproxen stability were determined: the major degradation occurred in acid medium, in oxidative medium and by the action of light. A high-performance liquid chromatography-based method was evaluated, which proved adequate to quantify naproxen in suppositories and was selective against degradation products. The quantification limit was 3,480 μg, so it was valid for these studies. Additionally, the parameters specificity for stability, detection limit and quantification limit were evaluated for the direct semi-aqueous acid-base method, which was formerly validated for quality control and showed satisfactory results. Nevertheless, volumetric methods are not regarded as stability-indicating; therefore, this method will be used along with the chromatographic methods of choice, thin-layer chromatography and high-performance liquid chromatography, to determine the degradation products

  10. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  11. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
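    The closed-world flavour of SHACL validation can be illustrated with a minimal pure-Python sketch: a "shape" declares the allowed properties with cardinality and datatype constraints, and anything undeclared is a violation. The shape and records below are invented examples, not CIMI models; real validation would use an actual SHACL engine over RDF graphs.

```python
# Closed-world "shape" validation in the spirit of SHACL, in plain Python:
# a shape lists the allowed properties with (min, max) cardinality and a
# datatype; any property not declared in the shape is a violation (CWA).
# Shape and records are invented examples, not CIMI content.

def validate(record, shape):
    errors = []
    for prop, (min_card, max_card, dtype) in shape.items():
        values = record.get(prop, [])
        if not (min_card <= len(values) <= max_card):
            errors.append(f"{prop}: cardinality {len(values)} not in [{min_card}, {max_card}]")
        errors.extend(f"{prop}: {v!r} is not {dtype.__name__}"
                      for v in values if not isinstance(v, dtype))
    errors.extend(f"{prop}: property not allowed by shape"
                  for prop in record if prop not in shape)  # closed world
    return errors

shape = {"code": (1, 1, str), "value": (1, 1, float)}
ok_record = {"code": ["8480-6"], "value": [120.0]}
bad_record = {"code": [], "value": [120.0, 80.0], "note": ["extra"]}

print(validate(ok_record, shape))        # → []
print(len(validate(bad_record, shape)))  # → 3
```

    Under OWL's open-world semantics the missing `code` and the extra `note` would not be flagged, which is why the abstract argues that a closed-world formalism like SHACL suits clinical model validation better.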

  12. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  13. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H II REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H II regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.
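    For orientation, the dust-free limit of the second balance equation (recombinations versus the ionizing photon rate) reduces to the classical Strömgren radius, which is easy to evaluate; the stellar and gas parameters below are generic illustrative values, not those of the paper.

```python
import math

ALPHA_B = 2.6e-13    # case-B recombination coefficient at ~1e4 K [cm^3 s^-1]
PC_IN_CM = 3.086e18

def stromgren_radius_pc(q_ion, n_h):
    """Dust-free Stromgren radius [pc] from balancing the ionizing photon
    rate q_ion [s^-1] against recombinations in uniform gas of density
    n_h [cm^-3]; dust absorption would shrink this radius."""
    r_cm = (3.0 * q_ion / (4.0 * math.pi * n_h ** 2 * ALPHA_B)) ** (1.0 / 3.0)
    return r_cm / PC_IN_CM

# An O star emitting ~1e49 ionizing photons/s in gas of 10 cm^-3:
print(round(stromgren_radius_pc(1e49, 10.0), 1))  # a radius of order 10 pc
```

    Adding the dust absorption term removes ionizing photons before they can balance recombinations, so the dusty solution is always smaller than this estimate.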

  14. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus the models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation with regard to radionuclide transport. 5 refs

  15. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    Analytical local model potentials for modeling the interaction in an atom reduce the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for the atoms Li through Lr, and the values of the four parameters are shell-independent and obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of the electrons are obtained by solving the radial Schrödinger equation with the new form of the potential function by Numerov's numerical method. The results show that the new form of the potential function is suitable for high-, medium- and low-Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present potential function. (atomic and molecular physics)

  16. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field strain distributions generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
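    A minimal sketch of the idea, assuming low-order 2D Fourier magnitudes as the descriptors (Zernike moments would be handled analogously): two synthetic 8x8 strain fields are each reduced to nine numbers, which can then be compared statistically. The fields themselves are invented for illustration.

```python
import cmath

def fourier_descriptors(field, orders=3):
    """Low-order 2D DFT magnitudes used as image descriptors: an N x M
    full-field map is reduced to orders*orders numbers."""
    n, m = len(field), len(field[0])
    desc = []
    for u in range(orders):
        for v in range(orders):
            c = sum(field[i][j] * cmath.exp(-2j * cmath.pi * (u * i / n + v * j / m))
                    for i in range(n) for j in range(m))
            desc.append(abs(c) / (n * m))
    return desc

# Synthetic "measured" and "modelled" strain fields (invented for illustration):
measured = [[(i - 3.5) ** 2 + (j - 3.5) ** 2 for j in range(8)] for i in range(8)]
modelled = [[v * 1.02 for v in row] for row in measured]  # model 2% off everywhere

d_meas = fourier_descriptors(measured)   # 9 descriptors instead of 64 pixels
d_mod = fourier_descriptors(modelled)
max_rel_diff = max(abs(a - b) / max(d_meas) for a, b in zip(d_meas, d_mod))
print(max_rel_diff < 0.05)  # → True: experiment and model agree within a few percent
```

    The comparison then happens in descriptor space over the whole field, rather than at a single hot-spot value.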

  17. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    Science.gov (United States)

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting and introducing the Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near-infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were established as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. As the variability of the sampling method and the reference method are by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and suppression of a rather difficult sampling step and tedious off-line analyses. © 2013 Published by Elsevier B.V.

  18. A Unified Channel Charges Expression for Analytic MOSFET Modeling

    Directory of Open Access Journals (Sweden)

    Hugues Murray

    2012-01-01

    Based on the resolution of a 1D Poisson's equation, we present an analytic model of inversion charges allowing calculation of the drain current and transconductance in the Metal Oxide Semiconductor Field Effect Transistor. The drain current and transconductance are described by analytical functions including mobility corrections and short-channel effects (CLM, DIBL). The comparison with the Pao-Sah integral shows excellent accuracy of the model in all inversion modes, from strong to weak inversion, in submicron MOSFETs. All calculations are encoded in a simple C program and give instantaneous results, providing an efficient tool for microelectronics users.

  19. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  20. Analytical validation and reference intervals for freezing point depression osmometer measurements of urine osmolality in dogs.

    Science.gov (United States)

    Guerrero, Samantha; Pastor, Josep; Tvarijonaviciute, Asta; Cerón, José Joaquín; Balestra, Graziano; Caldin, Marco

    2017-11-01

    Urine osmolality (UOsm) is considered the most accurate measure of urine concentration and is used to assess body fluid homeostasis and renal function. We performed analytical validation of freezing point depression measurement of canine UOsm, established reference intervals (RIs), and determined the effect of age, sex, and reproductive status on UOsm in dogs. Clinically healthy dogs (n = 1,991) were retrospectively selected and stratified into groups by age (young [0-12 mo], adults [13-84 mo], and seniors [>84 mo]), sex (female and male), and reproductive status (intact and neutered). RIs were calculated for each age group, and intra- and inter-assay coefficients of variation were determined; the RI was 366-2,178 mOsm/kg in seniors. Senior dogs had a significantly lower UOsm than young and adult dogs.

  1. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    Science.gov (United States)

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of the proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are unnecessary. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L⁻¹, limit of detection 20 mg L⁻¹, limit of quantification 61 mg L⁻¹. The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
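    The detection and quantification limits reported above can be related to the common ICH Q2(R1) calibration-based estimates, where LOD ≈ 3.3·s/slope and LOQ ≈ 10·s/slope, with s the residual standard deviation of an ordinary least-squares calibration fit. The calibration data below are invented for illustration, not the paper's measurements.

```python
def linfit(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b, residual SD)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    resid_sd = (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)) ** 0.5
    return a, b, resid_sd

# Invented absorbance readings for proline standards [mg/L] (illustration only):
conc = [200, 500, 800, 1100, 1400, 1800]
absb = [0.102, 0.248, 0.401, 0.553, 0.699, 0.901]

intercept, slope, s = linfit(conc, absb)
lod = 3.3 * s / slope   # ICH Q2(R1)-style estimate from the calibration curve
loq = 10 * s / slope
print(round(loq / lod, 1))  # → 3.0, the fixed 10:3.3 ratio between LOQ and LOD
```

    With realistic residual scatter, this procedure yields limits of the same order as the paper's matrix-validated 20 and 61 mg L⁻¹ values.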

  2. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  3. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    Science.gov (United States)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses, linear, quasi-linear, and nonlinear, are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  4. A three-dimensional (3D) analytical model for subthreshold characteristics of uniformly doped FinFET

    Science.gov (United States)

    Tripathi, Shweta; Narendar, Vadthiya

    2015-07-01

    In this paper, a three-dimensional (3D) analytical model for the subthreshold characteristics of a doped FinFET is presented. The separation of variables technique is used to solve the 3D Poisson's equation analytically with appropriate boundary conditions, yielding an expression for the channel potential. The potential distribution function thus obtained is employed to derive the subthreshold current and subthreshold slope models. The channel potential characteristics are studied as a function of various device parameters such as gate length, gate oxide thickness and channel doping. The proposed analytical model is validated by comparison with simulation data obtained from the 3D device simulator ATLAS™ from Silvaco.
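    The separation-of-variables technique itself can be illustrated on a textbook 2D analogue: Laplace's equation on a rectangle with one biased edge, solved as a Fourier-sinh series. This is a generic sketch of the mathematical method, not the FinFET potential model.

```python
import math

def potential(x, y, a=1.0, b=1.0, v0=1.0, terms=60):
    """Separation-of-variables series for Laplace's equation on the rectangle
    [0,a] x [0,b], with phi = v0 on the edge y = b and phi = 0 on the other
    three edges: a sum of sin(n*pi*x/a) * sinh(n*pi*y/a) modes."""
    total = 0.0
    for n in range(1, 2 * terms, 2):  # only odd harmonics contribute
        k = n * math.pi / a
        total += (4.0 * v0 / (n * math.pi)) * math.sin(k * x) \
                 * math.sinh(k * y) / math.sinh(k * b)
    return total

# The truncated series reproduces the boundary condition on the edge y = b...
print(abs(potential(0.5, 1.0) - 1.0) < 0.02)   # → True
# ...and interior values interpolate smoothly between the boundary values:
print(0.0 < potential(0.5, 0.5) < 1.0)         # → True
```

    In the 3D device problem the same separation into products of one-dimensional eigenfunctions is applied along the fin height, width and length, with the gate and source/drain potentials supplying the boundary conditions.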

  5. Validation of Pressure Drop Models for PHWR-type Fuel Elements

    International Nuclear Information System (INIS)

    Brasnarof Daniel; Daverio, H.

    2003-01-01

    In the present work a one-dimensional pressure drop analytical model and the COBRA code are validated with experimental data from CANDU and Atucha fuel bundles in low- and high-pressure experimental test loops. Both models show very good agreement with the experimental data, with less than 5% discrepancy. The analytical model results were also compared with the COBRA code results, showing only small differences between them over a wide range of pressure, temperature and mass flow
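
    The kind of comparison reported here can be sketched in a few lines. The snippet below uses illustrative numbers only (not the CANDU/Atucha test data or the COBRA subchannel formulation): a one-dimensional Darcy-Weisbach friction pressure drop with the Blasius correlation, checked against a hypothetical measured value:

    ```python
    def friction_dp(m_dot, rho, d_h, length, area, mu):
        """One-dimensional friction pressure drop (Darcy-Weisbach) with the
        Blasius friction factor for a smooth channel, in Pa."""
        g = m_dot / area               # mass flux [kg/(m^2 s)]
        re = g * d_h / mu              # Reynolds number
        f = 0.316 * re ** -0.25       # Blasius correlation (turbulent flow)
        return f * (length / d_h) * g ** 2 / (2.0 * rho)

    def discrepancy_pct(model, measured):
        """Percent discrepancy between model prediction and measurement."""
        return abs(model - measured) / measured * 100.0

    # Hypothetical operating point and "measured" value, for illustration:
    dp = friction_dp(m_dot=1.2, rho=750.0, d_h=0.0075,
                     length=0.5, area=3.0e-3, mu=9.0e-5)
    print(discrepancy_pct(dp, 172.0) < 5.0)  # → True
    ```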

  6. Elliptic-cylindrical analytical flux-rope model for ICMEs

    Science.gov (United States)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in Interplanetary Coronal Mass Ejections. The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux-rope model, following the concept developed by Hidalgo et al. (2002). The elliptic-cylindrical geometry establishes the first grade of complexity in a series of models. The model attempts to describe the magnetic flux-rope topology with a distorted cross-section, as a possible consequence of interaction with the solar wind. In this model, the flux rope is completely described in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus consistently with the chosen geometry, with invariance along the axial component and the single assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for individual cases of different pairs of indices for the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of in situ ICME events with different signatures. The successful results are limited to cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects, such as axial curvature, expansion and/or interaction, could be incorporated in the future to fully understand the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: an analytical toroidal flux-rope model.

  7. Analytical Model for Hook Anchor Pull-Out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, Jens Peder; Adamsen, Peter

    1995-01-01

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  8. Analytical Model for Hook Anchor Pull-out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, J. P.; Adamsen, P.

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  9. Evaluation of one dimensional analytical models for vegetation canopies

    Science.gov (United States)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  10. An analytic model for gate-all-around silicon nanowire tunneling field effect transistors

    International Nuclear Information System (INIS)

    Liu Ying; He Jin; Chan Mansun; Ye Yun; Zhao Wei; Wu Wen; Deng Wan-Ling; Wang Wen-Ping; Du Cai-Xia

    2014-01-01

    An analytical model of gate-all-around (GAA) silicon nanowire tunneling field-effect transistors (NW-TFETs) is developed based on the surface potential solutions along the channel direction, taking the band-to-band tunneling (BTBT) efficiency into account. The three-dimensional Poisson equation is solved to obtain the surface potential distributions in the partitioned regions along the channel direction of the NW-TFET, and a tunneling current model using Kane's expression is developed. The validity of the developed model is shown by the good agreement between the model predictions and the TCAD simulation results. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  11. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly embedded, the behavior of these structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin-layered element code (SANSOL). In the course of improving the analytical codes, in-situ large-scale forced-vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes

  12. Surveillance of emerging drugs of abuse in Hong Kong: validation of an analytical tool.

    Science.gov (United States)

    Tang, Magdalene H Y; Ching, C K; Tse, M L; Ng, Carol; Lee, Caroline; Chong, Y K; Wong, Watson; Mak, Tony W L

    2015-04-01

    To validate a locally developed chromatography-based method to monitor emerging drugs of abuse whilst performing regular drug testing in abusers. Cross-sectional study. Eleven regional hospitals, seven social service units, and a tertiary level clinical toxicology laboratory in Hong Kong. A total of 972 drug abusers and high-risk individuals were recruited from acute, rehabilitation, and high-risk settings between 1 November 2011 and 31 July 2013. A subset of the participants was of South Asian ethnicity. In total, 2000 urine or hair specimens were collected. Proof of concept that surveillance of emerging drugs of abuse can be performed whilst conducting routine drug of abuse testing in patients. The method was successfully applied to 2000 samples with three emerging drugs of abuse detected in five samples: PMMA (paramethoxymethamphetamine), TFMPP [1-(3-trifluoromethylphenyl)piperazine], and methcathinone. The method also detected conventional drugs of abuse, with codeine, methadone, heroin, methamphetamine, and ketamine being the most frequently detected drugs. Other findings included the observation that South Asians had significantly higher rates of using opiates such as heroin, methadone, and codeine; and that ketamine and cocaine had significantly higher detection rates in acute subjects compared with the rehabilitation population. This locally developed analytical method is a valid tool for simultaneous surveillance of emerging drugs of abuse and routine drug monitoring of patients at minimal additional cost and effort. Continued, proactive surveillance and early identification of emerging drugs will facilitate prompt clinical, social, and legislative management.

  13. Development and validation of HPLC analytical method for quantitative determination of metronidazole in human plasma

    International Nuclear Information System (INIS)

    Safdar, K.A.; Shyum, S.B.; Usman, S.

    2016-01-01

    The objective of the present study was to develop a simple, rapid and sensitive reversed-phase high performance liquid chromatographic (RP-HPLC) analytical method with UV detection for the quantitative determination of metronidazole in human plasma. The chromatographic separation was performed using a C18 RP column (250 mm × 4.6 mm, 5 µm) as stationary phase and 0.01 M potassium dihydrogen phosphate buffered at pH 3.0 and acetonitrile (83:17, v/v) as mobile phase at a flow rate of 1.0 ml/min. The UV detection was carried out at 320 nm. The method was validated as per the US FDA guideline for bioanalytical method validation and was found to be selective, without interferences from mobile phase components, impurities or the biological matrix. The method was found to be linear over the concentration range of 0.2812 µg/ml to 18.0 µg/ml (r² = 0.9987) with an adequate level of accuracy and precision. The samples were found to be stable under various recommended laboratory and storage conditions. Therefore, the method can be used with an adequate level of confidence and assurance for bioavailability, bioequivalence and other pharmacokinetic studies of metronidazole in humans. (author)
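
    The linearity figure quoted above comes from an ordinary least-squares fit of the calibration curve. A minimal sketch over the same concentration range, with hypothetical peak-area responses (not the study's data):

    ```python
    def linest(x, y):
        """Ordinary least-squares slope, intercept and r^2 for a calibration line."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return slope, intercept, 1.0 - ss_res / ss_tot

    # Hypothetical calibration standards over the validated range (µg/ml):
    conc = [0.2812, 0.5625, 1.125, 2.25, 4.5, 9.0, 18.0]
    area = [1510, 3050, 6010, 12150, 23900, 48200, 95800]
    slope, intercept, r2 = linest(conc, area)
    print(r2 > 0.999)  # → True
    ```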

  14. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  15. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  16. Analytical models for the rewetting of hot surfaces

    International Nuclear Information System (INIS)

    Olek, S.

    1988-10-01

    Some aspects concerning analytical models for the rewetting of hot surfaces are discussed. These include problems with applying various forms of boundary conditions, the compatibility of boundary conditions with the physics of the rewetting problem, recent analytical models, the use of the separation-of-variables method versus the Wiener-Hopf technique, and the use of transformations. The report includes an updated list of rewetting models as well as benchmark solutions in tabular form for several models. It should be emphasized that this report is not meant to cover the whole topic of rewetting models; it merely discusses some points that are less commonly referred to in the literature. 93 refs., 3 figs., 22 tabs

  17. Analytic investigation of extended Heitler-Matthews model

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Stefan; Veberic, Darko; Engel, Ralph [KIT, IKP (Germany)

    2016-07-01

    Many features of extensive air showers are qualitatively well described by the Heitler cascade model and its extensions. The core of a shower is given by hadrons that interact with air nuclei. After each interaction some of these hadrons decay and feed the electromagnetic shower component. The most important parameters of such hadronic interactions are the inelasticity, the multiplicity, and the ratio of charged to neutral particles. However, in analytic considerations approximations are needed to include the characteristics of hadron production. We discuss analytic extensions of the simple cascade model that also include the elasticity, and derive the number of produced muons. In a second step we apply this model to calculate the dependence of the shower center of gravity on the model parameters. The depth of the center of gravity is closely related to that of the shower maximum, which is a commonly used composition-sensitive observable.
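
    For orientation, the standard Heitler-Matthews muon-number estimate, the baseline that the extended model refines, is a one-liner; the parameter values below are typical illustrative choices, not those of the talk:

    ```python
    import math

    def muon_number(e0, n_ch=10, xi_c=20.0):
        """Heitler-Matthews muon number: each hadronic generation produces n_ch
        charged and n_ch/2 neutral pions; charged pions re-interact until their
        energy falls below the critical energy xi_c (e0, xi_c in GeV)."""
        beta = math.log(n_ch) / math.log(1.5 * n_ch)  # ~0.85 for n_ch = 10
        return (e0 / xi_c) ** beta

    # A 10^8 GeV proton shower yields of order half a million muons:
    print(f"{muon_number(1e8):.2e}")
    ```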

  18. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  19. An analytical model for the assessment of airline expansion strategies

    Directory of Open Access Journals (Sweden)

    Mauricio Emboaba Moreira

    2014-01-01

    Full Text Available Purpose: The purpose of this article is to develop an analytical model to assess airline expansion strategies by combining generic business strategy models with airline business models. Methodology and approach: A number of airline business models are examined, as are Porter’s (1983) five industry forces that drive competition, complemented by Nalebuff and Brandenburger’s (1996) sixth force, and the basic elements of the general environment in which the expansion process takes place. A system of points and weights is developed to create a score among the 904,736 possible combinations considered. The model’s outputs are generic expansion strategies with quantitative assessments for each specific combination of elements inputted. Originality and value: The analytical model developed is original because it combines, for the first time and explicitly, elements of the general environment, the industry environment, airline business models and the generic expansion strategy types. Besides, it creates a system of scores that may be used to drive the decision process toward the choice of a specific strategic expansion path. Research implications: The analytical model may be adapted to industries other than the airline industry by substituting the element “airline business model” with the corresponding elements of other industries’ business models.
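
    A points-and-weights score over factor combinations can be sketched generically. The factors, levels and weights below are hypothetical placeholders, not the article's actual 904,736-combination scheme:

    ```python
    from itertools import product

    # Hypothetical factors with points per level, and factor weights:
    levels = {
        "market_growth":  {"low": 1, "medium": 2, "high": 3},
        "rivalry":        {"intense": 1, "moderate": 2, "weak": 3},
        "business_model": {"regional": 1, "full_service": 2, "low_cost": 3},
    }
    weights = {"market_growth": 0.5, "rivalry": 0.3, "business_model": 0.2}

    def score(combo):
        """Weighted sum of the points assigned to one combination of levels."""
        return sum(weights[f] * levels[f][v] for f, v in combo.items())

    # Enumerate every combination and pick the highest-scoring one:
    combos = [dict(zip(levels, vals))
              for vals in product(*(levels[f] for f in levels))]
    best = max(combos, key=score)
    print(len(combos), best["market_growth"])  # → 27 high
    ```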

  20. Bubbles in inkjet printheads: analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  1. Bubbles in inkjet printheads : analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  2. Models for the analytic estimation of low energy photon albedo

    International Nuclear Information System (INIS)

    Simovic, R.; Markovic, S.; Ljubenov, V.

    2005-01-01

    This paper presents some monoenergetic models for the estimation of photon reflection in the energy range from 20 keV to 80 keV. Using the DP0 approximation of the H-function, we have derived analytic expressions for the η and R functions in order to facilitate photon reflection analyses as well as radiation shield design. (author)

  3. An analytical excitation model for an ionizing plasma

    NARCIS (Netherlands)

    Mullen, van der J.J.A.M.; Sijde, van der B.; Schram, D.C.

    1983-01-01

    From an analytical model for the population of high-lying excited levels in ionizing plasmas it appears that the distribution is a superposition of the equilibrium (Saha) value and an overpopulation. This overpopulation takes the form of a Maxwell distribution for free electrons. Experiments for He

  4. Analytical Network Process (ANP) Model in Tourism Development in Jember

    Directory of Open Access Journals (Sweden)

    Sukidin Sukidin

    2015-04-01

    Full Text Available Analytical Network Process (ANP) Model in Tourism Development in Jember. The purpose of this study is to review the policy of tourism development in Jember, especially development policies for coffee-plantation agro-tourism using the Jember Fashion Carnival (JFC) as event marketing. The research method used is soft system methodology with the Analytical Network Process. The results show that tourism development in Jember still follows a conventional approach, lacks coordination, and relies on a single tourism event, the JFC, as the locomotive of tourism attraction in Jember. This conventional development model needs to be redesigned to achieve sustainable tourism development in Jember. Keywords: paradigm shift, tourism industry, tourism events, agro-tourism

  5. Foam for Enhanced Oil Recovery : Modeling and Analytical Solutions

    NARCIS (Netherlands)

    Ashoori, E.

    2012-01-01

    Foam increases sweep in miscible- and immiscible-gas enhanced oil recovery by decreasing the mobility of gas enormously. This thesis is concerned with the simulations and analytical solutions for foam flow for the purpose of modeling foam EOR in a reservoir. For the ultimate goal of upscaling our

  6. Learning, Learning Analytics, Activity Visualisation and Open learner Model

    DEFF Research Database (Denmark)

    Bull, Susan; Kickmeier-Rust, Michael; Vatrapu, Ravi

    2013-01-01

    This paper draws on visualisation approaches in learning analytics, considering how classroom visualisations can come together in practice. We suggest an open learner model in situations where many tools and activity visualisations produce more visual information than can be readily interpreted....

  7. Evaluating Modeling Sessions Using the Analytic Hierarchy Process

    NARCIS (Netherlands)

    Ssebuggwawo, D.; Hoppenbrouwers, S.J.B.A.; Proper, H.A.; Persson, A.; Stirna, J.

    2008-01-01

    In this paper, which is methodological in nature, we propose to use an established method from the field of Operations Research, the Analytic Hierarchy Process (AHP), in the integrated, stakeholder-oriented evaluation of enterprise modeling sessions: their language, process, tool (medium), and

  8. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  9. Analytical model spectrum for electrostatic turbulence in tokamaks

    International Nuclear Information System (INIS)

    Fiedler-Ferrari, N.; Misguich, J.H.

    1990-04-01

    In this work we present an analytical model spectrum for three-dimensional electrostatic turbulence (homogeneous, stationary and locally isotropic in the plane perpendicular to the magnetic field), constructed using experimental results from the TFR and TEXT tokamaks and satisfying basic symmetry and parity conditions. The proposed spectrum seems to be tractable for explicit analytical calculations of transport processes, and consistent with experimental data. Additional experimental measurements in the bulk plasma remain necessary, however, in order to determine some unknown spectral properties of parallel propagation

  10. The Greenhouse effect within an analytic model of the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Dehnen, Heinz [Konstanz Univ. (Germany). Fachbereich Physik

    2009-01-15

    Within a simplified atmospheric model the greenhouse effect is treated by analytical methods starting from physical first principles. The influence of solar radiation, the absorption cross sections of the greenhouse molecules, and cloud formation on the earth's temperature is shown and discussed explicitly by mathematical formulae, in contrast to the climate simulations. The application of our analytical results to the production of 20·10⁹ t of CO₂ per year yields an enlargement of the earth's surface temperature of 2.3·10⁻² °C per year, in agreement with other estimates. (orig.)
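
    The quoted numbers imply a simple linearized sensitivity, which is easy to check:

    ```python
    # Linearized sensitivity implied by the quoted figures (illustrative only):
    emission_rate = 20e9      # tonnes of CO2 emitted per year
    warming_rate = 2.3e-2     # deg C of surface warming per year at that rate
    per_tonne = warming_rate / emission_rate  # deg C per tonne of CO2

    # At a constant emission rate, the implied linear trend over 50 years:
    print(round(per_tonne * emission_rate * 50, 2))  # → 1.15
    ```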

  11. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", along with a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
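
    A minimal holdout-validation sketch in the same spirit, with synthetic data and a hand-rolled logistic fit (assuming nothing about the paper's actual performance measures):

    ```python
    import math
    import random

    random.seed(0)
    # Synthetic one-feature data set: class 1 becomes likelier as x grows.
    xs = [random.uniform(0, 10) for _ in range(400)]
    data = [(x, 1 if random.random() < 1 / (1 + math.exp(-(x - 5))) else 0)
            for x in xs]
    train, valid = data[:300], data[300:]  # simple holdout split

    # Fit a one-feature logistic regression by plain batch gradient descent:
    w = b = 0.0
    lr = 0.1
    for _ in range(3000):
        gw = gb = 0.0
        for x, y in train:
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(train)
        b -= lr * gb / len(train)

    # One simple performance measure on the held-out set: accuracy.
    acc = sum(((1 / (1 + math.exp(-(w * x + b)))) > 0.5) == (y == 1)
              for x, y in valid) / len(valid)
    print(f"holdout accuracy: {acc:.2f}")
    ```

    In practice one would report several of the quantitative measures the paper discusses (calibration, discrimination), not accuracy alone.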

  12. Cellular Scanning Strategy for Selective Laser Melting: Capturing Thermal Trends with a Low-Fidelity, Pseudo-Analytical Model

    Directory of Open Access Journals (Sweden)

    Sankhya Mohanty

    2014-01-01

    Full Text Available Simulations of additive manufacturing processes are known to be computationally expensive. The resulting long runtimes prohibit their application in secondary analyses requiring several complete simulations, such as optimization studies and sensitivity analysis. In this paper, a low-fidelity pseudo-analytical model is introduced to enable such secondary analyses. The model is able to mimic a finite element model and capture the thermal trends associated with the process. The model has been validated and subsequently applied in a small optimization case study. The pseudo-analytical modelling technique is established as a fast tool for primary modelling investigations.

  13. An analytically solvable model for rapid evolution of modular structure.

    Directory of Open Access Journals (Sweden)

    Nadav Kashtan

    2009-04-01

    Full Text Available Biological systems often display modularity, in the sense that they can be decomposed into nearly independent subsystems. Recent studies have suggested that modular structure can spontaneously emerge if goals (environments) change over time, such that each new goal shares the same set of sub-problems with previous goals. Such modularly varying goals can also dramatically speed up evolution, relative to evolution under a constant goal. These studies were based on simulations of model systems, such as logic circuits and RNA structure, which are generally not easy to treat analytically. We present here a simple model for evolution under modularly varying goals that can be solved analytically. This model helps in understanding some of the fundamental mechanisms that lead to the rapid emergence of modular structure under modularly varying goals. In particular, the model suggests a mechanism for the dramatic speedup in evolution observed under such temporally varying goals.

  14. Experimental evaluation of analytical penumbra calculation model for wobbled beams

    International Nuclear Information System (INIS)

    Kohno, Ryosuke; Kanematsu, Nobuyuki; Yusa, Ken; Kanai, Tatsuaki

    2004-01-01

    The goal of radiotherapy is not only to apply a high radiation dose to a tumor, but also to avoid side effects in the surrounding healthy tissue. Therefore, it is important for carbon-ion treatment planning to calculate accurately the effects of the lateral penumbra. In this article, for wobbled beams under various irradiation conditions, we focus on the lateral penumbras at several aperture positions of one side leaf of the multileaf collimator. The penumbras predicted by an analytical penumbra calculation model were compared with the measured results. The results calculated by the model for various conditions agreed well with the experimental ones. In conclusion, we found that the analytical penumbra calculation model could predict accurately the measured results for wobbled beams and it was useful for carbon-ion treatment planning to apply the model
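
    Lateral penumbra is classically modeled as a step field edge blurred by a Gaussian pencil-beam kernel, giving an error-function dose profile. The sketch below is that generic textbook form (not the authors' wobbled-beam model); for an erf edge the 80%-20% penumbra width is about 1.68 sigma:

    ```python
    import math

    def lateral_dose(x, edge=0.0, sigma=3.0):
        """Relative dose across a collimated field edge at position `edge`,
        blurred by a Gaussian kernel of width `sigma` (x, sigma in mm)."""
        return 0.5 * (1.0 - math.erf((x - edge) / (math.sqrt(2.0) * sigma)))

    def penumbra_width(sigma, lo=0.2, hi=0.8):
        """Distance between the 80% and 20% dose points, found by bisection."""
        def x_at(level):
            a, b = -50.0, 50.0
            for _ in range(60):
                m = 0.5 * (a + b)
                if lateral_dose(m, sigma=sigma) > level:
                    a = m  # dose still above the level: move right
                else:
                    b = m
            return 0.5 * (a + b)
        return x_at(lo) - x_at(hi)

    print(round(penumbra_width(3.0), 2))  # ≈ 1.68 * sigma
    ```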

  15. Quantum decay model with exact explicit analytical solution

    Science.gov (United States)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  16. Collaborative data analytics for smart buildings: opportunities and models

    DEFF Research Database (Denmark)

    Lazarova-Molnar, Sanja; Mohamed, Nader

    2018-01-01

    of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis...... of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.......Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate...

  17. Analytical model for nonlinear piezoelectric energy harvesting devices

    International Nuclear Information System (INIS)

    Neiss, S; Goldschmidtboeing, F; M Kroener; Woias, P

    2014-01-01

    In this work we propose analytical expressions for the jump-up and jump-down points of a nonlinear piezoelectric energy harvester. In addition, analytical expressions for the maximum power output at optimal resistive load and the 3 dB bandwidth are derived. So far, only numerical models have been used to describe the physics of piezoelectric energy harvesters. However, that approach is not suitable for quickly evaluating different geometrical designs or piezoelectric materials in the harvester design process. In addition, the analytical expressions could be used to predict the jump frequencies of a harvester during operation. In combination with a tuning mechanism, this would allow the design of an efficient control algorithm to ensure that the harvester is always working on the oscillator's high-energy attractor. (paper)

  18. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    Science.gov (United States)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure in order to design a micro-opto-mechanical pressure sensor (MOMPS) taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effect of the nonlinearity and the losses on the total performance are carefully studied and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed in order to realize fast design of this new type of pressure sensor.
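
    At the core of such a model sits the MZI transfer function. A lossless, balanced toy version is shown below (the paper's model additionally accounts for optical losses and the nonlinear opto-mechanical phase of the membrane):

    ```python
    import math

    def mzi_output(delta_phi, i0=1.0):
        """Ideal Mach-Zehnder interferometer output intensity as a function of
        the phase difference between the two arms (lossless, 50/50 splitters)."""
        return 0.5 * i0 * (1.0 + math.cos(delta_phi))

    # Full transmission at zero phase difference, extinction at pi:
    print(mzi_output(0.0), mzi_output(math.pi))  # → 1.0 0.0
    ```

    A pressure sensor built on this maps membrane deflection to delta_phi; the interesting design work, which the paper models, is in how losses and mechanical nonlinearity distort this ideal fringe.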

  19. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  20. Analytic solution of the Starobinsky model for inflation

    Energy Technology Data Exchange (ETDEWEB)

    Paliathanasis, Andronikos [Universidad Austral de Chile, Instituto de Ciencias Fisicas y Matematicas, Valdivia (Chile); Durban University of Technology, Institute of Systems Science, Durban (South Africa)

    2017-07-15

    We prove that the field equations of the Starobinsky model for inflation in a Friedmann-Lemaitre-Robertson-Walker metric constitute an integrable system. The analytical solution in terms of a Painleve series for the Starobinsky model is presented for the case of zero and nonzero spatial curvature. In both cases the leading-order term describes the radiation era provided by the corresponding higher-order theory. (orig.)

  1. Analytic models for the evolution of semilocal string networks

    International Nuclear Information System (INIS)

    Nunes, A. S.; Martins, C. J. A. P.; Avgoustidis, A.; Urrestilla, J.

    2011-01-01

    We revisit previously developed analytic models for defect evolution and adapt them appropriately for the study of semilocal string networks. We thus confirm the expectation (based on numerical simulations) that linear scaling evolution is the attractor solution for a broad range of model parameters. We discuss in detail the evolution of individual semilocal segments, focusing on the phenomenology of segment growth, and also provide a preliminary comparison with existing numerical simulations.

  2. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential is analysed in the temperature-dependent Yukawa model. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution is found in arbitrary spacetime dimension; in D = 3 this contribution turns out to be finite. (author). 19 refs

  3. The role of decision analytic modeling in the health economic assessment of spinal intervention.

    Science.gov (United States)

    Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J

    2014-10-15

    Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on data available to date. Given the rising demands for proven value in spine care, the use of decision analytic modeling is rapidly increasing by clinicians and policy makers. This narrative review discusses the general components of decision analytic models, how decision analytic models are populated and the trade-offs entailed, makes recommendations for how users of spine intervention decision models might go about appraising the models, and presents an overview of published spine economic models. A proper, integrated, clinical, and economic critical appraisal is necessary in the evaluation of the strength of evidence provided by a modeling evaluation. As is the case with clinical research, all options for collecting health economic or value data are not without their limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized with respect to study design, models used, reporting, and general quality. There is sparse evidence for populating spine intervention models. Results mostly showed that interventions were cost-effective based on $100,000/quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical to interpret the strengths and weaknesses of health economic models. 
Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or

  4. Unified analytical threshold voltage model for non-uniformly doped dual metal gate fully depleted silicon-on-insulator MOSFETs

    Science.gov (United States)

    Rao, Rathnamala; Katti, Guruprasad; Havaldar, Dnyanesh S.; DasGupta, Nandita; DasGupta, Amitava

    2009-03-01

    The paper describes a unified analytical threshold voltage model for non-uniformly doped, dual metal gate (DMG) fully depleted silicon-on-insulator (FDSOI) MOSFETs based on the solution of the 2D Poisson's equation. The 2D Poisson's equation is solved analytically for appropriate boundary conditions using the separation-of-variables technique. The solution is then extended to obtain the threshold voltage of the FDSOI MOSFET. The model is able to handle any kind of non-uniform doping, viz. vertical, lateral, as well as laterally asymmetric channel (LAC) profiles in the SOI film, in addition to the DMG structure. The analytical results are validated against numerical simulations using the device simulator MEDICI.

  5. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Science.gov (United States)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model that accounts for the flexibility of the bearing outer race structure, and presents its application in a bearing load condition monitoring approach. The model was developed because current computationally cheap bearing models, owing to their rigidity assumptions, fail to describe accurately the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed model, raceway flexibility is described by static deformation shapes, whose excitation is calculated from the modelled rolling element loads and a Fourier-series-based compliance approximation. The resulting model is computationally cheap and provides an accurate description of the rolling element loads for flexible outer raceway structures, as validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  6. Validation of an analytical method for the determination of the sodium content in foods

    International Nuclear Information System (INIS)

    Valverde Montero, Ericka; Silva Trejos, Paulina

    2012-01-01

    The analytical methodology for the quantitative determination of sodium in foods by flame atomic absorption spectrometry was validated. Samples of 0.5 g were digested in a microwave oven with 5.0 mL of nitric acid (HNO3) at 65% by mass. The linear range was from 0.043 mg/L to 0.70 mg/L, with a correlation coefficient of 0.998. The detection and quantification limits were 0.025 mg/L and 0.043 mg/L, respectively, with a calibration sensitivity of 0.805 L·mg⁻¹ and an analytical sensitivity of 44 L·mg⁻¹. Precision, evaluated in terms of repeatability, gave a value of 2.9% RSD. Trueness was determined using three NIST® certified standards: SRM 1846 Infant Formula, with a reported sodium value of (2310 ± 130) mg/kg; SRM 8414 Bovine Muscle Powder, with a reported sodium value of (0.210 ± 0.008)% by mass; and SRM 8415 Whole Egg Powder, with a reported sodium value of (0.377 ± 0.034)% by mass. The bias averaged between -0.010 and 0.009 mg/L. Among the foods selected for the study, whole milk powder, white wheat bread, fresh cheese and mozzarella cheese showed the highest sodium contents, ranging from (106 to 452) mg Na/100 g. (author) [es

  7. Analytic model for ultrasound energy receivers and their optimal electric loads

    Science.gov (United States)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for thickness-resonating plate ultrasound energy receivers, derived from the piezoelectric and wave equations and including dielectric, viscosity and acoustic attenuation losses. Afterwards, we explore the optimal electric load predictions of the zero-reflection and power-maximization approaches present in the literature under different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the KLM model solved numerically, with very good agreement. Finally, we discuss the differences between the zero-reflection and power-maximization optimal electric loads, which start to differ as losses in the receiver increase.

  8. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  9. Analytical heat transfer modeling of a new radiation calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Obame Ndong, Elysée [Department of Industrial Engineering and Maintenance, University of Sciences and Technology of Masuku (USTM), BP 941 Franceville (Gabon); Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Gallot-Lavallée, Olivier [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Aitken, Frédéric, E-mail: frederic.aitken@g2elab.grenoble-inp.fr [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France)

    2016-06-10

    Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensibility of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with an analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is the evaluation of the feasibility and performance of the calorimeter by assessing the measurement of power losses of some electrical devices by radiation, the influence of the geometry and materials. Finally a theoretical sensibility of the new apparatus is estimated at ±1 mW. From these results the calorimeter has been successfully implemented and patented.

  10. Analytical heat transfer modeling of a new radiation calorimeter

    International Nuclear Information System (INIS)

    Obame Ndong, Elysée; Gallot-Lavallée, Olivier; Aitken, Frédéric

    2016-01-01

    Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensibility of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with an analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is the evaluation of the feasibility and performance of the calorimeter by assessing the measurement of power losses of some electrical devices by radiation, the influence of the geometry and materials. Finally a theoretical sensibility of the new apparatus is estimated at ±1 mW. From these results the calorimeter has been successfully implemented and patented.

  11. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G

    2018-01-01

    for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision...... are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation...
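    The NORMINV-based calculation described above can be reproduced with the standard normal CDF; the following Python sketch is an illustration only, in which the 95% reference interval (z = 1.96) and the root-sum-square combination of biological and analytical variation are assumptions, not details taken from the abstract:

    ```python
    from math import erf, sqrt

    def norm_cdf(x: float) -> float:
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def fraction_outside(bias: float, imprecision: float, z: float = 1.96) -> float:
        """Fraction of reference individuals falling outside a common
        reference interval for an assay with the given analytical bias and
        imprecision (both normalized to the biological standard deviation).

        Assumes Gaussian distributions and that analytical variance adds to
        biological variance: sigma_total = sqrt(1 + imprecision**2).
        """
        s = sqrt(1.0 + imprecision ** 2)
        lower_tail = norm_cdf((-z - bias) / s)
        upper_tail = 1.0 - norm_cdf((z - bias) / s)
        return lower_tail + upper_tail

    # With no analytical error, ~5% of reference individuals lie outside
    # a 95% reference interval by construction.
    print(round(fraction_outside(0.0, 0.0), 4))
    ```

    Increasing either bias or imprecision pushes the fraction above its error-free baseline, which is the quantity the analytical performance specifications constrain.
    
    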

  12. Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.

    Science.gov (United States)

    Endert, A; Fiaux, P; North, C

    2012-12-01

    Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.

  13. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method together with cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  14. Analytical Validation of Accelerator Mass Spectrometry for Pharmaceutical Development: the Measurement of Carbon-14 Isotope Ratio

    International Nuclear Information System (INIS)

    Keck, B.D.; Ognibene, T.; Vogel, J.S.

    2010-01-01

    Accelerator mass spectrometry (AMS) is an isotope based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of 14 C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS is constantly being explored and expanded, this work underpins many areas of pharmaceutical development including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of 14 C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the 14 C label), stable across samples storage conditions for at least one year, linear over 4 orders of magnitude with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent between 1 and 3 percent while precision expressed as coefficient of variation is between 1 and 6% determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents) and for a typical small molecule labeled at 10% incorporated with 14 C corresponds to 30 fg equivalents. AMS

  15. Analytical Validation of Accelerator Mass Spectrometry for Pharmaceutical Development: the Measurement of Carbon-14 Isotope Ratio.

    Energy Technology Data Exchange (ETDEWEB)

    Keck, B D; Ognibene, T; Vogel, J S

    2010-02-05

    Accelerator mass spectrometry (AMS) is an isotope based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of {sup 14}C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS is constantly being explored and expanded, this work underpins many areas of pharmaceutical development including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of {sup 14}C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the {sup 14}C label), stable across samples storage conditions for at least one year, linear over 4 orders of magnitude with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent between 1 and 3 percent while precision expressed as coefficient of variation is between 1 and 6% determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents) and for a typical small molecule labeled at 10% incorporated with {sup 14}C corresponds to 30 fg

  16. Using Learning Analytics to Understand Scientific Modeling in the Classroom

    Directory of Open Access Journals (Sweden)

    David Quigley

    2017-11-01

    Full Text Available Scientific models represent ideas, processes, and phenomena by describing important components, characteristics, and interactions. Models are constructed across various scientific disciplines, such as the food web in biology, the water cycle in Earth science, or the structure of the solar system in astronomy. Models are central for scientists to understand phenomena, construct explanations, and communicate theories. Constructing and using models to explain scientific phenomena is also an essential practice in contemporary science classrooms. Our research explores new techniques for understanding scientific modeling and engagement with modeling practices. We work with students in secondary biology classrooms as they use a web-based software tool—EcoSurvey—to characterize organisms and their interrelationships found in their local ecosystem. We use learning analytics and machine learning techniques to answer the following questions: (1) How can we automatically measure the extent to which students’ scientific models support complete explanations of phenomena? (2) How does the design of student modeling tools influence the complexity and completeness of students’ models? (3) How do clickstreams reflect and differentiate student engagement with modeling practices? We analyzed EcoSurvey usage data collected from two different deployments with over 1,000 secondary students across a large urban school district. We observe large variations in the completeness and complexity of student models, and large variations in their iterative refinement processes. These differences reveal that certain key model features are highly predictive of other aspects of the model. We also observe large differences in student modeling practices across different classrooms and teachers. We can predict a student’s teacher based on the observed modeling practices with a high degree of accuracy without significant tuning of the predictive model. These results highlight

  17. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  18. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  19. An Analytical Tire Model with Flexible Carcass for Combined Slips

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2014-01-01

    Full Text Available The tire mechanical characteristics under combined cornering and braking/driving situations have significant effects on vehicle directional controls. The objective of this paper is to present an analytical tire model with flexible carcass for combined slip situations, which can describe tire behavior well and can also be used for studying vehicle dynamics. The tire forces and moments come mainly from the shear stress and sliding friction at the tread-road interface. In order to describe complicated tire characteristics and tire-road friction, some key factors are considered in this model: arbitrary pressure distribution; translational, bending, and twisting compliance of the carcass; dynamic friction coefficient; anisotropic stiffness properties. The analytical tire model can describe tire forces and moments accurately under combined slip conditions. Some important properties induced by flexible carcass can also be reflected. The structural parameters of a tire can be identified from tire measurements and the computational results using the analytical model show good agreement with test data.
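    The abstract does not reproduce the model equations; as a point of reference, the classical rigid-carcass brush model that flexible-carcass formulations extend can be sketched as follows. The isotropic-stiffness and parabolic-pressure simplifications, and all parameter values, are assumptions for illustration, not the paper's formulation:

    ```python
    from math import sqrt

    def brush_tire_force(slip_x: float, slip_y: float, cp: float, a: float,
                         mu: float, fz: float) -> tuple:
        """Combined-slip shear force from the classic brush tire model with
        a rigid carcass, isotropic tread stiffness and parabolic pressure
        distribution. Returns (Fx, Fy) in newtons, directed along the slip.

        cp : tread stiffness per unit contact length [N/m^2]
        a  : half-length of the contact patch [m]
        mu : tire-road friction coefficient
        fz : vertical load [N]
        """
        sigma = sqrt(slip_x ** 2 + slip_y ** 2)   # total theoretical slip
        if sigma == 0.0:
            return (0.0, 0.0)
        theta = 2.0 * cp * a ** 2 / (3.0 * mu * fz)
        ts = theta * sigma
        if ts < 1.0:
            # Partial sliding: adhesion region plus sliding region.
            f = mu * fz * ts * (3.0 - 3.0 * ts + ts ** 2)
        else:
            # Full sliding: force saturates at the friction limit.
            f = mu * fz
        return (f * slip_x / sigma, f * slip_y / sigma)
    ```

    The force magnitude rises linearly with slip for small slip (stiffness 2·cp·a²) and saturates at μ·Fz once the whole contact patch slides, which is the qualitative behavior the flexible-carcass model refines.
    
    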

  20. Development and validation of analytical method for Naftopidil in human plasma by LC–MS/MS

    Directory of Open Access Journals (Sweden)

    Pritam S. Jain

    2015-09-01

    Full Text Available A highly sensitive and simple high-performance liquid chromatographic–tandem mass spectrometric (LC–MS-MS) assay is developed and validated for the quantification of Naftopidil in human plasma. Naftopidil is extracted from human plasma with methyl tertiary butyl ether and analyzed using reversed-phase gradient elution on a Discovery C18 5 μm (50 × 4.6 mm) column. Methanol : 2 mM ammonium formate (90:10) is used as the mobile phase, and detection is performed by MS using electrospray ionization in positive mode. Propranolol is used as the internal standard. The lower limit of quantification is 0.495 ng/mL. The calibration curve is linear over the concentration range of 0.495–200.577 ng/mL of plasma. This novel LC–MS-MS method shows satisfactory accuracy and precision and is sufficiently sensitive for the performance of pharmacokinetic studies in humans.
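    The linear-calibration and back-calculation steps that underlie a validation like this one can be sketched generically; the calibration data below are invented placeholders, not the paper's measurements:

    ```python
    def fit_line(x: list, y: list) -> tuple:
        """Ordinary least-squares fit of y = a*x + b.
        Returns (slope a, intercept b, correlation coefficient r)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        a = sxy / sxx
        b = my - a * mx
        r = sxy / (sxx * syy) ** 0.5
        return a, b, r

    def back_calculate(response: float, a: float, b: float) -> float:
        """Concentration back-calculated from an instrument response."""
        return (response - b) / a

    # Hypothetical calibration standards: concentration (ng/mL) vs.
    # analyte/internal-standard peak-area ratio.
    conc = [0.495, 2.0, 10.0, 50.0, 200.577]
    resp = [0.012, 0.048, 0.241, 1.205, 4.834]
    slope, intercept, r = fit_line(conc, resp)
    ```

    In practice the back-calculated concentrations of the standards are compared against their nominal values to assess accuracy across the calibration range.
    
    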

  1. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 [mg kg[sub -1

  2. ANALYTICAL AND SIMULATION PLANNING MODEL OF URBAN PASSENGER TRANSPORT

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article describes the structure of an analytical and simulation model for making informed decisions in the planning of urban passenger transport. A UML diagram is designed that describes the relationships between the classes of the proposed model. The main agents of the model, developed in the AnyLogic simulation environment, are described, and a user interface integrated with a GIS map is designed. Simulation results are also provided, which support conclusions about the model's operability and the possibility of its use in solving planning problems of urban passenger transport.

  3. Analytical and finite element modeling of grounding systems

    Energy Technology Data Exchange (ETDEWEB)

    Luz, Mauricio Valencia Ferreira da [University of Santa Catarina (UFSC), Florianopolis, SC (Brazil)], E-mail: mauricio@grucad.ufsc.br; Dular, Patrick [University of Liege (Belgium). Institut Montefiore], E-mail: Patrick.Dular@ulg.ac.be

    2007-07-01

    Grounding is the art of making an electrical connection to the earth. This paper deals with the analytical and finite element modeling of grounding systems. An electrokinetic formulation using a scalar potential can benefit from floating potentials to define global quantities such as electric voltages and currents. The application concerns a single vertical grounding with one-, two- and three-layer soil, whose upper end lies at the surface of the soil. This problem has been modeled using a 2D axi-symmetric electrokinetic formulation. The grounding resistance obtained by the finite element method is compared with the analytical one for one-layer soil. The results of this paper show that the finite element method is a powerful tool in the analysis of grounding systems at low frequencies. (author)
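For the one-layer-soil case used in the comparison, a widely quoted closed-form estimate for a single vertical rod is the Dwight-type formula R = ρ/(2πL)·(ln(4L/a) − 1). The paper does not state which analytical expression it compares against, so the following sketch is an assumption for illustration (parameter values hypothetical):

```python
import math

def rod_resistance(rho, length, radius):
    """Grounding resistance of one vertical rod in uniform soil.

    Classic Dwight-type estimate: R = rho/(2*pi*L) * (ln(4L/a) - 1),
    with rho in ohm·m, length and radius in metres.
    """
    return rho / (2 * math.pi * length) * (math.log(4 * length / radius) - 1)

# Illustrative numbers: 100 ohm·m soil, 3 m rod, 8 mm radius -> roughly 33 ohm
R = rod_resistance(100.0, 3.0, 0.008)
```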

  4. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non…-linear. With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit…-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed…
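The superposition idea described above (individual wake deficits treated as linear perturbations and simply added onto the mean flow) can be illustrated with a toy Gaussian far-wake shape. The profile and constants below are illustrative (a Bastankhah/Porté-Agel-type shape with assumed thrust coefficient and wake-growth rate), not the thin-shear-layer solution the authors actually use:

```python
import math

def gaussian_deficit(x, y, ct=0.8, k=0.05, d=1.0):
    """Normalised velocity deficit of one turbine wake centred at the origin.

    x, y are downstream/lateral distances in rotor diameters; ct is an
    assumed thrust coefficient, k an assumed wake-growth rate.
    """
    if x <= 0:
        return 0.0  # no deficit upstream of the rotor
    sigma = k * x + 0.25 * d                       # linearly growing wake width
    arg = 1 - ct / (8 * (sigma / d) ** 2)
    c = 1 - math.sqrt(max(0.0, arg))               # centreline deficit
    return c * math.exp(-y ** 2 / (2 * sigma ** 2))

def farm_deficit(x, y, turbines):
    """Linear-perturbation superposition: individual deficits simply add."""
    return sum(gaussian_deficit(x - xt, y - yt) for xt, yt in turbines)
```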

  5. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  6. Two dimensional analytical model for a reconfigurable field effect transistor

    Science.gov (United States)

    Ranjith, R.; Jayachandran, Remya; Suja, K. J.; Komaragiri, Rama S.

    2018-02-01

    This paper presents two-dimensional potential and current models for a reconfigurable field effect transistor (RFET). Two potential models, which describe the subthreshold and above-threshold channel potentials, are developed by solving the two-dimensional (2D) Poisson's equation. In the first potential model, the 2D Poisson's equation is solved by considering constant/zero charge density in the channel region of the device to obtain the subthreshold potential characteristics. In the second model, the accumulation charge density is considered to obtain the above-threshold potential characteristics of the device. The proposed models are applicable to devices with a lightly doped or intrinsic channel. While obtaining the mathematical model, the whole body area is divided into two regions: a gated region and an un-gated region. The analytical models are compared with technology computer-aided design (TCAD) simulation results and are in complete agreement for different lengths of the gated regions as well as at various supply voltage levels.

  7. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    International Nuclear Information System (INIS)

    Pérez-Andújar, Angélica; Zhang, Rui; Newhauser, Wayne

    2013-01-01

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w_R, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.

  8. Analytical theory of Doppler reflectometry in slab plasma model

    Energy Technology Data Exchange (ETDEWEB)

    Gusakov, E.Z.; Surkov, A.V. [Ioffe Institute, Politekhnicheskaya 26, St. Petersburg (Russian Federation)

    2004-07-01

    Doppler reflectometry is considered in a slab plasma model within the framework of analytical theory. The locality of the diagnostics is analyzed for both regimes: linear and nonlinear in the turbulence amplitude. Toroidal antenna focusing of the probing beam onto the cut-off is proposed and discussed as a method to increase the spatial resolution of the diagnostics. It is shown that even in the case of a nonlinear regime of multiple scattering, the diagnostics can be used for an estimation (with certain accuracy) of the plasma poloidal rotation profile. (authors)

  9. New analytically solvable models of relativistic point interactions

    International Nuclear Information System (INIS)

    Gesztesy, F.; Seba, P.

    1987-01-01

    Two new analytically solvable models of relativistic point interactions in one dimension (being natural extensions of the nonrelativistic δ- resp. δ'-interaction) are considered. Their spectral properties in the case of finitely many point interactions as well as in the periodic case are fully analyzed. Moreover, the spectrum is explicitly determined in the case of independent, identically distributed random coupling constants, and the analog of the Saxon-Hutner conjecture concerning gaps in the energy spectrum of such systems is derived.

  10. A simple analytical infiltration model for short-duration rainfall

    Science.gov (United States)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by 5 models (i.e., the SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange models) was compared based on numerical experiments and soil column experiments. In numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in soil column experiments, the infiltration rate fluctuated in a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of the solutions and the coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
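Of the benchmark models named above, the Philip model has a simple two-term closed form for cumulative infiltration, I(t) = S·t^(1/2) + A·t (S the sorptivity, A a gravity-related constant), with infiltration rate i(t) = dI/dt. A small sketch with hypothetical parameter values:

```python
import math

def philip_cumulative(t, sorptivity, a_const):
    """Cumulative infiltration I(t) = S*sqrt(t) + A*t (Philip model)."""
    return sorptivity * math.sqrt(t) + a_const * t

def philip_rate(t, sorptivity, a_const):
    """Infiltration rate i(t) = dI/dt = S/(2*sqrt(t)) + A."""
    return sorptivity / (2 * math.sqrt(t)) + a_const

# Hypothetical soil: S = 1.0 cm h^-0.5, A = 0.1 cm/h, evaluated at t = 4 h
I4 = philip_cumulative(4.0, 1.0, 0.1)
i4 = philip_rate(4.0, 1.0, 0.1)
```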

  11. The analytical calibration model of temperature effects on a silicon piezoresistive pressure sensor

    Directory of Open Access Journals (Sweden)

    Meng Nie

    2017-03-01

    Full Text Available Presently, piezoresistive pressure sensors are in high demand for use in various microelectronic devices. The electrical behavior of these pressure sensors mainly depends on the temperature gradient. In this paper, the various factors responsible for the temperature drift of the pressure sensor are analyzed, including the effects of temperature and doping concentration on the pressure-sensitive resistance, package stress, and the effect of temperature on the Young's modulus. Based on this analysis, an analytical calibration model of the output voltage of the sensor is proposed and validated against experimental data.

  12. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A.

    2008-12-17

    This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Organization for Standardization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  13. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The assumption common to all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs
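A fractal measure of the kind invoked here is commonly estimated by box counting: cover the object with boxes of size ε and fit the slope of log N(ε) against log(1/ε). A minimal two-scale sketch (a densely sampled line segment, whose dimension should come out near 1; this is a generic illustration, not the paper's crown measure):

```python
import math

def box_count(points, eps):
    """Number of eps-sized grid boxes occupied by a 2-D point set."""
    return len({(int(x // eps), int(y // eps)) for x, y in points})

def box_dimension(points, eps1, eps2):
    """Slope of log N(eps) vs log(1/eps) between two box sizes."""
    n1, n2 = box_count(points, eps1), box_count(points, eps2)
    return (math.log(n2) - math.log(n1)) / (math.log(1 / eps2) - math.log(1 / eps1))

# Points sampled on the unit diagonal segment y = x: dimension ~ 1
pts = [(i / 10000, i / 10000) for i in range(10001)]
d = box_dimension(pts, 0.1, 0.01)
```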

  14. Challenges in the development of analytical soil compaction models

    DEFF Research Database (Denmark)

    Keller, Thomas; Lamandé, Mathieu

    2010-01-01

    Soil compaction can cause a number of environmental and agronomic problems (e.g. flooding, erosion, leaching of agrochemicals to recipient waters, emission of greenhouse gases to the atmosphere, crop yield losses), resulting in significant economic damage to society and agriculture. Strategies and recommendations for the prevention of soil compaction often rely on simulation models. This paper highlights some issues that need further consideration in order to improve soil compaction modelling, with the focus on analytical models. We discuss the different issues based on comparisons between experimental data and model simulations. The upper model boundary condition (i.e. contact area and stresses at the tyre-soil interface) is highly influential in stress propagation, but knowledge on the effects of loading and soil conditions on the upper model boundary condition is inadequate. The accuracy of stress…

  15. A novel stress distribution analytical model of O-ring seals under different properties of materials

    International Nuclear Information System (INIS)

    Wu, Di; Wang, Shao Ping; Wang, Xing Jian

    2017-01-01

    The elastomeric O-ring seals have been widely used as sealing elements in hydraulic systems. The sealing performance of O-ring seals is related to the stress distribution. The stress distribution depends on the squeeze rate and internal pressure, and varies with the properties of the O-ring material. Thus, in order to study the sealing performance of O-ring seals, it is necessary to describe the analytic relationship between the stress distribution and the material properties. For this purpose, a novel stress distribution analytical model (SDAM) is proposed in this paper. The model utilizes two stress complex functions to describe the stress distribution of O-ring seals. The proposed SDAM can express not only the analytical relationship between stress distribution and Young's modulus, but also that between stress distribution and Poisson's ratio. Finally, comparisons between finite element analysis and SDAM results validate that the proposed model can effectively reveal the stress distribution under different properties of O-ring materials.

  16. A novel stress distribution analytical model of O-ring seals under different properties of materials

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Di; Wang, Shao Ping; Wang, Xing Jian [School of Automation Science and Electrical Engineering, Beihang University, Beijing (China)

    2017-01-15

    The elastomeric O-ring seals have been widely used as sealing elements in hydraulic systems. The sealing performance of O-ring seals is related to the stress distribution. The stress distribution depends on the squeeze rate and internal pressure, and varies with the properties of the O-ring material. Thus, in order to study the sealing performance of O-ring seals, it is necessary to describe the analytic relationship between the stress distribution and the material properties. For this purpose, a novel stress distribution analytical model (SDAM) is proposed in this paper. The model utilizes two stress complex functions to describe the stress distribution of O-ring seals. The proposed SDAM can express not only the analytical relationship between stress distribution and Young's modulus, but also that between stress distribution and Poisson's ratio. Finally, comparisons between finite element analysis and SDAM results validate that the proposed model can effectively reveal the stress distribution under different properties of O-ring materials.

  17. Analytical modeling of Schottky tunneling source impact ionization MOSFET with reduced breakdown voltage

    Directory of Open Access Journals (Sweden)

    Sangeeta Singh

    2016-03-01

    Full Text Available In this paper, we have investigated a novel Schottky tunneling source impact ionization MOSFET (STS-IMOS) to lower the breakdown voltage of the conventional impact ionization MOS (IMOS) and developed an analytical model for the same. In STS-IMOS there is a cumulative effect of both impact ionization and source-induced barrier tunneling. The silicide source offers very low parasitic resistance, the outcome of which is an increase in the voltage drop across the intrinsic region for the same applied bias. This reduces the operating voltage and hence the device exhibits a significant reduction in both breakdown and threshold voltage. STS-IMOS shows high immunity against hot-electron damage, as a result of which device reliability increases significantly. The analytical model for the impact ionization current (Iii) is developed based on the integration of the ionization integral (M). Similarly, to obtain the Schottky tunneling current (ITun) expression, the Wentzel-Kramers-Brillouin (WKB) approximation is employed. Analytical models for the threshold voltage and subthreshold slope are optimized against Schottky barrier height (ϕB) variation. The expression for the drain current is computed as a function of gate-to-drain bias via an integral expression. It is validated by comparing it with technology computer-aided design (TCAD) simulation results as well. In essence, this analytical framework provides the physical background for a better understanding of STS-IMOS and its performance estimation.
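The WKB treatment of Schottky-barrier tunneling mentioned above reduces, for a triangular barrier under a uniform field, to the familiar transmission factor T = exp(−4·sqrt(2m*)·(qϕ_B)^{3/2} / (3·q·ħ·E)). The sketch below uses this textbook form with illustrative parameter values; the paper's actual ITun expression will differ in its prefactors and field profile:

```python
import math

HBAR = 1.054571817e-34   # J·s
M0 = 9.1093837015e-31    # electron rest mass, kg
Q = 1.602176634e-19      # elementary charge, C

def wkb_triangular(phi_b_ev, e_field, m_rel=0.2):
    """WKB tunnelling probability through a triangular Schottky barrier.

    phi_b_ev: barrier height in eV; e_field: electric field in V/m;
    m_rel: relative effective mass (illustrative value, not the paper's).
    """
    m = m_rel * M0
    phi = phi_b_ev * Q  # barrier height in joules
    exponent = -4 * math.sqrt(2 * m) * phi ** 1.5 / (3 * Q * HBAR * e_field)
    return math.exp(exponent)
```

As expected, transmission rises steeply with field and falls with barrier height, which is the mechanism behind the source-induced barrier tunneling described above.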

  18. MOLECULAR VALIDATED MODEL FOR ADSORPTION OF PROTONATED DYE ON LDH

    Directory of Open Access Journals (Sweden)

    B. M. Braga

    Full Text Available Abstract Hydrotalcite-like compounds are anionic clays of scientific and technological interest for their use as ion exchange materials, catalysts and modified electrodes. Surface phenomena are important for all these applications. Although conventional analytical methods have enabled progress in understanding the behavior of anionic clays in solution, an evaluation at the atomic scale of the dynamics of their ionic interactions has never been performed. Molecular simulation has become an extremely useful tool to provide this perspective. Our purpose is to validate a simplified model for the adsorption of 5-benzoyl-4-hydroxy-2-methoxy-benzenesulfonic acid (MBSA), a prototype molecule of anionic dyes, onto a hydrotalcite surface. Monte Carlo simulations were performed in the canonical ensemble with MBSA ions and a pore model of hydrotalcite using the UFF and ClayFF force fields. The proposed molecular model has allowed us to reproduce experimental data of atomic force microscopy. Influences of protonation during the adsorption process are also presented.
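The Metropolis acceptance rule behind Monte Carlo sampling of this kind can be shown on a deliberately tiny system: a single adsorption site whose exact mean occupancy is the Langmuir expression θ = 1/(1 + exp(−β(μ + ε))). This is a toy check of the sampler only, not the authors' UFF/ClayFF pore model:

```python
import math
import random

def metropolis_occupancy(beta, eps, mu, steps=200000, seed=1):
    """Metropolis sampling of one adsorption site (toy lattice-gas).

    Effective energy H(n) = -(eps + mu)*n for occupancy n in {0, 1};
    eps is the (assumed) binding energy, mu the chemical potential.
    Exact mean occupancy: 1 / (1 + exp(-beta*(mu + eps))).
    """
    rng = random.Random(seed)
    occ, total = 0, 0
    for _ in range(steps):
        # energy change of flipping the occupancy
        d_e = -(mu + eps) if occ == 0 else (mu + eps)
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            occ = 1 - occ  # accept the move
        total += occ
    return total / steps
```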

  19. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel, and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.

  20. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    Science.gov (United States)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

    Accident reconstruction is important in the general context of increasing road traffic safety. In the casuistry of traffic accidents, those caused by tire explosions are critical in the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequences of the road event backwards. The authors also aimed to validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.
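The energy-conservation prong of such reconstructions reduces, in its simplest form, to the textbook skid-to-stop relation ½mv² = μmgd, giving v = sqrt(2μgd) for the speed at the start of a skid of length d. The paper's full model also covers the overturning and rolling phases; the sketch below handles only the elementary skid term, with hypothetical inputs:

```python
import math

def speed_from_skid(mu, skid_length, g=9.81):
    """Pre-skid speed from an energy balance over a skid-to-stop.

    0.5*m*v**2 = mu*m*g*d  =>  v = sqrt(2*mu*g*d); mu is the assumed
    tyre-road friction coefficient, skid_length in metres, v in m/s.
    """
    return math.sqrt(2 * mu * g * skid_length)

# Hypothetical case: mu = 0.7, 40 m of skid marks
v = speed_from_skid(0.7, 40.0)
```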

  1. Analytical Modeling Of The Steinmetz Coefficient For Single-Phase Transformer Eddy Current Loss Prediction

    Directory of Open Access Journals (Sweden)

    T. Aly Saandy

    2015-08-01

    Full Text Available Abstract This article presents an analytical calculation methodology for the Steinmetz coefficient applied to the prediction of eddy current loss in a single-phase transformer. Based on electrical circuit theory, the active power consumed by the core is expressed analytically as a function of the electrical parameters, such as resistivity, and the geometrical dimensions of the core. The proposed modeling approach is established with the parallel-series duality. The required coefficient is identified from the empirical Steinmetz data based on the experimental active power expression. To verify the relevance of the model, validations both by simulations at two different frequencies and by measurements were carried out. The obtained results are in good agreement with the theoretical approach and the practical results.
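In the classical loss-separation picture the eddy-current term scales as P_e = k_e·f²·B², so a Steinmetz-type coefficient can be identified by a least-squares fit through the origin against measured active power. The sketch below uses synthetic data purely for illustration (the paper itself derives the coefficient from circuit-theory expressions, not this fit):

```python
def eddy_coefficient(samples):
    """Least-squares fit of k in P = k * f**2 * B**2 through the origin.

    samples: iterable of (frequency_hz, flux_density_t, power_w) tuples.
    """
    num = sum(p * (f ** 2 * b ** 2) for f, b, p in samples)
    den = sum((f ** 2 * b ** 2) ** 2 for f, b, p in samples)
    return num / den

# Synthetic measurements generated with k = 2e-5 (hypothetical core)
data = [(50.0, 1.0, 2e-5 * 50.0 ** 2 * 1.0 ** 2),
        (60.0, 1.2, 2e-5 * 60.0 ** 2 * 1.2 ** 2)]
k = eddy_coefficient(data)
```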

  2. Paleoclimate validation of a numerical climate model

    International Nuclear Information System (INIS)

    Schelling, F.J.; Church, H.W.; Zak, B.D.; Thompson, S.L.

    1994-01-01

    An analysis planned to validate regional climate model results for a past climate state at Yucca Mountain, Nevada, against paleoclimate evidence for the period is described. This analysis, which will use the GENESIS model of global climate nested with the RegCM2 regional climate model, is part of a larger study for DOE's Yucca Mountain Site Characterization Project that is evaluating the impacts of long term future climate change on performance of the potential high level nuclear waste repository at Yucca Mountain. The planned analysis and anticipated results are presented

  3. Validation of the STAFF-5 computer model

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Fields, S.R.

    1981-04-01

    STAFF-5 is a dynamic heat-transfer-fluid-flow stress model designed for computerized prediction of the temperature-stress performance of spent LWR fuel assemblies under storage/disposal conditions. Validation of the temperature calculating abilities of this model was performed by comparing temperature calculations under specified conditions to experimental data from the Engine Maintenance and Disassembly (EMAD) Fuel Temperature Test Facility and to calculations performed by Battelle Pacific Northwest Laboratory (PNL) using the HYDRA-1 model. The comparisons confirmed the ability of STAFF-5 to calculate representative fuel temperatures over a considerable range of conditions, as a first step in the evaluation and prediction of fuel temperature-stress performance

  4. Analytical properties of a three-compartmental dynamical demographic model

    Science.gov (United States)

    Postnikov, E. B.

    2015-07-01

    The three-compartmental demographic model by Korotayev-Malkov-Khaltourina, connecting population size, economic surplus, and education level, is considered from the point of view of dynamical systems theory. It is shown that there exist two integrals of motion, which enable the system to be reduced to a single nonlinear ordinary differential equation. The study of its structure provides analytical criteria for the dominance ranges of the dynamics of Malthus and Kremer. Additionally, particular ranges of parameters enable the derived general ordinary differential equation to be reduced to the models of Gompertz and Tsoularis-Wallace.
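The Malthus/Kremer dominance regimes mentioned above differ in the growth law: Malthusian growth is dN/dt = aN (exponential), while Kremer-type growth is dN/dt = aN² (hyperbolic, blowing up in finite time at t = 1/(aN₀)). A forward-Euler sketch makes the contrast concrete (parameter values illustrative only, not from the paper):

```python
def grow(n0, a, t_end, dt, exponent):
    """Forward-Euler integration of dN/dt = a * N**exponent.

    exponent = 1: Malthusian (exponential) growth;
    exponent = 2: Kremer-type (hyperbolic) growth.
    """
    n, t = n0, 0.0
    while t < t_end:
        n += a * n ** exponent * dt
        t += dt
    return n

# Malthus: N(1) ~ e for n0 = a = 1; Kremer: N(0.5) ~ 1/(1-0.5) = 2
malthus = grow(1.0, 1.0, 1.0, 1e-4, 1)
kremer = grow(1.0, 1.0, 0.5, 1e-4, 2)
```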

  5. Model and Analytic Processes for Export License Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.; Wood, Thomas W.; Daly, Don S.; Brothers, Alan J.; Sanfilippo, Antonio P.; Cook, Diane; Holder, Larry

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent; a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision-framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessment. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An

  6. Gas Atomization of Aluminium Melts: Comparison of Analytical Models

    Directory of Open Access Journals (Sweden)

    Georgios Antipas

    2012-06-01

    Full Text Available A number of analytical models predicting the size distribution of particles during atomization of Al-based alloys by N2, He and Ar gases were compared. Simulations of liquid break up in a close coupled atomizer revealed that the finer particles are located near the center of the spray cone. Increasing gas injection pressures led to an overall reduction of particle diameters and caused a migration of the larger powder particles towards the outer boundary of the flow. At sufficiently high gas pressures the spray became monodisperse. The models also indicated that there is a minimum achievable mean diameter for any melt/gas system.

  7. A simple analytical model for reactive particle ignition in explosives

    Energy Technology Data Exchange (ETDEWEB)

    Tanguay, Vincent [Defence Research and Development Canada - Valcartier, 2459 Pie XI Blvd. North, Quebec, QC, G3J 1X5 (Canada); Higgins, Andrew J. [Department of Mechanical Engineering, McGill University, 817 Sherbrooke St. West, Montreal, QC, H3A 2K6 (Canada); Zhang, Fan [Defence Research and Development Canada - Suffield, P. O. Box 4000, Stn Main, Medicine Hat, AB, T1A 8K6 (Canada)

    2007-10-15

    A simple analytical model is developed to predict ignition of magnesium particles in nitromethane detonation products. The flow field is simplified by considering the detonation products as a perfect gas expanding in a vacuum in a planar geometry. This simplification allows the flow field to be solved analytically. A single particle is then introduced in this flow field. Its trajectory and heating history are computed. It is found that most of the particle heating occurs in the Taylor wave and in the quiescent flow region behind it, shortly after which the particle cools. By considering only these regions, thereby considerably simplifying the problem, the flow field can be solved analytically with a more realistic equation of state (such as JWL) and a spherical geometry. The model is used to compute the minimum charge diameter for particle ignition to occur. It is found that the critical charge diameter for particle ignition increases with particle size. These results are compared to experimental data and show good agreement. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
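The particle heating history computed in such models can be illustrated in the lumped-capacitance limit (small Biot number), where m·c·dT/dt = h·A·(T_gas − T_p) gives an exponential approach with time constant τ = mc/(hA). This is a drastic simplification of the paper's flow-coupled heating in the Taylor wave, with illustrative values only:

```python
import math

def particle_temperature(t, t0, t_gas, tau):
    """Lumped-capacitance heating of a particle in hot gas.

    dT/dt = (T_gas - T)/tau  =>  T(t) = T_gas + (t0 - T_gas)*exp(-t/tau),
    where tau = m*c/(h*A) is the thermal response time (assumed constant
    here; in the real flow the gas state varies along the trajectory).
    """
    return t_gas + (t0 - t_gas) * math.exp(-t / tau)
```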

  8. HTS axial flux induction motor with analytic and FEA modeling

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: alexlee.zn@gmail.com; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-11-15

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the characteristics of the force, an analytic method and the finite element method are adopted to model the motor. To make sure the HTS can carry a sufficiently large current and work well, the magnetic field distribution in the HTS coil is calculated. An effective method to improve the critical current of the HTS coil is presented. Then, AC losses in the HTS windings in the motor are estimated and tested.

  9. HTS axial flux induction motor with analytic and FEA modeling

    International Nuclear Information System (INIS)

    Li, S.; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-01-01

Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic and finite element methods have been adopted to model the motor and to calculate the force. •The magnetic field distribution in the HTS coil is calculated by the analytic method. •An effective method to improve the critical current of the HTS coil is presented. •AC losses of the HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. To analyze the characteristics of the force, an analytic method and the finite element method are adopted to model the motor. To ensure that the HTS coils can carry a sufficiently large current and operate reliably, the magnetic field distribution in the HTS coil is calculated, and an effective method to improve the critical current of the HTS coil is presented. Finally, the AC losses in the HTS windings of the motor are estimated and tested.

  10. Analytical local electron-electron interaction model potentials for atoms

    International Nuclear Information System (INIS)

    Neugebauer, Johannes; Reiher, Markus; Hinze, Juergen

    2002-01-01

    Analytical local potentials for modeling the electron-electron interaction in an atom reduce significantly the computational effort in electronic structure calculations. The development of such potentials has a long history, but some promising ideas have not yet been taken into account for further improvements. We determine a local electron-electron interaction potential akin to those suggested by Green et al. [Phys. Rev. 184, 1 (1969)], which are widely used in atom-ion scattering calculations, electron-capture processes, and electronic structure calculations. Generalized Yukawa-type model potentials are introduced. This leads, however, to shell-dependent local potentials, because the origin behavior of such potentials is different for different shells as has been explicated analytically [J. Neugebauer, M. Reiher, and J. Hinze, Phys. Rev. A 65, 032518 (2002)]. It is found that the parameters that characterize these local potentials can be interpolated and extrapolated reliably for different nuclear charges and different numbers of electrons. The analytical behavior of the corresponding localized Hartree-Fock potentials at the origin and at long distances is utilized in order to reduce the number of fit parameters. It turns out that the shell-dependent form of Green's potential, which we also derive, yields results of comparable accuracy using only one shell-dependent parameter
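As a concrete illustration of the generalized Yukawa-type screened potentials discussed here, a minimal sketch follows; the functional form and parameters are illustrative, not the shell-dependent parameterization derived in the paper:

```python
import math

def screened_potential(r, z, n, d):
    """Illustrative Yukawa-type local potential (atomic units):
    V(r) = -[(Z - N + 1) + (N - 1) * exp(-r/d)] / r.
    Near the nucleus it tends to -Z/r; far away the other N - 1
    electrons screen the nucleus, leaving -(Z - N + 1)/r."""
    return -((z - n + 1) + (n - 1) * math.exp(-r / d)) / r
```

For a neutral atom with Z = N = 10, the potential interpolates between -10/r at the origin and -1/r asymptotically, matching the limiting behavior such model potentials are built to satisfy.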

  11. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. Numerous models must be scrutinized and implemented to obtain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be selected according to user requirements in hard turning.

  12. An analytical model for light backscattering by coccoliths and coccospheres of Emiliania huxleyi.

    Science.gov (United States)

    Fournier, Georges; Neukermans, Griet

    2017-06-26

We present an analytical model for light backscattering by coccoliths and coccolithophores of the marine calcifying phytoplankter Emiliania huxleyi. The model is based on the separation of the effects of diffraction, refraction, and reflection on scattering, a valid assumption for particle sizes typical of coccoliths and coccolithophores. Our model results closely match those from an exact scattering code that uses the complex particle geometry, and the model also reproduces abrupt transitions in scattering magnitude well. Finally, we apply our model to predict changes in the spectral backscattering coefficient during an Emiliania huxleyi bloom, with results that closely match in situ measurements. Because our model captures the key features that control the light backscattering process, it can be generalized to coccoliths and coccolithophores of different morphologies, which can be obtained from size-calibrated electron microphotographs. Matlab codes of this model are provided as supplementary material.

  13. Analytical models of optical response in one-dimensional semiconductors

    International Nuclear Information System (INIS)

    Pedersen, Thomas Garm

    2015-01-01

    The quantum mechanical description of the optical properties of crystalline materials typically requires extensive numerical computation. Including excitonic and non-perturbative field effects adds to the complexity. In one dimension, however, the analysis simplifies and optical spectra can be computed exactly. In this paper, we apply the Wannier exciton formalism to derive analytical expressions for the optical response in four cases of increasing complexity. Thus, we start from free carriers and, in turn, switch on electrostatic fields and electron–hole attraction and, finally, analyze the combined influence of these effects. In addition, the optical response of impurity-localized excitons is discussed. - Highlights: • Optical response of one-dimensional semiconductors including excitons. • Analytical model of excitonic Franz–Keldysh effect. • Computation of optical response of impurity-localized excitons

  14. Decentral gene expression analysis: analytical validation of the Endopredict genomic multianalyte breast cancer prognosis test

    Directory of Open Access Journals (Sweden)

    Kronenwett Ralf

    2012-10-01

Background: EndoPredict (EP) is a clinically validated multianalyte gene expression test to predict distant metastasis in ER-positive, HER2-negative breast cancer treated with endocrine therapy alone. The test is based on the combined analysis of 12 genes in formalin-fixed, paraffin-embedded (FFPE) tissue by reverse transcription-quantitative real-time PCR (RT-qPCR). Recently, it was shown that EP is feasible for reliable decentralized assessment of gene expression. The aim of this study was the analytical validation of the performance characteristics of the assay and its verification in a molecular-pathological routine laboratory. Methods: Gene expression values to calculate the EP score were assayed by one-step RT-qPCR using RNA from FFPE tumor tissue. Limit of blank, limit of detection, linear range, and PCR efficiency were assessed for each of the 12 PCR assays using serial sample dilutions. Different breast cancer samples were used to evaluate RNA input range, precision and inter-laboratory variability. Results: PCR assays were linear up to Cq values between 35.1 and 37.2. Amplification efficiencies ranged from 75% to 101%. The RNA input range without considerable change of the EP score was between 0.16 and 18.5 ng/μl. Analysis of precision (variation of day, day time, instrument, operator, reagent lots) resulted in a total noise (standard deviation) of 0.16 EP score units on a scale from 0 to 15. The major part of the total noise (SD 0.14) was caused by the replicate-to-replicate noise of the PCR assays (repeatability) and was not associated with different operating conditions (reproducibility). Performance characteristics established in the manufacturer's laboratory were verified in a routine molecular pathology laboratory. Comparison of 10 tumor samples analyzed in two different laboratories showed a Pearson coefficient of 0.995 and a mean deviation of 0.15 score units. Conclusions: The EP test showed reproducible performance

  15. Decentral gene expression analysis: analytical validation of the Endopredict genomic multianalyte breast cancer prognosis test

    International Nuclear Information System (INIS)

    Kronenwett, Ralf; Brase, Jan C; Weber, Karsten E; Fisch, Karin; Müller, Berit M; Schmidt, Marcus; Filipits, Martin; Dubsky, Peter; Petry, Christoph; Dietel, Manfred; Denkert, Carsten; Bohmann, Kerstin; Prinzler, Judith; Sinn, Bruno V; Haufe, Franziska; Roth, Claudia; Averdick, Manuela; Ropers, Tanja; Windbergs, Claudia

    2012-01-01

EndoPredict (EP) is a clinically validated multianalyte gene expression test to predict distant metastasis in ER-positive, HER2-negative breast cancer treated with endocrine therapy alone. The test is based on the combined analysis of 12 genes in formalin-fixed, paraffin-embedded (FFPE) tissue by reverse transcription-quantitative real-time PCR (RT-qPCR). Recently, it was shown that EP is feasible for reliable decentralized assessment of gene expression. The aim of this study was the analytical validation of the performance characteristics of the assay and its verification in a molecular-pathological routine laboratory. Gene expression values to calculate the EP score were assayed by one-step RT-qPCR using RNA from FFPE tumor tissue. Limit of blank, limit of detection, linear range, and PCR efficiency were assessed for each of the 12 PCR assays using serial sample dilutions. Different breast cancer samples were used to evaluate RNA input range, precision and inter-laboratory variability. PCR assays were linear up to Cq values between 35.1 and 37.2. Amplification efficiencies ranged from 75% to 101%. The RNA input range without considerable change of the EP score was between 0.16 and 18.5 ng/μl. Analysis of precision (variation of day, day time, instrument, operator, reagent lots) resulted in a total noise (standard deviation) of 0.16 EP score units on a scale from 0 to 15. The major part of the total noise (SD 0.14) was caused by the replicate-to-replicate noise of the PCR assays (repeatability) and was not associated with different operating conditions (reproducibility). Performance characteristics established in the manufacturer’s laboratory were verified in a routine molecular pathology laboratory. Comparison of 10 tumor samples analyzed in two different laboratories showed a Pearson coefficient of 0.995 and a mean deviation of 0.15 score units. The EP test showed reproducible performance characteristics with good precision and negligible laboratory
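The inter-laboratory comparison quoted above (a Pearson coefficient and a mean deviation between paired scores) can be sketched as follows; the score pairs are made-up stand-ins, not the study's measurements:

```python
from statistics import mean, stdev

def interlab_stats(scores_a, scores_b):
    """Pearson correlation and mean absolute deviation between the same
    samples scored in two laboratories."""
    n = len(scores_a)
    ma, mb = mean(scores_a), mean(scores_b)
    cov = sum((a - ma) * (b - mb)
              for a, b in zip(scores_a, scores_b)) / (n - 1)
    pearson = cov / (stdev(scores_a) * stdev(scores_b))
    mean_dev = mean(abs(a - b) for a, b in zip(scores_a, scores_b))
    return pearson, mean_dev

# hypothetical EP-style scores (scale 0 to 15) for 10 samples in two labs:
lab1 = [3.2, 5.1, 7.8, 9.4, 11.0, 4.6, 6.3, 8.9, 10.2, 12.5]
lab2 = [3.3, 5.0, 7.9, 9.6, 11.1, 4.5, 6.5, 9.0, 10.3, 12.4]
pearson, mean_dev = interlab_stats(lab1, lab2)
```

A Pearson coefficient near 1 together with a small mean deviation is exactly the pattern the study reports as evidence of inter-laboratory reproducibility.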

  16. Analytical expressions for transition edge sensor excess noise models

    International Nuclear Information System (INIS)

    Brandt, Daniel; Fraser, George W.

    2010-01-01

    Transition edge sensors (TESs) are high-sensitivity thermometers used in cryogenic microcalorimeters which exploit the steep gradient in resistivity with temperature during the superconducting phase transition. Practical TES devices tend to exhibit a white noise of uncertain origin, arising inside the device. We discuss two candidate models for this excess noise, phase slip shot noise (PSSN) and percolation noise. We extend the existing PSSN model to include a magnetic field dependence and derive a basic analytical model for percolation noise. We compare the predicted functional forms of the noise current vs. resistivity curves of both models with experimental data and provide a set of equations for both models to facilitate future experimental efforts to clearly identify the source of excess noise.

  17. Diffusion weighted MRI by spatiotemporal encoding: Analytical description and in vivo validations

    Science.gov (United States)

    Solomon, Eddy; Shemesh, Noam; Frydman, Lucio

    2013-07-01

Diffusion-weighted (DW) MRI is a powerful modality for studying microstructure in normal and pathological tissues. The accuracy derived from DW MRI depends on the acquisition of quality images, and on a precise assessment of the b-values involved. Conventional DW MRI tends to be of limited use in regions suffering from large magnetic field or chemical shift heterogeneities, which severely distort the MR images. In this study we propose novel sequences based on SPatio-temporal ENcoding (SPEN), which overcome such shortcomings owing to SPEN's inherent robustness to offsets. SPEN, however, relies on the simultaneous application of gradients and radiofrequency-swept pulses, which may impart different diffusion weightings along the spatial axes. These will be further complicated in DW measurements by the diffusion-sensitizing gradients, and will in general lead to complex, spatially-dependent b-values. This study presents a formalism for analyzing these diffusion-weighted SPEN (dSPEN) data, which takes into account the concomitant effects of adiabatic pulses, of the imaging as well as diffusion gradients, and of the cross-terms between them. These analytical b-value derivations are validated experimentally in phantom systems and ex vivo spinal cords. Excellent agreement is found between the theoretical predictions and these dSPEN experiments. The ensuing methodology is then demonstrated by in vivo mapping of diffusion in the human breast, an organ where conventional k-space DW acquisition methods are challenged by both field and chemical shift heterogeneities. These studies demonstrate the increased robustness of dSPEN vis-à-vis comparable DW echo planar imaging, and confirm the value of this new methodology for medium- or high-field diffusion measurements in heterogeneous systems.

  18. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
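The core hydrostatic calculation such a model performs can be sketched as a sum over stacked fluid columns in the well; the densities, heights, and cavern pressure below are illustrative placeholders, not SPR values:

```python
G = 9.80665  # standard gravity, m/s^2

def wellhead_pressure(p_cavern, fluid_columns):
    """Wellhead pressure given cavern pressure and the stacked fluid
    columns in the wellbore: p_wh = p_cavern - sum(rho_i * g * h_i).
    fluid_columns is a list of (density kg/m^3, height m) pairs."""
    return p_cavern - sum(rho * G * h for rho, h in fluid_columns)

# illustrative stack: nitrogen gas above crude oil above brine
columns = [(100.0, 300.0), (850.0, 200.0), (1200.0, 100.0)]
p_wh = wellhead_pressure(20e6, columns)  # cavern pressure 20 MPa
```

Tracking how p_wh drifts as the nitrogen/oil and oil/brine interfaces move is the basic mechanism by which such a model distinguishes tight-well behavior from a small leak.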

  19. Analytical modeling of glucose biosensors based on carbon nanotubes.

    Science.gov (United States)

    Pourasl, Ali H; Ahmadi, Mohammad Taghi; Rahmani, Meisam; Chin, Huei Chaeng; Lim, Cheng Siong; Ismail, Razali; Tan, Michael Loong Peng

    2014-01-15

In recent years, carbon nanotubes have received widespread attention as promising carbon-based nanoelectronic devices. Due to their exceptional physical, chemical, and electrical properties, namely a high surface-to-volume ratio, their enhanced electron transfer properties, and their high thermal conductivity, carbon nanotubes can be used effectively as electrochemical sensors. The integration of carbon nanotubes with a functional group provides a good and solid support for the immobilization of enzymes. The determination of glucose levels using biosensors, particularly in the medical diagnostics and food industries, is gaining widespread adoption. Glucose biosensors detect the glucose molecule by catalyzing the oxidation of glucose to gluconic acid and hydrogen peroxide in the presence of oxygen, which provides high accuracy and a quick detection rate. In this paper, a single-wall carbon nanotube field-effect transistor biosensor for glucose detection is analytically modeled. In the proposed model, the glucose concentration is presented as a function of gate voltage. Subsequently, the proposed model is compared with existing experimental data. Good agreement between the model and the experimental data is reported. The simulated data demonstrate that the analytical model can be employed with an electrochemical glucose sensor to predict the behavior of the sensing mechanism in biosensors.
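A minimal sketch of the kind of calibration such a model enables, reading glucose concentration back from the measured gate voltage, is shown below. It assumes, purely for illustration, a logarithmic (Nernstian-style) voltage shift with concentration; the paper's actual CNT-FET model and parameters are different:

```python
def glucose_from_gate_voltage(v_g, v_ref=0.40, slope=0.059):
    """Toy inverse calibration: concentration (mM) from gate voltage (V),
    assuming v_g = v_ref + slope * log10(concentration). v_ref and slope
    are hypothetical values, not fitted sensor parameters."""
    return 10.0 ** ((v_g - v_ref) / slope)
```

With these placeholder parameters, a 59 mV shift in gate voltage corresponds to a tenfold change in glucose concentration.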

  20. An analytical model for enantioseparation process in capillary electrophoresis

    Science.gov (United States)

    Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.

    2017-12-01

An analytical model to explain the mobilities of an enantiomer binary mixture in a capillary electrophoresis experiment is proposed. The model consists of a set of kinetic equations describing the evolution of the populations of molecules involved in the enantioseparation process in capillary electrophoresis (CE). These equations take into account the asymmetrically driven migration of the enantiomer molecules, the chiral selector and the transient diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of chiral selector concentration, [C], can be measured. The behaviour of Δμ versus [C] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with the model's results. All these results are also corroborated by kinetic Monte Carlo simulation.
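The Wren and Rowe model mentioned above has a simple closed form for the apparent mobility of each enantiomer; a sketch with made-up mobilities and binding constants reproduces the characteristic rise-and-fall of Δμ with selector concentration:

```python
def wren_rowe_delta_mu(c, mu_free, mu_complex, k1, k2):
    """Wren & Rowe (J. Chromatogr. 1992, 603, 235) apparent mobility
    difference between two enantiomers at selector concentration c:
    mu_i = (mu_free + mu_complex * k_i * c) / (1 + k_i * c)."""
    mu1 = (mu_free + mu_complex * k1 * c) / (1 + k1 * c)
    mu2 = (mu_free + mu_complex * k2 * c) / (1 + k2 * c)
    return mu1 - mu2

# illustrative parameters (not fitted to the chlorpheniramine system):
# |delta_mu| is zero at c = 0, peaks at low c, then decays at high c
deltas = [abs(wren_rowe_delta_mu(c, 20.0, 5.0, 800.0, 600.0))
          for c in (0.0, 0.001, 0.01, 0.1)]
```

The non-monotonic curve is the key prediction being compared against the kinetic model in the abstract: too little selector gives no separation, and too much complexes both enantiomers almost completely.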

  1. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A

    2009-05-27

Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  2. Parametric study of a turbocompound diesel engine based on an analytical model

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yin, Yong; Zhao, Yanting; Chen, Zhen

    2016-01-01

Turbocompounding is an important technique to recover waste heat from engine exhaust and reduce CO2 emissions. This paper presents a parametric study of a turbocompound diesel engine based on an analytical model. The model was developed to investigate the influence of system parameters on engine fuel consumption; it is based on thermodynamics and empirical submodels, and can consider the impact of each parameter independently. The effects of turbine efficiency, back pressure, exhaust temperature, pressure ratio and engine speed on the recovered energy, pumping loss and engine fuel reduction were studied. Results show that turbine efficiency, exhaust temperature and back pressure have a great influence on the fuel reduction and the optimal power turbine (PT) expansion ratio. However, engine operating speed has little impact on the fuel savings obtained by turbocompounding. The interaction mechanism between the PT recovery power and the engine pumping loss is presented in the paper. Due to the nonlinear characteristic of turbine power, there is an optimum value of the PT expansion ratio that achieves the largest power gain. Finally, the fuel saving potential of a high-performance turbocompound engine and the requirements for it are proposed. - Highlights: • An analytical model for turbocompound engine is developed and validated. • Parametric study is performed to obtain lowest BSFC and optimal expansion ratio. • The influences of each parameter on the fuel saving potentials are presented. • The impact mechanisms of each parameter on the energy tradeoff are disclosed. • It provides an effective tool to guide the preliminary design of turbocompounding.
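The nonlinear tradeoff between turbine recovery power and pumping loss can be illustrated with a toy calculation: ideal-gas specific turbine work, w = eta * cp * T_in * (1 - pr**((1 - gamma)/gamma)), set against a pumping-loss penalty assumed (for illustration only) to grow linearly with expansion ratio. All numbers are generic exhaust-gas placeholders, not the paper's engine data:

```python
def turbine_specific_work(pr, eta=0.75, cp=1100.0, t_in=900.0, gamma=1.35):
    """Specific work recovered by a power turbine at expansion ratio pr,
    for an ideal gas with the placeholder properties above (J/kg)."""
    return eta * cp * t_in * (1.0 - pr ** ((1.0 - gamma) / gamma))

def net_gain(pr, k_pump=60e3):
    """Crude net benefit: turbine work minus a pumping-loss penalty that
    is ASSUMED to grow linearly with back pressure (hence with pr)."""
    return turbine_specific_work(pr) - k_pump * (pr - 1.0)

# scan expansion ratios 1.0 .. 4.0; the net gain peaks in between,
# mirroring the optimum PT expansion ratio discussed in the abstract
best_pr = max((pr / 10 for pr in range(10, 41)), key=net_gain)
```

Because turbine work flattens with pr while the (assumed linear) pumping penalty keeps growing, an interior optimum expansion ratio emerges, which is the qualitative mechanism the paper analyzes.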

  3. Analytical model of tilted driver–pickup coils for eddy current nondestructive evaluation

    Science.gov (United States)

    Cao, Bing-Hua; Li, Chao; Fan, Meng-Bao; Ye, Bo; Tian, Gui-Yun

    2018-03-01

A driver-pickup probe possesses better sensitivity and flexibility due to the individual optimization of each coil, and is frequently employed in eddy current (EC) array probes. In this work, a tilted non-coaxial driver-pickup probe above a multilayered conducting plate is analytically modeled with a spatial transformation for eddy current nondestructive evaluation. The core of the formulation is to obtain the projection of the magnetic vector potential (MVP) from the driver coil onto the vector along the tilted pickup coil, which is divided into two key steps. The first step is to project the MVP along the pickup coil onto a horizontal plane, and the second is to build the relationship between the projected MVP and the MVP along the driver coil. Afterwards, an analytical model for the case of a layered plate is established with the reflection and transmission theory of electromagnetic fields. The calculated values from the resulting model agree well with those from the finite element model (FEM) and experiments, which validates the developed analytical model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61701500, 51677187, and 51465024).

  4. A modified analytical model to study the sensing performance of a flexible capacitive tactile sensor array

    International Nuclear Information System (INIS)

    Liang, Guanhao; Wang, Yancheng; Mei, Deqing; Xi, Kailun; Chen, Zichen

    2015-01-01

    This paper presents a modified analytical model to study the sensing performance of a flexible capacitive tactile sensor array, which utilizes solid polydimethylsiloxane (PDMS) film as the dielectric layer. To predict the deformation of the sensing unit and capacitance changes, each sensing unit is simplified into a three-layer plate structure and divided into central, edge and corner regions. The plate structure and the three regions are studied by the general and modified models, respectively. For experimental validation, the capacitive tactile sensor array with 8  ×  8 (= 64) sensing units is fabricated. Experiments are conducted by measuring the capacitance changes versus applied external forces and compared with the general and modified models’ predictions. For the developed tactile sensor array, the sensitivity predicted by the modified analytical model is 1.25%/N, only 0.8% discrepancy from the experimental measurement. Results demonstrate that the modified analytical model can accurately predict the sensing performance of the sensor array and could be utilized for model-based optimal capacitive tactile sensor array design. (paper)
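The capacitive sensing principle behind each unit can be sketched with a parallel-plate estimate in which the applied force compresses the PDMS dielectric; the geometry and effective stiffness below are hypothetical, not the fabricated array's (whose measured sensitivity was 1.25%/N):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_change(force, area=4e-6, d0=50e-6, eps_r=2.7, k_eff=1.2e5):
    """Relative capacitance change dC/C0 of one parallel-plate sensing
    unit: the force compresses the dielectric by force/k_eff, and
    C = eps0 * eps_r * A / d. All parameter values are illustrative."""
    d = d0 - force / k_eff
    c0 = EPS0 * eps_r * area / d0
    return (EPS0 * eps_r * area / d - c0) / c0
```

This ideal-plate estimate corresponds to the "general" model of the abstract; the paper's modification accounts for the different deformation of the central, edge and corner regions of each unit.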

  5. Natural analogues and radionuclide transport model validation

    International Nuclear Information System (INIS)

    Lever, D.A.

    1987-08-01

    In this paper, some possible roles for natural analogues are discussed from the point of view of those involved with the development of mathematical models for radionuclide transport and with the use of these models in repository safety assessments. The characteristic features of a safety assessment are outlined in order to address the questions of where natural analogues can be used to improve our understanding of the processes involved and where they can assist in validating the models that are used. Natural analogues have the potential to provide useful information about some critical processes, especially long-term chemical processes and migration rates. There is likely to be considerable uncertainty and ambiguity associated with the interpretation of natural analogues, and thus it is their general features which should be emphasized, and models with appropriate levels of sophistication should be used. Experience gained in modelling the Koongarra uranium deposit in northern Australia is drawn upon. (author)

  6. Analytical Modelling of Wireless Power Transfer (WPT) Systems for Electric Vehicle Application

    Energy Technology Data Exchange (ETDEWEB)

    Chinthavali, Madhu Sudhan [ORNL; Campbell, Steven L [ORNL

    2016-01-01

This paper presents an analytical model for a wireless power transfer system used in electric vehicle applications. The equivalent circuit model for each major component of the system is described, including the input voltage source, resonant network, transformer, nonlinear diode rectifier load, etc. Based on the circuit model, the primary-side compensation capacitance, equivalent input impedance, and active/reactive power are calculated, which provides a guideline for parameter selection. Moreover, the dc-output to dc-input voltage gain curve is derived as well. A hardware prototype with a series-parallel resonant stage is built to verify the developed model. The experimental results from the hardware are compared with the model-predicted results to show the validity of the model.
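One of the parameter-selection steps the model supports, choosing the primary compensation capacitance so the coil resonates at the operating frequency, reduces to C = 1/(omega^2 * L). The inductance and frequency below are placeholders, not the prototype's values:

```python
import math

def primary_compensation_capacitance(l_p, f_res):
    """Series compensation capacitance that makes a primary coil of
    inductance l_p resonate at frequency f_res:
    C = 1 / ((2*pi*f_res)**2 * L)."""
    omega = 2.0 * math.pi * f_res
    return 1.0 / (omega * omega * l_p)

# placeholder values: 120 uH primary coil, 85 kHz operating frequency
c_p = primary_compensation_capacitance(l_p=120e-6, f_res=85e3)
```

Substituting the result back into f = 1/(2*pi*sqrt(L*C)) recovers the target frequency, which is a quick self-check when sizing the resonant network.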

  7. A semi-analytic model of magnetized liner inertial fusion

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Ryan D.; Slutz, Stephen A. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2015-05-15

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.
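One ingredient of such a model, the adiabatic compression heating of the fuel (item 4 above), has a textbook closed form for a cylindrical implosion: with density scaling as R**-2, the temperature follows T = T0 * (R0/R)**(2*(gamma - 1)). The sketch below shows only this piece, without the loss terms, ohmic heating, and alpha heating the full model adds:

```python
def adiabatic_fuel_temperature(t0, r0, r, gamma=5.0 / 3.0):
    """Adiabatic heating of fuel in a cylindrical liner implosion:
    rho ~ R**-2, so T = T0 * (R0/R)**(2*(gamma-1)). For an ideal
    monatomic plasma (gamma = 5/3) the exponent is 4/3."""
    return t0 * (r0 / r) ** (2.0 * (gamma - 1.0))
```

For example, a convergence ratio R0/R of 10 multiplies the fuel temperature by 10**(4/3), roughly a factor of 21.5, in this loss-free limit.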

  8. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

An input/output-subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. For these subsystems, mathematical models are established that take into account the special properties of I/O-subsystems, in order to avoid planning errors and to allow predictions of the capacity of such systems. Here an analytical model is presented for the magnetic discs of an I/O-subsystem, using analytical methods for the individual waiting queues or waiting queue networks. Only I/O-subsystems of IBM computer configurations, which can be controlled by the MVS operating system, are considered. After a description of the hardware and software components of these I/O-systems, possible solutions from the literature are presented and discussed with respect to their applicability to IBM I/O-subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and partly avoids their disadvantages. (orig./RW) [de
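The individual waiting queues mentioned above are typically modeled as single-server queues; a minimal M/M/1 sketch (with illustrative rates, not measured disk parameters) shows how mean response time grows with load:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue, T = 1 / (mu - lambda),
    the standard building block for modeling one disk of an
    I/O-subsystem before combining queues into a network."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

# a disk serving 100 requests/s: response time rises sharply with load
t_light = mm1_response_time(50.0, 100.0)  # 50% utilization
t_heavy = mm1_response_time(90.0, 100.0)  # 90% utilization
```

The steep growth near full utilization is exactly the planning-error regime such capacity models are built to expose.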

  9. Target normal sheath acceleration analytical modeling, comparative study and developments

    International Nuclear Information System (INIS)

    Perego, C.; Batani, D.; Zani, A.; Passoni, M.

    2012-01-01

Ultra-intense laser interaction with solid targets appears to be an extremely promising technique to accelerate ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of the published experimental results can be theoretically explained within the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation, various analytical or semi-analytical TNSA models have been published in recent years, each trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to clarify the present state of TNSA modeling and experimental results by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of this analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are presented.

  10. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application.
To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  11. Analytic nuclear scattering theories

    International Nuclear Information System (INIS)

    Di Marzio, F.; University of Melbourne, Parkville, VIC

    1999-01-01

    A wide range of nuclear reactions are examined in an analytical version of the usual distorted wave Born approximation. This new approach provides either semi-analytic or fully analytic descriptions of the nuclear scattering processes. The resulting computational simplifications, when used within the limits of validity, allow very detailed tests of both nuclear interaction models and large-basis models of nuclear structure to be performed.

  12. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g., public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  13. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g., public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  14. External validation of EPIWIN biodegradation models.

    Science.gov (United States)

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to readily biodegradable ones. In view of the high environmental concern over persistent chemicals, and given the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-ready biodegradability. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.

  15. Untangling Slab Dynamics Using 3-D Numerical and Analytical Models

    Science.gov (United States)

    Holt, A. F.; Royden, L.; Becker, T. W.

    2016-12-01

    Increasingly sophisticated numerical models have enabled us to make significant strides in identifying the key controls on how subducting slabs deform. For example, 3-D models have demonstrated that subducting plate width, and the related strength of toroidal flow around the plate edge, exerts a strong control on both the curvature and the rate of migration of the trench. However, the results of numerical subduction models can be difficult to interpret, and many first order dynamics issues remain at least partially unresolved. Such issues include the dominant controls on trench migration, the interdependence of asthenospheric pressure and slab dynamics, and how nearby slabs influence each other's dynamics. We augment 3-D, dynamically evolving finite element models with simple, analytical force-balance models to distill the physics associated with subduction into more manageable parts. We demonstrate that for single, isolated subducting slabs much of the complexity of our fully numerical models can be encapsulated by simple analytical expressions. Rates of subduction and slab dip correlate strongly with the asthenospheric pressure difference across the subducting slab. For double subduction, an additional slab gives rise to more complex mantle pressure and flow fields, and significantly extends the range of plate kinematics (e.g., convergence rate, trench migration rate) beyond those present in single slab models. Despite these additional complexities, we show that much of the dynamics of such multi-slab systems can be understood using the physics illuminated by our single slab study, and that a force-balance method can be used to relate intra-plate stress to viscous pressure in the asthenosphere and coupling forces at plate boundaries. This method has promise for rapid modeling of large systems of subduction zones on a global scale.

  16. Development and Validation of a Learning Analytics Framework: Two Case Studies Using Support Vector Machines

    Science.gov (United States)

    Ifenthaler, Dirk; Widanapathirana, Chathuranga

    2014-01-01

    Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also…

  17. Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM

    Directory of Open Access Journals (Sweden)

    Łapczyński Mariusz

    2014-08-01

    Full Text Available Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models) or customers intending to resign from the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). During the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated by using popular measures such as accuracy, precision, recall, G-mean, F-measure and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
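
The cluster-then-classify idea in this record can be sketched with the standard library alone; here a trivial per-cluster majority-class rule stands in for the C&RT tree the paper fits to each cluster, and the customer data are invented:

```python
def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic farthest-point initialization."""
    cents = [points[0]]
    while len(cents) < k:
        cents.append(max(points, key=lambda p: min(dist2(p, c) for c in cents)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, cents[c])) for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                cents[c] = (sum(p[0] for p in members) / len(members),
                            sum(p[1] for p in members) / len(members))
    return cents, labels

def fit_hybrid(points, classes, k=2):
    """Cluster first, then attach a majority-class rule to each cluster
    (a stand-in for training one decision tree per cluster)."""
    cents, labels = kmeans(points, k)
    rules = []
    for c in range(k):
        member = [cl for cl, lab in zip(classes, labels) if lab == c]
        rules.append(max(set(member), key=member.count))
    return cents, rules

def predict(cents, rules, p):
    """Assign a new point to its nearest cluster and apply that rule."""
    return rules[min(range(len(cents)), key=lambda c: dist2(p, cents[c]))]

# toy customer data: two well-separated behavioral segments
pts = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
cls = ["stay", "stay", "stay", "churn", "churn", "churn"]
cents, rules = fit_hybrid(pts, cls, k=2)
```

The paper's open question, how the choice of k (guided by cluster validity indices) affects downstream model quality, corresponds here to how the initial partition changes the per-cluster rules.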

  18. Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Nibbs, Faith G.

    2007-08-24

    While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on the knowledge of the subcultures, Anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood of their constituents to engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.

  19. Analytic modeling of the feedback stabilization of resistive wall modes

    International Nuclear Information System (INIS)

    Pustovitov, Vladimir D.

    2003-01-01

    Feedback suppression of resistive wall modes (RWM) is studied analytically using a model based on a standard cylindrical approximation. The optimal choice of the input signal for the feedback, effects related to the geometry of the feedback active coils, and RWM suppression in a configuration with an ITER-like double wall are considered here. The widespread opinion that feedback with poloidal sensors is better than feedback with radial sensors is discussed. It is shown that for an ideal feedback system the best input signal would be a combination of radial and poloidal perturbations measured inside the vessel. (author)

  20. An analytic model for flow reversal in divertor plasmas

    International Nuclear Information System (INIS)

    Cooke, P.I.H.; Prinja, A.K.

    1987-04-01

    An analytic model is developed and used to study the phenomenon of flow reversal which is observed in two-dimensional simulations of divertor plasmas. The effect is shown to be caused by the radial spread of neutral particles emitted from the divertor target, which can lead to a strong peaking of the ionization source at certain radial locations. The results indicate that flow reversal over a portion of the width of the scrape-off layer is inevitable in high recycling conditions. Implications for impurity transport and particle removal in reactors are discussed.

  1. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Science.gov (United States)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of this current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a highest efficiency of 7%.
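
The efficiency figures in this record can be put in context with the standard constant-property formula for the maximum conversion efficiency of a thermoelectric generator with figure of merit ZT; this is a textbook estimate, not the paper's ANSYS two-stage model, and the temperatures and ZT below are illustrative:

```python
import math

def teg_efficiency(t_hot, t_cold, zt):
    """Maximum thermoelectric conversion efficiency for figure of merit zt.

    t_hot, t_cold : hot- and cold-side absolute temperatures (K).
    Returns the fraction of heat converted to electricity.
    """
    carnot = 1.0 - t_cold / t_hot            # Carnot limit
    m = math.sqrt(1.0 + zt)                  # optimal load factor
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# illustrative operating point: 500 K absorber, 300 K water-cooled side
eta = teg_efficiency(t_hot=500.0, t_cold=300.0, zt=1.0)
```

Because the efficiency is the Carnot limit scaled down by the ZT-dependent factor, the optical gains the record describes (cover glasses, radiation shields) matter: they raise the achievable hot-side temperature.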

  2. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere

    Science.gov (United States)

    Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found in the expressions of simple series for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.
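
For the simplest special case of the local (Poisson) model, a single point charge at the center of the sphere, only the l = 0 term of the Kirkwood expansion survives, giving the familiar Born-style reaction-field energy. The sketch below (Gaussian units) illustrates that one term, not the paper's nonlocal series solution:

```python
def born_reaction_energy(q, a, eps_in, eps_out):
    """Reaction-field (self) energy of a point charge q at the center of a
    dielectric sphere of radius a (Gaussian units): the l = 0 Kirkwood term.

    eps_in  : dielectric constant inside the sphere (solute)
    eps_out : dielectric constant outside (solvent)
    """
    return (q ** 2 / (2.0 * a)) * (1.0 / eps_out - 1.0 / eps_in)

# unit charge, radius 2, vacuum-like interior, water-like exterior
w = born_reaction_energy(q=1.0, a=2.0, eps_in=1.0, eps_out=80.0)
```

Off-center and multiple charges bring in all higher l terms, which is exactly where the record's improved series and its nonlocal generalizations become valuable.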

  3. Analytical calculation of detailed model parameters of cast resin dry-type transformers

    International Nuclear Information System (INIS)

    Eslamian, M.; Vahidi, B.; Hosseinian, S.H.

    2011-01-01

    Highlights: → In this paper high frequency behavior of cast resin dry-type transformers was simulated. → Parameters of detailed model were calculated using analytical method and compared with FEM results. → A lab transformer was constructed in order to compare theoretical and experimental results. -- Abstract: Non-flammable characteristic of cast resin dry-type transformers make them suitable for different kind of usages. This paper presents an analytical method of how to obtain parameters of detailed model of these transformers. The calculated parameters are compared and verified with the corresponding FEM results and if it was necessary, correction factors are introduced for modification of the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing on high-voltage winding of cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.

  4. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    Science.gov (United States)

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model: the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectra and wavefunctions. These analytical results agree well with the numerical calculations over a wide range of the parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be considered as shifts to the parameters of the Jaynes-Cummings model. This modification shows the validity of the rotating-wave approximation under the assumption of near-resonance and relatively weak coupling. Moreover, the analytical expressions of several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.
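
The Jaynes-Cummings limit against which the record measures the counter-rotating shifts has a closed-form spectrum; a minimal sketch (ħ = 1, standard dressed-state result, not the paper's anisotropic solution):

```python
import math

def jc_doublet(n, w_c, w_a, g):
    """Jaynes-Cummings dressed-state energies for excitation manifold n >= 0.

    w_c : cavity frequency, w_a : atomic frequency, g : coupling (hbar = 1).
    Returns the (lower, upper) energies of the n-th doublet.
    """
    delta = w_a - w_c                         # detuning
    rabi = math.sqrt(delta ** 2 + 4.0 * g ** 2 * (n + 1))
    base = (n + 0.5) * w_c
    return base - 0.5 * rabi, base + 0.5 * rabi

# resonant case: the n = 0 doublet is split by the vacuum Rabi splitting 2g
lo, hi = jc_doublet(0, w_c=1.0, w_a=1.0, g=0.1)
```

In the record's picture, weak counter-rotating terms effectively renormalize w_a, w_c and g in these expressions rather than changing their form.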

  5. Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    International Nuclear Information System (INIS)

    Li Jin; Liu Hongxia; Li Bin; Cao Lei; Yuan Bo

    2010-01-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier-lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step potential, which can suppress SCE, HCE and DIBL. Also, strained-Si and SOI structure can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a 'rollup' in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations. (semiconductor devices)

  6. Optimizing multi-pinhole SPECT geometries using an analytical model

    International Nuclear Information System (INIS)

    Rentmeester, M C M; Have, F van der; Beekman, F J

    2007-01-01

    State-of-the-art multi-pinhole SPECT devices allow for sub-mm resolution imaging of radio-molecule distributions in small laboratory animals. The optimization of multi-pinhole and detector geometries using simulations based on ray-tracing or Monte Carlo algorithms is time-consuming, particularly because many system parameters need to be varied. As an efficient alternative we develop a continuous analytical model of a pinhole SPECT system with a stationary detector set-up, which we apply to focused imaging of a mouse. The model assumes that the multi-pinhole collimator and the detector both have the shape of a spherical layer, and uses analytical expressions for effective pinhole diameters, sensitivity and spatial resolution. For fixed fields-of-view, a pinhole-diameter adapting feedback loop allows for the comparison of the system resolution of different systems at equal system sensitivity, and vice versa. The model predicts that (i) for optimal resolution or sensitivity the collimator layer with pinholes should be placed as closely as possible around the animal given a fixed detector layer, (ii) with high-resolution detectors a resolution improvement of up to 31% can be achieved compared to optimized systems, (iii) high-resolution detectors can be placed close to the collimator without significant resolution losses, and (iv) interestingly, systems with a physical pinhole diameter of 0 mm can have excellent resolution when high-resolution detectors are used.
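
The trade-off the record's feedback loop exploits is visible already in the textbook single-pinhole formulas for system resolution and sensitivity (these are the standard planar-geometry estimates, not the paper's spherical-layer model; the distances and diameters below are illustrative):

```python
import math

def pinhole_system(d_eff, b, l, r_int, theta=math.pi / 2):
    """Textbook single-pinhole estimates.

    d_eff : effective pinhole diameter (mm)
    b     : source-to-pinhole distance (mm)
    l     : pinhole-to-detector distance (mm)
    r_int : intrinsic detector resolution (mm)
    theta : photon incidence angle at the pinhole
    Returns (system resolution at the source in mm, geometric sensitivity).
    """
    m = l / b                                   # magnification
    r_pin = d_eff * (b + l) / l                 # aperture blur at the source
    resolution = math.sqrt(r_pin ** 2 + (r_int / m) ** 2)
    sensitivity = d_eff ** 2 * math.sin(theta) ** 3 / (16.0 * b ** 2)
    return resolution, sensitivity

res, sens = pinhole_system(d_eff=0.6, b=25.0, l=100.0, r_int=3.0)
```

Shrinking d_eff improves the aperture blur but costs sensitivity quadratically, which is why the record's finding (iv), good resolution even as the physical diameter approaches 0 mm with high-resolution detectors, is counterintuitive and interesting.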

  7. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg/kg TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  8. Analytic model of Applied-B ion diode impedance behavior

    International Nuclear Information System (INIS)

    Miller, P.A.; Mendel, C.W. Jr.

    1987-01-01

    An empirical analysis of impedance data from Applied-B ion diodes used in seven inertial confinement fusion research experiments was published recently. The diodes all operated with impedance values well below the Child's-law value. The analysis uncovered an unusual unifying relationship among data from the different experiments. The analysis suggested that closure of the anode-cathode gap by electrode plasma was not a dominant factor in the experiments, but was not able to elaborate the underlying physics. Here we present a new analytic model of Applied-B ion diodes coupled to accelerators. A critical feature of the diode model is based on magnetic insulation theory. The model successfully describes impedance behavior of these diodes and supports stimulating new viewpoints of the physics of Applied-B ion diode operation

  9. Simplified analytical model for radionuclide transport simulation in the geosphere

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1996-01-01

    In order to evaluate postclosure off-site doses from low-level radioactive waste disposal facilities, an integrated safety assessment methodology has been developed at the Instituto de Pesquisas Energeticas e Nucleares. The source-term modelling approach adopted in this system is described, and the results obtained in the IAEA NSARS 'The Safety Assessment of Near-Surface Radioactive Waste Disposal Facilities' programme for model intercomparison studies are presented. The radionuclides released from the waste are calculated using a simple first-order kinetics model, and the transport through porous media below the waste is determined by using an analytical solution of the mass transport equation. The methodology and the results obtained in this work are compared with those reported by other participants of the NSARS programme. (author). 4 refs., 4 figs
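
The two ingredients named in this record, a first-order release model and an analytical solution of the mass-transport equation, can be sketched as follows. The Ogata-Banks solution used here (constant-concentration boundary, 1-D advection-dispersion) is a common choice for such screening models, though the record does not say which solution the authors used; all parameter values are illustrative:

```python
import math

def first_order_release(inventory, rate, t):
    """Release rate at time t for first-order leaching of an inventory."""
    return inventory * rate * math.exp(-rate * t)

def ogata_banks(c0, x, t, v, d):
    """1-D advection-dispersion with constant source c0 at x = 0.

    v : pore-water velocity, d : dispersion coefficient (consistent units).
    Returns the concentration at depth x and time t.
    """
    s = 2.0 * math.sqrt(d * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / d) * math.erfc((x + v * t) / s))

# illustrative: sample just behind the advancing solute front
c = ogata_banks(c0=1.0, x=1.0, t=100.0, v=0.05, d=0.01)
```

Coupling the two (release as the time-varying boundary condition) is what turns this into the kind of source-term-plus-geosphere chain the safety assessment methodology describes.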

  10. Analytic modeling, simulation and interpretation of broadband beam coupling impedance bench measurements

    Energy Technology Data Exchange (ETDEWEB)

    Niedermayer, U., E-mail: niedermayer@temf.tu-darmstadt.de [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Eidam, L. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Boine-Frankenheim, O. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); GSI Helmholzzentrum für Schwerionenforschung, Planckstraße 1, 64291 Darmstadt (Germany)

    2015-03-11

    First, a generalized theoretical approach towards beam coupling impedances and stretched-wire measurements is introduced. Applied to a circularly symmetric setup, this approach allows comparison of beam and wire impedances. The conversion formulas for TEM scattering parameters from measurements to impedances are thoroughly analyzed and compared to the analytical beam impedance solution. A proof of validity for the distributed impedance formula is given. The interaction of the beam or the TEM wave with dispersive material such as ferrite is discussed. The dependence of the obtained beam impedance on the relativistic velocity β is investigated and found to be material-property dependent. Second, numerical simulations of wakefields and scattering parameters are compared. The applicability of scattering parameter conversion formulas for finite device length is investigated. Laboratory measurement results for a circularly symmetric test setup, i.e. a ferrite ring, are shown and compared to analytic and numeric models. The optimization of the measurement process and error reduction strategies are discussed.

  11. Analytical validation of a melanoma diagnostic gene signature using formalin-fixed paraffin-embedded melanocytic lesions.

    Science.gov (United States)

    Warf, M Bryan; Flake, Darl D; Adams, Doug; Gutin, Alexander; Kolquist, Kathryn A; Wenstrup, Richard J; Roa, Benjamin B

    2015-01-01

    The aim of these studies was to validate the analytical performance of a gene expression signature that differentiates melanoma and nevi, using RNA expression of 14 signature genes and nine normalization genes to generate a melanoma diagnostic score (MDS). Formalin-fixed paraffin-embedded melanocytic lesions were evaluated in these studies. The overall SD of the assay was determined to be 0.69 MDS units. Individual amplicons within the signature had an average amplification efficiency of 92% and a SD of less than 0.5 Ct. The MDS was reproducible across a 2000-fold dilution range of input RNA. Melanin, an inhibitor of PCR, does not interfere with the signature. These studies indicate this signature is robust and reproducible and is analytically validated on formalin-fixed paraffin-embedded melanocytic lesions.
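
The generic mechanism behind such a score, normalizing each signature gene's qPCR Ct value against the mean of the normalization genes and combining the results, can be sketched as below. The gene names, Ct values, and weights are entirely hypothetical: the actual genes and weighting of the published MDS are not given in this record.

```python
def normalized_expression(ct_targets, ct_norm_genes):
    """Delta-Ct style normalization: mean normalization-gene Ct minus the
    target Ct, so larger values mean higher relative expression."""
    ref = sum(ct_norm_genes) / len(ct_norm_genes)
    return {gene: ref - ct for gene, ct in ct_targets.items()}

def toy_score(ct_targets, ct_norm_genes, weights):
    """Hypothetical weighted sum of normalized expression values; the real
    signature's genes and weights are not public in this record."""
    d = normalized_expression(ct_targets, ct_norm_genes)
    return sum(weights[g] * d[g] for g in weights)

score = toy_score({"GENE_A": 24.0, "GENE_B": 27.0},   # hypothetical Ct values
                  [25.0, 26.0, 27.0],                  # normalization genes
                  {"GENE_A": 1.0, "GENE_B": -0.5})     # hypothetical weights
```

Normalizing against a panel of reference genes is what makes the reported 2000-fold input-RNA dilution tolerance plausible: input amount shifts all Ct values together and cancels in the difference.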

  12. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do with how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  13. Analytic Models of Brown Dwarfs and the Substellar Mass Limit

    Directory of Open Access Journals (Sweden)

    Sayantan Auddy

    2016-01-01

    We present the analytic theory of brown dwarf evolution and the lower mass limit of hydrogen-burning main-sequence stars, and introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal nonrelativistic Fermi gas at a finite temperature, thereby allowing for nonzero values of the degeneracy parameter. We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modifications, we find the maximum mass for a brown dwarf to be in the range 0.064M⊙–0.087M⊙. An analytic formula for the luminosity evolution allows us to estimate the time period of the nonsteady-state (i.e., non-main-sequence) nuclear burning for substellar objects. We also calculate the evolution of very low mass stars. We estimate that ≃11% of stars take longer than 10⁷ yr to reach the main sequence, and ≃5% of stars take longer than 10⁸ yr.
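    The abstract's exact finite-temperature pressure expression is given in the paper itself; as a point of reference, the fully degenerate (zero-temperature) limit of the nonrelativistic electron gas, which any finite-temperature result must recover as the degeneracy parameter vanishes, is the textbook relation:

```latex
P_{\mathrm{deg}} \;=\; \frac{2}{5}\, n_e E_F ,
\qquad
E_F \;=\; \frac{\hbar^{2}}{2 m_e}\,\bigl(3\pi^{2} n_e\bigr)^{2/3},
```

    where n_e is the electron number density and E_F the Fermi energy; finite-temperature corrections enter through the ratio T/T_F.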

  14. Analytical Modeling for Underground Risk Assessment in Smart Cities

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-06-01

    In the developed world, underground facilities are increasing day by day, as they are considered an improved utilization of available space in smart cities. Typical facilities include underground railway lines, electricity lines, parking lots, water supply systems, sewerage networks, etc. Besides their utility, these facilities also pose serious threats to citizens and property. To preempt accidental loss of precious human lives and property, a real-time monitoring system is highly desirable for conducting risk assessment on a continuous basis and reporting any abnormality in a timely manner before it is too late. In this paper, we present an analytical formulation to model system behavior for risk analysis and assessment based on various risk-contributing factors. Based on the proposed analytical model, we have evaluated three approximation techniques for computing the final risk index: (a) simple linear approximation based on multiple linear regression analysis; (b) a hierarchical fuzzy-logic-based technique in which related risk factors are combined in a tree-like structure; and (c) a hybrid approximation approach which is a combination of (a) and (b). Experimental results show that the simple linear approximation fails to estimate the final risk index accurately compared with the hierarchical fuzzy-logic-based system, which provides an efficient method for monitoring and forecasting critical issues in underground facilities and may assist in maintenance efficiency as well. Estimation results based on the hybrid approach also fail to estimate the final risk index accurately. However, the hybrid scheme reveals some interesting and detailed information by performing automatic clustering based on the location risk index.
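    As a purely illustrative sketch of the two main approximation routes the abstract contrasts, the following Python fragment computes a final risk index by a simple linear weighted sum and by a two-level hierarchical aggregation of related factors; the factor values, weights, and aggregation operators are hypothetical, not those of the paper:

```python
# Illustrative sketch (not the paper's exact formulation) of combining
# normalized risk factors in [0, 1] into a final risk index.

def linear_risk(factors, weights):
    """(a) Simple linear approximation: weighted sum of risk factors."""
    return sum(w * f for w, f in zip(weights, factors))

def hierarchical_risk(groups):
    """(b) Hierarchical combination: related factors are aggregated
    first (here with max, a common fuzzy OR), then averaged."""
    group_scores = [max(g) for g in groups]
    return sum(group_scores) / len(group_scores)

# Hypothetical normalized factors: gas level, vibration, tilt, water leakage.
factors = [0.2, 0.7, 0.4, 0.1]
print(round(linear_risk(factors, [0.25, 0.25, 0.25, 0.25]), 3))  # 0.35
print(round(hierarchical_risk([[0.2, 0.7], [0.4, 0.1]]), 3))     # 0.55
```

    A hybrid scheme in this spirit would chain the two, for example feeding the hierarchical group scores into the linear stage.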

  15. An analytically tractable model for community ecology with many species

    Science.gov (United States)

    Dickens, Benjamin; Fisher, Charles; Mehta, Pankaj; Pankaj Mehta Biophysics Theory Group Team

    A fundamental problem in community ecology is to understand how ecological processes such as selection, drift, and immigration yield observed patterns in species composition and diversity. Here, we present an analytically tractable, presence-absence (PA) model for community assembly and use it to ask how ecological traits such as the strength of competition, diversity in competition, and stochasticity affect species composition in a community. In our PA model, we treat species as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent work on large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ("critical") point corresponding to Hubbell's neutral theory of biodiversity. Our results suggest that the concepts of "phases" and phase diagrams can provide a powerful framework for thinking about community ecology and that the PA model captures the essential ecological dynamics of community assembly. PM was supported by a Simons Investigator award in the Mathematical Modeling of Living Systems and a Sloan Research Fellowship.
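    The presence-absence picture described above can be sketched in a few lines of Python; the discrete-time update, the rate values, and the linear competition term below are assumptions chosen for illustration, not the paper's exact dynamics:

```python
import random

# Minimal presence-absence (PA) community sketch: each species is a
# binary variable; absent species immigrate with probability p_imm,
# present species go extinct with a probability that grows with the
# number of competitors currently present.

def step(state, p_imm, c):
    n_present = sum(state)
    new = []
    for s in state:
        if s == 0:
            new.append(1 if random.random() < p_imm else 0)
        else:
            p_ext = min(1.0, c * (n_present - 1))  # competition-driven extinction
            new.append(0 if random.random() < p_ext else 1)
    return new

random.seed(0)
state = [0] * 50            # 50 species in the regional pool, all absent
for _ in range(2000):       # iterate the assembly dynamics
    state = step(state, p_imm=0.02, c=0.005)
print(sum(state), "species present")
```

    Sweeping the competition strength `c` and immigration rate `p_imm` in such a toy model is what reveals the distinct "phases" of community composition the abstract refers to.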

  16. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang, Yang; Endreny, Theodore A.; Nowak, David J.

    2013-09-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat storage based on semiempirical functions and generates spatially distributed estimates from inputs of topography, land cover, and weather data measured at a reference site. The model assumes that for all grids under the same mesoscale climate, grid air temperature and humidity are modified by local variation in absorbed solar radiation and the partitioning of sensible and latent heat. The model uses a reference grid site for time series meteorological data, and the air temperature and humidity of any other grid can be obtained by solving the heat flux network equations. PASATH was coupled with the USDA iTree-Hydro water balance model to obtain evapotranspiration terms and run from 20 to 29 August 2010 at a 360 m by 360 m grid scale and hourly time step across a 285 km² watershed including the urban area of Syracuse, NY. PASATH predictions were tested at nine urban weather stations representing variability in urban topography and land cover. The PASATH model predictive efficiency R² ranged from 0.81 to 0.99 for air temperature and 0.77 to 0.97 for dew point temperature. PASATH is expected to have broad applications in environmental and ecological modeling.

  17. Concepts of Model Verification and Validation

    International Nuclear Information System (INIS)

    Thacker, B.H.; Doebling, S.W.; Hemez, F.M.; Anderson, M.C.; Pepin, J.E.; Rodriguez, E.A.

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. Guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models.
    The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model

  18. High Turbidity Solis Clear Sky Model: Development and Validation

    Directory of Open Access Journals (Sweden)

    Pierre Ineichen

    2018-03-01

    The Solis clear sky model is a spectral scheme based on radiative transfer calculations and the Lambert–Beer relation. Its broadband version is a simplified, fast analytical version; it is limited to broadband aerosol optical depths lower than 0.45, which is a weakness when it is applied in countries with very high turbidity such as China or India. In order to extend the use of the original simplified version of the model to high turbidity values, we developed a new version of the broadband Solis model based on radiative transfer calculations, valid for turbidity values up to 7, for the three components (global, beam, and diffuse) and for the four aerosol types defined by Shettle and Fenn. A validation against low-turbidity data acquired in Geneva shows slightly better results than the previous version. For data acquired at sites with higher turbidity, the bias stays within ±4% for the beam and global irradiances, and the standard deviation stays around 5% for clean and stable conditions and around 12% for questionable data and variable sky conditions.

  19. Critical evaluation of analytical models for stochastic heating in dual-frequency capacitive discharges

    International Nuclear Information System (INIS)

    Sharma, S; Turner, M M

    2013-01-01

    Dual-frequency capacitive discharges are widespread in the semiconductor industry and are used, for example, in etching of semiconductor materials to manufacture microchips. In low-pressure dual radio-frequency capacitive discharges, stochastic heating is an important phenomenon. Recent theoretical work on this problem using several different approaches has produced results that are broadly in agreement insofar as scaling with the discharge parameters is concerned, but there remains some disagreement in detail concerning the absolute size of the effect for the case of dual-frequency capacitive discharges. In this work, we investigate the dependence of stochastic heating on various discharge parameters with the help of particle-in-cell (PIC) simulation. The dual-frequency analytical models are in fair agreement with PIC results for values of the low-frequency current density amplitude J_lf (or of the dimensionless control parameter H_lf ∼ 5) typical of many modern experiments. However, for higher values of J_lf (or higher H_lf), new physical phenomena (such as field reversal and reflection of ions) appear and the simulation results deviate from the existing dual-frequency analytical models. For lower J_lf (or lower H_lf), the simulation results again deviate from the analytical models. This work therefore produces a relatively extensive set of simulation data that may be used to validate theories over a wide range of parameters.

  20. Analytical Modeling Approach to Study Harmonic Mitigation in AC Grids with Active Impedance at Selective Frequencies

    Directory of Open Access Journals (Sweden)

    Gonzalo Abad

    2018-05-01

    This paper presents an analytical model oriented to the study of harmonic mitigation in AC grids. As is well known, the presence of undesired harmonics in AC grids can be mitigated in several ways. In this paper, however, a power electronic-based active impedance at selective frequencies (ACISEF) is used, due to its already proven flexibility and adaptability to the changing characteristics of AC grids. Hence, the proposed analytical model approach is specially conceived to consider globally both the model of the AC grid itself, with its electric equivalent impedances, and the power electronic-based ACISEF, including its control loops. In addition, the proposed analytical model has practical and useful properties: it is simple to understand and use, it has low computational cost, it adapts easily to different AC grid scenarios, and it provides a sufficiently accurate representation of reality. The benefits of using the proposed analytical model are shown in this paper through examples of its usefulness, including an analysis of stability and the identification of sources of instability for a robust design, an analysis of effectiveness in harmonic mitigation, an analysis to assist in the choice of the most suitable active impedance under a given state of the AC grid, an analysis of the interaction between different compensators, and so on. To conclude, experimental validation of a 2.15 kA ACISEF in a real 33 kV AC grid is provided, in which real users (household and industrial loads) and crucial elements such as wind parks and HVDC systems are interconnected nearby.

  1. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 2: Appendices

    Science.gov (United States)

    Niiya, Karen E.; Walker, Richard E.; Pieper, Jerry L.; Nguyen, Thong V.

    1993-05-01

    This final report includes a discussion of the work accomplished during the period from Dec. 1988 through Nov. 1991. The objective of the program was to assemble existing performance and combustion stability models into a usable design methodology capable of designing and analyzing high-performance and stable LOX/hydrocarbon booster engines. The methodology was then used to design a validation engine. The capabilities and validity of the methodology were demonstrated using this engine in an extensive hot fire test program. The engine used LOX/RP-1 propellants and was tested over a range of mixture ratios, chamber pressures, and acoustic damping device configurations. This volume contains time domain and frequency domain stability plots which indicate the pressure perturbation amplitudes and frequencies from approximately 30 tests of a 50K thrust rocket engine using LOX/RP-1 propellants over a range of chamber pressures from 240 to 1750 psia with mixture ratios from 1.2 to 7.5. The data are from test configurations which used both bitune and monotune acoustic cavities and from tests with no acoustic cavities. The engine had a length of 14 inches and a contraction ratio of 2.0, using a 7.68 inch diameter injector. The data were taken from both stable and unstable tests. All combustion instabilities were spontaneous in the first tangential mode. Although stability bombs were used and generated overpressures of approximately 20 percent, no tests were driven unstable by the bombs. The stability instrumentation included six high-frequency Kistler transducers in the combustion chamber, a high-frequency Kistler transducer in each propellant manifold, and tri-axial accelerometers. Performance data are presented, both characteristic velocity efficiencies and energy release efficiencies, for those tests of sufficient duration to record steady state values.

  2. A validated physical model of greenhouse climate

    International Nuclear Information System (INIS)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the greenhouse and of the control system. The greenhouse model is based on the energy, water vapour and CO2 balances of the crop-greenhouse system. While the emphasis is on the dynamic behaviour of the greenhouse for implementation in continuous optimization, the state variables temperature, water vapour pressure and carbon dioxide concentration in the relevant greenhouse parts (crop, air, soil and cover) are calculated from the balances over these parts. To do this in a proper way, the physical exchange processes between the system parts have to be quantified first. Therefore the greenhouse model is constructed from submodels describing these processes: a. a radiation transmission model for the modification of the outside to the inside global radiation; b. a ventilation model to describe the ventilation exchange between greenhouse and outside air; c. a description of the exchange of energy and mass between the crop and the greenhouse air; d. calculation of the thermal radiation exchange between the various greenhouse parts; e. quantification of the convective exchange processes between the greenhouse air and, respectively, the cover, the heating pipes and the soil surface, and between the cover and the outside air; f. determination of the heat conduction in the soil. The various submodels are validated first and then the complete greenhouse model is verified.

  3. Testing a 1-D Analytical Salt Intrusion Model and the Predictive Equation in Malaysian Estuaries

    Science.gov (United States)

    Gisen, Jacqueline Isabella; Savenije, Hubert H. G.

    2013-04-01

    Little is known about salt intrusion behaviour in Malaysian estuaries. Studies on this topic often require large amounts of data, especially if 2-D or 3-D numerical models are used for analysis. In poor data environments, 1-D analytical models are more appropriate. For this reason, a fully analytical 1-D salt intrusion model, based on the theory of Savenije (2005), was tested in three Malaysian estuaries (Bernam, Selangor and Muar) because it is simple and requires minimal data. To achieve this, site surveys were conducted in these estuaries during the dry season (June-August) at spring tide using the moving-boat technique. Data on cross-sections, water levels and salinity were collected and then analysed with the salt intrusion model. This paper demonstrates a good fit between the simulated and observed salinity distributions for all three estuaries. Additionally, the calibrated Van der Burgh coefficient K, dispersion coefficient D0 and salt intrusion length L for the estuaries displayed reasonable correlations with those calculated from the predictive equations. This indicates that not only the salt intrusion model but also the predictive model is valid for the Malaysian case studies. Furthermore, the results from this study describe the current state of the estuaries, with which the water authority in Malaysia can make decisions on limiting water abstraction or dredging. Keywords: salt intrusion, Malaysian estuaries, discharge, predictive model, dispersion

  4. Analytical model for macromolecular partitioning during yeast cell division

    International Nuclear Information System (INIS)

    Kinkhabwala, Ali; Khmelinskii, Anton; Knop, Michael

    2014-01-01

    Asymmetric cell division, whereby a parent cell generates two sibling cells with unequal content and thereby distinct fates, is central to cell differentiation, organism development and ageing. Unequal partitioning of the macromolecular content of the parent cell — which includes proteins, DNA, RNA, large proteinaceous assemblies and organelles — can be achieved by both passive (e.g. diffusion, localized retention sites) and active (e.g. motor-driven transport) processes operating in the presence of external polarity cues, internal asymmetries, spontaneous symmetry breaking, or stochastic effects. However, the quantitative contribution of different processes to the partitioning of macromolecular content is difficult to evaluate. Here we developed an analytical model that allows rapid quantitative assessment of partitioning as a function of various parameters in the budding yeast Saccharomyces cerevisiae. This model exposes quantitative degeneracies among the physical parameters that govern macromolecular partitioning, and reveals regions of the solution space where diffusion is sufficient to drive asymmetric partitioning and regions where asymmetric partitioning can only be achieved through additional processes such as motor-driven transport. Application of the model to different macromolecular assemblies suggests that partitioning of protein aggregates and episomes, but not prions, is diffusion-limited in yeast, consistent with previous reports. In contrast to computationally intensive stochastic simulations of particular scenarios, our analytical model provides an efficient and comprehensive overview of partitioning as a function of global and macromolecule-specific parameters. Identification of quantitative degeneracies among these parameters highlights the importance of their careful measurement for a given macromolecular species in order to understand the dominant processes responsible for its observed partitioning.
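    A minimal diffusion-only baseline (an assumption chosen for illustration, not the paper's full model) treats partitioning as binomial: each of N freely diffusing particles that equilibrate before cytokinesis ends up in the bud independently with probability p equal to the bud's volume fraction:

```python
from math import comb

# Binomial baseline for diffusion-limited partitioning: the number of
# particles inherited by the bud is binomial(N, p), where p is the
# bud's fraction of the total cell volume.

def prob_bud_count(N, p, k):
    """Probability that exactly k of N particles end up in the bud."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

# With N = 10 particles and a bud that is 30% of total volume,
# the chance the bud inherits none of them:
print(round(prob_bud_count(10, 0.3, 0), 4))  # 0.0282
```

    Observed asymmetries beyond this binomial baseline would then point to additional processes such as localized retention or motor-driven transport.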

  5. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  6. Analytical Model for the End-Bearing Capacity of Tapered Piles Using Cavity Expansion Theory

    Directory of Open Access Journals (Sweden)

    Suman Manandhar

    2012-01-01

    On the basis of evidence from model tests showing increased end-bearing capacity of tapered piles in the load-settlement curve, this paper proposes an analytical model based on spherical cavity expansion theory to evaluate the end-bearing capacity. The angle of taper is incorporated in the proposed model. Test results for the proposed model in different types of sand and at different relative densities show good performance compared with conventional straight piles. The end-bearing capacity increases with increasing taper angle. The model is then extended to prototype and full-scale pile tests to predict and validate the end-bearing capacity.

  7. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  8. Validated predictive modelling of the environmental resistome.

    Science.gov (United States)

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed that treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were the extent of different surrounding land-cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables, the resulting model (Model 2) could explain 82.9% of the variation in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent, and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta-tested on independent sites and explained over 78% of the variation in integron prevalence, showing significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.
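    The "percentage of variance explained" figures quoted above come from regression modelling; as a self-contained illustration with synthetic data (not the study's covariates), a least-squares fit and its R² can be computed as:

```python
import random
import statistics

# Toy example of "variance explained": fit y = a + b*x by ordinary
# least squares and report R^2 = 1 - SS_res / SS_tot.
random.seed(1)
x = [random.uniform(0, 10) for _ in range(50)]
y = [3.0 + 2.0 * xi + random.gauss(0, 1) for xi in x]  # known trend + noise

mx, my = statistics.mean(x), statistics.mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```

    The study's Models 1-3 generalize this to multiple covariates (treatment plants, land cover, rainfall, water chemistry), but the variance-explained statistic is computed the same way.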

  9. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through the attribution of scores to every item in the research instrument. In today's globalized economic scenarios, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. On this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer, who strives to keep technologies up to date in order to meet all-inclusive business continuity planning policies. For such a policy to be effective, therefore, it has to be fully embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 automobile manufacturers located in the ABC region of metropolitan São Paulo. This sampling was based on the representativeness of these companies, particularly with regard to each one's implementation of information technology in the region. The study concludes by presenting evidence of the discriminant validity of four key dimensions of security policy, namely: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of the Cronbach's alpha structure of these security items attests not only that the capacity of these companies to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.

  10. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    Science.gov (United States)

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether the relationships derived within a previously developed scheme for optimizing separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors, and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
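    As an illustration of the kind of relationship involved, the following sketch uses textbook MEKC relations (assumed here, not quoted from the paper): for a neutral analyte the effective electrophoretic mobility is the micelle mobility weighted by the fraction of analyte residing in the micellar phase, mu_eff = mu_mc*k/(1+k), and log10(k) is taken to be linear in the carbon number equivalent (Martin's rule). The slope, intercept, and micelle mobility below are hypothetical values:

```python
# Hedged sketch of predicting effective mobility from a carbon number
# equivalent; all numeric parameters are hypothetical placeholders.

def mu_eff(carbon_number_equiv, mu_mc=-4.0e-8, slope=0.3, intercept=-1.0):
    """Effective electrophoretic mobility (m^2/(V s)) of a neutral analyte."""
    k = 10 ** (slope * carbon_number_equiv + intercept)  # retention factor
    return mu_mc * k / (1 + k)

for c in (2, 4, 6):
    print(f"C# equivalent {c}: mu_eff = {mu_eff(c):.2e} m^2/(V s)")
```

    More hydrophobic analytes (larger carbon number equivalents) partition more strongly into the micelles, so their effective mobility approaches the micelle mobility.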

  11. Analytical model of diffuse reflectance spectrum of skin tissue

    Energy Technology Data Exchange (ETDEWEB)

    Lisenko, S A; Kugeiko, M M; Firago, V A [Belarusian State University, Minsk (Belarus); Sobchuk, A N [B.I. Stepanov Institute of Physics, National Academy of Sciences of Belarus, Minsk (Belarus)

    2014-01-31

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) with the same light scattering parameters but different absorption coefficients in the layers. Numerical experiments on the retrieval of skin biophysical parameters from diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentrations of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present examples of quantitative analysis of the experimental data, confirming the correctness of the estimates of the biophysical parameters of skin obtained using the derived analytical expressions.

  12. Exploring Higher Education Governance: Analytical Models and Heuristic Frameworks

    Directory of Open Access Journals (Sweden)

    Burhan FINDIKLI

    2017-08-01

    Governance in higher education, at both institutional and systemic levels, has experienced substantial changes in recent decades because of a range of world-historical processes such as massification, growth, globalization, marketization, public sector reforms, and the emergence of the knowledge economy and society. These developments have made governance arrangements and decision-making processes in higher education more complex and multidimensional than ever, and have forced scholars to build new analytical and heuristic tools and strategies to grasp the intricacy and diversity of higher education governance dynamics. This article provides a systematic discussion of how, and through which tools, prominent scholars of higher education have analyzed governance in this sector, by examining certain heuristic frameworks and analytical models. Additionally, the article shows how social scientific analysis of governance in higher education has proceeded in a cumulative way, with revisions and syntheses rather than radical conceptual and theoretical ruptures, from Burton R. Clark's seminal work to the present, revealing conceptual and empirical junctures between them.

  13. The development of a decision analytic model of changes in mean deviation in people with glaucoma: the COA model.

    Science.gov (United States)

    Kymes, Steven M; Lambert, Dennis L; Lee, Paul P; Musch, David C; Siegfried, Carla J; Kotak, Sameer V; Stwalley, Dustin L; Fain, Joel; Johnson, Chris; Gordon, Mae O

    2012-07-01

    To create and validate a statistical model predicting progression of primary open-angle glaucoma (POAG), assessed by loss of visual field as measured in mean deviation (MD), using 3 landmark studies of glaucoma progression and treatment. A Markov decision analytic model using patient-level data described longitudinal MD changes over 7 years. Patient-level data from the Collaborative Initial Glaucoma Treatment Study (n = 607), the Ocular Hypertension Treatment Study (OHTS; n = 148; only those who developed POAG in the first 5 years of OHTS) and the Advanced Glaucoma Intervention Study (n = 591) formed the basis of the COA model. We developed a Markov model with transition matrices stratified by current MD, age, race, and intraocular pressure categories and used a microsimulation approach to estimate change in MD over 7 years. Internal validation compared the model's 7-year predictions with actual MD for COA participants. External validation used a cohort of glaucoma patients drawn from university clinical practices. The main outcome measure was change in visual field as measured in MD in decibels (dB). Regressing the actual MD against the predicted produced an R² of 0.68 for the right eye and 0.63 for the left. The model predicted ending MD within 3 dB of actual results at 7 years for 65% of right eyes and 63% of left eyes. In external validation the model had an R² of 0.79 in the right eye and 0.77 in the left at 5 years. The COA model is a validated tool for clinicians, patients, and health policy makers seeking to understand longitudinal changes in MD in people with glaucoma. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
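The microsimulation idea can be sketched as a toy Markov chain over MD bands; the three states and the yearly transition matrix below are hypothetical illustrations, not the published COA probabilities.

```python
import numpy as np

# Toy Markov microsimulation of mean-deviation (MD) loss over 7 years.
# States are MD band midpoints; the transition matrix is hypothetical.
states = np.array([0.0, -3.0, -6.0])        # MD band midpoints, dB
P = np.array([[0.85, 0.12, 0.03],           # rows: from-state, cols: to-state
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])

rng = np.random.default_rng(42)
n_patients, n_years = 10_000, 7
state = np.zeros(n_patients, dtype=int)     # everyone starts near MD ~ 0 dB
for _ in range(n_years):
    # draw each patient's next state from the row of P for its current state
    state = np.array([rng.choice(3, p=P[s]) for s in state])

mean_md = states[state].mean()
print(f"mean MD after {n_years} years: {mean_md:.2f} dB")
```

The real model stratifies transition matrices by MD, age, race and intraocular pressure; this sketch only shows the microsimulation mechanics.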

  14. Modelling and validation of electromechanical shock absorbers

    Science.gov (United States)

    Tonoli, Andrea; Amati, Nicola; Girardello Detoni, Joaquim; Galluzzi, Renato; Gasparin, Enrico

    2013-08-01

    Electromechanical vehicle suspension systems represent a promising substitute to conventional hydraulic solutions. However, the design of electromechanical devices that are able to supply high damping forces without exceeding geometric dimension and mass constraints is a difficult task. All these challenges meet in off-road vehicle suspension systems, where the power density of the dampers is a crucial parameter. In this context, the present paper outlines a particular shock absorber configuration where a suitable electric machine and a transmission mechanism are utilised to meet off-road vehicle requirements. A dynamic model is used to represent the device. Subsequently, experimental tests are performed on an actual prototype to verify the functionality of the damper and validate the proposed model.

  15. Fuzzy modeling of analytical redundancy for sensor failure detection

    International Nuclear Information System (INIS)

    Tsai, T.M.; Chou, H.P.

    1991-01-01

    Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. A redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs, from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering and auto-regressive moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner to generate best-estimated model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach that treats the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principles physical models are considered as fuzzy parameters and updated on line according to system behavior. Failure detection is accomplished by examining the possibility that a sensor signal occurred in an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each sensor of interest.
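A much-simplified stand-in for the fuzzy-number treatment, using crisp intervals: coefficients of the redundant relation y = a·x + b are given as intervals, and a reading outside the induced output range is flagged as a failure signature.

```python
# Interval-based analytical redundancy, a simplified stand-in for the
# paper's fuzzy-number approach. The coefficient intervals are hypothetical.
def output_interval(x, a_lo, a_hi, b_lo, b_hi):
    """Possible range of y = a*x + b for a in [a_lo, a_hi], b in [b_lo, b_hi]."""
    candidates = [a * x + b for a in (a_lo, a_hi) for b in (b_lo, b_hi)]
    return min(candidates), max(candidates)

def is_consistent(y_measured, x, a=(1.9, 2.1), b=(-0.2, 0.2)):
    """Flag a sensor reading as consistent if it lies in the output interval."""
    lo, hi = output_interval(x, *a, *b)
    return lo <= y_measured <= hi

print(is_consistent(4.1, x=2.0))   # inside [3.6, 4.4] -> consistent
print(is_consistent(5.0, x=2.0))   # outside           -> failure signature
```

A fuzzy-number formulation would replace the hard interval with a membership function and report a graded possibility rather than a binary verdict.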

  16. Development, validation and evaluation of an analytical method for the determination of monomeric and oligomeric procyanidins in apple extracts.

    Science.gov (United States)

    Hollands, Wendy J; Voorspoels, Stefan; Jacobs, Griet; Aaby, Kjersti; Meisland, Ane; Garcia-Villalba, Rocio; Tomas-Barberan, Francisco; Piskula, Mariusz K; Mawson, Deborah; Vovk, Irena; Needs, Paul W; Kroon, Paul A

    2017-04-28

    There is a lack of data for individual oligomeric procyanidins in apples and apple extracts. Our aim was to develop, validate and evaluate an analytical method for the separation, identification and quantification of monomeric and oligomeric flavanols in apple extracts. To achieve this, we prepared two types of flavanol extracts from freeze-dried apples: one was an epicatechin-rich extract containing ∼30% (w/w) monomeric (-)-epicatechin which also contained oligomeric procyanidins (Extract A); the second was an oligomeric procyanidin-rich extract depleted of epicatechin (Extract B). The parameters considered for method optimisation were HPLC columns and conditions, sample heating, mass of extract and dilution volumes. The performance characteristics considered for method validation included standard linearity, method sensitivity, precision and trueness. Eight laboratories participated in the method evaluation. Chromatographic separation of the analytes was best achieved using a HILIC column with a binary mobile phase consisting of acidic acetonitrile and acidic aqueous methanol. The final method showed linearity for epicatechin in the range 5–100 μg/mL with a correlation coefficient >0.999. Intra-day and inter-day precision of the analytes ranged from 2 to 6% and 2 to 13%, respectively. Up to DP3, trueness of the method was >95% but decreased with increasing DP. Within-laboratory precision showed RSD values <5% and <10% for monomers and oligomers, respectively. Between-laboratory precision was 4% and 15% (Extract A) and 7% and 30% (Extract B) for monomers and oligomers, respectively. An analytical method for the separation, identification and quantification of procyanidins in an apple extract was developed, validated and assessed. The results of the inter-laboratory evaluation indicate that the method is reliable and reproducible. Copyright © 2017. Published by Elsevier B.V.

  17. Analytical validation of an ultraviolet-visible procedure for determining lutein concentration and application to lutein-loaded nanoparticles.

    Science.gov (United States)

    Silva, Jéssica Thaís do Prado; Silva, Anderson Clayton da; Geiss, Julia Maria Tonin; de Araújo, Pedro Henrique Hermes; Becker, Daniela; Bracht, Lívia; Leimann, Fernanda Vitória; Bona, Evandro; Guerra, Gustavo Petri; Gonçalves, Odinei Hess

    2017-09-01

    Lutein is a carotenoid with known anti-inflammatory and antioxidant properties. Lutein-rich diets have been associated with neurological improvement as well as a reduced risk of vision loss due to Age-Related Macular Degeneration (AMD). Micro- and nanoencapsulation have been demonstrated to be effective techniques for protecting lutein against degradation and for improving its bioavailability. However, the actual lutein concentration inside the capsules and the encapsulation efficiency are key parameters that must be precisely known when designing in vitro and in vivo tests. In this work, an analytical procedure for the determination of the actual lutein content in zein nanoparticles using ultraviolet-visible spectroscopy was validated. Method validation followed the International Conference on Harmonisation (ICH) guidelines, which cover linearity, detection limit, quantification limit, accuracy and precision. The validated methodology was applied to characterize lutein-loaded nanoparticles. Copyright © 2017 Elsevier Ltd. All rights reserved.
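The ICH calibration statistics named here (linearity, LOD = 3.3σ/S, LOQ = 10σ/S, where σ is the residual standard deviation and S the slope) can be sketched as follows; the concentration and absorbance values are fabricated for illustration.

```python
import numpy as np

# ICH-style calibration statistics on a fabricated calibration curve.
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])           # ug/mL
absb = np.array([0.101, 0.205, 0.298, 0.404, 0.498])  # hypothetical readings

slope, intercept = np.polyfit(conc, absb, 1)          # linearity check
resid = absb - (slope * conc + intercept)
sigma = resid.std(ddof=2)                             # residual SD (n-2 dof)

lod = 3.3 * sigma / slope                             # detection limit
loq = 10.0 * sigma / slope                            # quantification limit
print(f"slope={slope:.4f}, LOD={lod:.2f} ug/mL, LOQ={loq:.2f} ug/mL")
```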

  18. The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.

    Science.gov (United States)

    Kim, Jongphil; Lee, Ji-Hyun

    2017-01-01

    The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated binomial random variable, X. However, model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape for the maximum likelihood estimator (MLE) based on the beta-binomial model and for the method of moments estimator (MME), in order to gauge when and by how much the MLE is biased.
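A sketch of the MLE side of the comparison: overdispersed counts are simulated from a beta-binomial distribution (hypothetical parameters n = 20, α = 2, β = 5) and the parameters are recovered by numerical maximum likelihood.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

# Simulate overdispersed counts from a beta-binomial model and fit by MLE.
rng = np.random.default_rng(1)
n, a_true, b_true = 20, 2.0, 5.0
x = betabinom.rvs(n, a_true, b_true, size=2000, random_state=rng)

def nll(params):
    """Negative log-likelihood of the beta-binomial sample."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -betabinom.logpmf(x, n, a, b).sum()

res = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"MLE: a={a_hat:.2f}, b={b_hat:.2f}")
```

The MME alternative would match the first two sample moments instead; the paper's point is comparing the MSE of the two estimators across pmf shapes.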

  19. Development and validation of analytical method for the estimation of nateglinide in rabbit plasma

    Directory of Open Access Journals (Sweden)

    Nihar Ranjan Pani

    2012-12-01

    Full Text Available Nateglinide has been widely used in the treatment of type 2 diabetes as an insulin secretagogue. A reliable, rapid, simple and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) method was developed and validated for the determination of nateglinide in rabbit plasma. The method was developed on a Hypersil BDS C-18 column (250 mm×4.6 mm, 5 μm) using a mobile phase of 10 mM phosphate buffer (pH 2.5) and acetonitrile (35:65, v/v). The eluate was monitored with a UV–vis detector at 210 nm at a flow rate of 1 mL/min. The calibration curve was linear over the concentration range of 25–2000 ng/mL. The retention times of nateglinide and the internal standard (gliclazide) were 9.608 min and 11.821 min, respectively. The developed RP-HPLC method can be successfully applied to the determination of pharmacokinetic parameters of nateglinide in the rabbit model. Keywords: HPLC, Nateglinide, Rabbit plasma, Pharmacokinetics

  20. Atmospheric corrosion: statistical validation of models

    International Nuclear Information System (INIS)

    Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.

    2003-01-01

    In this paper we discuss two different methods for the validation of regression models, applied to corrosion data. One is based on the correlation coefficient and the other on the statistical test of lack of fit. Both methods are used here to analyse the fit of the bilogarithmic model for predicting corrosion of very low carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported. For this purpose, all repeated values were used instead of the usual average values. Modelling was carried out using experimental data corresponding to steel substrates under the same initial meteorological conditions (in fact, they were put in the rack at the same time). Results of the correlation coefficient are compared with the lack-of-fit test at two different significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and, finally, it is possible to conclude, at least for the studied atmospheres, that the bilogarithmic model does not fit the experimental data properly. (Author) 18 refs
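The replicate-based lack-of-fit test can be sketched for a bilogarithmic model log C = log A + n·log t; the corrosion data below are fabricated, and only the test mechanics are the point.

```python
import numpy as np
from scipy import stats

# Lack-of-fit F-test for a bilogarithmic corrosion model, using replicates.
t = np.repeat([1.0, 2.0, 4.0, 8.0], 3)              # exposure times, replicated
rng = np.random.default_rng(3)
logC = 0.5 + 0.6 * np.log(t) + rng.normal(0, 0.05, t.size)  # fabricated data

X, y = np.log(t), logC
slope, intercept = np.polyfit(X, y, 1)
fitted = intercept + slope * X

# Pure-error SS from replicates; lack-of-fit SS is the remainder.
groups = [y[X == v] for v in np.unique(X)]
ss_pe = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_res = ((y - fitted) ** 2).sum()
ss_lof = ss_res - ss_pe

m, n_obs, p = len(groups), y.size, 2                # levels, observations, params
F = (ss_lof / (m - p)) / (ss_pe / (n_obs - m))
p_value = stats.f.sf(F, m - p, n_obs - m)
print(f"F = {F:.2f}, p = {p_value:.3f}")
```

A small p-value would indicate lack of fit, which is the conclusion the authors reach for their real atmospheric data.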

  1. Analytical modeling of pressure transient behavior for coalbed methane transport in anisotropic media

    International Nuclear Information System (INIS)

    Wang, Lei; Wang, Xiaodong

    2014-01-01

    Because coal media are inherently anisotropic, evaluating pressure transient behavior and flow characteristics within coals is meaningful work. In this article, a complete analytical model called the elliptical flow model is established by combining the theory of elliptical flow in anisotropic media with Fick's laws for the diffusion of coalbed methane. To investigate pressure transient behavior, analytical solutions were first obtained by introducing a series of special functions (Mathieu functions), which are extremely complex and hard to calculate. Thus, a computer program was developed to establish type curves, on which the effects of the parameters, including the anisotropy coefficient, storage coefficient, transfer coefficient and rate constant, were analyzed in detail. Calculated results show that the existence of anisotropy causes great pressure depletion. To validate the new analytical solutions, they were compared with previous results, and good agreement was found between the solutions obtained in this work and those in the literature. Finally, a case study is used to explain the effects of parameters including the rock total compressibility coefficient, coal medium porosity and anisotropic permeability, sorption time constant, Langmuir volume and fluid viscosity on bottom-hole pressure behavior. It is necessary to coordinate these parameters so as to reduce the pressure depletion. (paper)

  2. Impurities in biogas - validation of analytical methods for siloxanes; Foeroreningar i biogas - validering av analysmetodik foer siloxaner

    Energy Technology Data Exchange (ETDEWEB)

    Arrhenius, Karine; Magnusson, Bertil; Sahlin, Eskil [SP Technical Research Institute of Sweden, Boraas (Sweden)

    2011-11-15

    Biogas produced from digesters or landfills contains impurities which can be harmful to components that come into contact with the biogas during its utilization. Among these, the siloxanes are often mentioned. During combustion, siloxanes are converted to silicon dioxide, which accumulates on the heated surfaces of combustion equipment. Silicon dioxide is a solid compound that remains in the engine and causes damage. Consequently, it is necessary to develop methods for the accurate determination of these compounds in biogases. In the first part of this report, a method for the analysis of siloxanes in biogases was validated. Sampling was performed directly at the plant by drawing a small volume of biogas onto an adsorbent tube over a short period of time. These tubes were subsequently sent to the laboratory for analysis. The purpose of method validation is to demonstrate that the established method is fit for purpose. This means that the method, as used by the laboratory generating the data, will provide data that meet a set of criteria concerning precision and accuracy. Finally, the uncertainty of the method was calculated. In the second part of this report, the validated method was applied to real samples collected at waste water treatment plants, co-digestion plants and plants digesting other wastes (agricultural waste). Results are presented at the end of this report. As expected, the biogases from waste water treatment plants contained considerably higher concentrations of siloxanes than biogases from co-digestion plants and plants digesting agricultural wastes. The concentration of siloxanes in upgraded biogas was low regardless of which feedstock was digested and which upgrading technique was used.

  3. An independent verification and validation of the Future Theater Level Model conceptual model

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  4. An Analytical Model of Joule Heating in Piezoresistive Microcantilevers

    Directory of Open Access Journals (Sweden)

    Chongdu Cho

    2010-11-01

    Full Text Available The present study investigates Joule heating in piezoresistive microcantilever sensors. Joule heating and thermal deflections are a major source of noise in such sensors. This work uses analytical and numerical techniques to characterise the Joule heating in 4-layer piezoresistive microcantilevers made of silicon and silicon dioxide substrates but with the same U-shaped silicon piezoresistor. A theoretical model for predicting the temperature generated due to Joule heating is developed. The commercial finite element software ANSYS Multiphysics was used to study the effect of electrical potential on temperature and deflection produced in the cantilevers. The effect of piezoresistor width on Joule heating is also studied. Results show that Joule heating strongly depends on the applied potential and width of piezoresistor and that a silicon substrate cantilever has better thermal characteristics than a silicon dioxide cantilever.

  5. An analytical model of joule heating in piezoresistive microcantilevers.

    Science.gov (United States)

    Ansari, Mohd Zahid; Cho, Chongdu

    2010-01-01

    The present study investigates Joule heating in piezoresistive microcantilever sensors. Joule heating and thermal deflections are a major source of noise in such sensors. This work uses analytical and numerical techniques to characterise the Joule heating in 4-layer piezoresistive microcantilevers made of silicon and silicon dioxide substrates but with the same U-shaped silicon piezoresistor. A theoretical model for predicting the temperature generated due to Joule heating is developed. The commercial finite element software ANSYS Multiphysics was used to study the effect of electrical potential on temperature and deflection produced in the cantilevers. The effect of piezoresistor width on Joule heating is also studied. Results show that Joule heating strongly depends on the applied potential and width of piezoresistor and that a silicon substrate cantilever has better thermal characteristics than a silicon dioxide cantilever.
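A finite-difference sketch of the steady-state heat balance behind such a model: k·T'' + q(x) = 0 on a cantilever with a clamped (ambient) base and an insulated tip. The geometry, conductivity and source term are hypothetical round numbers, not the paper's 4-layer stack.

```python
import numpy as np

# Steady Joule heating in a cantilever: k * d2T/dx2 + q(x) = 0,
# base held at ambient (T = 0 rise), insulated free tip.
L, N = 200e-6, 201                  # cantilever length (m), grid points
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
k = 150.0                           # W/(m K), silicon-like conductivity
q = np.where(x < 0.5 * L, 1e12, 0)  # W/m^3, source over the piezoresistor half

# Assemble the tridiagonal system A T = b.
A = np.zeros((N, N)); b = np.zeros(N)
A[0, 0] = 1.0; b[0] = 0.0           # T(0) = 0 (rise over ambient)
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0
    b[i] = -q[i] * dx * dx / k
A[-1, -1], A[-1, -2] = 1.0, -1.0    # dT/dx = 0 at the free tip
b[-1] = 0.0

T = np.linalg.solve(A, b)
print(f"max temperature rise: {T.max():.2f} K")
```

The maximum rise appears at the insulated tip, consistent with the closed-form result q·a²/(2k) for a source confined to the first length a of the beam.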

  6. Analytical modeling of wet compression of gas turbine systems

    International Nuclear Information System (INIS)

    Kim, Kyoung Hoon; Ko, Hyung-Jong; Perez-Blanco, Horacio

    2011-01-01

    Evaporative gas turbine cycles (EvGT) are of importance to the power generation industry because of the potential of enhanced cycle efficiencies with moderate incremental cost. Humidification of the working fluid to result in evaporative cooling during compression is a key operation in these cycles. Previous simulations of this operation were carried out via numerical integration. The present work is aimed at modeling the wet-compression process with approximate analytical solutions instead. A thermodynamic analysis of the simultaneous heat and mass transfer processes that occur during evaporation is presented. The transient behavior of important variables in wet compression such as droplet diameter, droplet mass, gas and droplet temperature, and evaporation rate is investigated. The effects of system parameters on variables such as droplet evaporation time, compressor outlet temperature and input work are also considered. Results from this work exhibit good agreement with those of previous numerical work.
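As a first-order illustration of droplet evaporation during wet compression, the classical d²-law gives the droplet lifetime in closed form; the evaporation constant below is an assumed round number, not a value from the paper's analysis.

```python
import numpy as np

# d-squared law: d(D^2)/dt = -K, so D(t)^2 = D0^2 - K t and the droplet
# lifetime is t_life = D0^2 / K. Both parameters are hypothetical.
D0 = 10e-6                 # initial droplet diameter, m
K = 1e-9                   # evaporation constant, m^2/s (assumed)

t_life = D0 ** 2 / K       # time for complete evaporation
t = np.linspace(0.0, t_life, 100)
D = np.sqrt(np.maximum(D0 ** 2 - K * t, 0.0))   # diameter history

print(f"droplet lifetime: {t_life * 1e3:.2f} ms")
```

The paper's thermodynamic analysis couples this evaporation history to the gas temperature and compressor work; the d²-law is only the simplest limiting description.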

  7. Analytical Modelling Of Milling For Tool Design And Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which allows the simulation of a large panel of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  8. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by carrying out validation for a reactor model.

  9. Design of homogeneous trench-assisted multi-core fibers based on analytical model

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2016-01-01

    We present a design method of homogeneous trench-assisted multicore fibers (TA-MCFs) based on an analytical model utilizing an analytical expression for the mode coupling coefficient between two adjacent cores. The analytical model can also be used for crosstalk (XT) properties analysis, such as ...

  10. 33 CFR 385.33 - Revisions to models and analytical tools.

    Science.gov (United States)

    2010-07-01

    ... on a case-by-case basis what documentation is appropriate for revisions to models and analytic tools... analytical tools. 385.33 Section 385.33 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE... Incorporating New Information Into the Plan § 385.33 Revisions to models and analytical tools. (a) In carrying...

  11. Semi-analytical models of hydroelastic sloshing impact in tanks of liquefied natural gas vessels.

    Science.gov (United States)

    Ten, I; Malenica, Š; Korobkin, A

    2011-07-28

    The present paper deals with the methods for the evaluation of the hydroelastic interactions that appear during the violent sloshing impacts inside the tanks of liquefied natural gas carriers. The complexity of both the fluid flow and the structural behaviour (containment system and ship structure) does not allow for a fully consistent direct approach according to the present state of the art. Several simplifications are thus necessary in order to isolate the most dominant physical aspects and to treat them properly. In this paper, choice was made of semi-analytical modelling for the hydrodynamic part and finite-element modelling for the structural part. Depending on the impact type, different hydrodynamic models are proposed, and the basic principles of hydroelastic coupling are clearly described and validated with respect to the accuracy and convergence of the numerical results.

  12. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    Science.gov (United States)

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used in a partial least squares (PLS) projection to latent structures model in order to study their chromatographic interaction mechanism on a phenyl column. According to the model, 240 different retention times of the analytes, expressed as the Y variable (log k) at different % MeOH mobile-phase concentrations, have been correlated with their theoretically most important structural or molecular descriptors. The goodness of fit was estimated by the coefficient of multiple determination r² (0.919) and the root mean square error of estimation (RMSEE=0.1283), with a predictive ability (Q²) of 0.901. The model was further validated using cross-validation (CV), by 20 response permutations (r² 0.0, 0.0146; Q² 0.0, -0.136) and by external prediction. The contribution of certain interaction mechanisms between the analytes, the mobile phase and the column, proportional or counterbalancing, is also studied. To evaluate the influence on Y of every variable in the PLS model, the VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Science.gov (United States)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization.

  14. A theoretical study of CsI:Tl columnar scintillator image quality parameters by analytical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalyvas, N., E-mail: nkalyvas@teiath.gr; Valais, I.; Michail, C.; Fountos, G.; Kandarakis, I.; Cavouras, D.

    2015-04-11

    Medical X-ray digital imaging systems, such as mammography, radiography and computed tomography (CT), are composed of efficient radiation detectors that transform the X-rays into an electronic signal. Scintillators are materials that emit light when excited by X-rays and are incorporated in X-ray medical imaging detectors. Columnar scintillators such as CsI:Tl are very often used for X-ray detection due to their higher performance. The columnar form limits the lateral spread of the optical photons towards the scintillator output; thus it demonstrates superior spatial resolution compared to granular scintillators. The aim of this work is to provide an analytical model for calculating the MTF, the DQE and the emission efficiency of a columnar scintillator. The model parameters were validated against published Monte Carlo data. The model was able to predict the overall performance of CsI:Tl scintillators and suggested an optimum thickness of 300 μm for radiography applications. - Highlights: • An analytical model for calculating MTF, DQE and Detector Optical Gain (DOG) of columnar phosphors was developed. • The model was fitted to published efficiency and MTF Monte Carlo data. • A good fit was observed for 300 μm columnar CsI:Tl thickness. • The performance of the 300 μm column thickness CsI:Tl was better in terms of MTF and DOG for radiographic applications.

  15. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
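A model validation test in the style the article advocates can be illustrated with a toy model: the tests assert scientific properties of a logistic growth function rather than implementation details. Names here are illustrative, not OpenWorm APIs.

```python
import math

# Toy "scientific model": logistic population growth.
def logistic_growth(t, K=100.0, r=0.5, p0=1.0):
    """Population at time t under logistic growth with carrying capacity K."""
    return K / (1.0 + (K / p0 - 1.0) * math.exp(-r * t))

# Model validation tests: assert properties the science requires,
# not details of how the function is coded.
def test_growth_is_bounded_by_carrying_capacity():
    assert all(logistic_growth(t) <= 100.0 for t in range(0, 50))

def test_population_approaches_carrying_capacity():
    assert abs(logistic_growth(40.0) - 100.0) < 1e-3

test_growth_is_bounded_by_carrying_capacity()
test_population_approaches_carrying_capacity()
print("model validation tests passed")
```

In a test-driven project these functions would live in a test suite (e.g. run by pytest) and execute on every change to the model code.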

  16. Validation of an analytical method for the determination of polycyclic aromatic hydrocarbons by high efficiency liquid chromatography in PM10 and PM2,5 particles

    International Nuclear Information System (INIS)

    Herrera Murillo, Jorge; Chaves Villalobos, Maria del Carmen

    2012-01-01

    An analytical method for the determination of polycyclic aromatic hydrocarbons (PAHs) in PM10 and PM2.5 particles collected from air by high-performance liquid chromatography (HPLC) was validated. The PAHs covered by the methodology include: Naphthalene, Acenaphthylene, Fluorene, Acenaphthene, Phenanthrene, Anthracene, Fluoranthene, Pyrene, Benzo(a)anthracene, Chrysene, Benzo(b)fluoranthene, Benzo(k)fluoranthene, Benzo(a)pyrene, Dibenzo(a,h)anthracene, Benzo(g,h,i)perylene and Indeno(1,2,3-cd)pyrene. For these compounds, the detection and quantification limits were between 0.02 and 0.1 mg/L. A DIONEX ICS 3000 system was used, with two detectors in series: an ultraviolet detector (model VWD-1) and a fluorescence detector (model RF-2000), separating the absorption and emission signals for proper identification of the individual compounds. For all compounds analyzed, the recovery factors were found not significantly different from each other, and the repeatability and reproducibility were found suitable for an analytical method, especially for the lighter PAHs. (author) [es

  17. A Complete Validated Learning Analytics Framework: Designing Issues from Data Preparation Perspective

    Science.gov (United States)

    Tlili, Ahmed; Essalmi, Fathi; Jemni, Mohamed; Kinshuk; Chen, Nian-Shing

    2018-01-01

    With the rapid growth of online education in recent years, Learning Analytics (LA) has gained increasing attention from researchers and educational institutions as an area which can improve the overall effectiveness of learning experiences. However, the lack of guidelines on what should be taken into consideration during application of LA hinders…

  18. A simulation-based analytic model of radio galaxies

    Science.gov (United States)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  19. Analytical modelling for ultrasonic surface mechanical attrition treatment

    Directory of Open Access Journals (Sweden)

    Guan-Rong Huang

    2015-07-01

    The grain refinement, gradient structure, fatigue limit, hardness, and tensile strength of metallic materials can be effectively enhanced by ultrasonic surface mechanical attrition treatment (SMAT); however, SMAT has never before been treated with rigorous analytical modelling, such as the connection between the input energy and power and the resultant temperature of the treated material. A systematic SMAT model is therefore needed. In this article, we calculate the average speed, cycle duration, kinetic energy and kinetic-energy loss of the flying balls in SMAT of structural metallic materials. The connections among quantities such as the frequency and amplitude of the ultrasonic vibration motor, the diameter, mass and density of the balls, the sample mass, and the height of the chamber are considered and modelled in detail. We introduce the one-dimensional heat equation with a heat source within a uniformly distributed depth to estimate the temperature distribution and heat energy of the sample. In this approach, there exists a condition on the frequency for the flying balls to reach a steady speed. With these known quantities, the strain rate, hardness, and grain size of the sample can be estimated.
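
    The energy bookkeeping sketched in the abstract (ball kinetic energy drawn from the horn's vibration, which then feeds a surface heat source) can be illustrated in a few lines of Python. All parameter values below are hypothetical, chosen only for illustration, and are not taken from the paper:

```python
import math

# Hypothetical SMAT parameters -- illustrative values, not from the paper
f = 20e3           # ultrasonic horn vibration frequency (Hz)
amp = 25e-6        # vibration amplitude (m)
d_ball = 3e-3      # flying-ball diameter (m)
rho_ball = 7800.0  # ball density, steel (kg/m^3)

# Peak horn-surface velocity for harmonic motion x(t) = amp*sin(2*pi*f*t)
v_peak = 2 * math.pi * f * amp

# Mass and kinetic energy of one ball launched at roughly the horn velocity
m_ball = rho_ball * math.pi * d_ball**3 / 6
e_kin = 0.5 * m_ball * v_peak**2

print(f"ball mass      : {m_ball * 1e3:.3f} g")
print(f"launch velocity: {v_peak:.2f} m/s")
print(f"kinetic energy : {e_kin * 1e3:.3f} mJ")
```

    Multiplying the per-ball energy by the impact rate gives the input power that the paper's heat-equation step distributes over the treated depth.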

  20. Analytic models of NH4+ uptake and regeneration experiments

    International Nuclear Information System (INIS)

    Laws, E.A.

    1985-01-01

    Differential equations describing the uptake and regeneration of NH4+ in both laboratory and field experiments are shown to have analytic solutions which can easily be inverted to determine the rate constants of interest. The solutions are used to study the descriptive ability of two fundamentally different models of NH4+ cycling: one in which NH4+ regeneration is regarded as a process that transfers N from particulate N to NH4+, the other in which regeneration is regarded as a process that introduces NH4+ into the dissolved phase without removing N from the particulate phase. The former model was found to give a good description of experimental field data and reasonable parameter values in all cases studied. The latter model was much less successful in describing the data and in producing reasonable parameter values. It is concluded that transfer of nitrogen from particulate N to NH4+ is a process which must be taken into account in analyzing NH4+ uptake and regeneration experiments
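
    The first model (regeneration as transfer from particulate N to NH4+) is a linear two-pool system whose analytic solution can indeed be inverted for the rate constants, as the following sketch shows. The pool sizes and rate constants are hypothetical illustration values, not data from the paper:

```python
import math

def nh4_analytic(t, n0, p0, u, r):
    """Analytic solution of the linear two-pool model
       dN/dt = -u*N + r*P,   dP/dt = +u*N - r*P,
    where uptake (u) moves NH4+ into particulate N and regeneration (r)
    transfers particulate N back to NH4+ (total N is conserved)."""
    total = n0 + p0
    n_eq = r * total / (u + r)                        # steady-state NH4+ pool
    n_t = n_eq + (n0 - n_eq) * math.exp(-(u + r) * t)
    return n_t, total - n_t

# Forward-simulate, then invert the decay rate from two "measurements"
u, r = 0.3, 0.1        # hypothetical rate constants (1/h)
n0, p0 = 2.0, 8.0      # hypothetical initial pools (umol N / L)
t1, t2 = 1.0, 2.0
n1, _ = nh4_analytic(t1, n0, p0, u, r)
n2, _ = nh4_analytic(t2, n0, p0, u, r)
n_eq = r * (n0 + p0) / (u + r)
k_est = math.log((n1 - n_eq) / (n2 - n_eq)) / (t2 - t1)
print(f"recovered u + r = {k_est:.4f}  (true value {u + r:.4f})")
```

    Because the deviation from equilibrium decays as exp(-(u+r)t), two samples suffice to recover the combined rate constant exactly in the noise-free case.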

  1. Analytical modeling of BWR safety relief valve blowdown phenomenon

    International Nuclear Information System (INIS)

    Hwang, J.G.; Singh, A.

    1984-01-01

    An analytical, qualitative understanding of the pool pressures measured during safety relief valve discharge in boiling water reactors equipped with X-quenchers has been developed and compared to experimental data. A pressure trace typically consists of a brief 25-35 Hz oscillation followed by a longer 5-15 Hz oscillation. In order to explain the pressure response, a discharge line vent clearing model has been coupled with a Rayleigh bubble dynamics model. The local conditions inside the safety relief valve discharge lines and inside the X-quencher were simulated successfully with RELAP5. The simulation allows one to associate the peak pressure inside the quencher arm with the onset of air discharge into the suppression pool. Using the pressure and thermodynamic quality at the quencher exit from the RELAP5 calculation as input, a Rayleigh model of pool bubble dynamics successfully explains both the higher and lower frequency pressure oscillations. The higher frequency oscillations are characteristic of an air bubble emanating from a single row of quencher holes. The lower frequency pressure oscillations are characteristic of a larger air bubble containing all the air expelled from one side of an X-quencher arm
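
    The association of the higher and lower frequency bands with small and large air bubbles is consistent with the linearized limit of the Rayleigh bubble equation (the Minnaert resonance, f = sqrt(3*gamma*p/rho)/(2*pi*R)). The sketch below is illustrative only; the bubble radii and pool pressure are assumed values, not the paper's RELAP5 inputs:

```python
import math

def minnaert_frequency(radius_m, p_pa=1.0e5, gamma=1.4, rho=1000.0):
    """Resonance frequency of a spherical gas bubble in water
    (linearized Rayleigh bubble equation, Minnaert's result)."""
    return math.sqrt(3 * gamma * p_pa / rho) / (2 * math.pi * radius_m)

# Assumed radii: a small bubble from one row of quencher holes versus a
# larger bubble holding the air expelled from one side of a quencher arm
for label, radius in [("single-row bubble", 0.10), ("arm-sized bubble ", 0.30)]:
    freq = minnaert_frequency(radius)
    print(f"{label}  R = {radius:.2f} m  ->  f = {freq:5.1f} Hz")
```

    With these assumed radii the small bubble resonates in the tens of hertz and the larger bubble near ten hertz, reproducing the qualitative ordering of the two observed bands.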

  2. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

    In this paper, we propose an empirical study of QFD implementation in which fuzzy numbers are used to handle the uncertainty associated with the different components of the proposed model. We implement a fuzzy analytic network process to find the relative importance of the various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model uses a fuzzy matrix and the house of quality to study product development in QFD, as well as the second phase, part deployment. In most studies, the primary focus has been solely on customer requirements (CRs) when implementing quality function deployment, while other criteria such as production and manufacturing costs were disregarded. The results of applying the fuzzy analytic network process based QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are waterproofness, resistant pill packages, and production cost. In addition, the PVDC coating is the most important index from the company experts' point of view.
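
    The fuzzy weighting step can be illustrated with triangular fuzzy numbers, the usual representation in fuzzy ANP/QFD studies. The membership values below are hypothetical, not the study's survey data:

```python
def tfn_mul(a, b):
    """Approximate product of two triangular fuzzy numbers (l, m, u)."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def defuzzify(t):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(t) / 3.0

# Hypothetical fuzzy importance of one customer requirement and its fuzzy
# relationship strength to one technical characteristic (illustrative only)
importance = (0.6, 0.7, 0.8)    # linguistic "high" importance
relationship = (0.4, 0.5, 0.6)  # linguistic "medium" relationship
score = tfn_mul(importance, relationship)
print(f"fuzzy score {score} -> crisp weight {defuzzify(score):.3f}")
```

    Repeating this multiply-and-defuzzify step over the house-of-quality matrix yields the crisp ranking of technical characteristics.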

  3. More on analytic bootstrap for O(N) models

    Energy Technology Data Exchange (ETDEWEB)

    Dey, Parijat; Kaviraj, Apratim; Sen, Kallol [Centre for High Energy Physics, Indian Institute of Science, C.V. Raman Avenue, Bangalore 560012 (India)]

    2016-06-22

    This note is an extension of a recent work on the analytical bootstrapping of O(N) models. An additional feature of the O(N) model is that the OPE contains trace and antisymmetric operators apart from the symmetric-traceless objects appearing in the OPE of the singlet sector. Thus, in addition to the stress tensor (T{sub μν}) and the ϕ{sub i}ϕ{sup i} scalar, we also have other minimal-twist operators, such as the spin-1 current J{sub μ} and the symmetric-traceless scalar, in the case of O(N). We determine the effect of these additional objects on the anomalous dimensions of the corresponding trace, symmetric-traceless and antisymmetric operators in the large spin sector of the O(N) model, in the limit when the spin is much larger than the twist. As an observation, we also verified that the leading order results for the large spin sector from the ϵ-expansion are an exact match with our n=0 case. A plausible holographic setup for the special case N=2, which mimics the calculation in the CFT, is also mentioned.

  4. MAAP4 model and validation status

    International Nuclear Information System (INIS)

    Plys, M.G.; Paik, C.Y.; Henry, R.E.; Wu, Chunder; Suh, K.Y.; Sung Jin Lee; McCartney, M.A.; Wang, Zhe

    1993-01-01

    The MAAP 4 code for integrated severe accident analysis is intended to be used for Level 1 and Level 2 probabilistic safety assessment and severe accident management evaluations for current and advanced light water reactors. MAAP 4 can be used to determine which accidents lead to fuel damage and which are successfully terminated before or after fuel damage (a Level 1 application). It can also be used to determine which sequences result in fission product release to the environment and to provide the time history of such releases (a Level 2 application). The MAAP 4 thermal-hydraulic and fission product models and their validation are discussed here. This code is the newest version of MAAP offered by the Electric Power Research Institute (EPRI) and contains substantial mechanistic improvements over its predecessor, MAAP 3.0B

  5. Analytical solutions by squeezing to the anisotropic Rabi model in the nonperturbative deep-strong-coupling regime

    Science.gov (United States)

    Zhang, Yu-Yu; Chen, Xiang-You

    2017-12-01

    An unexplored nonperturbative deep-strong-coupling (npDSC) regime achieved in superconducting circuits has been studied in the anisotropic Rabi model by the generalized squeezing rotating-wave approximation. Energy levels are evaluated analytically from the reformulated Hamiltonian and agree well with numerical ones over a wide range of coupling strengths. This improvement is ascribed to deformation effects in the displaced-squeezed state, captured by the squeezed momentum variance, which are omitted in previous displaced states. The atom population dynamics confirms the validity of our approach at npDSC strengths. Our approach offers the possibility to explore interesting phenomena analytically in the npDSC regime in qubit-oscillator experiments.

  6. Analytical modeling and analysis of magnetic field and torque for novel axial flux eddy current couplers with PM excitation

    Science.gov (United States)

    Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin

    2017-10-01

    Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the computational cost of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions for the magnetic field, eddy current, electromagnetic force and torque of such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. In addition, a prototype is manufactured and tested for its torque-speed characteristic.

  7. Validation of A Global Hydrological Model

    Science.gov (United States)

    Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.

    due to the precipitation measurement errors. Even though the explicit modeling of wetlands and lakes leads to a much improved modeling of both the vertical water balance and the lateral transport of water, not enough information is included in WGHM to accurately capture the hydrology of these water bodies. Certainly, the reliability of model results is highest at the locations at which WGHM was calibrated. The validation indicates that reliability for cells inside calibrated basins is satisfactory if the basin is relatively homogeneous. Analyses of the few available stations outside of calibrated basins indicate a reasonably high model reliability, particularly in humid regions.

  8. Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET

    Science.gov (United States)

    Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-02-01

    In this paper, a 2-D analytical model of a triple-metal hetero-dielectric DG TFET is presented, combining the concepts of triple-material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both the front- and back-gate electrodes to modulate the barrier at the source/channel and channel/drain interfaces. In addition, the front-gate dielectric consists of high-K HfO2 at the source end and low-K SiO2 on the drain side, whereas the back-gate dielectric is replaced by air to further improve the ON current of the device. The surface potential and electric field of the proposed device are formulated by solving the 2-D Poisson equation with Young's approximation. Based on the electric field expression, the tunneling current is obtained using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated against TCAD simulation results to establish its accuracy.
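
    The motivation for engineering the field profile is visible in the local form of Kane's band-to-band tunnelling model, G = A·E²·exp(−B/E): the generation rate is exponentially sensitive to the junction field. The sketch below uses typical silicon Kane coefficients from the literature, not values calibrated for this device:

```python
import math

def kane_generation(e_field, a_kane=4.0e14, b_kane=1.9e7):
    """Kane band-to-band tunnelling generation rate (local form),
    G = A * E^2 * exp(-B / E), with E in V/cm; A and B are
    material-dependent constants (typical silicon values assumed)."""
    return a_kane * e_field**2 * math.exp(-b_kane / e_field)

# Tunnelling is exponentially sensitive to the junction field, which is
# why gate-metal and dielectric engineering of the field profile pays off
for e in (5e5, 1e6, 2e6):  # V/cm
    print(f"E = {e:.0e} V/cm  ->  G = {kane_generation(e):.3e} (arb. units)")
```

    Doubling the field raises the generation rate by many orders of magnitude, so concentrating the field at the source/channel junction boosts the ON current far more than a proportional estimate would suggest.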

  9. Analytical models of lateral power devices with arbitrary vertical doping profiles in the drift region

    International Nuclear Information System (INIS)

    Hua Ting-Ting; Guo Yu-Feng; Yu Ying; Jian Tong; Yao Jia-Fei; Sheu Gene

    2013-01-01

    By solving the 2D Poisson's equation, analytical models are proposed to calculate the surface potential and electric field distributions of lateral power devices with arbitrary vertical doping profiles. The vertical and lateral breakdown voltages are formulated to quantify the breakdown characteristics in the completely-depleted and partially-depleted cases. A new reduced surface field (RESURF) criterion applicable to various drift doping profiles is further derived for obtaining the optimal trade-off between the breakdown voltage and the on-resistance. Based on these models and numerical simulation, the electric field modulation mechanism and the breakdown characteristics of lateral power devices are investigated in detail for uniform, linear, Gaussian, and some discrete doping profiles along the vertical direction in the drift region. The mentioned vertical doping profiles of devices with the same geometric parameters are then optimized, and the results show that the optimal breakdown voltages and the effective drift doping concentrations of these devices are identical, being equal to those of the uniform-doped device. The analytical results of the proposed models are in good agreement with the numerical results and previous experimental results, confirming the validity of the models presented here. (interdisciplinary physics and related areas of science and technology)

  10. An analytical and experimental investigation of natural circulation transients in a model pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Massoud, M

    1987-01-01

    Natural circulation phenomena in a simulated PWR were investigated experimentally and analytically. The experimental investigation included determination of system characteristics as well as the system response to imposed transients under symmetric and asymmetric operation. The system characteristics were used to obtain correlations for the heat transfer coefficient in the heat exchangers, the system flow resistance, and the system buoyancy head. Asymmetric transients were imposed to study flow oscillation and possible instability. The analytical investigation encompassed the development of a mathematical model for single-phase, steady-state and transient natural circulation, as well as modification of an existing model for two-phase flow analysis of phenomena such as small-break LOCA, high-pressure coolant injection and pump coastdown. The mathematical model for single-phase analysis was computer coded to simulate the imposed transients. The computer program, entitled ''Symmetric and Asymmetric Analysis of Single-Phase Flow (SAS)'', was employed to simulate the imposed transients. It closely emulated the system behavior throughout the transients and the subsequent steady state. Modifications for two-phase flow analysis included the addition of models for a once-through steam generator and electric heater rods. Both programs are faster than real time. Off-line, they can be used for prediction and training applications, while on-line they serve for simulation and signal validation. The programs can also be used to determine the sensitivity of the natural circulation behavior to variation of inputs such as the secondary-side distribution and power transients.

  11. An analytical and experimental investigation of natural circulation transients in a model pressurized water reactor

    International Nuclear Information System (INIS)

    Massoud, M.

    1987-01-01

    Natural circulation phenomena in a simulated PWR were investigated experimentally and analytically. The experimental investigation included determination of system characteristics as well as the system response to imposed transients under symmetric and asymmetric operation. The system characteristics were used to obtain correlations for the heat transfer coefficient in the heat exchangers, the system flow resistance, and the system buoyancy head. Asymmetric transients were imposed to study flow oscillation and possible instability. The analytical investigation encompassed the development of a mathematical model for single-phase, steady-state and transient natural circulation, as well as modification of an existing model for two-phase flow analysis of phenomena such as small-break LOCA, high-pressure coolant injection and pump coastdown. The mathematical model for single-phase analysis was computer coded to simulate the imposed transients. The computer program, entitled ''Symmetric and Asymmetric Analysis of Single-Phase Flow (SAS)'', was employed to simulate the imposed transients. It closely emulated the system behavior throughout the transients and the subsequent steady state. Modifications for two-phase flow analysis included the addition of models for a once-through steam generator and electric heater rods. Both programs are faster than real time. Off-line, they can be used for prediction and training applications, while on-line they serve for simulation and signal validation. The programs can also be used to determine the sensitivity of the natural circulation behavior to variation of inputs such as the secondary-side distribution and power transients

  12. Analytical models of lateral power devices with arbitrary vertical doping profiles in the drift region

    Science.gov (United States)

    Hua, Ting-Ting; Guo, Yu-Feng; Yu, Ying; Gene, Sheu; Jian, Tong; Yao, Jia-Fei

    2013-05-01

    By solving the 2D Poisson's equation, analytical models are proposed to calculate the surface potential and electric field distributions of lateral power devices with arbitrary vertical doping profiles. The vertical and lateral breakdown voltages are formulated to quantify the breakdown characteristics in the completely-depleted and partially-depleted cases. A new reduced surface field (RESURF) criterion applicable to various drift doping profiles is further derived for obtaining the optimal trade-off between the breakdown voltage and the on-resistance. Based on these models and numerical simulation, the electric field modulation mechanism and the breakdown characteristics of lateral power devices are investigated in detail for uniform, linear, Gaussian, and some discrete doping profiles along the vertical direction in the drift region. The mentioned vertical doping profiles of devices with the same geometric parameters are then optimized, and the results show that the optimal breakdown voltages and the effective drift doping concentrations of these devices are identical, being equal to those of the uniform-doped device. The analytical results of the proposed models are in good agreement with the numerical results and previous experimental results, confirming the validity of the models presented here.

  13. Adequacy and validation of an analytical method for the quantification of lead in chamomile tisanes produced in Costa Rica

    International Nuclear Information System (INIS)

    Blanco Barrantes, Jeimy

    2014-01-01

    An analytical methodology for quantifying lead in chamomile tisanes was developed and validated. Lead was quantified by flame atomic absorption spectroscopy in three brands of chamomile tisanes sold in Costa Rica to determine their safety and quality against international standards. A sample preparation method was established through a comparison of different extraction procedures. Acid digestion was the extraction method adopted, reaching an average recovery of 97.1% with a standard deviation of 2.3%. The chosen analytical procedure was then optimized and fully validated. The validation results showed that the interval yielding the best calibration curve, in terms of the correlation coefficient and an intercept statistically equal to zero, was 0.2-3.2 μg/mL (r2 = 0.9996), corresponding to a range from 20% to 320% of the maximum allowed limit. In addition, the procedure was adequate in terms of accuracy (average recovery 101.1%), precision under repeatability and intermediate-precision conditions (RSD max. 9.3%), and limit of quantification (0.2551 μg/mL). The World Health Organization (WHO) safety criterion was applied to the lead concentrations in the analyzed products. None of the 9 analyzed samples of products for preparing chamomile tisanes showed lead concentrations above the limit of 10 μg/g suggested for medicinal herbs by the WHO

  14. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline and the level evolution are the considered endpoints. Specific validation criteria, based on a ±10% standardized distance in means and variances, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered together. The model based on the assumption that the within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than ±1%. Simulation is a useful technique for the calibration, estimation, and evaluation of models, as it allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
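
    A criterion of this kind is straightforward to implement. The sketch below is one plausible reading of the ±10% standardized-distance test (simulated mean within 0.1 standard deviations of the real mean, variances agreeing to within 10%); both the tolerance interpretation and the sample data are assumptions for illustration, not the authors' exact definition or trial data:

```python
from statistics import mean, pvariance

def within_tolerance(real, sim, tol=0.10):
    """One plausible reading of the +/-10% standardized-distance criterion:
    the simulated mean must lie within tol standard deviations of the real
    mean, and the variances must agree to within a fraction tol."""
    m_r, m_s = mean(real), mean(sim)
    v_r, v_s = pvariance(real), pvariance(sim)
    mean_ok = abs(m_s - m_r) <= tol * v_r ** 0.5
    var_ok = abs(v_s - v_r) <= tol * v_r
    return mean_ok and var_ok

real = [5.2, 4.8, 5.5, 5.0, 4.9, 5.1, 5.3, 4.7]  # observed endpoint (mmol/L)
sim = [5.1, 4.8, 5.5, 5.0, 5.0, 5.2, 5.3, 4.7]   # simulated endpoint
print(within_tolerance(real, sim))
```

    Applying the same check to each endpoint, and then jointly to all endpoints, reproduces the two-stage screening described in the abstract.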

  15. Validation of Prototype Continuous Real-Time Vital Signs Video Analytics Monitoring System CCATT Viewer

    Science.gov (United States)

    2018-01-26

    traditional monitors, this capability will facilitate management of a group of patients. Innovative visual analytics of the complex array of real-time... redundant system could be useful in managing hundreds of bedside monitor data sources. With too many data sources, a single central server may suffer... collection rate. 3.2 Viewer Elements Design: For detailed elements to display, as well as their color, line styles, and locations on the screen, we

  16. A semi-analytical beam model for the vibration of railway tracks

    Science.gov (United States)

    Kostovasilis, D.; Thompson, D. J.; Hussein, M. F. M.

    2017-04-01

    The high frequency dynamic behaviour of railway tracks, in both vertical and lateral directions, strongly affects the generation of rolling noise as well as other phenomena such as rail corrugation. An improved semi-analytical model of a beam on an elastic foundation is introduced that accounts for the coupling of the vertical and lateral vibration. The model includes the effects of cross-section asymmetry, shear deformation, rotational inertia and restrained warping. Consideration is given to the fact that the loads at the rail head, as well as those exerted by the railpads at the rail foot, may not act through the centroid of the section. The response is evaluated for a harmonic load and the solution is obtained in the wavenumber domain. Results are presented as dispersion curves for free and supported rails and are validated with the aid of a Finite Element (FE) and a waveguide finite element (WFE) model. Closed form expressions are derived for the forced response, and validated against the WFE model. Track mobilities and decay rates are presented to assess the potential implications for rolling noise and the influence of the various sources of vertical-lateral coupling. Comparison is also made with measured data. Overall, the model presented performs very well, especially for the lateral vibration, although it does not contain the high frequency cross-section deformation modes. The most significant effects on the response are shown to be the inclusion of torsion and foundation eccentricity, which mainly affect the lateral response.
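
    The qualitative behaviour of such a track model can be illustrated with its simplest special case, an Euler-Bernoulli beam on a uniform elastic foundation, whose dispersion relation EI·k⁴ + s = m·ω² gives a cut-on frequency below which free waves are evanescent. The rail and pad parameters below are typical textbook values, not those of the paper:

```python
import math

# Illustrative rail-track parameters (typical textbook values, not the paper's)
EI = 6.4e6   # vertical bending stiffness of the rail (N m^2)
m = 60.0     # rail mass per unit length (kg/m)
s = 1.0e8    # railpad foundation stiffness per unit length (N/m^2)

def wavenumber(freq_hz):
    """Propagating wavenumber of an Euler-Bernoulli beam on an elastic
    foundation, from EI*k^4 + s = m*w^2; returns None below cut-on."""
    w = 2 * math.pi * freq_hz
    num = m * w**2 - s
    return (num / EI) ** 0.25 if num > 0 else None

f_cuton = math.sqrt(s / m) / (2 * math.pi)
print(f"cut-on frequency ~ {f_cuton:.0f} Hz")
for f in (100, 500, 1000, 2000):
    k = wavenumber(f)
    print(f"{f:5d} Hz -> " + (f"k = {k:.2f} rad/m" if k else "evanescent"))
```

    The paper's model enriches this picture with shear deformation, rotational inertia, warping and vertical-lateral coupling, but the cut-on behaviour of the dispersion curves follows this basic pattern.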

  17. Parametric validations of analytical lifetime estimates for radiation belt electron diffusion by whistler waves

    Directory of Open Access Journals (Sweden)

    A. V. Artemyev

    2013-04-01

    The lifetimes of electrons trapped in Earth's radiation belts can be calculated from quasi-linear pitch-angle diffusion by whistler-mode waves, provided that their frequency spectrum is broad enough and/or their average amplitude is not too large. Extensive comparisons between improved analytical lifetime estimates and full numerical calculations have been performed in a broad parameter range representative of a large part of the magnetosphere, from L ~ 2 to 6. The effects of observed very oblique whistler waves are taken into account in both numerical and analytical calculations. Analytical lifetimes (and pitch-angle diffusion coefficients) are found to be in good agreement with full numerical calculations based on CRRES and Cluster hiss and lightning-generated wave measurements inside the plasmasphere, and on Cluster lower-band chorus wave measurements in the outer belt, for electron energies ranging from 100 keV to 5 MeV. Comparisons with lifetimes recently obtained from electron flux measurements on SAMPEX, SCATHA, SAC-C and DEMETER also show reasonable agreement.

  18. Simplified analytical model for thermal transfer in vertical hollow brick

    Energy Technology Data Exchange (ETDEWEB)

    Lorente, S [Lab. d'Etudes Thermiques et Mecaniques, INSA, UPS, Toulouse (France)]; Petit, M [Lab. d'Etudes Thermiques et Mecaniques, INSA, UPS, Toulouse (France)]; Javelas, R [Lab. d'Etudes Thermiques et Mecaniques, INSA, UPS, Toulouse (France)]

    1996-12-01

    A modern building envelope contains many small cavities. Most of them are vertical, with a high height-to-thickness ratio. We present here the design of a software tool to determine heat transfer through terra-cotta bricks containing large vertical cavities. After a bibliographic study of convective heat transfer in such cavities, we built an analytical model based on the Karman-Pohlhausen method for convection and on the radiosity method for radiative heat transfer. We used a single-cavity test apparatus to determine the temperature field inside the cavity. Using these experimental results, we showed that the exchange is two-dimensional. We also performed heat flux measurements. We then present our theoretical study: we propose relations between the central core temperatures and the active face temperatures, and then between the outside and inside active face temperatures. We calculate the convective superficial heat transfer coefficients, since boundary layers form along the active faces. We establish a heat flux balance between the convective plus radiative heat transfer and the conductive heat transfer, and propose an algorithm to calculate the global heat transfer through a single cavity. Finally, we extend our model to a whole hollow brick with lined-up cavities and propose an algorithm to calculate the heat flux and thermal resistance with good accuracy (approx. 7.5%) compared to previous experimental results. (orig.)
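
    The parallel combination of convective and radiative exchange across a single cavity, which underlies the flux balance described above, can be sketched as follows. The emissivities, convective coefficient and mean temperature are assumed illustrative values, not the paper's measurements:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def cavity_resistance(h_conv, eps1, eps2, t_mean_k):
    """Area-specific thermal resistance (m^2 K/W) of a vertical air cavity:
    convective and radiative exchange act in parallel between the faces."""
    # Linearized radiative coefficient between two large parallel grey faces
    h_rad = 4 * SIGMA * t_mean_k**3 / (1 / eps1 + 1 / eps2 - 1)
    return 1.0 / (h_conv + h_rad)

# Assumed terra-cotta cavity conditions (illustrative, not the paper's data)
r = cavity_resistance(h_conv=2.5, eps1=0.9, eps2=0.9, t_mean_k=293.0)
print(f"cavity resistance ~ {r:.3f} m^2 K/W")
```

    Chaining such cavity resistances in series with the conductive resistance of the clay webs gives the whole-brick resistance that the paper's algorithm computes.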

  19. Analytical model for an electrostatically actuated miniature diaphragm compressor

    International Nuclear Information System (INIS)

    Sathe, Abhijit A; Groll, Eckhard A; Garimella, Suresh V

    2008-01-01

    This paper presents a new analytical approach for quasi-static modeling of an electrostatically actuated diaphragm compressor that could be employed in a miniature-scale refrigeration system. The compressor consists of a flexible circular diaphragm clamped at its circumference. A conformal chamber encloses the diaphragm completely. The membrane and the chamber surfaces are coated with metallic electrodes. A potential difference applied between the diaphragm and the chamber pulls the diaphragm toward the chamber surface progressively from the outer circumference toward the center. This zipping actuation reduces the volume available to the refrigerant gas, thereby increasing its pressure. A segmentation technique is proposed for analysis of the compressor, by which the domain is divided into multiple segments and the forces acting on the diaphragm are estimated for each. The pull-down voltage to completely zip each individual segment is thus obtained, and the voltage required to achieve a specific pressure rise in the chamber can be determined. Predictions from the model compare well with other simulation results from the literature, as well as with experimental measurements of the diaphragm displacement and chamber pressure rise in a custom-built setup
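
    The per-segment force balance behind the zipping actuation can be illustrated with the ideal parallel-plate electrostatic pressure ε0·εr·V²/(2g²). The gaps and pressures below are hypothetical, and the paper's segmentation model additionally accounts for the membrane mechanics that this sketch ignores:

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def zip_voltage(gap_m, gas_pressure_pa, eps_r=1.0):
    """Voltage at which the ideal parallel-plate electrostatic pressure
    eps0*eps_r*V^2 / (2*g^2) balances the opposing gas pressure."""
    return gap_m * (2 * gas_pressure_pa / (EPS0 * eps_r)) ** 0.5

# Hypothetical segments: the gap shrinks toward the centre while the
# chamber pressure rises as the diaphragm zips (illustrative values only)
for gap_um, dp_kpa in [(10, 5), (6, 20), (3, 50)]:
    v = zip_voltage(gap_um * 1e-6, dp_kpa * 1e3)
    print(f"gap {gap_um:2d} um, pressure {dp_kpa:2d} kPa -> V ~ {v:5.0f} V")
```

    Because the electrostatic pressure scales as V²/g², the already-zipped outer segments hold easily while the next segment sets the required drive voltage, which is the quantity the paper's segment-by-segment analysis extracts.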

  20. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk faced by banks. The bank's main approaches to reducing credit risk are correct validation using the final status and the validation of model parameters. High levels of bank reserves and lost or outstanding bank facilities indicate the lack of appropriate validation models in the banking network.

  1. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: To assess the available literature for the best validation practices. Methods: This comprehensive literature study provides a background to the complexities of effective model management and focusses on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate whether the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  2. Aerosol modelling and validation during ESCOMPTE 2001

    Science.gov (United States)

    Cousin, F.; Liousse, C.; Cachier, H.; Bessagnet, B.; Guillaume, B.; Rosset, R.

    The ESCOMPTE 2001 programme (Atmospheric Research. 69(3-4) (2004) 241) has resulted in an exhaustive set of dynamical, radiative, gas and aerosol observations (surface and aircraft measurements). A previous paper (Atmospheric Research. (2004) in press) dealt with dynamics and gas-phase chemistry. The present paper extends that work to aerosol formation, transport and evolution. To account for the large loadings of primary and secondary aerosols and their transformation processes in the ESCOMPTE domain, the ORganic and Inorganic Spectral Aerosol Module (ORISAM) (Atmospheric Environment. 35 (2001) 4751) was implemented on-line in the air-quality Meso-NH-C model, and additional developments were introduced in ORISAM to improve the comparison between simulations and experimental surface and aircraft field data. This paper discusses this comparison for a simulation of one selected day, 24 June 2001, during the Intensive Observation Period IOP2b. Our work relies on BC and OCp emission inventories developed specifically for ESCOMPTE, and confirms the need for a fine-resolution aerosol inventory with spectral chemical speciation. BC levels are satisfactorily reproduced, validating our emission inventory and its processing through Meso-NH-C. However, comparisons for reactive species generally show an underestimation of concentrations. Organic aerosol levels are rather well simulated, though with a tendency to underestimation in the afternoon. Inorganic aerosol species are underestimated for several reasons, some of which have been identified. For sulphates, primary emissions were introduced; improvements were also obtained for modelled nitrate and ammonium levels after introducing heterogeneous chemistry. However, the absence of terrigenous particles in the model is probably a major cause of the nitrate and ammonium underestimations. Particle numbers and size distributions are well reproduced, but only in the submicrometer range. Our work points out

  3. Design of the impact limiters of a Type B(U) package. Drop tests and validation of the analytical model. In the design of a container for transporting spent fuel, the impact limiters are fundamental to compliance with regulatory requirements; Diseno de los Limitadores de impacto de un Bulto Tipo B(U). Ensayos de Caida y validacion del Modelo Analitico

    Energy Technology Data Exchange (ETDEWEB)

    Garrido Quevedo, D.

    2013-07-01

    The aim is to confirm through real tests that the design and the results obtained through simulation match reality with a high degree of confidence... The combination of tests on scale models and the validation of the calculation methods are necessary tools for designing the impact limiters of a spent fuel transport container.

  4. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining the group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  5. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  6. Geochemistry Model Validation Report: Material Degradation and Release Model

    International Nuclear Information System (INIS)

    Stockman, H.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17)

  7. Theory of a spherical electrostatic probe in a continuum plasma: Analytical models

    International Nuclear Information System (INIS)

    Brailsford, A.D.

    1977-01-01

    A simple physical model of the charge distribution surrounding a biased spherical probe in a quiescent plasma, suggested by the theory of Su and Lam, is used to rederive the probe current-voltage characteristic. The result is compared with that of a slightly different version due to Kiel and with the exact numerical results of Baum and Chapkis. It is shown that if the ratio of the probe radius to the Debye length of the plasma is greater than or of the order of unity, the model calculation is in excellent agreement with the exact results when the dimensionless probe voltage φ*_p = |eφ_p/kT| (in standard notation) is greater than 10, for both thick and thin sheaths. The comparison also provides an assessment of the importance of various additional validity criteria encountered in analytical treatments of the problem

  8. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    Science.gov (United States)

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    - A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. - To establish new benchmark data on IHC laboratory practices. - A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. - The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. - Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  9. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van 't Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
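
Of the two approaches named above, the permutation test is the easier to sketch: shuffle the outcome labels many times and count how often a chance association is at least as strong as the observed one. A minimal pure-Python illustration (the toy data and the association measure are hypothetical, not taken from the study):

```python
import random

def permutation_test(scores, outcomes, n_perm=1000, seed=0):
    """Permutation test: is the association between model scores and
    observed binary outcomes stronger than expected by chance?"""
    rng = random.Random(seed)

    def assoc(s, o):
        # simple association measure: difference in mean score
        # between positive and negative outcomes
        pos = [x for x, y in zip(s, o) if y == 1]
        neg = [x for x, y in zip(s, o) if y == 0]
        return sum(pos) / len(pos) - sum(neg) / len(neg)

    observed = assoc(scores, outcomes)
    shuffled = list(outcomes)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)          # break any real association
        if assoc(scores, shuffled) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value

# toy data: higher score -> complication more likely
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2, 0.1, 0.05]
outcomes = [1, 1, 1, 0, 0, 0, 0, 0]
p = permutation_test(scores, outcomes)
```

A small p-value indicates the model's scores carry real information about the outcomes rather than having been fit to noise.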

  10. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 h in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
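
The core scheduling idea — build a dependency graph of pipeline tasks, order it topologically, and run each batch of ready tasks in parallel — can be sketched with Python's standard-library `graphlib`. The task names below mirror the five stages in the abstract but are illustrative only, not PARAMO's API:

```python
from graphlib import TopologicalSorter

# hypothetical pipeline: each task maps to the set of tasks it depends on
pipeline = {
    "cohort": set(),
    "features": {"cohort"},
    "cross_val": {"features"},
    "feat_select": {"cross_val"},
    "classify": {"feat_select"},
    # a second model sharing the cohort step can proceed in parallel
    "features_b": {"cohort"},
    "classify_b": {"features_b"},
}

ts = TopologicalSorter(pipeline)
ts.prepare()
order = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all tasks whose deps are done: a parallelizable batch
    order.append(ready)
    ts.done(*ready)                 # in a real system, run the batch before marking done
```

Each entry of `order` is a batch of mutually independent tasks that a cluster scheduler could dispatch simultaneously, which is exactly where the reported speedup over sequential execution comes from.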

  11. A fast semi-analytical model for the slotted structure of induction motors

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    A fast, semi-analytical model for induction motors (IMs) is presented. In comparison to traditional analytical models for IMs, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous distribution of the magnetic flux density in

  12. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    Science.gov (United States)

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminant can be due to the manufacturing process, to raw materials used or to the migration of phthalates from packaging when plastic (polyvinyl chloride--PVC) is used. 8 phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system with electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution>1.5), a common quantification limit at 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹ as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after a dilution in ethanol whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices. Copyright © 2012
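
External calibration with an internal standard, as used above, reduces to fitting the analyte/internal-standard peak-area ratio against standard concentrations and inverting the fit for unknown samples. A generic sketch of that arithmetic (the calibration numbers are invented, not from the paper):

```python
def quantify(area_analyte, area_istd, calib):
    """Internal-standard quantification: fit a least-squares line
    ratio = a * conc + b through the calibration points, then invert
    it for the sample's analyte/IS area ratio.
    calib: list of (concentration, area_ratio) pairs."""
    n = len(calib)
    sx = sum(c for c, _ in calib)
    sy = sum(r for _, r in calib)
    sxx = sum(c * c for c, _ in calib)
    sxy = sum(c * r for c, r in calib)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    ratio = area_analyte / area_istd
    return (ratio - b) / a  # concentration in the calibration's units

# hypothetical calibration standards, 0.5-5.0 ug/mL, perfectly linear
calib = [(0.5, 0.25), (1.0, 0.50), (2.5, 1.25), (5.0, 2.50)]
conc = quantify(area_analyte=600.0, area_istd=1000.0, calib=calib)
```

Normalizing by the internal-standard area cancels injection-to-injection variability, which is why the method calibrates on the ratio rather than the raw analyte area.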

  13. Factor Analytic Validation of the Ford, Wolvin, and Chung Listening Competence Scale

    Science.gov (United States)

    Mickelson, William T.; Welch, S. A.

    2012-01-01

    This research begins to independently and quantitatively validate the Ford, Wolvin, and Chung (2000) Listening Competency Scale. Reliability and Confirmatory Factor analyses were conducted on two independent samples. The reliability estimates were found to be below those reported by Ford, Wolvin, and Chung (2000) and below acceptable levels for…

  14. Validation of the community radiative transfer model

    International Nuclear Information System (INIS)

    Ding Shouguo; Yang Ping; Weng Fuzhong; Liu Quanhua; Han Yong; Delst, Paul van; Li Jun; Baum, Bryan

    2011-01-01

    To validate the Community Radiative Transfer Model (CRTM) developed by the U.S. Joint Center for Satellite Data Assimilation (JCSDA), the discrete ordinate radiative transfer (DISORT) model and the line-by-line radiative transfer model (LBLRTM) are combined in order to provide a reference benchmark. Compared with the benchmark, the CRTM appears quite accurate for both clear sky and ice cloud radiance simulations with RMS errors below 0.2 K, except for clouds with small ice particles. In a computer CPU run time comparison, the CRTM is faster than DISORT by approximately two orders of magnitude. Using the operational MODIS cloud products and the European Center for Medium-range Weather Forecasting (ECMWF) atmospheric profiles as an input, the CRTM is employed to simulate the Atmospheric Infrared Sounder (AIRS) radiances. The CRTM simulations are shown to be in reasonably close agreement with the AIRS measurements (the discrepancies are within 2 K in terms of brightness temperature difference). Furthermore, the impact of uncertainties in the input cloud properties and atmospheric profiles on the CRTM simulations has been assessed. The CRTM-based brightness temperatures (BTs) at the top of the atmosphere (TOA), for both thin and thick clouds, are highly sensitive to uncertainties in atmospheric temperature and cloud top pressure. However, for an optically thick cloud, the CRTM-based BTs are not sensitive to the uncertainties of cloud optical thickness, effective particle size, and atmospheric humidity profiles. On the contrary, the uncertainties of the CRTM-based TOA BTs resulting from effective particle size and optical thickness are not negligible in an optically thin cloud.

  15. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of magnetic material, the magnetic bridges are equivalent to fan-shaped saturation regions. To obtain standard boundary conditions, a lumped parameter magnetic circuit model and iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression of each subdomain is derived. The analytical results of the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating the motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is first calculated by an analytical method. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression of each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stages.
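
The "lumped parameter magnetic circuit model and iterative method" for the bridge permeability can be illustrated with a damped fixed-point iteration: guess μr, solve the circuit for the bridge flux density B, and update μr from the material curve until it stops changing. Everything below (the geometry, the μr(B) curve, the damping factor) is a hypothetical stand-in, not the authors' model:

```python
import math

def bridge_permeability(mmf, length, area, mu_r_of_b, tol=1e-6, max_iter=200):
    """Damped fixed-point iteration for the relative permeability of a
    saturating magnetic bridge in a one-loop lumped magnetic circuit."""
    mu0 = 4e-7 * math.pi
    mu_r = 1000.0                                  # initial guess
    b = 0.0
    for _ in range(max_iter):
        reluctance = length / (mu0 * mu_r * area)  # A-turns per Wb
        flux = mmf / reluctance                    # Wb
        b = flux / area                            # flux density, T
        mu_new = mu_r_of_b(b)
        if abs(mu_new - mu_r) < tol:
            break
        mu_r = 0.5 * (mu_r + mu_new)               # damping keeps the iteration stable
    return mu_r, b

# hypothetical saturation curve: permeability falls as B grows
mu_r_of_b = lambda b: 2000.0 / (1.0 + b * b)
mu_r, b = bridge_permeability(mmf=10.0, length=0.01, area=1e-4, mu_r_of_b=mu_r_of_b)
```

Once the bridge permeabilities have converged, they fix the boundary conditions needed by the subdomain field solution.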

  16. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    International Nuclear Information System (INIS)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-01-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of magnetic material, the magnetic bridges are equivalent to fan-shaped saturation regions. To obtain standard boundary conditions, a lumped parameter magnetic circuit model and iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression of each subdomain is derived. The analytical results of the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating the motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is first calculated by an analytical method. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression of each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stages.

  17. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982, "Development of a Conservative Model Validation Approach for Reliable Analysis." The aim is to obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  18. Validation of an analytical method for determining halothane in urine as an instrument for evaluating occupational exposure

    International Nuclear Information System (INIS)

    Gonzalez Chamorro, Rita Maria; Jaime Novas, Arelis; Diaz Padron, Heliodora

    2010-01-01

    Occupational exposure to harmful substances can produce significant changes in the normal physiology of the organism when adequate safety measures are not taken in time at a workplace where the risk is present. Among the chemical risks that may affect workers' health are the inhalable anesthetic agents. With the objective of taking the first steps towards an epidemiological surveillance system for this personnel, an analytical method for determining this anesthetic in urine was validated under the instrumental conditions available in our laboratory. The validation considered the following parameters: specificity, linearity, precision, accuracy, detection limit and quantification limit, and the uncertainty of the method was calculated. The validation showed that the technique is specific and precise, with a detection limit of 0.118 μg/L and a quantification limit of 0.354 μg/L. The global uncertainty was 0.243, and the expanded uncertainty 0.486. The validated method, together with the subsequent introduction of biological exposure limits, will serve as an auxiliary diagnostic tool allowing periodic monitoring of personnel exposure

  19. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... useful directions in which the model could be improved....
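
One-step prediction validation can be sketched for the simplest state space model, a local level (random walk plus observation noise), where a scalar Kalman filter yields standardized one-step residuals that should look approximately i.i.d. standard normal when the model is correct. This illustration is our own and is unrelated to the Template Model Builder implementation:

```python
import random

def one_step_residuals(obs, q, r):
    """Standardized one-step prediction residuals from a scalar Kalman
    filter for the local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t,
    with Var(w) = q and Var(v) = r."""
    x, p = obs[0], 10.0                    # crude diffuse-ish initialization
    resid = []
    for y in obs[1:]:
        x_pred, p_pred = x, p + q          # predict
        s = p_pred + r                     # innovation variance
        resid.append((y - x_pred) / s ** 0.5)
        k = p_pred / s                     # Kalman gain, then update
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
    return resid

# simulate from the model itself, so the residuals should be well-behaved
rng = random.Random(1)
x, q, r = 0.0, 0.1, 1.0
obs = []
for _ in range(500):
    x += rng.gauss(0.0, q ** 0.5)
    obs.append(x + rng.gauss(0.0, r ** 0.5))
res = one_step_residuals(obs, q, r)
mean = sum(res) / len(res)
var = sum((e - mean) ** 2 for e in res) / len(res)
```

Systematic departures of these residuals from a standard normal (drift in the mean, excess variance, autocorrelation) point to specific model deficiencies, which is what makes one-step predictions more informative than a naive residual check.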

  20. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    Science.gov (United States)

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
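
The Tikhonov-regularized coupling of the two displacement fields can be caricatured per voxel: minimize the misfit to the DIR displacement plus λ times the misfit to the CFD displacement, which has a one-line closed form. This toy blend is our own sketch, not the authors' formulation:

```python
def tikhonov_blend(dir_disp, cfd_disp, lam):
    """Per-voxel minimizer of (x - d)^2 + lam * (x - c)^2, where d is the
    registration-derived displacement and c the physics-based one:
    x = (d + lam * c) / (1 + lam)."""
    return [(d + lam * c) / (1.0 + lam) for d, c in zip(dir_disp, cfd_disp)]

dir_disp = [1.0, 2.0, 3.0]  # displacements from image registration (mm, made up)
cfd_disp = [1.2, 1.8, 3.4]  # displacements from CFD simulation (mm, made up)
fused = tikhonov_blend(dir_disp, cfd_disp, lam=1.0)
```

The regularization weight λ trades fidelity to the imaging data against consistency with the physics model; λ = 1 simply averages the two fields.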

  1. Validating clustering of molecular dynamics simulations using polymer models

    Directory of Open Access Journals (Sweden)

    Phillips Joshua L

    2011-11-01

    Full Text Available Abstract Background Molecular dynamics (MD simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our
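
The essence of the spectral clustering used above — split a similarity graph along the sign pattern of the Fiedler vector of its Laplacian — fits in a short pure-Python sketch. The power iteration and toy graph below are illustrative; the study clusters aligned biopolymer conformations, not this adjacency matrix:

```python
def fiedler_partition(adj, iters=500):
    """Spectral bipartition: power iteration for the Fiedler vector of the
    graph Laplacian L = D - A, projecting out the constant eigenvector."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = max(deg) + 1.0
    # M = c*I - L, so the largest eigenpairs of M are the smallest of L
    M = [[(c - deg[i] if i == j else adj[i][j]) for j in range(n)] for i in range(n)]
    v = [float(i) for i in range(n)]                # arbitrary start vector
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                   # deflate the constant eigenvector
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [0 if x < 0 else 1 for x in v]           # split on the sign pattern

# two dense triangles joined by a single weak edge (nodes 2-3):
# two "meta-stable states" with one "transition" link
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
labels = fiedler_partition(adj)
```

The cut falls on the weak bridge edge, which is exactly the behavior that lets spectral clustering pull apart densely sampled meta-stable conformations connected by rare transitions.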

  2. Analytical models for total dose ionization effects in MOS devices.

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Phillip Montgomery; Bogdan, Carolyn W.

    2008-08-01

    MOS devices are susceptible to damage by ionizing radiation due to charge buildup in gate, field and SOI buried oxides. Under positive bias holes created in the gate oxide will transport to the Si/SiO2 interface creating oxide-trapped charge. As a result of hole transport and trapping, hydrogen is liberated in the oxide which can create interface-trapped charge. The trapped charge will affect the threshold voltage and degrade the channel mobility. Neutralization of oxide-trapped charge by electron tunneling from the silicon and by thermal emission can take place over long periods of time. Neutralization of interface-trapped charge is not observed at room temperature. Analytical models are developed that account for the principal effects of total dose in MOS devices under different gate bias. The intent is to obtain closed-form solutions that can be used in circuit simulation. Expressions are derived for the aging effects of very low dose rate radiation over long time periods.
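
The flavor of such closed-form expressions can be shown with a textbook first-order estimate of the threshold-voltage shift from oxide-trapped holes. The generation constant is the commonly quoted value for SiO2, but the yield and trapping fractions below are arbitrary placeholders, and none of this reproduces the report's actual models:

```python
def threshold_shift(dose_rad, t_ox_nm, f_yield=0.8, f_trap=0.1):
    """First-order estimate of the total-dose threshold-voltage shift from
    oxide-trapped holes (textbook-style sketch, not the report's model)."""
    g0 = 8.1e12                # e-h pairs generated per rad per cm^3 in SiO2
    eps_ox = 3.9 * 8.854e-14   # oxide permittivity, F/cm
    q = 1.602e-19              # elementary charge, C
    t_ox = t_ox_nm * 1e-7      # oxide thickness, cm
    c_ox = eps_ox / t_ox       # oxide capacitance per unit area, F/cm^2
    # areal density of trapped holes, assuming they sit near the interface
    n_ot = g0 * dose_rad * t_ox * f_yield * f_trap
    return -q * n_ot / c_ox    # threshold shift in volts (negative for holes)

dv = threshold_shift(dose_rad=1e5, t_ox_nm=10.0)  # 100 krad(SiO2), 10 nm oxide
```

The quadratic dependence on oxide thickness (one factor from charge generation, one from capacitance) is why thin modern gate oxides are far more total-dose tolerant than field and buried oxides.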

  3. Small-scale engagement model with arrivals: analytical solutions

    International Nuclear Information System (INIS)

    Engi, D.

    1977-04-01

    This report presents an analytical model of small-scale battles. The specific impetus for this effort was provided by a need to characterize hypothetical battles between guards at a nuclear facility and their potential adversaries. The solution procedure can be used to find measures of a number of critical parameters; for example, the win probabilities and the expected duration of the battle. Numerical solutions are obtainable if the total number of individual combatants on the opposing sides is less than 10. For smaller force size battles, with one or two combatants on each side, symbolic solutions can be found. The symbolic solutions express the output parameters abstractly in terms of symbolic representations of the input parameters while the numerical solutions are expressed as numerical values. The input parameters are derived from the probability distributions of the attrition and arrival processes. The solution procedure reduces to solving sets of linear equations that have been constructed from the input parameters. The approach presented in this report does not address the problems associated with measuring the inputs. Rather, this report attempts to establish a relatively simple structure within which small-scale battles can be studied

  4. An analytical study of various telecommunication networks using Markov models

    International Nuclear Information System (INIS)

    Ramakrishnan, M; Jayamani, E; Ezhumalai, P

    2015-01-01

    The main aim of this paper is to examine issues relating to the performance of various telecommunication networks, applying queuing theory for better design and improved efficiency. First, an analytical study of queues quantifies the phenomenon of waiting lines using representative measures of performance, such as average queue length (the average number of customers in the queue), average waiting time in the queue, and average facility utilization (the proportion of time the service facility is in use). Second, using a Matlab simulator, the paper summarizes the findings of the investigations and describes a methodology to (a) compare the waiting time and average number of messages in the queue for M/M/1 and M/M/2 queues, (b) compare the performance of M/M/1 and M/D/1 queues, and (c) study the effect of increasing the number of servers on the blocking probability in the M/M/k/k queue model. (paper)
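
    The closed-form measures the paper compares are standard queueing results; a minimal sketch follows, with illustrative arrival and service rates. The Erlang B recursion gives the M/M/k/k blocking probability studied in the abstract.

```python
# Standard closed-form queueing measures (illustrative rates, not the paper's data).
def mm1_metrics(lam, mu):
    """M/M/1: mean number in system L and mean time in system W."""
    rho = lam / mu
    L = rho / (1 - rho)
    W = 1 / (mu - lam)
    return L, W

def md1_mean_number(lam, mu):
    """M/D/1 mean number in system (Pollaczek-Khinchine, deterministic service)."""
    rho = lam / mu
    return rho + rho * rho / (2 * (1 - rho))

def erlang_b(k, offered_load):
    """Blocking probability of the M/M/k/k loss model via the Erlang B recursion."""
    b = 1.0
    for n in range(1, k + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

L_mm1, W_mm1 = mm1_metrics(lam=8.0, mu=10.0)
print(round(L_mm1, 3), round(md1_mean_number(8.0, 10.0), 3), round(erlang_b(5, 4.0), 4))
```

    At the same utilization (ρ = 0.8 here), the M/D/1 queue holds fewer customers on average than M/M/1, and blocking in M/M/k/k falls rapidly as servers are added, which is the behaviour the paper investigates.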

  5. Analytical and numerical models of transport in porous cementitious materials

    International Nuclear Information System (INIS)

    Garboczi, E.J.; Bentz, D.P.

    1990-01-01

    Most chemical and physical processes that degrade cementitious materials are dependent on an external source of either water or ions or both. Understanding the rates of these processes at the microstructural level is necessary in order to develop a sound scientific basis for the prediction and control of the service life of cement-based materials, especially for radioactive-waste containment materials that are required to have service lives on the order of hundreds of years. An important step in developing this knowledge is to understand how transport coefficients, such as diffusivity and permeability, depend on the pore structure. Fluid flow under applied pressure gradients and ionic diffusion under applied concentration gradients are important transport mechanisms that take place in the pore space of cementitious materials. This paper describes: (1) a new analytical percolation-theory-based equation for calculating the permeability of porous materials, (2) new computational methods for computing effective diffusivities of microstructural models or digitized images of actual porous materials, and (3) a new digitized-image mercury intrusion simulation technique

  6. Calculations for Adjusting Endogenous Biomarker Levels During Analytical Recovery Assessments for Ligand-Binding Assay Bioanalytical Method Validation.

    Science.gov (United States)

    Marcelletti, John F; Evans, Cindy L; Saxena, Manju; Lopez, Adriana E

    2015-07-01

    It is often necessary to adjust for detectable endogenous biomarker levels in spiked validation samples (VS) and in selectivity determinations during bioanalytical method validation for ligand-binding assays (LBA) with a matrix like normal human serum (NHS). Described herein are case studies of biomarker analyses using multiplex LBA which highlight the challenges associated with such adjustments when calculating percent analytical recovery (%AR). The LBA test methods were the Meso Scale Discovery V-PLEX® proinflammatory and cytokine panels with NHS as the test matrix. The NHS matrix blank exhibited varied endogenous content of the 20 individual cytokines before spiking, ranging from undetectable to readily quantifiable. Addition and subtraction methods for adjusting endogenous cytokine levels in %AR calculations are both used in the bioanalytical field. The two methods were compared in %AR calculations following spiking and analysis of VS for cytokines having detectable endogenous levels in NHS. Calculations for %AR obtained by subtracting quantifiable endogenous biomarker concentrations from the respective total analytical VS values yielded reproducible and credible conclusions. The addition method, in contrast, yielded %AR conclusions that were frequently unreliable and discordant with values obtained with the subtraction adjustment method. It is shown that subtraction of the assay signal attributable to matrix is a feasible alternative when endogenous biomarker levels are below the limit of quantitation but above the limit of detection. These analyses confirm that the subtraction method is preferable to the addition method for adjusting detectable endogenous biomarker levels when calculating %AR for biomarker LBA.
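
    The two adjustment conventions contrasted in the abstract reduce to simple arithmetic; a hedged sketch with invented concentrations (pg/mL, not the authors' measured data) makes the difference concrete.

```python
# The two %AR adjustment conventions for a detectable endogenous biomarker.
# Concentrations below are invented for illustration.
def ar_subtraction(measured_total, endogenous, nominal_spike):
    """Subtract the endogenous level from the measured total, then recover."""
    return 100.0 * (measured_total - endogenous) / nominal_spike

def ar_addition(measured_total, endogenous, nominal_spike):
    """Add the endogenous level to the nominal spike in the denominator."""
    return 100.0 * measured_total / (endogenous + nominal_spike)

endogenous = 12.0      # quantifiable endogenous cytokine in the NHS blank
spike = 100.0          # nominal spiked concentration
measured = 109.0       # total measured in the spiked VS

print(round(ar_subtraction(measured, endogenous, spike), 1))  # → 97.0
print(round(ar_addition(measured, endogenous, spike), 1))     # → 97.3
```

    The two values diverge as the endogenous level grows relative to the spike, and the addition method's denominator inherits any error in the endogenous estimate, consistent with the unreliability the authors report.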

  7. Analytical Validation of a New Enzymatic and Automatable Method for d-Xylose Measurement in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Israel Sánchez-Moreno

    2017-01-01

    Full Text Available Hypolactasia, or intestinal lactase deficiency, affects more than half of the world population. Currently, xylose quantification in urine after gaxilose oral administration for the noninvasive diagnosis of hypolactasia is performed with the hand-operated, nonautomatable phloroglucinol reaction. This work demonstrates that a new enzymatic xylose quantification method, based on the activity of xylose dehydrogenase from Caulobacter crescentus, represents an excellent alternative to the manual phloroglucinol reaction. The new method is automatable and facilitates the use of the gaxilose test for hypolactasia diagnosis in clinical practice. The analytical validation of the new technique was performed on three different autoanalyzers, using buffer or urine samples spiked with different xylose concentrations. For the comparison between the phloroglucinol and the enzymatic assays, 224 urine samples of patients to whom the gaxilose test had been prescribed were assayed by both methods. A mean bias of −16.08 mg of xylose was observed when comparing the results obtained by the two techniques. After adjusting the cut-off of the enzymatic method to 19.18 mg of xylose, the Kappa coefficient was found to be 0.9531, indicating an excellent level of agreement between the two analytical procedures. This new assay represents the first automatable enzymatic technique validated for xylose quantification in urine.
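
    The agreement statistic quoted above is Cohen's kappa for two binary diagnostic calls; a sketch of the computation follows. The 2x2 counts are invented (they total 224 only for flavor and do not reproduce the paper's table or its kappa of 0.9531).

```python
# Cohen's kappa from a 2x2 contingency table of two diagnostic methods.
# Counts are illustrative, not the paper's data.
def cohens_kappa(both_pos, ref_pos_only, new_pos_only, both_neg):
    n = both_pos + ref_pos_only + new_pos_only + both_neg
    p_observed = (both_pos + both_neg) / n            # raw agreement
    ref_pos = (both_pos + ref_pos_only) / n           # marginal of reference method
    new_pos = (both_pos + new_pos_only) / n           # marginal of new method
    p_chance = ref_pos * new_pos + (1 - ref_pos) * (1 - new_pos)
    return (p_observed - p_chance) / (1 - p_chance)   # chance-corrected agreement

print(round(cohens_kappa(95, 2, 3, 124), 4))
```

    Adjusting the enzymatic method's cut-off, as the authors did, moves samples between the discordant cells and is how the reported kappa was maximized.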

  8. Simultaneous determination of renal function biomarkers in urine using a validated paper-based microfluidic analytical device.

    Science.gov (United States)

    Rossini, Eduardo Luiz; Milani, Maria Izabel; Carrilho, Emanuel; Pezza, Leonardo; Pezza, Helena Redigolo

    2018-01-02

    In this paper, we describe a validated paper-based microfluidic analytical device for the simultaneous quantification of two important biomarkers of renal function in urine. This paper platform provides an inexpensive, simple, and easy to use colorimetric method for the quantification of creatinine (CRN) and uric acid (UA) in urine samples. The microfluidic paper-based analytical device (μPAD) consists of a main channel with three identical arms, each containing a circular testing zone and a circular uptake zone. Creatinine detection is based on the Jaffé reaction, in which CRN reacts with picrate to form an orange-red product. Uric acid quantification is based on the reduction of Fe3+ to Fe2+ by UA, which is detected in a colorimetric reaction using 1,10-phenanthroline. Under optimum conditions, obtained through chemometrics, the concentrations of the analytes showed good linear correlations with the effective intensities, and the method presented satisfactory repeatability. The limits of detection and the linear ranges, respectively, were 15.7 mg L-1 and 50-600 mg L-1 for CRN and 16.5 mg L-1 and 50-500 mg L-1 for UA. There were no statistically significant differences between the results obtained using the μPAD and a chromatographic comparative method (Student's t-test at 95% confidence level). Copyright © 2017 Elsevier B.V. All rights reserved.
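
    Limits of detection like those above typically come from a linear calibration fit; a common convention (e.g. ICH-style) takes LOD = 3.3·σ/slope, with σ the residual standard deviation. The calibration points below are invented for illustration, not the μPAD data.

```python
# Linear calibration and an ICH-style detection limit (LOD = 3.3*sigma/slope).
# Calibration points are invented for illustration.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod(xs, ys):
    slope, intercept = fit_line(xs, ys)
    resid = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sigma = (sum(r * r for r in resid) / (len(xs) - 2)) ** 0.5  # residual SD
    return 3.3 * sigma / slope

conc = [50.0, 100.0, 200.0, 300.0, 400.0, 500.0]       # mg/L standards
signal = [0.051, 0.102, 0.199, 0.305, 0.398, 0.502]    # effective intensities
slope, intercept = fit_line(conc, signal)
print(round(slope, 5), round(lod(conc, signal), 1))
```

    The quantitation limit would follow the same pattern with a factor of 10 instead of 3.3.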

  9. Experimental validation of TASS/SMR-S critical flow model for the integral reactor SMART

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Si Won; Ra, In Sik; Kim, Kun Yeup [ACT Co., Daejeon (Korea, Republic of); Chung, Young Jong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    An advanced integral PWR, SMART (System-Integrated Modular Advanced ReacTor), is being developed at KAERI. It has a compact size and a relatively small power rating (330 MWt) compared to a conventional reactor. Because new concepts are applied to SMART, experimental and analytical validation is necessary for its safety evaluation. The analytical safety validation is being accomplished by TASS/SMR-S, a safety analysis code for an integral reactor developed by KAERI. TASS/SMR-S uses lumped-parameter one-dimensional node-and-path modeling for the thermal-hydraulic calculation and point kinetics for the reactor power calculation. It has models for general usage, such as a core heat transfer model, a wall heat structure model, a critical flow model, and component models, and it also has many SMART-specific models, such as a once-through helically coiled steam generator model and a condensate heat transfer model. To ensure that the TASS/SMR-S code has the calculation capability for the safety evaluation of SMART, the code should be validated for the specific models against separate-effect test results. In this study, the TASS/SMR-S critical flow model is evaluated against the SMD (Super Moby Dick) experiment

  10. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available [Garbled full-text extract. The recoverable content is a diagram of the modelling cycle relating the problem entity, the conceptual model, and the computerized model through three processes for transforming information between the entities (analysis and modelling, computer implementation, and simulation and experimentation), with model qualification, model verification, and model validation as the corresponding credibility assessments. The fragment also cites J.C. Refsgaard, Modelling Guidelines - Terminology and Guiding Principles, Advances in Water Resources, Vol. 27, No. 1, January 2004, pp. 71-82, Elsevier; and N. Oreskes et al., Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences, Science, Vol. 263.]

  11. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
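
    The idea of the metric can be sketched in two parameters: extract principal directions of parameter variation, express a validation system in those coordinates, and measure its distance to the interval identified from the model family. This is a hedged toy illustration, not the authors' SVD-based formulation; all sample values are invented, and the 2x2 covariance eigen-directions stand in for the paper's singular value decomposition.

```python
import math

# Toy coordinate-distance validation metric in two parameters (invented data).
samples = [(1.0, 2.1), (1.2, 2.4), (0.9, 1.9), (1.1, 2.2), (1.05, 2.15)]

def principal_directions(pts):
    """Mean and orthonormal principal directions of a 2-D point cloud."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    a = sum((p[0] - mx) ** 2 for p in pts) / n
    c = sum((p[1] - my) ** 2 for p in pts) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    theta = 0.5 * math.atan2(2 * b, a - c)     # closed form for a 2x2 covariance
    u1 = (math.cos(theta), math.sin(theta))
    u2 = (-math.sin(theta), math.cos(theta))
    return (mx, my), u1, u2

def coordinates(p, mean, u1, u2):
    dx, dy = p[0] - mean[0], p[1] - mean[1]
    return dx * u1[0] + dy * u1[1], dx * u2[0] + dy * u2[1]

mean, u1, u2 = principal_directions(samples)
coords = [coordinates(p, mean, u1, u2) for p in samples]
lo = [min(c[i] for c in coords) for i in (0, 1)]   # identified interval model
hi = [max(c[i] for c in coords) for i in (0, 1)]

def interval_distance(p):
    """0 inside the identified interval; positive exceedance outside."""
    c = coordinates(p, mean, u1, u2)
    return max(max(lo[i] - c[i], c[i] - hi[i], 0.0) for i in (0, 1))

print(interval_distance((1.1, 2.2)) == 0.0, interval_distance((2.0, 1.0)) > 0.0)
```

    A validation system whose coordinate vector falls inside the interval is consistent with the identified model family; the exceedance quantifies how far outside it lies.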

  12. Development and testing of analytical models for the pebble bed type HTRs

    International Nuclear Information System (INIS)

    Huda, M.Q.; Obara, T.

    2008-01-01

    The pebble bed type gas cooled high temperature reactor (HTR) appears to be a good candidate for the next generation nuclear reactor technology. These reactors have unique characteristics in terms of the randomness in geometry, and require special techniques to analyze their systems. This study includes activities concerning the testing of computational tools and the qualification of models. Indeed, it is essential that validated analytical tools be available to the research community. From this viewpoint, codes like MCNP, ORIGEN and RELAP5, which have been used in the nuclear industry for many years, are selected to identify and develop new capabilities needed to support HTR analysis. The geometrical model of the full reactor is obtained by using the lattice and universe facilities provided by MCNP. The coupled MCNP-ORIGEN code is used to estimate the burnup and the refuelling scheme. Results obtained from the Monte Carlo analysis are interfaced with RELAP5 to analyze the thermal hydraulics and safety characteristics of the reactor. New models and methodologies are developed for several past and present experimental and prototypical facilities based on HTR pebble bed concepts. The calculated results are compared with available experimental data and theoretical evaluations, showing very good agreement. The ultimate goal of the validation of the computer codes for pebble bed HTR applications is to acquire and reinforce the capability of these general purpose computer codes for performing HTR core design and optimization studies

  13. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    Science.gov (United States)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM-UV-photodetector (PD) based on a dual wide band-gap material (DM) engineering aspect is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor photocurrent and the device properties are developed, incorporating the impact of the DM aspect on the device photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which provides the possibility to drive appropriate photo-generated carriers without an externally applied voltage. This phenomenon achieves the dual role of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias that can ensure a favorable self-powered MSM-based UV-PD. It is found that the proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior ION/IOFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.

  14. IBM SPSS modeler essentials effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  15. Establishment of Approximate Analytical Model of Oil Film Force for Finite Length Tilting Pad Journal Bearings

    Directory of Open Access Journals (Sweden)

    Yongliang Wang

    2015-01-01

    Full Text Available Tilting pad bearings offer unique dynamic stability, enabling successful deployment of high-speed rotating machinery. The model of dynamic stiffness, damping, and added mass coefficients is often used for rotordynamic analyses, but this method does not suffice to describe the dynamic behaviour due to the nonlinear effects of oil film force under larger shaft vibration or vertical rotor conditions. The objective of this paper is to present a nonlinear oil film force model for finite length tilting pad journal bearings. An approximate analytic oil film force model was established by analysing the dynamic characteristics of the oil film of a single-pad journal bearing using the variable separation method under the dynamic π oil film boundary condition. An oil film force model of a four-tilting-pad journal bearing was then established by using the pad assembly technique and considering the pad tilting angle. The validity of the model was demonstrated by analysing the distribution of oil film pressure and the locus of the journal centre for tilting pad journal bearings and by comparing the model established in this paper with a model established using the finite difference method.

  16. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
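
    The headline validation metric (annualized prediction error) reduces to an aggregate comparison of modeled and measured production; a minimal sketch follows. The twelve monthly values are invented for illustration (kWh), not taken from the report's nine systems.

```python
# Annualized percent error between modeled and measured monthly energy (invented data).
modeled  = [ 9800, 10500, 12900, 14200, 15600, 16100,
            16400, 15800, 13900, 12100, 10200,  9500]
measured = [ 9500, 10900, 13100, 14000, 15900, 16000,
            16800, 15500, 13600, 12400, 10050,  9300]

def annualized_error_pct(model, meas):
    """Signed percent error of total annual energy, model vs. field data."""
    return 100.0 * (sum(model) - sum(meas)) / sum(meas)

err = annualized_error_pct(modeled, measured)
print(round(err, 2))
assert abs(err) < 3.0   # within the report's fixed-tilt acceptance band
```

    Note how monthly errors of either sign (e.g. snow cover in winter, outage months) can largely cancel in the annualized figure, which is why the report also examines seasonal biases separately.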

  17. Analytical and Experimental Study for Validation of the Device to Confine BN Reactor Melted Fuel

    International Nuclear Information System (INIS)

    Rogozhkin, S.; Osipov, S.; Sobolev, V.; Shepelev, S.; Kozhaev, A.; Mavrin, M.; Ryabov, A.

    2013-01-01

    To validate the design and confirm the design characteristics of the special retaining device (core catcher) used for protection of BN reactor vessel in the case of a severe beyond-design basis accident with core melting, computational and experimental studies were carried out. The Tray test facility that uses water as coolant was developed and fabricated by OKBM; experimental studies were performed. To verify the methodical approach used for the computational study, experimental results obtained in the Tray test facility were compared with numerical simulation results obtained by the STAR-CCM+ CFD code

  18. Analytical Method Development and Validation of Solifenacin in Pharmaceutical Dosage Forms by RP-HPLC

    OpenAIRE

    Shaik, Rihana Parveen; Puttagunta, Srinivasa Babu; Kothapalli Bannoth, Chandrasekar; Challa, Bala Sekhara Reddy

    2014-01-01

    A new, accurate, precise, and robust HPLC method was developed and validated for the determination of solifenacin in tablet dosage form. The chromatographic separation was achieved on an Inertsil ODS 3V C18 (150 mm × 4.6 mm, 5 μm) stationary phase maintained at ambient temperature with a mobile phase combination of monobasic potassium phosphate (pH 3.5) containing 0.1% triethylamine and methanol (gradient mode) at a flow rate of 1.5 mL/min, and the detection was carried out by using UV detect...

  19. Analytical estimation of effective charges at saturation in Poisson-Boltzmann cell models

    International Nuclear Information System (INIS)

    Trizac, Emmanuel; Aubouy, Miguel; Bocquet, Lyderic

    2003-01-01

    We propose a simple approximation scheme for computing the effective charges of highly charged colloids (spherical or cylindrical with infinite length). Within non-linear Poisson-Boltzmann theory, we start from an expression for the effective charge in the infinite-dilution limit which is asymptotically valid for large salt concentrations; this result is then extended to finite colloidal concentration, approximating the salt partitioning effect which relates the salt content in the suspension to that of a dialysing reservoir. This leads to an analytical expression for the effective charge as a function of colloid volume fraction and salt concentration. These results compare favourably with the effective charges at saturation (i.e. in the limit of large bare charge) computed numerically following the standard prescription proposed by Alexander et al within the cell model

  20. Theoretical and Experimental Study of Optical Coherence Tomography (OCT) Signals Using an Analytical Transport Model

    International Nuclear Information System (INIS)

    Vazquez Villa, A.; Delgado Atencio, J. A.; Vazquez y Montiel, S.; Cunill Rodriguez, M.; Martinez Rodriguez, A. E.; Ramos, J. Castro; Villanueva, A.

    2010-01-01

    Optical coherence tomography (OCT) is a non-invasive low-coherence interferometric technique that provides cross-sectional images of turbid media. OCT is based on the classical Michelson interferometer, where the mirror of the reference arm oscillates and the signal arm contains a biological sample. In this work, we analyzed theoretically the heterodyne optical signal adopting the so-called extended Huygens-Fresnel principle (EHFP). We used simulated OCT images with known optical properties to test an algorithm we developed to recover the scattering coefficient, and recovered it with a relative error of less than 5% for noisy signals. In addition, we applied this algorithm to OCT images from phantoms of known optical properties; in this case the curves were indistinguishable. A revision of the validity of the analytical model applied to our system should be done.
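
    The full EHF analysis is more involved, but the recovery idea can be illustrated in the single-backscattering limit, where the mean A-scan decays as exp(−2·μs·z), so a log-linear fit to a noisy synthetic signal recovers μs. This is a hedged sketch with invented parameters, not the authors' algorithm.

```python
import math, random

# Recover a scattering coefficient from a noisy synthetic OCT depth profile
# (single-backscattering approximation; parameters invented for illustration).
random.seed(7)
MU_S = 2.0                      # mm^-1, "true" scattering coefficient
depths = [0.05 * i for i in range(1, 41)]            # 0.05-2.0 mm
signal = [math.exp(-2 * MU_S * z) * (1 + random.uniform(-0.05, 0.05))
          for z in depths]      # multiplicative 5% noise

def recover_mu_s(zs, sig):
    """Least-squares slope of ln(signal) vs depth; ln S = -2*mu_s*z + const."""
    ys = [math.log(s) for s in sig]
    n = len(zs)
    mz, my = sum(zs) / n, sum(ys) / n
    slope = (sum((z - mz) * (y - my) for z, y in zip(zs, ys))
             / sum((z - mz) ** 2 for z in zs))
    return -slope / 2.0

mu_est = recover_mu_s(depths, signal)
rel_error = abs(mu_est - MU_S) / MU_S
print(rel_error < 0.05)          # relative error under 5%, as in the paper
```

    Multiple scattering makes the real decay depart from this single exponential at depth, which is one reason the paper flags a revision of the model's validity range.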

  1. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  2. "INTRODUCING A FULL VALIDATED ANALYTICAL PROCEDURE AS AN OFFICIAL COMPENDIAL METHOD FOR FENTANYL TRANSDERMAL PATCHES"

    Directory of Open Access Journals (Sweden)

    Amir Mehdizadeh

    2005-04-01

    Full Text Available A simple, sensitive and specific HPLC method and also a simple and fast extraction procedure were developed for quantitative analysis of fentanyl transdermal patches. Chloroform, methanol and ethanol were used as extracting solvents, with recoveries of 92.1, 94.3 and 99.4%, respectively. Fentanyl was extracted with ethanol, and the eluted fentanyl through the C18 column was monitored by UV detection at 230 nm. The linearity was over the range of 0.5-10 µg/mL with a correlation coefficient (r2) of 0.9992. Both intra- and inter-day accuracy and precision were within acceptable limits. The detection limit (DL) and quantitation limit (QL) were 0.15 and 0.5 µg/mL, respectively. Other validation characteristics such as selectivity, robustness and ruggedness were evaluated. Following method validation, a system suitability test (SST) including capacity factor (k´), plate number (N), tailing factor (T), and RSD was defined for routine testing.
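
    The SST parameters named in the abstract follow standard USP-style formulas; a sketch with invented chromatographic values (retention times, peak widths) is below. The acceptance limits and numbers are illustrative, not the paper's.

```python
# Standard system-suitability calculations for an HPLC method (invented values).
def capacity_factor(t_r, t_0):
    """k' = (tR - t0) / t0, retention relative to the void time."""
    return (t_r - t_0) / t_0

def plate_number(t_r, w_base):
    """N = 16 * (tR / W)^2 from the baseline (tangent) peak width."""
    return 16.0 * (t_r / w_base) ** 2

def tailing_factor(w_005, f_005):
    """USP tailing T = W0.05 / (2 * f), both measured at 5% peak height."""
    return w_005 / (2.0 * f_005)

t0, tr = 1.2, 6.0          # min: void time and analyte retention time (invented)
print(round(capacity_factor(tr, t0), 2),     # k'
      int(plate_number(tr, 0.4)),            # N for a 0.4 min base width
      round(tailing_factor(0.25, 0.11), 2))  # W0.05 and front half-width at 5%
```

    In routine use these values would be checked against predefined limits (e.g. N above a minimum, T below a maximum, RSD of replicate injections below a threshold) before accepting a run.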

  3. Validation of the Analytical Method for the Determination of Flavonoids in Broccoli

    Directory of Open Access Journals (Sweden)

    Tuszyńska Magdalena

    2014-09-01

    Full Text Available A simple, accurate and selective HPLC method was developed and validated for determination of quercetin and kaempferol, which are the main flavonols in broccoli. The separation was achieved on a reversed-phase C18 column using a mobile phase composed of methanol/water (60/40) with 0.2% phosphoric acid at a flow rate of 1.0 ml min-1. The detection was carried out on a DAD detector at 370 nm. This method was validated according to the requirements for new methods, which include selectivity, linearity, precision, accuracy, limit of detection and limit of quantitation. The current method demonstrates good linearity, with R2 > 0.99. The recovery is within 98.07-102.15% and 97.92-101.83% for quercetin and kaempferol, respectively. The method is selective, in that quercetin and kaempferol are well separated from other compounds of broccoli with good resolution. The low limits of detection and quantitation of quercetin and kaempferol enable the detection and quantitation of these flavonoids in broccoli at low concentrations.

  4. Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds

    Science.gov (United States)

    Akyurek, Bengu Ozge

    Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.

  5. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  6. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes considerable energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
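
    A toy version of such a scheduler can be sketched as a genetic algorithm whose chromosomes assign jobs to clusters and whose fitness is the makespan to minimize. This is a hedged illustration in the spirit of the paper, not its model: the runtimes stand in for the output of a hypothetical estimation module, and the GA operators are generic.

```python
import random

# Toy GA job scheduler: assign analytics jobs to clusters, minimize makespan.
# Runtimes and GA settings are invented for illustration.
random.seed(42)
RUNTIMES = [7, 3, 9, 4, 6, 2, 8, 5]   # estimated job runtimes (minutes)
N_CLUSTERS = 3

def makespan(assign):
    """Finish time of the busiest cluster under a given job assignment."""
    loads = [0] * N_CLUSTERS
    for job, cluster in enumerate(assign):
        loads[cluster] += RUNTIMES[job]
    return max(loads)

def evolve(generations=200, pop_size=30, p_mut=0.2):
    pop = [[random.randrange(N_CLUSTERS) for _ in RUNTIMES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[:pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(RUNTIMES))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < p_mut:           # point mutation
                child[random.randrange(len(child))] = random.randrange(N_CLUSTERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))   # cannot be below ceil(sum(RUNTIMES) / N_CLUSTERS) = 15
```

    Replacing `makespan` with a cost that also weighs energy or data locality would not change the GA machinery, which is the flexibility such scheduling models exploit.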

  7. A comprehensive model for piezoceramic actuators: modelling, validation and application

    International Nuclear Information System (INIS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-01-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. In addition, a linear second-order model compensates for the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter.
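The general Maxwell slip hysteresis model referenced above is commonly realized as parallel elasto-slide elements: each element is a spring in series with a dry-friction slider that saturates at a given force, and the total output is the sum of the spring forces. A minimal sketch with assumed stiffnesses and saturation forces, not parameters identified from a real PA:

```python
import math

class MaxwellSlip:
    """Generalized Maxwell slip hysteresis: parallel elasto-slide elements."""
    def __init__(self, stiffnesses, sat_forces):
        self.k = list(stiffnesses)
        self.f = list(sat_forces)
        self.z = [0.0] * len(self.k)   # spring deflections (internal states)

    def step(self, dx):
        """Advance all elements by an input increment dx; return total force."""
        for i in range(len(self.k)):
            self.z[i] += dx
            limit = self.f[i] / self.k[i]
            if abs(self.z[i]) > limit:             # slider slips: deflection saturates
                self.z[i] = math.copysign(limit, self.z[i])
        return sum(k * z for k, z in zip(self.k, self.z))

model = MaxwellSlip(stiffnesses=[1.0, 0.5, 0.25], sat_forces=[0.2, 0.3, 0.5])
# Drive with a triangle wave: plotting force vs input traces a hysteresis loop.
up = [model.step(0.05) for _ in range(40)]     # input rises 0 -> 2
down = [model.step(-0.05) for _ in range(40)]  # input falls 2 -> 0
print(up[-1], down[-1])                        # approx 1.0 and -0.5: nonzero force at x=0
```

The residual force when the input returns to zero is the memory effect that makes the model hysteretic; identification from measured PA data amounts to choosing the number of elements and fitting their stiffness/saturation pairs.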

  8. An analytic current-voltage model for quasi-ballistic III-nitride high electron mobility transistors

    Science.gov (United States)

    Li, Kexin; Rakheja, Shaloo

    2018-05-01

    We present an analytic model to describe the DC current-voltage (I-V) relationship in scaled III-nitride high electron mobility transistors (HEMTs) in which transport within the channel is quasi-ballistic in nature. Following Landauer's transport theory and charge calculation based on two-dimensional electrostatics that incorporates negative momenta states from the drain terminal, an analytic expression for current as a function of terminal voltages is developed. The model captures the non-linearity of the access regions in non-self-aligned HEMTs. Effects of Joule heating with temperature-dependent thermal conductivity are incorporated in the model in a self-consistent manner. With a total of 26 input parameters, the analytic model offers reduced empiricism compared to existing GaN HEMT models. To verify the model, experimental I-V data of InAlN/GaN with InGaN back-barrier HEMTs with channel lengths of 42 and 105 nm are considered. Additionally, the model is validated against numerical I-V data obtained from DC hydrodynamic simulations of an unintentionally doped AlGaN-on-GaN HEMT with 50-nm gate length. The model is also verified against pulsed I-V measurements of a 150-nm T-gate GaN HEMT. Excellent agreement between the model and experimental and numerical results for output current, transconductance, and output conductance is demonstrated over a broad range of bias and temperature conditions.

  9. Comparison of the effectiveness of analytical wake models for wind farm with constant and variable hub heights

    International Nuclear Information System (INIS)

    Wang, Longyan; Tan, Andy C.C.; Cholette, Michael; Gu, Yuantong

    2016-01-01

    Highlights: • The effectiveness of three analytical wake models is studied. • The results of the analytical wake models are compared with the CFD simulations. • The results of CFD simulation are verified by comparison to the offshore wind farm observation data. • An onshore wind farm with both constant and variable hub height turbines is analyzed. • The PARK model is able to predict the total wind farm power production well with a tuned surface roughness value. - Abstract: Extensive wind farm power losses have been observed due to wake interactions between wind turbines. By applying analytical wake models, which quantitatively describe the wind speed deficits in the wake, the power losses can be regained to a large extent through wind farm layout optimization, as has been extensively reported in the literature. Nevertheless, the effectiveness of the analytical wake models in predicting wind farm power production has rarely been studied and compared for wind farms with both constant and variable wind turbine hub heights. In this study, the effectiveness of three different analytical wake models (the PARK model, the Larsen model and the B-P model) is thoroughly compared over a wide range of wake properties. After validation against observation data from an offshore wind farm, CFD simulations are used to verify the effectiveness of the analytical wake models for an onshore wind farm. The results show that when using the PARK model the surface roughness value (z0) must be carefully tuned to achieve good performance in predicting the wind farm power production. For the other two analytical wake models, their effectiveness varies depending on the wind farm situation (offshore or onshore) and the wind turbine hub heights (constant or variable). It was found that the results of the B-P model agree well with the CFD simulations for the offshore wind farm, but not for the onshore wind farm. The Larsen model is more accurate for the wind farm with variable wind turbine
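The PARK (Jensen) model discussed here is often written with a wake decay constant tied to the surface roughness z0, which is exactly why the z0 tuning noted above matters. A minimal sketch under that common formulation; the turbine and site numbers are assumed for illustration:

```python
import math

def park_deficit(ct, rotor_d, x, z_hub=80.0, z0=0.03):
    """Fractional velocity deficit a distance x downstream of a turbine
    (Jensen/PARK top-hat wake). The wake decay constant k depends on the
    surface roughness z0, so z0 directly controls wake recovery."""
    k = 0.5 / math.log(z_hub / z0)                # wake decay constant
    return (1 - math.sqrt(1 - ct)) * (rotor_d / (rotor_d + 2 * k * x)) ** 2

u0 = 8.0                                          # free-stream wind speed, m/s
for z0 in (0.0002, 0.03, 0.3):                    # offshore vs onshore roughness
    u = u0 * (1 - park_deficit(ct=0.8, rotor_d=80.0, x=560.0, z0=z0))
    print(f"z0={z0}: waked wind speed = {u:.2f} m/s")
```

Lower (offshore-like) roughness gives a smaller decay constant, so the wake persists farther downstream and the predicted power loss is larger — the behavior the study probes when tuning z0 against CFD and observation data.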

  10. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  11. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

    Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hamper the penetration of PV implementations and complicate system design. An optimal sizing of these systems is therefore essential. This paper proposes a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with respect to reliability and cost indicators, namely the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for the two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. Calculating the optimal tilt angle for yearly, seasonal, and monthly frequencies results in higher system efficiency. It is therefore suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey
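The two-objective trade-off in this record — deficiency of power supply probability (DPSP) against life-cycle cost — can be illustrated with a toy daily water-balance simulation and a Pareto filter. Everything below (solar series, demand, pump conversion factor, unit costs, candidate sizes) is invented for the sketch and is not the paper's data or its analytical model:

```python
solar = [4.0, 5.5, 6.5, 7.0, 6.0, 4.5, 3.5] * 52   # kWh/m2/day, synthetic year
demand = [3.0] * len(solar)                         # m3/day water demand

def simulate(pv_kw, tank_m3, m3_per_kwh=1.2):
    """Daily water balance; returns the fraction of demand not met (DPSP)."""
    stored, deficit = 0.0, 0.0
    for s, d in zip(solar, demand):
        pumped = pv_kw * s * m3_per_kwh             # water pumped that day
        stored = min(stored + pumped, tank_m3)      # tank caps the storage
        take = min(stored, d)
        deficit += d - take
        stored -= take
    return deficit / sum(demand)

def cost(pv_kw, tank_m3):
    return 1200 * pv_kw + 150 * tank_m3             # assumed unit costs

designs = [(p, t) for p in (0.3, 0.5, 0.8, 1.2) for t in (2, 5, 10)]
points = [(cost(p, t), simulate(p, t)) for p, t in designs]
# Keep the Pareto-efficient (cost, DPSP) pairs: no other design is at least as
# good on both objectives and strictly better on one.
pareto = [a for a in points
          if not any((b[0] <= a[0] and b[1] < a[1]) or
                     (b[0] < a[0] and b[1] <= a[1]) for b in points)]
```

The paper's contribution is that it finds the compromise solution analytically rather than by such an enumeration, but the Pareto structure of the problem is the same.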

  12. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  13. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
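The permutation-test step recommended in these two records can be sketched generically: compare an observed performance metric (here the AUC) against its distribution after repeatedly shuffling the outcome labels. The data below are synthetic; a real NTCP workflow would wrap this around the LASSO model scores inside the repeated double cross-validation loop:

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

rng = random.Random(1)
# Synthetic cohort: the model score is moderately predictive of the outcome.
labels = [int(rng.random() < 0.5) for _ in range(120)]
scores = [y * 1.2 + rng.gauss(0, 1) for y in labels]

observed = auc(scores, labels)
perm_aucs = []
for _ in range(500):                       # permutation null distribution
    shuffled = labels[:]
    rng.shuffle(shuffled)
    perm_aucs.append(auc(scores, shuffled))
p_value = sum(a >= observed for a in perm_aucs) / len(perm_aucs)
```

A small `p_value` indicates that the observed AUC is unlikely under the null of no association, which is the statistical-significance evidence the records argue should accompany cross-validated performance estimates.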

  14. Validation of an analytical method applicable to study of 1 mg/mL oral Risperidone solution stability

    International Nuclear Information System (INIS)

    Abreu Alvarez, Maikel; Garcia Penna, Caridad Margarita; Martinez Miranda, Lissette

    2010-01-01

    An analytical method based on high-performance liquid chromatography (HPLC) was validated for the stability study of a 1 mg/mL risperidone oral solution. The method was linear, precise, specific, and accurate. A stability study of the 1 mg/mL risperidone oral solution was carried out to determine its expiry date. The shelf-life study was conducted for 24 months at room temperature, whereas the accelerated stability study was conducted with the product under the influence of humidity and temperature, with analysis over 3 months. The formulation fulfilled the quality specifications described in the Pharmacopeia. The shelf-life results after 24 months showed that the product maintained its quality parameters over this period, and in the accelerated studies there was no significant degradation (p > 0.05) of the product. Under the mentioned conditions, the expiry date was 2 years.

  15. Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.

    Science.gov (United States)

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2010-01-01

    How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe the fiber directions. However, the ODF is just one feature of the EAP. Compared with the ODF, the EAP carries the full information about the diffusion process that reflects the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods for estimating the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI needs many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, and robust analytical EAP reconstruction method that requires almost no assumptions about the data and does not need many samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data, and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.

  16. Analytical validation of a flow cytometric protocol for quantification of platelet microparticles in dogs.

    Science.gov (United States)

    Cremer, Signe E; Krogh, Anne K H; Hedström, Matilda E K; Christiansen, Liselotte B; Tarnow, Inge; Kristensen, Annemarie T

    2018-06-01

    Platelet microparticles (PMPs) are subcellular procoagulant vesicles released upon platelet activation. In people with clinical diseases, alterations in PMP concentrations have been extensively investigated, but few canine studies exist. This study aims to validate a canine flow cytometric protocol for PMP quantification and to assess the influence of calcium on PMP concentrations. Microparticles (MPs) were quantified in citrated whole blood (WB) and platelet-poor plasma (PPP) using flow cytometry. Anti-CD61 antibody and Annexin V (AnV) were used to detect platelets and phosphatidylserine, respectively. In 13 healthy dogs, CD61+/AnV- concentrations were analyzed with/without a calcium buffer. CD61+/AnV-, CD61+/AnV+, and CD61-/AnV+ MP quantification was validated in 10 healthy dogs. The coefficients of variation (CVs) for duplicate (intra-assay) and parallel (inter-assay) analyses and detection limits (DLs) were calculated. CD61+/AnV- concentrations were higher in calcium buffer, 841,800 MP/μL (526,000-1,666,200), than without, 474,200 MP/μL (278,800-997,500), P < .05. In WB, PMPs were above the DLs and demonstrated acceptable (<20%) intra-assay and inter-assay CVs in 9/10 dogs: 1.7% (0.5-8.9) and 9.0% (0.9-11.9), respectively, for CD61+/AnV-, and 2.4% (0.2-8.7) and 7.8% (0.0-12.8), respectively, for CD61+/AnV+. Acceptable CVs were not seen for the CD61-/AnV+ MPs. In PPP, quantification was challenged by high inter-assay CVs and overlapping DLs, and hemolysis and lipemia interfered with quantification in 5/10 dogs. Calcium induced higher in vitro PMP concentrations, likely due to platelet activation. PMP concentrations were reliably quantified in WB, indicating the potential for clinical applications. PPP analyses were unreliable due to high inter-assay CVs and DL overlap, and were unobtainable when hemolysis and lipemia interfered. © 2018 American Society for Veterinary Clinical Pathology.
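The duplicate (intra-assay) and parallel (inter-assay) CVs reported in validation studies like this one can be computed as in the following sketch: the intra-assay CV pools the within-pair variability of duplicates, and the inter-assay CV is the variability of run means across runs. The measurement values are invented for illustration:

```python
import statistics

# Duplicate measurements of one sample across 5 analytical runs (assumed values).
runs = [(101.0, 104.0), (98.5, 99.9), (102.2, 101.0), (97.8, 100.4), (103.1, 102.5)]

def intra_assay_cv(pairs):
    """Pool within-pair variance: for duplicates, var = d^2/2 per pair."""
    pooled_var = statistics.mean((a - b) ** 2 / 2 for a, b in pairs)
    grand_mean = statistics.mean(v for pair in pairs for v in pair)
    return 100 * pooled_var ** 0.5 / grand_mean

def inter_assay_cv(pairs):
    """SD of the per-run means relative to their overall mean."""
    means = [statistics.mean(p) for p in pairs]
    return 100 * statistics.stdev(means) / statistics.mean(means)

print(round(intra_assay_cv(runs), 2), round(inter_assay_cv(runs), 2))
```

Both values land in the low single-digit percent range for these synthetic numbers, well inside the <20% acceptance criterion the study applies.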

  17. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is given to the analytical model-based fault-detection and fault

  18. A CIRCULAR-CYLINDRICAL FLUX-ROPE ANALYTICAL MODEL FOR MAGNETIC CLOUDS

    Energy Technology Data Exchange (ETDEWEB)

    Nieves-Chinchilla, T. [Catholic University of America, Washington, DC (United States); Linton, M. G. [Space Science Division, Naval Research Laboratory, Washington, DC (United States); Hidalgo, M. A. [Dept. de Fisica, UAH, Alcala de Henares, Madrid (Spain); Vourlidas, A. [The Johns Hopkins University Applied Physics Laboratory, Laurel, MD (United States); Savani, N. P.; Szabo, A. [NASA Goddard Space Flight Center, Greenbelt, MD (United States); Farrugia, C.; Yu, W., E-mail: Teresa.Nieves@nasa.gov [Space Science Center and Department of Physics, University of New Hampshire, Durham, NH (United States)

    2016-05-20

    We present an analytical model to describe magnetic flux-rope topologies. When these structures are observed embedded in Interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature, they are called Magnetic Clouds (MCs). Our model extends the circular-cylindrical concept of Hidalgo et al. by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of MC geometrical information and orientation. The generalized model provides flexibility for implementation in 3D MHD simulations. Here, we evaluate its performance in the reconstruction of MCs in in situ observations. Four Earth-directed ICME events, observed by the Wind spacecraft, are used to validate the technique. The events are selected from the ICME Wind list with the magnetic obstacle boundaries chosen consistently with the magnetic field and plasma in situ observations and with a new parameter (EPP, the Electron Pitch angle distribution Parameter) which quantifies the bidirectionality of the plasma electrons. The goodness of the fit is evaluated with a single correlation parameter to enable comparative analysis of the events. In general, at first glance, the model fits the selected events very well. However, a detailed analysis of events with signatures of significant compression indicates the need to explore geometries other than the circular-cylindrical. An extension of our current modeling framework to account for such non-circular CMEs will be presented in a forthcoming publication.

  19. A CIRCULAR-CYLINDRICAL FLUX-ROPE ANALYTICAL MODEL FOR MAGNETIC CLOUDS

    International Nuclear Information System (INIS)

    Nieves-Chinchilla, T.; Linton, M. G.; Hidalgo, M. A.; Vourlidas, A.; Savani, N. P.; Szabo, A.; Farrugia, C.; Yu, W.

    2016-01-01

    We present an analytical model to describe magnetic flux-rope topologies. When these structures are observed embedded in Interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature, they are called Magnetic Clouds (MCs). Our model extends the circular-cylindrical concept of Hidalgo et al. by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of MC geometrical information and orientation. The generalized model provides flexibility for implementation in 3D MHD simulations. Here, we evaluate its performance in the reconstruction of MCs in in situ observations. Four Earth-directed ICME events, observed by the Wind spacecraft, are used to validate the technique. The events are selected from the ICME Wind list with the magnetic obstacle boundaries chosen consistently with the magnetic field and plasma in situ observations and with a new parameter (EPP, the Electron Pitch angle distribution Parameter) which quantifies the bidirectionality of the plasma electrons. The goodness of the fit is evaluated with a single correlation parameter to enable comparative analysis of the events. In general, at first glance, the model fits the selected events very well. However, a detailed analysis of events with signatures of significant compression indicates the need to explore geometries other than the circular-cylindrical. An extension of our current modeling framework to account for such non-circular CMEs will be presented in a forthcoming publication.

  20. An analytical model of a curved beam with a T shaped cross section

    Science.gov (United States)

    Hull, Andrew J.; Perez, Daniel; Cox, Donald L.

    2018-03-01

    This paper derives a comprehensive analytical dynamic model of a closed circular beam that has a T shaped cross section. The new model includes in-plane and out-of-plane vibrations derived using continuous media expressions which produces results that have a valid frequency range above those available from traditional lumped parameter models. The web is modeled using two-dimensional elasticity equations for in-plane motion and the classical flexural plate equation for out-of-plane motion. The flange is modeled using two sets of Donnell shell equations: one for the left side of the flange and one for the right side of the flange. The governing differential equations are solved with unknown wave propagation coefficients multiplied by spatial domain and time domain functions which are inserted into equilibrium and continuity equations at the intersection of the web and flange and into boundary conditions at the edges of the system resulting in 24 algebraic equations. These equations are solved to yield the wave propagation coefficients and this produces a solution to the displacement field in all three dimensions. An example problem is formulated and compared to results from finite element analysis.

  1. Analytical Model for Mean Flow and Fluxes of Momentum and Energy in Very Large Wind Farms

    Science.gov (United States)

    Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando

    2018-01-01

    As wind-turbine arrays continue to be installed and the array size continues to grow, there is an increasing need to represent very large wind-turbine arrays in numerical weather prediction models, for wind-farm optimization, and for environmental assessment. We propose a simple analytical model for boundary-layer flow in fully-developed wind-turbine arrays, based on the concept of sparsely-obstructed shear flows. In describing the vertical distribution of the mean wind speed and shear stress within wind farms, our model estimates the mean kinetic energy harvested from the atmospheric boundary layer, and determines the partitioning between the wind power captured by the wind turbines and that absorbed by the underlying land or water. A length scale based on the turbine geometry, spacing, and performance characteristics, is able to estimate the asymptotic limit for the fully-developed flow through wind-turbine arrays, and thereby determine if the wind-farm flow is fully developed for very large turbine arrays. Our model is validated using data collected in controlled wind-tunnel experiments, and its usefulness for predicting wind-farm performance and optimizing turbine-array spacing is described. Our model may also be useful for assessing the extent to which the extraction of wind power affects the land-atmosphere coupling or air-water exchange of momentum, with implications for the transport of heat, moisture, trace gases such as carbon dioxide, methane, and nitrous oxide, and ecologically important oxygen.

  2. Ethics, Big Data, and Analytics: A Model for Application.

    OpenAIRE

    Willis, James E, III

    2013-01-01

    The use of big data and analytics to predict student success presents unique ethical questions for higher education administrators relating to the nature of knowledge; in education, "to know" entails an obligation to act on behalf of the student. The Potter Box framework can help administrators address these questions and provide a framework for action.

  3. An analytical model for soil-atmosphere feedback

    NARCIS (Netherlands)

    Schaefli, B.; Van der Ent, R.J.; Woods, R.; Savenije, H.H.G.

    2012-01-01

    Soil-atmosphere feedback is a key for understanding the hydrological cycle and the direction of potential system changes. This paper presents an analytical framework to study the interplay between soil and atmospheric moisture, using as input only the boundary conditions at the upstream end of

  4. Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection

    Science.gov (United States)

    Cross, Robert W.; Boisen, Matthew L.; Millett, Molly M.; Nelson, Diana S.; Oottamasathien, Darin; Hartnett, Jessica N.; Jones, Abigal B.; Goba, Augustine; Momoh, Mambu; Fullah, Mohamed; Bornholdt, Zachary A.; Fusco, Marnie L.; Abelson, Dafna M.; Oda, Shunichiro; Brown, Bethany L.; Pham, Ha; Rowland, Megan M.; Agans, Krystle N.; Geisbert, Joan B.; Heinrich, Megan L.; Kulakosky, Peter C.; Shaffer, Jeffrey G.; Schieffelin, John S.; Kargbo, Brima; Gbetuwa, Momoh; Gevao, Sahr M.; Wilson, Russell B.; Saphire, Erica Ollmann; Pitts, Kelly R.; Khan, Sheik Humarr; Grant, Donald S.; Geisbert, Thomas W.; Branco, Luis M.; Garry, Robert F.

    2016-01-01

    Background. Ebola virus disease (EVD) is a severe viral illness caused by Ebola virus (EBOV). The 2013–2016 EVD outbreak in West Africa is the largest recorded, with >11 000 deaths. Development of the ReEBOV Antigen Rapid Test (ReEBOV RDT) was expedited to provide a point-of-care test for suspected EVD cases. Methods. Recombinant EBOV viral protein 40 antigen was used to derive polyclonal antibodies for RDT and enzyme-linked immunosorbent assay development. ReEBOV RDT limits of detection (LOD), specificity, and interference were analytically validated on the basis of Food and Drug Administration (FDA) guidance. Results. The ReEBOV RDT specificity estimate was 95% for donor serum panels and 97% for donor whole-blood specimens. The RDT demonstrated sensitivity to 3 species of Ebolavirus (Zaire ebolavirus, Sudan ebolavirus, and Bundibugyo ebolavirus) associated with human disease, with no cross-reactivity by pathogens associated with non-EBOV febrile illness, including malaria parasites. Interference testing exhibited no reactivity by medications in common use. The LOD for antigen was 4.7 ng/test in serum and 9.4 ng/test in whole blood. Quantitative reverse transcription–polymerase chain reaction testing of nonhuman primate samples determined the range to be equivalent to 3.0 × 10^5–9.0 × 10^8 genomes/mL. Conclusions. The analytical validation presented here contributed to the ReEBOV RDT being the first antigen-based assay to receive FDA and World Health Organization emergency use authorization for this EVD outbreak, in February 2015. PMID:27587634

  5. Analytical method development and validation for quantification of uranium by Fourier Transform Infrared Spectroscopy (FTIR) for routine quality control analysis

    International Nuclear Information System (INIS)

    Pereira, Elaine; Silva, Ieda de S.; Gomide, Ricardo G.; Pires, Maria Aparecida F.

    2015-01-01

    This work presents a low-cost, simple, and new methodology for the direct determination of uranium in different matrices: organic phase (UO2(NO3)2·2TBP - uranyl nitrate complex) and aqueous phase (UO2(NO3)2 - NTU - uranyl nitrate), based on Fourier Transform Infrared spectroscopy (FTIR) using the KBr pellet technique. Analytical validation is essential to define whether a developed methodology is fully suited to the objectives for which it is intended, and is considered one of the main instruments of quality control. The parameters used in the validation process were: selectivity, linearity, limits of detection (LD) and quantitation (LQ), precision (repeatability and intermediate precision), accuracy, and robustness. The method for uranium in the organic phase (UO2(NO3)2·2TBP in hexane, embedded in KBr) was linear (r = 0.9989) over the range of 1.0 g/L to 14.3 g/L, with LD of 92.1 mg/L and LQ of 113.1 mg/L, precise (RSD < 1.6% and p-value < 0.05), and accurate (recovery of 100.1%-102.9%). The method for uranium in the aqueous phase (UO2(NO3)2 embedded in KBr) was linear (r = 0.9964) over the range of 5.4 g/L to 51.2 g/L, with LD of 835 mg/L and LQ of 958 mg/L, precise (RSD < 1.0% and p-value < 0.05), and accurate (recovery of 99.1%-102.0%). The FTIR method is robust with respect to most of the variables analyzed, as the differences between results obtained under nominal and modified conditions were lower than the critical value for all analytical parameters studied. Some process samples were analyzed by FTIR and compared with gravimetric and X-ray fluorescence (XRF) analyses, showing similar results for all three methods. The statistical tests (Student's t and Fisher's) showed that the techniques are equivalent. (author)
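Linearity, LD, and LQ figures like those above are typically derived from a least-squares calibration line; a common ICH-style choice is LOD = 3.3·s/S and LOQ = 10·s/S, with s the residual standard deviation and S the slope. A sketch with synthetic calibration points (not the paper's data, whose LD/LQ were obtained from its own validation protocol):

```python
import math

conc = [1.0, 3.0, 5.0, 8.0, 11.0, 14.3]            # standards, g/L
resp = [0.101, 0.297, 0.502, 0.808, 1.095, 1.441]  # instrument response (assumed)

n = len(conc)
mx, my = sum(conc) / n, sum(resp) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, resp))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

# Residual standard deviation about the regression line (n - 2 dof).
resid = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
s_res = math.sqrt(sum(r * r for r in resid) / (n - 2))

lod = 3.3 * s_res / slope                           # detection limit
loq = 10.0 * s_res / slope                          # quantitation limit
r = slope * math.sqrt(sum((x - mx) ** 2 for x in conc)
                      / sum((y - my) ** 2 for y in resp))
print(f"slope={slope:.4f}, r={r:.4f}, LOD={lod:.3f} g/L, LOQ={loq:.3f} g/L")
```

With this definition the LOQ/LOD ratio is fixed at 10/3.3, so a method whose reported LD and LQ sit closer together (as in this record) necessarily used a different estimation route, such as signal-to-noise or dedicated low-level replicates.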

  6. Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection.

    Science.gov (United States)

    Cross, Robert W; Boisen, Matthew L; Millett, Molly M; Nelson, Diana S; Oottamasathien, Darin; Hartnett, Jessica N; Jones, Abigal B; Goba, Augustine; Momoh, Mambu; Fullah, Mohamed; Bornholdt, Zachary A; Fusco, Marnie L; Abelson, Dafna M; Oda, Shunichiro; Brown, Bethany L; Pham, Ha; Rowland, Megan M; Agans, Krystle N; Geisbert, Joan B; Heinrich, Megan L; Kulakosky, Peter C; Shaffer, Jeffrey G; Schieffelin, John S; Kargbo, Brima; Gbetuwa, Momoh; Gevao, Sahr M; Wilson, Russell B; Saphire, Erica Ollmann; Pitts, Kelly R; Khan, Sheik Humarr; Grant, Donald S; Geisbert, Thomas W; Branco, Luis M; Garry, Robert F

    2016-10-15

    Ebola virus disease (EVD) is a severe viral illness caused by Ebola virus (EBOV). The 2013-2016 EVD outbreak in West Africa is the largest recorded, with >11 000 deaths. Development of the ReEBOV Antigen Rapid Test (ReEBOV RDT) was expedited to provide a point-of-care test for suspected EVD cases.  Recombinant EBOV viral protein 40 antigen was used to derive polyclonal antibodies for RDT and enzyme-linked immunosorbent assay development. ReEBOV RDT limits of detection (LOD), specificity, and interference were analytically validated on the basis of Food and Drug Administration (FDA) guidance.  The ReEBOV RDT specificity estimate was 95% for donor serum panels and 97% for donor whole-blood specimens. The RDT demonstrated sensitivity to 3 species of Ebolavirus (Zaire ebolavirus, Sudan ebolavirus, and Bundibugyo ebolavirus) associated with human disease, with no cross-reactivity by pathogens associated with non-EBOV febrile illness, including malaria parasites. Interference testing exhibited no reactivity by medications in common use. The LOD for antigen was 4.7 ng/test in serum and 9.4 ng/test in whole blood. Quantitative reverse transcription-polymerase chain reaction testing of nonhuman primate samples determined the range to be equivalent to 3.0 × 10^5-9.0 × 10^8 genomes/mL.  The analytical validation presented here contributed to the ReEBOV RDT being the first antigen-based assay to receive FDA and World Health Organization emergency use authorization for this EVD outbreak, in February 2015. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  7. Analytical validation of a reference laboratory ELISA for the detection of feline leukemia virus p27 antigen.

    Science.gov (United States)

    Buch, Jesse S; Clark, Genevieve H; Cahill, Roberta; Thatcher, Brendon; Smith, Peter; Chandrashekar, Ramaswamy; Leutenegger, Christian M; O'Connor, Thomas P; Beall, Melissa J

    2017-09-01

    Feline leukemia virus (FeLV) is an oncogenic retrovirus of cats. Immunoassays for the p27 core protein of FeLV aid in the detection of FeLV infections. Commercial microtiter-plate ELISAs have rapid protocols and visual result interpretation, limiting their usefulness in high-throughput situations. The purpose of our study was to validate the PetChek FeLV 15 ELISA, which is designed for the reference laboratory, and incorporates sequential, orthogonal screening and confirmatory protocols. A cutoff for the screening assay was established with 100% accuracy using 309 feline samples (244 negative, 65 positive) defined by the combined results of FeLV PCR and an independent reference p27 antigen ELISA. Precision of the screening assay was measured using a panel of 3 samples (negative, low-positive, and high-positive). The intra-assay coefficient of variation (CV) was 3.9-7.9%; the inter-assay CV was 6.0-8.6%. For the confirmatory assay, the intra-assay CV was 3.0-4.7%, and the inter-assay CV was 7.4-9.7%. The analytical sensitivity for p27 antigen was 3.7 ng/mL for inactivated whole FeLV and 1.2 ng/mL for purified recombinant FeLV p27. Analytical specificity was demonstrated based on the absence of cross-reactivity to related retroviruses. No interference was observed for samples containing added bilirubin, hemoglobin, or lipids. Based on these results, the new high-throughput design of the PetChek FeLV 15 ELISA makes it suitable for use in reference laboratory settings and maintains overall analytical performance.
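    The intra- and inter-assay precision figures quoted above are coefficients of variation (CV = sample standard deviation / mean × 100%). A minimal sketch of how such a precision panel is summarized; the replicate readings below are hypothetical, not data from the study:

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation: sample standard deviation as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate OD readings for a low-positive control on one plate (intra-assay)
plate_replicates = [0.52, 0.49, 0.51, 0.50, 0.53]
intra_cv = cv_percent(plate_replicates)

# Hypothetical per-run means across several days (inter-assay)
run_means = [0.51, 0.47, 0.54, 0.50]
inter_cv = cv_percent(run_means)
```

    A validation report would state both figures per control level, as the abstract does for its negative, low-positive, and high-positive panel.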

  8. Analytical Validation of a Portable Mass Spectrometer Featuring Interchangeable, Ambient Ionization Sources for High Throughput Forensic Evidence Screening.

    Science.gov (United States)

    Lawton, Zachary E; Traub, Angelica; Fatigante, William L; Mancias, Jose; O'Leary, Adam E; Hall, Seth E; Wieland, Jamie R; Oberacher, Herbert; Gizzi, Michael C; Mulligan, Christopher C

    2017-06-01

    Forensic evidentiary backlogs are indicative of the growing need for cost-effective, high-throughput instrumental methods. One such emerging technology that shows high promise in meeting this demand while also allowing on-site forensic investigation is portable mass spectrometric (MS) instrumentation, particularly that which enables the coupling to ambient ionization techniques. While the benefits of rapid, on-site screening of contraband can be anticipated, the inherent legal implications of field-collected data necessitates that the analytical performance of technology employed be commensurate with accepted techniques. To this end, comprehensive analytical validation studies are required before broad incorporation by forensic practitioners can be considered, and are the focus of this work. Pertinent performance characteristics such as throughput, selectivity, accuracy/precision, method robustness, and ruggedness have been investigated. Reliability in the form of false positive/negative response rates is also assessed, examining the effect of variables such as user training and experience level. To provide flexibility toward broad chemical evidence analysis, a suite of rapidly-interchangeable ion sources has been developed and characterized through the analysis of common illicit chemicals and emerging threats like substituted phenethylamines.

  10. Validation of an analytical method for the quality control and stability study of 0.025 % ketotifen eye drops

    International Nuclear Information System (INIS)

    Troche Concepcion, Yenilen; Romero Diaz, Jacqueline Aylema; Garcia Penna, Caridad M

    2010-01-01

    Ketotifen eye drops are prescribed to relieve the signs and symptoms of allergic conjunctivitis owing to their potent H1-antihistaminic effect; the drug also shows some ability to inhibit the release of histamine and other mediators in cases of mastocytosis. The aim of this paper was to develop and validate a high-performance liquid chromatography method for the quality control and stability studies of 0.025 % ketotifen eye drops. The method was based on separation of the active principle on a LiChrosorb RP-18 column (5 μm, 250 x 4 mm) with UV detection at 296 nm, using a degassed mobile phase of methanol:phosphate buffer (75:25; pH 8.5) with 1 mL of isopropanol added per 1000 mL of the mixture, at a flow rate of 1.2 mL/min. The analytical method was linear, precise, specific and accurate over the concentrations studied.

  11. The role of validated analytical methods in JECFA drug assessments and evaluation for recommending MRLs.

    Science.gov (United States)

    Boison, Joe O

    2016-05-01

    The Joint Food and Agriculture Organization and World Health Organization (FAO/WHO) Expert Committee on Food Additives (JECFA) is one of three Codex committees tasked with applying risk analysis and relying on independent scientific advice provided by expert bodies organized by FAO/WHO when developing standards. While not officially part of the Codex Alimentarius Commission structure, JECFA provides independent scientific advice to the Commission and its specialist committees such as the Codex Committee on Residues of Veterinary Drugs in Foods (CCRVDF) in setting maximum residue limits (MRLs) for veterinary drugs. Codex methods of analysis (Types I, II, III, and IV) are defined in the Codex Procedural Manual, as are criteria to be used for selecting methods of analysis. However, if a method is to be used under a single laboratory condition to support regulatory work, it must be validated according to an internationally recognized protocol and the use of the method must be embedded in a quality assurance system in compliance with ISO/IEC 17025:2005. This paper examines the attributes of the methods used to generate residue depletion data for drug registration and/or licensing and for supporting regulatory enforcement initiatives that experts consider to be useful and appropriate in their assessment of methods of analysis. Copyright © 2016 Her Majesty the Queen in Right of Canada. Drug Testing and Analysis © 2016 John Wiley & Sons, Ltd.

  12. Suspended sediment assessment by combining sound attenuation and backscatter measurements - analytical method and experimental validation

    Science.gov (United States)

    Guerrero, Massimo; Di Federico, Vittorio

    2018-03-01

    The use of acoustic techniques has become common for estimating suspended sediment in water environments. An emitted beam propagates into water, producing backscatter and attenuation that depend on the concentration and size distribution of the scattering particles. Unfortunately, the actual particle size distribution (PSD) may largely affect the accuracy of concentration quantification through the unknown coefficients of backscattering strength, ks2, and normalized attenuation, ζs. This issue was partially solved by applying the multi-frequency approach. Despite this possibility, a relevant scientific and practical question remains regarding the possibility of using acoustic methods to investigate poorly sorted sediment in the spectrum ranging from clay to fine sand. The aim of this study is to investigate the possibility of combining the measurement of sound attenuation and backscatter to determine ζs for the suspended particles and the corresponding concentration. The proposed method is only moderately dependent on the actual PSD, thus relaxing the need for frequent calibrations to account for changes in the ks2 and ζs coefficients. Laboratory tests were conducted under controlled conditions to validate this measurement technique. With respect to existing approaches, the developed method more accurately estimates the concentration of suspended particles ranging from clay to fine sand and, at the same time, gives an indication of their actual PSD.

  13. Validation of analytical methods for the quality control of Naproxen suppositories

    International Nuclear Information System (INIS)

    Rodriguez Hernandez, Yaslenis; Suarez Perez, Yania; Garcia Pulpeiro, Oscar; Hernandez Contreras, Orestes Yuniel

    2011-01-01

    The analysis methods to be used for the quality control of the future Cuban-made naproxen suppositories for adults and children were developed for the first time in this paper. A method based on direct ultraviolet spectrophotometry was put forward, which proved to be specific, linear, accurate and precise for the quality control of naproxen suppositories, taking advantage of the chromophore groups present in the molecule. Likewise, the direct semi-aqueous acid-base volumetric method used for the quality control of the naproxen raw material was adapted to the quality control of the suppositories. The validation process demonstrated the adequate specificity of this method with respect to the formulation components, as well as its linearity, accuracy and precision in the 1-3 mg/mL range. The final results of the two methods were compared and no statistically significant differences were found among the replicates at each dose; therefore, both methods may be used in the quality control of naproxen suppositories.
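    Linearity over a range such as 1-3 mg/mL is conventionally assessed by least-squares regression of instrument response on concentration and inspection of the correlation coefficient. A generic sketch of that calculation; the calibration points below are invented for illustration and are not the study's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r),
    where r is Pearson's correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical calibration points across a 1-3 mg/mL range
conc = [1.0, 1.5, 2.0, 2.5, 3.0]            # mg/mL
resp = [0.210, 0.312, 0.405, 0.512, 0.608]  # absorbance units
a, b, r = linear_fit(conc, resp)
```

    Validation guidelines typically require r close to unity (often >= 0.999) before the linearity criterion is considered met.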

  14. Use of reference materials for validating analytical methods. Applied to the determination of As, Co, Na, Hg, Se and Fe using neutron activation analysis

    International Nuclear Information System (INIS)

    Munoz, L; Andonie, O; Kohnenkamp, I

    2000-01-01

    The main purpose of an analytical laboratory is to provide reliable information on the nature and composition of the materials submitted for analysis. This purpose can only be attained if analytical methodologies that have the attributes of accuracy, precision, specificity and sensitivity, among others, are used. The process by which these attributes are evaluated is called validation of the analytical method. The Chilean Nuclear Energy Commission's Neutron Activation Analysis Laboratory is applying a quality assurance program to ensure the quality of its analytical results, which also aims to attain accreditation for some of its measurements. Validation of the analytical methodologies used is an essential part of applying this program. There are many forms of validation, from comparison with reference techniques to participation in inter-comparison rounds. Certified reference materials were used in this work in order to validate the application of neutron activation analysis in determining As, Co, Na, Hg, Se and Fe in shellfish samples. The use of reference materials was chosen because it is a simple option that easily detects sources of systematic error. Neutron activation analysis is an instrumental analytical method that requires no chemical treatment and is based on processes taking place in the nuclei of atoms, making matrix effects unimportant, so that different biological reference materials can be used. The following certified reference materials were used for validating the method: BCR human hair 397, NRCC dogfish muscle DORM-2, NRCC dogfish liver DOLT-2, NIST oyster tissue 1566, NIES mussel 6 and BCR tuna fish 464. The reference materials were analyzed using the procedure developed for the shellfish samples and the above-mentioned elements were determined. With the results obtained, the parameters of accuracy, precision, detection limit, quantification limit and uncertainty associated with the method were determined for each

  15. Analytic treatment of nuclear spin-lattice relaxation for diffusion in a cone model

    Science.gov (United States)

    Sitnitsky, A. E.

    2011-12-01

    We consider the nuclear spin-lattice relaxation rate resulting from a diffusion equation for rotational wobbling in a cone. We show that the widespread view that there are no analytical expressions for the correlation functions of the wobbling-in-a-cone model is invalid, and prove that nuclear spin-lattice relaxation in this model is exactly tractable and amenable to a full analytical description. The relaxation mechanism is assumed to be the dipole-dipole interaction of nuclear spins and is treated within the framework of the standard Bloembergen-Purcell-Pound (BPP)-Solomon scheme. We consider the general case of arbitrary orientation of the cone axis relative to the magnetic field. The BPP-Solomon scheme is shown to remain valid for systems in which the distribution of cone axes depends only on the tilt relative to the magnetic field but is otherwise isotropic. We consider the case of random isotropic orientation of the cone axes relative to the magnetic field, as occurs in powders. We also consider the cases of predominant orientation along or opposite to the magnetic field, and of predominant orientation transverse to the magnetic field, which may be relevant for, e.g., liquid crystals. In addition, we treat in detail the model case of the cone axis directed along the magnetic field; the latter provides a direct comparison of the limiting case of our formulas with the textbook formulas for free isotropic rotational diffusion. The dependence of the spin-lattice relaxation rate on the cone half-width yields results similar to those predicted by the model-free approach.

  16. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require the use of a validated model to practice independently. Validation was done to adapt the model and to assess whether it is understood and can be implemented by NQPNs and mentors employed in community health care services.

  17. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes the validation of OPNET models of key devices for the next generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into the scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  18. Modelling flows in a supply chain with analytical models: Case of a chemical industry

    Science.gov (United States)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

    This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, that takes into account the various constraints in this supply chain in order to resolve planning problems and support better decision-making. The objective of this model is to determine the optimal quantities of the different products to route to and from the various entities in the supply chain studied.
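    The abstract does not reproduce the linear program itself. As an illustration of the same structure (routing product quantities from production sites to destinations at minimum cost under supply and demand constraints), here is a toy integer transportation problem with hypothetical site names, capacities, and unit costs, solved by exhaustive search, which is tractable at this tiny size:

```python
from itertools import product

# Hypothetical data: one product, two production sites, two customers (integer tonnes)
supply = {"site_A": 4, "site_B": 3}          # available tonnes per site
demand = {"cust_1": 4, "cust_2": 3}          # required tonnes per customer
cost = {("site_A", "cust_1"): 2, ("site_A", "cust_2"): 5,
        ("site_B", "cust_1"): 4, ("site_B", "cust_2"): 1}

def solve():
    """Brute-force the small integer transportation problem: minimize total cost
    subject to supply limits and exact demand satisfaction."""
    best_cost, best_plan = None, None
    routes = list(cost)
    for qty in product(range(8), repeat=len(routes)):
        plan = dict(zip(routes, qty))
        # supply constraint: each site ships no more than its capacity
        if any(sum(q for (s, _), q in plan.items() if s == site) > cap
               for site, cap in supply.items()):
            continue
        # demand constraint: each customer receives exactly its requirement
        if any(sum(q for (_, c), q in plan.items() if c == cust) != need
               for cust, need in demand.items()):
            continue
        total = sum(cost[r] * q for r, q in plan.items())
        if best_cost is None or total < best_cost:
            best_cost, best_plan = total, plan
    return best_cost, best_plan

best_cost, best_plan = solve()
```

    A real instance of the paper's model would replace enumeration with an LP/MIP solver, but the constraint structure is the same.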

  19. Cost model validation: a technical and cultural approach

    Science.gov (United States)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  20. Analytical Model of Water Flow in Coal with Active Matrix

    Science.gov (United States)

    Siemek, Jakub; Stopa, Jerzy

    2014-12-01

    This paper presents a new analytical model of gas-water flow in coal seams in one dimension, with emphasis on the interactions between water flowing in cleats and the coal matrix. Coal, as a flowing system, can be viewed as a solid organic material consisting of two flow subsystems: a microporous matrix and a system of interconnected macropores and fractures. Most of the gas is accumulated in the microporous matrix, where the primary flow mechanism is diffusion. Fractures and cleats existing in coal play an important role as a transportation system for macro-scale flow of water and gas governed by Darcy's law. The coal matrix can imbibe water under capillary forces, leading to exchange of mass between fractures and coal matrix. In this paper a new partial differential equation for water saturation in the fractures has been formulated, respecting the mass exchange between the coal matrix and the fractures. An exact analytical solution has been obtained using the method of characteristics. The final solution has a very simple form that may be useful for practical engineering calculations. It was observed that the rate of exchange of mass between the fractures and the coal matrix is governed by an expression analogous to Newton's law of cooling known from the theory of heat exchange, but in the present case the mass transfer coefficient depends not only on coal and fluid properties but also on time and position. The constant term of the mass transfer coefficient depends on the relation between the microporosity and macroporosity of coal, capillary forces, and the microporous structure of the coal matrix. This term can be expressed theoretically or obtained experimentally.
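    The method of characteristics works here because, along a characteristic trajectory dx/dt = v, an advection equation with a linear exchange term reduces to an ordinary differential equation with an exponential solution. A constant-coefficient analogue (not the paper's actual equation; the velocity v, exchange rate k, initial profile, and equilibrium saturation below are hypothetical) can be written in closed form:

```python
import math

def saturation(x, t, v=1.0, k=0.5, s0=lambda x: 1.0 if x < 0 else 0.0, s_eq=0.2):
    """Closed-form solution of s_t + v*s_x = -k*(s - s_eq) by characteristics:
    along x = x0 + v*t the PDE becomes ds/dt = -k*(s - s_eq), so
    s(x, t) = s_eq + (s0(x - v*t) - s_eq) * exp(-k*t)."""
    return s_eq + (s0(x - v * t) - s_eq) * math.exp(-k * t)
```

    The profile is advected at speed v while relaxing exponentially toward the equilibrium saturation, mirroring the Newton-cooling-like exchange term described in the abstract.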

  1. Validation of Embedded System Verification Models

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    The result of a model-based requirements verification shows that the model of a system satisfies (or not) formalised system requirements. The verification result is correct only if the model represents the system adequately. No matter what modelling technique we use, what precedes the model

  2. IVIM: modeling, experimental validation and application to animal models

    International Nuclear Information System (INIS)

    Fournet, Gabrielle

    2016-01-01

    This PhD thesis is centered on the study of the IVIM ('Intravoxel Incoherent Motion') MRI sequence. This sequence allows for the study of the blood microvasculature, such as the capillaries, arterioles and venules. To be sensitive only to moving groups of spins, diffusion gradients are added before and after the 180° pulse of a spin-echo (SE) sequence. The signal component corresponding to spins diffusing in the tissue can be separated from the one related to spins travelling in the blood vessels, which is called the IVIM signal. These two components are weighted by f_IVIM, which represents the volume fraction of blood inside the tissue. The IVIM signal is usually modelled by a mono-exponential (ME) function and characterized by a pseudo-diffusion coefficient, D*. We propose instead a bi-exponential IVIM model consisting of a slow pool, characterized by F_slow and D*_slow, corresponding to the capillaries as in the ME model, and a fast pool, characterized by F_fast and D*_fast, related to larger vessels such as medium-size arterioles and venules. This model was validated experimentally, and more information was retrieved by comparing the experimental signals to a dictionary of simulated IVIM signals. The influence of the pulse sequence, the repetition time and the diffusion-encoding time was also studied. Finally, the IVIM sequence was applied to the study of an animal model of Alzheimer's disease. (author)

  3. Validation of a new analytical procedure for determination of residual solvents in [18F]FDG by gas chromatography

    International Nuclear Information System (INIS)

    Costa, Flávia M.; Costa, Cassiano L.S.; Silva, Juliana B.; Ferreira, Soraya M.Z.M.D.

    2017-01-01

    Fludeoxyglucose F 18 ([18F]FDG) is the most widely used radiopharmaceutical for positron emission tomography, especially in oncology. Organic solvents such as ether, ethanol and acetonitrile may be used in the synthesis of [18F]FDG; however, they might not be completely removed during the purification steps. The determination of residual solvents in [18F]FDG is required by the European Pharmacopoeia (EP) and the United States Pharmacopeia (USP) monographs. While the procedure described in the EP is quite general, the one described in the USP requires a long run time (about 13 minutes). In this work a simple and fast (4-minute) analytical procedure was developed and validated for the determination of residual solvents in [18F]FDG. Analyses were carried out in a Perkin Elmer gas chromatograph equipped with a flame ionization detector. The separation was obtained on a 0.53 mm × 30 m fused-silica column. Validation included the evaluation of various parameters: specificity, linearity and range, limits of detection and quantitation, precision (repeatability and intermediate precision), accuracy, and robustness. Results were within acceptable limits, indicating that the developed procedure is suitable for its intended application. Considering the short half-life of fluorine-18 (109.7 minutes), this new method could be a valuable alternative for the routine quality control of [18F]FDG. (author)

  5. Comparison and validation of dynamic characteristic analytical method for tubular heat exchanger

    International Nuclear Information System (INIS)

    Huang Qing; Xu Dinggeng; Chen Meng; Shen Rui

    2013-01-01

    In this study, the natural frequencies of Normal Residual Heat Removal heat exchangers are evaluated based on beam and shell-beam finite element models. The corresponding results are compared and some discrepancies are observed. These discrepancies are analyzed in terms of the analysis of a cylindrical shell, and the unreasonable treatment of boundary conditions is accordingly pointed out. The experimental natural frequencies of the heat exchangers used in the Qinshan Phase I Nuclear Power Plant are compared with the computational results from the shell-beam models for the corresponding heat exchangers of the C-2 program. The experimental and numerical results agree quite well, which implies that the shell-beam finite element simplification is applicable to these heat exchangers. The results indicate that the procedures introduced in this article apply to the dynamic analysis of other similar heat exchangers. (authors)

  6. Analytical one-dimensional frequency response and stability model for PWR nuclear power plants

    International Nuclear Information System (INIS)

    Hoeld, A.

    1975-01-01

    A dynamic model for PWR nuclear power plants is presented. The plant is assumed to consist of a one-dimensional single-channel core, a counterflow once-through steam generator (represented by two nodes corresponding to the non-boiling and boiling regions) and the necessary connecting coolant lines. The model describes analytically the frequency response behaviour of important parameters of such a plant with respect to perturbations in reactivity, subcooling or mass flow (at the entrances to the reactor core and/or the secondary steam generator side), and to perturbations in steam load or system pressure (on the secondary side of the steam generator). From the corresponding open-loop considerations, conclusions about the degree of stability of the underlying system can then be drawn by applying the Nyquist criterion. Based on this theoretical model, a computer code named ADYPMO has been established. From the knowledge of the frequency response behaviour of such a system, the corresponding transient behaviour with respect to a stepwise or any other perturbation signal can also be calculated by applying an appropriate retransformation method, e.g. the digital code FRETI. To demonstrate this procedure, a transient experimental curve measured during the pre-operational test period at the PWR nuclear power plant KKS Stade was recalculated using the combination ADYPMO-FRETI. Good agreement between theoretical calculations and experimental results gives insight into the validity and efficiency of the underlying theoretical model and the applied retransformation method. (Auth.)

  7. Analytical Solution of Interface Effect on the Strength of Combined Model Composed of Different Geologic Bodies

    Directory of Open Access Journals (Sweden)

    Zeng-hui Zhao

    2014-01-01

    According to the special combined structure of surrounding rock in the western mining area of China, a micromechanical model with variable parameters containing a contact interface was first proposed. Then, the stresses derived in the coal and rock near the interface were analyzed on the basis of the harmonized strain relation, and analytical solutions for the stress states near the interface were drawn up. The triaxial compressive strength of coal and rock was further determined for the case where the contact interface is horizontal. Moreover, the effects of stiffness ratio, interface angle, and stress level on the strength of the two bodies near the contact area were expounded in detail. Results indicate that additional stresses, which have a significant effect on the strength of the combined model, arise from the adhesive effect of the contact interface and the lithological differences between the geologic bodies located on its two sides. The interface effect on the strength of the combined body is most associated with the stiffness, interface angle, and stress level. These conclusions are also basically valid for the three-body model and even for the multibody model, and lay an important theoretical foundation for guiding the stability study of soft strata composed of different geologic bodies.

  8. A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.

    Science.gov (United States)

    Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher

    2017-08-01

    The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
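    The Markov cohort simulations these programs implement all reduce to the same loop: propagate a state-occupancy vector through a transition-probability matrix and accumulate discounted costs and effects each cycle. A minimal sketch in Python; the three-state model, transition probabilities, costs, and utilities below are invented for illustration:

```python
def run_cohort(P, state, cost, utility, cycles, discount=0.03):
    """Markov cohort simulation: returns (total cost, total QALYs) per person,
    discounting each cycle's payoffs at the given annual rate."""
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + discount) ** t
        total_cost += df * sum(s * c for s, c in zip(state, cost))
        total_qaly += df * sum(s * u for s, u in zip(state, utility))
        # advance the cohort one cycle: state' = state @ P
        state = [sum(state[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return total_cost, total_qaly

# Three hypothetical states: well, sick, dead (absorbing); annual cycles
P = [[0.9, 0.08, 0.02],
     [0.0, 0.85, 0.15],
     [0.0, 0.0, 1.0]]
state = [1.0, 0.0, 0.0]                    # whole cohort starts in "well"
cost, utility = [100, 2000, 0], [1.0, 0.6, 0.0]
c, q = run_cohort(P, state, cost, utility, cycles=10)
```

    Each program compared in the paper expresses this loop differently (spreadsheet formulas, tree nodes, or matrix code), which is what drives the transparency and speed trade-offs discussed above.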

  9. Analytical model and stability analysis of the leading edge spar of a passively morphing ornithopter wing.

    Science.gov (United States)

    Wissa, Aimy; Calogero, Joseph; Wereley, Norman; Hubbard, James E; Frecker, Mary

    2015-10-26

    This paper presents the stability analysis of the leading edge spar of a flapping-wing unmanned air vehicle with a compliant spine inserted in it. The compliant spine is a mechanism designed to be flexible during the upstroke and stiff during the downstroke. Inserting a variable-stiffness mechanism into the leading edge spar affects its structural stability. The model for the spar-spine system was formulated in terms of the well-known Mathieu's equation, in which the compliant spine was modeled as a torsional spring with a sinusoidal stiffness function. Experimental data were used to validate the model, and results show agreement within 11%. The structural stability of the leading edge spar-spine system was determined analytically and graphically using a phase plane plot and Strutt diagrams. Lastly, a torsional viscous damper was added to the leading edge spar-spine model to investigate the effect of damping on stability. Results show that for the undamped case the leading edge spar-spine response was stable and bounded; however, areas of instability appear for a range of spine upstroke and downstroke stiffnesses. Results also show that there exists a damping ratio between 0.2 and 0.5 for which the leading edge spar-spine system is stable for all values of spine upstroke and downstroke stiffnesses.
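    The Strutt-diagram analysis mentioned above can be reproduced numerically via Floquet theory: integrate Mathieu's equation y'' + (a − 2q cos 2t) y = 0 over one period π from two independent initial conditions and test whether the trace of the resulting monodromy matrix has magnitude below 2. A generic sketch using the standard Mathieu parameters a and q (not the paper's spar-spine stiffness values):

```python
import math

def mathieu_monodromy_trace(a, q, steps=2000):
    """RK4-integrate y'' + (a - 2q*cos(2t))*y = 0 over one period [0, pi]
    and return the trace of the monodromy matrix."""
    h = math.pi / steps

    def deriv(t, y, v):
        return v, -(a - 2.0 * q * math.cos(2.0 * t)) * y

    def integrate(y, v):
        t = 0.0
        for _ in range(steps):
            k1y, k1v = deriv(t, y, v)
            k2y, k2v = deriv(t + h/2, y + h/2*k1y, v + h/2*k1v)
            k3y, k3v = deriv(t + h/2, y + h/2*k2y, v + h/2*k2v)
            k4y, k4v = deriv(t + h, y + h*k3y, v + h*k3v)
            y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
            t += h
        return y, v

    y1, _ = integrate(1.0, 0.0)   # first column of the monodromy matrix
    _, v2 = integrate(0.0, 1.0)   # second column
    return y1 + v2

def is_stable(a, q):
    """Floquet stability criterion for the undamped Hill/Mathieu equation."""
    return abs(mathieu_monodromy_trace(a, q)) < 2.0
```

    For q = 0 the trace reduces to 2·cos(π·√a), which gives a convenient sanity check; sweeping (a, q) over a grid with is_stable reproduces the familiar Strutt-diagram tongues of instability.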

  10. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    Science.gov (United States)

    Trombetti, Tomaso

    density of the table input and output were estimated using Bartlett's spectral estimation method. The experimentally estimated table acceleration transfer functions obtained for different working conditions are correlated with their analytical counterparts. As a result of this comprehensive correlation study, a thorough understanding of the shaking table dynamics and its sensitivities to control and payload parameters is obtained. Moreover, the correlation study leads to a calibrated analytical model of the shaking table with high predictive ability. It is concluded that, in its present condition, the Rice shaking table is able to reproduce, with a high degree of accuracy, model earthquake acceleration time histories in the frequency bandwidth from 0 to 75 Hz. Furthermore, the exhaustive analysis performed indicates that the table transfer function is not significantly affected by the presence of a large (in terms of weight) payload with a fundamental frequency up to 20 Hz. Payloads with a higher fundamental frequency do significantly affect the shaking table performance and require a modification of the table control gain setting, which can be easily obtained using the predictive analytical model of the shaking table. The complete description of a structural dynamic experiment performed using the Rice shaking table facility is also reported herein. The object of this experiment was twofold: (1) to verify the testing capability of the shaking table and (2) to experimentally validate a simplified theory developed by the author, which predicts the maximum rotational response developed by seismically isolated building structures characterized by non-coincident centers of mass and rigidity when subjected to strong earthquake ground motions.
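
    Bartlett's spectral estimation method, used here to estimate the table's input and output spectra, averages the periodograms of non-overlapping signal segments to reduce variance. A minimal pure-Python sketch (with an illustrative test signal, not shaking-table data):

```python
# Bartlett's method: average the periodograms of non-overlapping segments
# to estimate a signal's power spectral density (pure-Python sketch;
# the test signal and segment length are illustrative).
import cmath, math

def periodogram(segment):
    n = len(segment)
    return [abs(sum(segment[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def bartlett_psd(signal, seg_len):
    segs = [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]
    psds = [periodogram(s) for s in segs]
    return [sum(col) / len(psds) for col in zip(*psds)]

# 8 Hz sine sampled at 64 Hz; the PSD should peak in bin 8 of each
# 64-sample segment (bin frequency = k * fs / seg_len).
fs, seg_len = 64, 64
signal = [math.sin(2 * math.pi * 8 * t / fs) for t in range(8 * seg_len)]
psd = bartlett_psd(signal, seg_len)
print(max(range(seg_len // 2), key=lambda k: psd[k]))  # -> 8
```

    Estimating the input and output spectra this way, the transfer function magnitude follows as the ratio of the cross- and input-spectral estimates.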

  11. Model-based Engineering for the Integration of Manufacturing Systems with Advanced Analytics

    OpenAIRE

    Lechevalier, David; Narayanan, Anantha; Rachuri, Sudarsan; Foufou, Sebti; Lee, Y Tina

    2016-01-01

    Part 3: Interoperability and Systems Integration; International audience; To employ data analytics effectively and efficiently on manufacturing systems, engineers and data scientists need to collaborate closely to bring their domain knowledge together. In this paper, we introduce a domain-specific modeling approach to integrate a manufacturing system model with advanced analytics, in particular neural networks, to model predictions. Our approach combines a set of meta-models and transformatio...

  12. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
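
    The weighted least squares calibration described above can be written in closed form; the sketch below uses the common 1/x² weighting for heteroscedastic calibration data (the concentration/response values are synthetic, purely for illustration).

```python
# Weighted least squares line fit, with the common 1/x**2 weighting used
# when calibration-curve variance grows with concentration (data values
# here are synthetic, for illustration only).

def wls_fit(x, y, w):
    """Return (slope, intercept) minimising sum(w_i * (y_i - a*x_i - b)**2)."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean of x
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    slope = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return slope, yb - slope * xb

conc = [10.0, 50.0, 100.0, 500.0, 1000.0]   # e.g. ng/L calibration standards
resp = [21.0, 99.0, 205.0, 1003.0, 1988.0]  # instrument response
weights = [1.0 / c ** 2 for c in conc]      # heteroscedastic 1/x**2 weights
slope, intercept = wls_fit(conc, resp, weights)
print(round(slope, 3), round(intercept, 3))
```

    Compared with the unweighted fit, the 1/x² weights prevent the high-concentration standards from dominating the residual sum, which is the practical point of rejecting the homoscedasticity assumption.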

  13. Review of analytical models to stream depletion induced by pumping: Guide to model selection

    Science.gov (United States)

    Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der

    2018-06-01

    Stream depletion due to groundwater extraction by wells may harm aquatic ecosystems in streams, provoke conflict over water rights, and lead to contamination of water drawn from irrigation wells near polluted streams. A variety of studies have addressed the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has been lacking. This review highlights key differences among existing models of the stream depletion problem and provides guidelines for choosing a proper analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches that can solve the stream depletion problem realistically, accounting for aquifer heterogeneity and irregular stream channel geometry. Several unsolved stream depletion problems are also recommended for future study.
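
    Among the classical analytical models such reviews classify is the Glover-Balmer solution for a fully penetrating stream and a homogeneous aquifer, which gives the depleted fraction of the pumping rate in closed form. A minimal sketch with illustrative parameter values:

```python
# Glover-Balmer analytical solution for stream depletion: the fraction of
# a well's pumping rate drawn from a fully penetrating stream in a
# homogeneous aquifer (parameter values below are illustrative).
import math

def depletion_fraction(d, T, S, t):
    """Q_stream / Q_well = erfc( sqrt(S * d**2 / (4 * T * t)) ).

    d: well-to-stream distance (m), T: transmissivity (m^2/day),
    S: storage coefficient (-), t: time since pumping began (days)."""
    return math.erfc(math.sqrt(S * d * d / (4.0 * T * t)))

# Depletion grows toward 1 as pumping continues.
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, round(depletion_fraction(d=100.0, T=500.0, S=0.1, t=t), 4))
```

    The more elaborate models the review covers (partially penetrating streams, streambed resistance, leaky aquifers) generalize this same depletion-fraction idea with additional parameters.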

  14. High Z neoclassical transport: Application and limitation of analytical formulae for modelling JET experimental parameters

    Science.gov (United States)

    Breton, S.; Casson, F. J.; Bourdelle, C.; Angioni, C.; Belli, E.; Camenen, Y.; Citrin, J.; Garbet, X.; Sarazin, Y.; Sertoli, M.; JET Contributors

    2018-01-01

    Heavy impurities, such as tungsten (W), can exhibit strongly poloidally asymmetric density profiles in rotating or radio frequency heated plasmas. In the metallic environment of JET, the poloidal asymmetry of tungsten enhances its neoclassical transport up to an order of magnitude, so that neoclassical convection dominates over turbulent transport in the core. Accounting for asymmetries in neoclassical transport is hence necessary in the integrated modeling framework. The neoclassical drift kinetic code, NEO [E. Belli and J. Candy, Plasma Phys. Controlled Fusion 50, 095010 (2008)], includes the impact of poloidal asymmetries on W transport. However, the computational cost required to run NEO significantly slows down integrated modeling. A previous analytical formulation to describe heavy impurity neoclassical transport in the presence of poloidal asymmetries in specific collisional regimes [C. Angioni and P. Helander, Plasma Phys. Controlled Fusion 56, 124001 (2014)] is compared in this work to numerical results from NEO. Within the domain of validity of the formula, the factor for reducing the temperature screening due to poloidal asymmetries had to be empirically adjusted. After adjustment, the modified formula can reproduce NEO results outside of its definition domain, with some limitations: when main ions are in the banana regime, the formula reproduces NEO results whatever the collisionality regime of impurities, provided that the poloidal asymmetry is not too large. However, for very strong poloidal asymmetries, agreement requires impurities in the Pfirsch-Schlüter regime. Within the JETTO integrated transport code, the analytical formula combined with the poloidally symmetric neoclassical code NCLASS [W. A. Houlberg et al., Phys. Plasmas 4, 3230 (1997)] predicts the same tungsten profile as NEO in certain cases, while saving a factor of one thousand in computer time, which can be useful in scoping studies. The parametric dependencies of the temperature

  15. Development of a decision analytic model to support decision making and risk communication about thrombolytic treatment.

    Science.gov (United States)

    McMeekin, Peter; Flynn, Darren; Ford, Gary A; Rodgers, Helen; Gray, Jo; Thomson, Richard G

    2015-11-11

    Individualised prediction of outcomes can support clinical and shared decision making. This paper describes the building of such a model to predict outcomes with and without intravenous thrombolysis treatment following ischaemic stroke. A decision analytic model (DAM) was constructed to establish the likely balance of benefits and risks of treating acute ischaemic stroke with thrombolysis. Probability of independence (modified Rankin score, mRS ≤ 2), dependence (mRS 3 to 5) and death at three months post-stroke was based on a calibrated version of the Stroke-Thrombolytic Predictive Instrument using data from routinely treated stroke patients in the Safe Implementation of Treatments in Stroke (SITS-UK) registry. Predictions in untreated patients were validated using data from the Virtual International Stroke Trials Archive (VISTA). The probability of symptomatic intracerebral haemorrhage in treated patients was incorporated using a scoring model from Safe Implementation of Thrombolysis in Stroke-Monitoring Study (SITS-MOST) data. The model predicts probabilities of haemorrhage, death, independence and dependence at 3 months, with and without thrombolysis, as a function of 13 patient characteristics. Calibration (and inclusion of additional predictors) of the Stroke-Thrombolytic Predictive Instrument (S-TPI) addressed issues of under- and over-prediction. Validation with VISTA data confirmed that assumptions about treatment effect were justified. The C-statistics for independence and death in treated patients in the DAM were 0.793 and 0.771 respectively, and 0.776 for independence in untreated patients from VISTA. We have produced a DAM that provides an estimation of the likely benefits and risks of thrombolysis for individual patients, which has subsequently been embedded in a computerised decision aid to support better decision-making and informed consent.
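
    The kind of per-patient outcome prediction embedded in such a decision aid can be illustrated with a toy logistic model that contrasts treated and untreated probabilities. The coefficients and predictors below are invented for illustration; they are not the calibrated S-TPI.

```python
# Sketch of how a decision aid can turn a logistic outcome model into
# per-patient probabilities with and without treatment. The coefficients
# below are invented for illustration; they are NOT the calibrated
# Stroke-Thrombolytic Predictive Instrument.
import math

def prob_independent(age, nihss, onset_hours, treated):
    """Hypothetical logistic model: P(mRS <= 2 at 3 months)."""
    z = (3.0
         - 0.04 * age          # older age lowers the odds
         - 0.15 * nihss        # higher stroke severity lowers the odds
         - 0.10 * onset_hours  # longer onset-to-treatment lowers the odds
         + (0.6 if treated else 0.0))
    return 1.0 / (1.0 + math.exp(-z))

p_treated = prob_independent(age=70, nihss=12, onset_hours=2.0, treated=True)
p_untreated = prob_independent(age=70, nihss=12, onset_hours=2.0, treated=False)
print(round(p_treated, 3), round(p_untreated, 3),
      round(p_treated - p_untreated, 3))  # absolute benefit
```

    Presenting the treated/untreated pair side by side, rather than a single risk score, is what lets the decision aid communicate the likely balance of benefit and harm to an individual patient.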

  16. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies.
    Highlights:
    • Examine the basic concept of validation applied to models for consequence analysis.
    • Review standards and guides for validation of numerical models.
    • Comparison of the validation

  17. Toward an in-situ analytics and diagnostics framework for earth system models

    Science.gov (United States)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    , atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations can be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP), and integrate these to allow for extreme-performance IO, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and seamlessly move from post-processing the output for validation and verification purposes to performing these calculations in situ. We can readily envision a near-term future where earth system models like ACME and CESM will have to address not only the challenges of the volume of data but also the velocity of the data. Earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.

  18. Approximate Analytic and Numerical Solutions to Lane-Emden Equation via Fuzzy Modeling Method

    Directory of Open Access Journals (Sweden)

    De-Gang Wang

    2012-01-01

    A novel algorithm, called the variable weight fuzzy marginal linearization (VWFML) method, is proposed. This method can supply approximate analytic and numerical solutions to Lane-Emden equations, and it is easy to implement and extend for solving other nonlinear differential equations. Numerical examples are included to demonstrate the validity and applicability of the developed technique.
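
    For comparison with approximate analytic methods such as VWFML, the Lane-Emden equation can also be integrated numerically; the sketch below checks a plain RK4 solver against the exact n = 1 solution sin(ξ)/ξ (step size and evaluation point are illustrative choices).

```python
# RK4 solver for the Lane-Emden equation
#   theta'' + (2/xi) * theta' + theta**n = 0, theta(0)=1, theta'(0)=0,
# checked against the exact n=1 solution theta(xi) = sin(xi)/xi.
import math

def lane_emden(n, xi_max, h=1e-3):
    """Integrate from near xi=0 (a series start avoids the 1/xi singularity)."""
    xi = h
    theta = 1.0 - xi * xi / 6.0   # series expansion near the centre
    dtheta = -xi / 3.0
    def f(xi, th, dth):
        return dth, -2.0 / xi * dth - max(th, 0.0) ** n
    while xi < xi_max:
        k1 = f(xi, theta, dtheta)
        k2 = f(xi + h/2, theta + h/2*k1[0], dtheta + h/2*k1[1])
        k3 = f(xi + h/2, theta + h/2*k2[0], dtheta + h/2*k2[1])
        k4 = f(xi + h, theta + h*k3[0], dtheta + h*k3[1])
        theta += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dtheta += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h
    return theta

# For n = 1 the exact solution is sin(xi)/xi.
print(round(lane_emden(1, 2.0), 4), round(math.sin(2.0) / 2.0, 4))
```

    Agreement with the closed-form n = 0 and n = 1 solutions is the standard sanity check before trusting any approximate method on the nonlinear cases (e.g. n = 3/2, 3) that lack analytic solutions.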

  19. Models for Validation of Prior Learning (VPL)

    DEFF Research Database (Denmark)

    Ehlers, Søren

    The national policies for the education and training of adults are in the 21st century highly influenced by proposals formulated and promoted by the European Union (EU) as well as other transnational players, and this shift in policy making has consequences. One is that ideas which in the past...... would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian, while universities in the United States of America (USA) were developing ways to award credits to students coming with experience from working life....

  20. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of its intended use. This workshop developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. In making this workshop unique, a case study approach was used to discuss issues related to