WorldWideScience

Sample records for validate analytical models

  1. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    International Nuclear Information System (INIS)

    Lin, E.I.

    1997-01-01

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which doubled the radiative heat transfer from the heat source support assemblies to the end domes and brought the end dome and flange temperature predictions to within 2 °C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before.

  2. Validated analytical modeling of diesel engine regulated exhaust CO emission rate

    Directory of Open Access Journals (Sweden)

    Waleed F Faris

    2016-06-01

    Although vehicle analytical models are often favored for their explainable mathematical trends, no analytical model of the regulated diesel exhaust CO emission rate for trucks has yet been developed. This research unprecedentedly develops and validates for trucks an analytical model of the steady-speed regulated diesel exhaust CO emission rate. It has been found that the steady-speed CO exhaust emission rate is based on (1) CO2 dissociation, (2) the water–gas shift reaction, and (3) the incomplete combustion of hydrocarbons. It has been found as well that the steady-speed CO exhaust emission rate based on CO2 dissociation is considerably less than the rate based on the water–gas shift reaction. It has also been found that the steady-speed CO exhaust emission rate based on the water–gas shift reaction is the dominant source of CO exhaust emission. The study shows that the average percentage deviation of the steady-speed simulated results from the corresponding field data is 1.7% for all freeway cycles, with a 99% coefficient of determination at the 95% confidence level. This deviation of the simulated results from field data outperforms its counterpart in widely recognized models such as the comprehensive modal emissions model and VT-Micro for all freeway cycles.
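The agreement statistics quoted in this record (average percentage deviation and coefficient of determination) can be reproduced for any simulated-vs-field comparison. The sketch below uses standard definitions and hypothetical data, not the paper's actual datasets:

```python
import numpy as np

def deviation_stats(simulated, observed):
    """Mean absolute percentage deviation and coefficient of
    determination (R^2) between simulated and observed series."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    # Average percentage deviation of simulation from field data.
    mapd = np.mean(np.abs(simulated - observed) / observed) * 100.0
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = np.sum((observed - simulated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mapd, r2
```

Validation claims like "1.7% deviation with 99% R²" correspond to small `mapd` and `r2` near 1 under these definitions.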

  3. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  4. A Validated Analytical Model for Availability Prediction of IPTV Services in VANETs

    Directory of Open Access Journals (Sweden)

    Bernd E. Wolfinger

    2014-12-01

    In vehicular ad hoc networks (VANETs, besides the original applications typically related to traffic safety, we nowadays can observe an increasing trend toward infotainment applications, such as IPTV services. Quality of experience (QoE, as observed by the end users of IPTV, is highly important to guarantee adequate user acceptance for the service. In IPTV, QoE is mainly determined by the availability of TV channels for the users. This paper presents an efficient and rather generally applicable analytical model that allows one to predict the blocking probability of TV channels, both for channel-switching-induced, as well as for handover-induced blocking events. We present the successful validation of the model by means of simulation, and we introduce a new measure for QoE. Numerous case studies illustrate how the analytical model and our new QoE measure can be applied successfully for the dimensioning of IPTV systems, taking into account the QoE requirements of the IPTV service users in strongly diverse traffic scenarios.

  5. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    Science.gov (United States)

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiologic model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8–0.25 h^-1). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte most sensitive to the decrease in pharmaceutical quality, with the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. SU-E-J-145: Validation of An Analytical Model for in Vivo Range Verification Using GATE Monte Carlo Simulation in Proton Therapy

    International Nuclear Information System (INIS)

    Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K

    2015-01-01

    Purpose: A predicted-PET approach based on analytical filtering for proton range verification has been successfully developed and validated using the FLUKA Monte Carlo (MC) code and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification against the GATE/GEANT4 Monte Carlo simulation code. Methods: We performed two experiments to validate the β+-isotope distributions predicted by the analytical model against GATE/GEANT4 simulations. The first experiment evaluated the accuracy of the predicted β+-yields as a function of irradiated proton energy. In the second experiment, we simulated homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The filtered β+-yield distributions from the analytical model were compared with the MC-simulated β+-yields in the proximal and distal fall-off regions. Results: First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range difference between the filtered and MC-simulated β+-yields at the distal fall-off region is within 1.5 mm for all materials used. These findings validate the usefulness of the analytical filtering model for range verification of proton therapy in GATE Monte Carlo simulations. However, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields with the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the large discrepancies observed between the MC-simulated and predicted β+-yield distributions, the study proves the effectiveness of the analytical filtering model for proton range verification using GATE Monte Carlo simulations.

  7. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed here; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank candidate approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
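The essence of the proposed test, measuring an approximation's error against the original model's own stochastic spread, can be sketched in a few lines. The toy stochastic model and the mean-field proxy below are illustrative assumptions, not the authors' code:

```python
import random
import statistics

def approximation_error_score(stochastic_model, approximation, n_runs=200):
    """Compare an approximation's output with the spread of the original
    stochastic model: a score near 1 means the approximation deviates by
    about one standard deviation of the model's own intrinsic noise."""
    runs = [stochastic_model() for _ in range(n_runs)]
    mu = statistics.fmean(runs)
    sigma = statistics.stdev(runs)
    return abs(approximation() - mu) / sigma

# Toy illustration: a noisy individual-level process and its
# deterministic mean-field proxy (both hypothetical).
random.seed(0)
noisy = lambda: 10.0 + random.gauss(0.0, 1.0)
proxy = lambda: 10.0
score = approximation_error_score(noisy, proxy)
```

A score well below 1 indicates the approximation error is buried in the original model's stochastic variability; a score far above 1 would invalidate the approximation under this criterion.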

  8. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    Science.gov (United States)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; JET contributors

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.

  9. An analytic solution for numerical modeling validation in electromagnetics: the resistive sphere

    Science.gov (United States)

    Swidinsky, Andrei; Liu, Lifei

    2017-11-01

    We derive the electromagnetic response of a resistive sphere to an electric dipole source buried in a conductive whole space. The solution consists of an infinite series of spherical Bessel functions and associated Legendre polynomials, and follows the well-studied problem of a conductive sphere buried in a resistive whole space in the presence of a magnetic dipole. Our result is particularly useful for controlled-source electromagnetic problems using a grounded electric dipole transmitter and can be used to check numerical methods of calculating the response of resistive targets (such as finite difference, finite volume, finite element and integral equation). While we elect to focus on the resistive sphere in our examples, the expressions in this paper are completely general and allow for arbitrary source frequency, sphere radius, transmitter position, receiver position and sphere/host conductivity contrast so that conductive target responses can also be checked. Commonly used mesh validation techniques consist of comparisons against other numerical codes, but such solutions may not always be reliable or readily available. Alternatively, the response of simple 1-D models can be tested against well-known whole space, half-space and layered earth solutions, but such an approach is inadequate for validating models with curved surfaces. We demonstrate that our theoretical results can be used as a complementary validation tool by comparing analytic electric fields to those calculated through a finite-element analysis; the software implementation of this infinite series solution is made available for direct and immediate application.
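A truncated series of spherical Bessel functions and associated Legendre polynomials of the kind described can be evaluated with simple recurrences. The sketch below is a generic series evaluator under assumed expansion coefficients, not the authors' released software:

```python
import math

def spherical_jn(n, x):
    """Spherical Bessel function j_n via upward recurrence
    j_{k+1} = (2k+1)/x * j_k - j_{k-1} (fine for small n, moderate x)."""
    j_prev = math.sin(x) / x                       # j_0
    if n == 0:
        return j_prev
    j_curr = math.sin(x) / x**2 - math.cos(x) / x  # j_1
    for k in range(1, n):
        j_prev, j_curr = j_curr, (2 * k + 1) / x * j_curr - j_prev
    return j_curr

def legendre_pn(n, x):
    """Legendre polynomial P_n via the Bonnet recurrence."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

def series_field(coeffs, kr, cos_theta):
    """Truncated multipole series sum_n a_n j_n(kr) P_n(cos theta),
    with hypothetical coefficients a_n supplied by the caller."""
    return sum(a * spherical_jn(n, kr) * legendre_pn(n, cos_theta)
               for n, a in enumerate(coeffs))
```

In practice one would compare such an analytic series, evaluated to convergence, pointwise against the fields produced by a finite-element or finite-difference code.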

  10. SU-E-T-479: Development and Validation of Analytical Models Predicting Secondary Neutron Radiation in Proton Therapy Applications

    International Nuclear Information System (INIS)

    Farah, J; Bonfrate, A; Donadille, L; Martinetti, F; Trompier, F; Clairand, I; De Olivera, A; Delacroix, S; Herault, J; Piau, S; Vabre, I

    2014-01-01

    Purpose: Test and validation of analytical models predicting leakage neutron exposure in passively scattered proton therapy. Methods: Taking inspiration from the literature, this work attempts to build an analytical model predicting neutron ambient dose equivalents, H*(10), within the local 75 MeV ocular proton therapy facility. MC simulations were first used to model H*(10) in the beam axis plane while considering a closed final collimator and pristine Bragg peak delivery. Next, MC-based analytical model was tested against simulation results and experimental measurements. The model was also expended in the vertical direction to enable a full 3D mapping of H*(10) inside the treatment room. Finally, the work focused on upgrading the literature model to clinically relevant configurations considering modulated beams, open collimators, patient-induced neutron fluctuations, etc. Results: The MC-based analytical model efficiently reproduced simulated H*(10) values with a maximum difference below 10%. In addition, it succeeded in predicting measured H*(10) values with differences <40%. The highest differences were registered at the closest and farthest positions from isocenter where the analytical model failed to faithfully reproduce the high neutron fluence and energy variations. The differences remains however acceptable taking into account the high measurement/simulation uncertainties and the end use of this model, i.e. radiation protection. Moreover, the model was successfully (differences < 20% on simulations and < 45% on measurements) extended to predict neutrons in the vertical direction with respect to the beam line as patients are in the upright seated position during ocular treatments. Accounting for the impact of beam modulation, collimation and the present of a patient in the beam path is far more challenging and conversion coefficients are currently being defined to predict stray neutrons in clinically representative treatment configurations. Conclusion

  11. Analytical model for screening potential CO2 repositories

    Science.gov (United States)

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.
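For a rough sense of the kind of closed-form screening estimate involved, a drastically simplified single-phase, steady-state radial-flow pressure buildup (a textbook well-hydraulics relation, not the paper's two-phase equations) can be sketched as:

```python
import math

def pressure_buildup(q, mu, k, h, r_outer, r):
    """Pressure increase (Pa) at radius r for constant-rate injection
    into a confined aquifer, single-phase steady-state approximation:
        dp = q * mu / (2 pi k h) * ln(r_outer / r)
    q: volumetric rate (m^3/s), mu: viscosity (Pa s),
    k: permeability (m^2), h: formation thickness (m)."""
    return q * mu / (2.0 * math.pi * k * h) * math.log(r_outer / r)

# Hypothetical screening numbers: ~10 kg/s of CO2 at reservoir
# density ~700 kg/m^3 gives roughly 0.014 m^3/s.
dp = pressure_buildup(q=0.014, mu=5e-4, k=1e-13, h=50.0,
                      r_outer=10_000.0, r=100.0)
```

A screening workflow would compare such a pressure estimate against the formation's fracture pressure for each candidate repository; the full analytical model in the paper additionally tracks the CO2/brine saturation front.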

  12. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    Science.gov (United States)

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
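The replica analytic continuation examined here rests on the identity

```latex
\langle \ln Z \rangle
  = \lim_{n \to 0} \frac{\langle Z^n \rangle - 1}{n}
  = \lim_{n \to 0} \frac{\partial}{\partial n} \ln \langle Z^n \rangle ,
```

where the moment ⟨Z^n⟩ is computed for integer n and then analytically continued to n → 0; the validity of that continuation step is precisely what the cited work tests on solvable models.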

  14. An analytical model on thermal performance evaluation of counter flow wet cooling tower

    Directory of Open Access Journals (Sweden)

    Wang Qian

    2017-01-01

    This paper proposes an analytical model for simultaneous heat and mass transfer processes in a counter flow wet cooling tower, with the assumption that the enthalpy of the saturated air is a linear function of the water surface temperature. The performance of the proposed analytical model is validated in some typical cases. The validation reveals that, when cooling range is in a certain interval, the proposed model is not only comparable with the accurate model, but also can reduce computational complexity. In addition, with the proposed analytical model, the thermal performance of the counter flow wet cooling towers in power plants is calculated. The results show that the proposed analytical model can be applied to evaluate and predict the thermal performance of counter flow wet cooling towers.
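With the linear saturation-enthalpy assumption, a counter flow wet tower maps onto the classic counterflow effectiveness-NTU relation in enthalpy coordinates. The sketch below is the standard textbook relation under that assumption, not the paper's full model:

```python
import math

def counterflow_effectiveness(ntu, c_ratio):
    """Classic epsilon-NTU relation for a counterflow exchanger:
        eps = (1 - exp(-NTU (1 - Cr))) / (1 - Cr exp(-NTU (1 - Cr)))
    With saturation enthalpy linear in water temperature, the wet
    cooling tower obeys the same form with an enthalpy-based
    capacity ratio Cr."""
    if abs(c_ratio - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)   # limiting case Cr = 1
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)
```

The tower's cooling range then follows from the effectiveness times the maximum possible enthalpy change of the air stream, which is what makes the linearized model cheap to evaluate compared with integrating the exact Merkel equations.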

  15. Determining passive cooling limits in CPV using an analytical thermal model

    Science.gov (United States)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.

  16. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    In order to study further the promising free-piston Stirling engine architecture, there is a need for an analytical thermodynamic model which could be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities in the engine. An analytical model which encompasses the critical non-idealities of the regenerator and, furthermore, the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  17. Fission product release from nuclear fuel II. Validation of ASTEC/ELSA on analytical and large scale experiments

    International Nuclear Information System (INIS)

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

    Highlights: • A wide range of experiments is presented for the ASTEC/ELSA code validation. • Analytical tests such as AECL, ORNL and VERCORS are considered. • A large-scale experiment, PHEBUS FPT1, is considered. • The good agreement with measurements shows the efficiency of the ASTEC modelling. • Improvements concern the FP release modelling from MOX and high burn-up UO2 fuels. - Abstract: This article is the second of two articles dedicated to the mechanisms of fission product release from a degraded core. The models of fission product release from nuclear fuel in the ASTEC code have been described in detail in the first part of this work (Brillant et al., this issue). In this contribution, the validation of ELSA, the module of ASTEC that deals with fission product and structural material release from a degraded core, is presented. A large range of experimental tests, with various temperatures and conditions for the fuel surrounding atmosphere (oxidising and reducing), is thus simulated with the ASTEC code. The validation database includes several analytical experiments with both bare fuel (e.g. MCE1 experiments) and cladded fuel (e.g. HCE3, VERCORS). Furthermore, the PHEBUS large-scale experiments are used for the validation of ASTEC. The rather satisfactory comparison between ELSA calculations and experimental measurements demonstrates the efficiency of the analytical models in describing fission product release in severe accident conditions.

  18. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2016-08-15

    Purpose: A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today’s modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data.

  19. Valid, legally defensible data from your analytical laboratories

    International Nuclear Information System (INIS)

    Gay, D.D.; Allen, V.C.

    1989-01-01

    This paper discusses the definition of valid, legally defensible data. The authors describe the expectations of project managers and what should be gleaned from the laboratory in regard to analytical data

  20. The German cervical cancer screening model: development and validation of a decision-analytic model for cervical cancer screening in Germany.

    Science.gov (United States)

    Siebert, Uwe; Sroczynski, Gaby; Hillemanns, Peter; Engel, Jutta; Stabenow, Roland; Stegmaier, Christa; Voigt, Kerstin; Gibis, Bernhard; Hölzel, Dieter; Goldie, Sue J

    2006-04-01

    We sought to develop and validate a decision-analytic model for the natural history of cervical cancer for the German health care context and to apply it to cervical cancer screening. We developed a Markov model for the natural history of cervical cancer and cervical cancer screening in the German health care context. The model reflects current German practice standards for screening, diagnostic follow-up and treatment regarding cervical cancer and its precursors. Data for disease progression and cervical cancer survival were obtained from the literature and German cancer registries. Accuracy of Papanicolaou (Pap) testing was based on meta-analyses. We performed internal and external model validation using observed epidemiological data for unscreened women from different German cancer registries. The model predicts life expectancy, incidence of detected cervical cancer cases, lifetime cervical cancer risks and mortality. The model predicted a lifetime cervical cancer risk of 3.0% and a lifetime cervical cancer mortality of 1.0%, with a peak cancer incidence of 84/100,000 at age 51 years. These results were similar to observed data from German cancer registries, German literature data and results from other international models. Based on our model, annual Pap screening could prevent 98.7% of diagnosed cancer cases and 99.6% of deaths due to cervical cancer in women completely adherent to screening and compliant to treatment. Extending the screening interval from 1 year to 2, 3 or 5 years resulted in reduced screening effectiveness. This model provides a tool for evaluating the long-term effectiveness of different cervical cancer screening tests and strategies.
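The structure of such a Markov cohort model can be sketched in a few lines. The four states and annual transition probabilities below are hypothetical placeholders for illustration only, not the calibrated parameters of the German model:

```python
import numpy as np

# Toy 4-state annual-cycle Markov cohort model (hypothetical
# transition probabilities): Well, precursor lesion, cancer, dead.
P = np.array([
    [0.988, 0.010, 0.000, 0.002],   # Well
    [0.150, 0.830, 0.018, 0.002],   # Precursor lesion
    [0.000, 0.000, 0.960, 0.040],   # Cancer
    [0.000, 0.000, 0.000, 1.000],   # Dead (absorbing)
])

def lifetime_cancer_risk(P, cycles=70):
    """Run the cohort forward and accumulate the inflow into the
    cancer state each cycle; the sum is the lifetime cancer risk."""
    state = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts Well
    risk = 0.0
    for _ in range(cycles):
        risk += state[0] * P[0, 2] + state[1] * P[1, 2]
        state = state @ P
    return risk
```

A screening strategy is evaluated in such a model by modifying the transition probabilities (e.g. detecting and treating precursor lesions before they progress) and comparing the resulting lifetime risk and mortality against the unscreened natural history.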

  1. Environmental concentrations of engineered nanomaterials: Review of modeling and analytical studies

    International Nuclear Information System (INIS)

    Gottschalk, Fadri; Sun, TianYin; Nowack, Bernd

    2013-01-01

    Scientific consensus predicts that the worldwide use of engineered nanomaterials (ENM) leads to their release into the environment. We reviewed the available literature concerning environmental concentrations of six ENMs (TiO2, ZnO, Ag, fullerenes, CNT and CeO2) in surface waters, wastewater treatment plant effluents, biosolids, sediments, soils and air. Presently, a dozen modeling studies provide environmental concentrations for ENM, and a handful of analytical works can be used as a basis for a preliminary validation. There are still major knowledge gaps (e.g. on ENM production, application and release) that affect the modeled values, but overall an agreement on the order of magnitude of the environmental concentrations can be reached. True validation of the modeled values is difficult because trace analytical methods that are specific for ENM detection and quantification are not available. The modeled and measured results are not always comparable due to the different forms and sizes of particles that these two approaches target. -- Highlights: • Modeled environmental concentrations of engineered nanomaterials are reviewed. • Measured environmental concentrations of engineered nanomaterials are reviewed. • Possible validation of modeled data by measurements is critically evaluated. • Different approaches in modeling and measurement methods complicate validation. -- Modeled and measured environmental concentrations of engineered nanomaterials are reviewed and critically discussed.

  2. Analytical dynamic modeling of fast trilayer polypyrrole bending actuators

    International Nuclear Information System (INIS)

    Amiri Moghadam, Amir Ali; Moavenian, Majid; Tahani, Masoud; Torabi, Keivan

    2011-01-01

    Analytical modeling of conjugated polymer actuators with complicated electro-chemo-mechanical dynamics is an interesting area for research, due to the wide range of applications including biomimetic robots and biomedical devices. Although there have been extensive reports on modeling the electrochemical dynamics of polypyrrole (PPy) bending actuators, mechanical dynamics modeling of the actuators remains unexplored. PPy actuators can operate at low voltage while producing large displacement; in comparison to robotic joints they have no friction or backlash, but they suffer from disadvantages such as creep and hysteresis. In this paper, a complete analytical dynamic model for fast trilayer polypyrrole bending actuators is proposed and named the analytical multi-domain dynamic actuator (AMDDA) model. First an electrical admittance model of the actuator is obtained based on a distributed RC line; subsequently a proper mechanical dynamic model is derived based on Hamilton's principle. The proposed modeling approach is validated against recently published experimental results.
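    The distributed RC line underlying such admittance models can be approximated numerically and checked against the closed-form solution. The sketch below treats a generic open-ended RC line with illustrative total resistance and capacitance, not the paper's fitted actuator parameters:

```python
import cmath

def rc_line_admittance_lumped(r_total, c_total, omega, n=200):
    """Input admittance of an open-ended RC transmission line, approximated
    by n lumped series-R / shunt-C sections (backward ladder recursion)."""
    rs, cs = r_total / n, c_total / n
    y = 0.0 + 0.0j                   # open circuit at the far end
    for _ in range(n):
        y = y + 1j * omega * cs      # add the section's shunt capacitance
        y = 1.0 / (rs + 1.0 / y)     # then its series resistance
    return y

def rc_line_admittance_exact(r_total, c_total, omega):
    """Closed-form admittance of the same open-ended distributed RC line:
    Y = sqrt(jwC/R) * tanh(sqrt(jwRC))."""
    gamma = cmath.sqrt(1j * omega * r_total * c_total)
    return cmath.sqrt(1j * omega * c_total / r_total) * cmath.tanh(gamma)
```

    With a few hundred sections the lumped ladder reproduces the distributed result to well under a percent at low frequency, which is why both forms appear in admittance modeling of electrochemical devices.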

  3. Analytic processor model for fast design-space exploration

    NARCIS (Netherlands)

    Jongerius, R.; Mariani, G.; Anghel, A.; Dittmann, G.; Vermij, E.; Corporaal, H.

    2015-01-01

    In this paper, we propose an analytic model that takes as inputs a) a parametric microarchitecture-independent characterization of the target workload, and b) a hardware configuration of the core and the memory hierarchy, and returns as output an estimation of processor-core performance. To validate

  4. Development and Validation of Analytical Method for Losartan ...

    African Journals Online (AJOL)

    Development and Validation of Analytical Method for Losartan-Copper Complex Using UV-Vis Spectrophotometry. ... Tropical Journal of Pharmaceutical Research ... Purpose: To develop a new spectrophotometric method for the analysis of losartan potassium in pharmaceutical formulations by making its complex with ...

  5. Consistency of FMEA used in the validation of analytical procedures

    DEFF Research Database (Denmark)

    Oldenhof, M.T.; van Leeuwen, J.F.; Nauta, Maarten

    2011-01-01

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection ... is always carried out under the supervision of an experienced FMEA-facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating ...

  6. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    Science.gov (United States)

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well-suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validate FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. They demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF, and show that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs and provides a distinctive groundwork able to sustain future refinements. This paper suggests important features that may be included to improve model realism.

  7. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F., E-mail: fabien.formosa@univ-savoie.f [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France)

    2011-05-15

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA-RE1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady state behaviour of the engine partly relies on nonlinear dissipative phenomena such as pressure drop losses within heat exchangers, which are dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.
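    The coupled iterative strategy can be illustrated with two toy submodels exchanging a thermal variable and an oscillation amplitude until a consistent operating point is reached. The two closed-form relations below are invented stand-ins, not Stirling-engine physics:

```python
# Toy sketch of a coupled thermodynamic-dynamic iteration: the
# "thermodynamic" submodel returns a thermal variable for a given
# amplitude, the "dynamic" submodel returns a new amplitude for that
# thermal variable, and the loop repeats until the two agree.
def thermo_submodel(amplitude_m):
    return 300.0 + 80.0 * amplitude_m          # stand-in temperature, K

def dynamic_submodel(temperature_k):
    return 0.002 * (temperature_k - 280.0)     # stand-in amplitude, m

def coupled_solve(x0=0.5, tol=1e-12, max_iter=200):
    x = x0
    for i in range(max_iter):
        x_new = dynamic_submodel(thermo_submodel(x))
        if abs(x_new - x) < tol:
            return x_new, i + 1
        x = x_new
    raise RuntimeError("coupled iteration did not converge")

amplitude, iterations = coupled_solve()
```

    Because the composed map is a contraction here, the fixed point is reached in a handful of iterations; a real engine model iterates the same way over many coupled variables.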

  9. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the soundness and accuracy of the proposed model.

  10. An analytical model for annular flow boiling heat transfer in microchannel heat sinks

    International Nuclear Information System (INIS)

    Megahed, A.; Hassan, I.

    2009-01-01

    An analytical model has been developed to predict the flow boiling heat transfer coefficient in microchannel heat sinks. The new analytical model predicts the two-phase heat transfer coefficient during the annular flow regime based on the separated flow model. Unlike the majority of annular flow heat transfer models, it is built on fundamental conservation principles. The model considers the characteristics of the microchannel heat sink during annular flow and avoids the use of empirical closure relations. Comparison with the limited experimental data available supports the usefulness of this analytical model: it predicts the experimental data with a mean absolute error of 8%. (author)

  11. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    Science.gov (United States)

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  12. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    Science.gov (United States)

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  13. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  14. Development and validation of analytical methods for dietary supplements

    International Nuclear Information System (INIS)

    Sullivan, Darryl; Crowley, Richard

    2006-01-01

    The expanding use of innovative botanical ingredients in dietary supplements and foods has resulted in a flurry of research aimed at the development and validation of analytical methods for accurate measurement of active ingredients. The pressing need for these methods is being met through an expansive collaborative initiative involving industry, government, and analytical organizations. This effort has resulted in the validation of several important assays as well as important advances in the method engineering procedures which have improved the efficiency of the process. The initiative has also allowed researchers to hurdle many of the barricades that have hindered accurate analysis, such as the lack of reference standards and comparative data. As the availability of nutraceutical products continues to increase, these methods will provide consumers and regulators with the scientific information needed to assure safety and dependable labeling.

  17. Validation of Pressure Drop Models for PHWR-type Fuel Elements

    International Nuclear Information System (INIS)

    Brasnarof Daniel; Daverio, H.

    2003-01-01

    In the present work, a one-dimensional analytical pressure drop model and the COBRA code are validated against experimental data for CANDU and Atucha fuel bundles in low- and high-pressure experimental test loops. Both models show very good agreement with the experimental data, with less than 5% discrepancy. The analytical model results were also compared with COBRA code results, showing small differences between them over a wide range of pressure, temperature and mass flow.
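    A minimal one-dimensional friction pressure drop calculation of the kind validated in such work can be sketched as follows. The smooth-tube friction factor correlations (laminar 64/Re, turbulent Blasius) and the operating point are common textbook choices, not necessarily those used in the paper:

```python
def friction_pressure_drop(m_dot, rho, mu, d_h, length, flow_area):
    """Darcy-Weisbach friction pressure drop for single-phase flow in a
    rod-bundle subchannel. SI units throughout; returns Pa."""
    v = m_dot / (rho * flow_area)        # bulk velocity, m/s
    re = rho * v * d_h / mu              # Reynolds number
    # laminar / Blasius smooth-tube friction factor (illustrative choice)
    f = 64.0 / re if re < 2300.0 else 0.316 * re ** -0.25
    return f * (length / d_h) * 0.5 * rho * v * v

# Illustrative hot-water operating point (not from the experiments):
dp = friction_pressure_drop(m_dot=1.0, rho=750.0, mu=1.0e-4,
                            d_h=0.01, length=0.5, flow_area=1.0e-3)
```

    A subchannel code such as COBRA adds form losses, spacer-grid losses and two-phase multipliers on top of this single-phase backbone.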

  18. Analytical Modelling and Simulation of Photovoltaic Panels and Arrays

    Directory of Open Access Journals (Sweden)

    H. Bourdoucen

    2007-12-01

    In this paper, an analytical model for PV panels and arrays based on extracted physical parameters of solar cells is developed. The proposed model has the advantage of simplifying the mathematical modelling of different configurations of cells and panels without losing efficiency of PV system operation. The effects of external parameters, mainly temperature and solar irradiance, have been considered in the modelling. Due to their critical effects on the operation of the panel, the effects of series and shunt resistances were also studied. The developed analytical model has been easily implemented, simulated and validated using both Spice and Matlab packages for different series and parallel configurations of cells and panels. The results obtained with these two programs are in total agreement, which makes the proposed model very useful for researchers and designers for quick and accurate sizing of PV panels and arrays.
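    A single-diode analytical PV model of the kind described can be sketched as below. The parameter values (photocurrent at standard conditions, saturation current, ideality factor, series and shunt resistances) are illustrative assumptions, not values extracted from the paper:

```python
import math

def pv_current(v, irradiance=1000.0, temp_c=25.0):
    """Single-diode PV model solved for current at a given voltage:
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
    Solved by bisection, since the residual is strictly decreasing in I."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * (temp_c + 273.15) / q              # thermal voltage
    iph = 8.0 * irradiance / 1000.0             # photocurrent ~ irradiance
    i0, n, rs, rsh = 1e-9, 1.3, 0.01, 100.0     # illustrative cell parameters

    def residual(i):
        return (iph - i0 * (math.exp((v + i * rs) / (n * vt)) - 1.0)
                - (v + i * rs) / rsh - i)

    lo, hi = -2.0, iph + 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

    Series/parallel panel configurations then follow by scaling voltage by cells in series and current by strings in parallel, which is the simplification the abstract refers to.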

  19. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of these methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current ...
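    The generalized pivotal quantity idea can be illustrated with a small stdlib-only simulation. This is a sketch of the generic GPQ construction for normally distributed results, not the authors' exact test procedure:

```python
import math
import random
from statistics import NormalDist

def gpq_lower_coverage(xbar, s, n, limits, n_sim=20000, seed=7):
    """Illustrative generalized-pivotal-quantity (GPQ) check of a
    'fit for purpose' criterion: simulate GPQs for the mean and SD of
    normal test results, compute for each draw the fraction of future
    results inside the acceptance limits, and return the lower 5th
    percentile of that fraction."""
    rng = random.Random(seed)
    lo, hi = limits
    coverages = []
    for _ in range(n_sim):
        w = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n - 1))  # chi2(n-1)
        sigma_g = s * math.sqrt((n - 1) / w)                     # GPQ for sigma
        mu_g = xbar - rng.gauss(0.0, 1.0) * sigma_g / math.sqrt(n)  # GPQ for mu
        d = NormalDist(mu_g, sigma_g)
        coverages.append(d.cdf(hi) - d.cdf(lo))
    coverages.sort()
    return coverages[int(0.05 * n_sim)]
```

    A method would be accepted when this lower percentile exceeds the required proportion of in-limit results, which couples the decision directly to total error rather than to separate accuracy and precision criteria.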

  20. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  2. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    Science.gov (United States)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation which describes the first-mode damping ratio of a clamp-free cantilever beam under harmonic base excitation, by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was found to be accurate for cases in which the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.
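    The "simple iterative convergence method" mentioned above is repeated substitution on an implicit equation ζ = g(ζ). The relation g below is a toy contraction standing in for the derived damping-stress equation, which is not reproduced here:

```python
import math

def solve_damping(g, zeta0=0.01, tol=1e-12, max_iter=500):
    """Repeated-substitution solver for an implicit damping relation
    zeta = g(zeta)."""
    zeta = zeta0
    for _ in range(max_iter):
        nxt = g(zeta)
        if abs(nxt - zeta) < tol:
            return nxt
        zeta = nxt
    raise RuntimeError("iteration did not converge")

# Toy implicit relation (a contraction), standing in for the paper's
# damping-stress equation:
zeta = solve_damping(lambda z: 0.01 + 0.05 * math.sqrt(z))
```

    As long as |g'| < 1 near the root, the substitution converges geometrically, which is why no experimental input or FE run is needed to evaluate such an equation.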

  3. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    Science.gov (United States)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high-loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. We also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions of the KLM model and the analytic model with frequency-dependent attenuation to evaluate the influence of the viscosity.

  4. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    Science.gov (United States)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure for designing a micro-opto-mechanical pressure sensor (MOMPS), taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effects of the nonlinearity and the losses on the overall performance are carefully studied, and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed to enable fast design of this new type of pressure sensor.
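    The photonic half of such a model rests on the standard Mach-Zehnder intensity transfer function with arm losses, which a mechanical membrane model then drives through the phase term. A minimal sketch of that textbook relation (not the paper's full loss model):

```python
import math

def mzi_intensity(delta_phi, loss_db_arm1=0.0, loss_db_arm2=0.0):
    """Normalized output intensity of a Mach-Zehnder interferometer with
    ideal 50/50 couplers and unequal arm power losses (in dB, converted to
    field-amplitude factors)."""
    a1 = 10.0 ** (-loss_db_arm1 / 20.0)
    a2 = 10.0 ** (-loss_db_arm2 / 20.0)
    return 0.25 * (a1 * a1 + a2 * a2 + 2.0 * a1 * a2 * math.cos(delta_phi))
```

    Unequal arm losses reduce the fringe visibility 2·a1·a2/(a1² + a2²) and lift the interference null, which is why the optical losses must be tracked alongside the membrane mechanics when predicting sensor response.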

  5. Analytical validation of Gentian NGAL particle-enhanced enhanced turbidimetric immunoassay (PETIA)

    Directory of Open Access Journals (Sweden)

    Gian Luca Salvagno

    2017-08-01

    Objectives: This study was designed to validate the analytical performance of the new Gentian particle-enhanced enhanced turbidimetric immunoassay (PETIA) for measuring neutrophil gelatinase-associated lipocalin (NGAL) in serum samples. Design and methods: Analytical validation of the Gentian NGAL assay was carried out on a Roche Cobas c501 and was based on assessment of the limit of blank (LOB), limit of detection (LOD), functional sensitivity, imprecision, linearity and concordance with the BioPorto NGAL test. Results: The LOB and LOD of Gentian NGAL were found to be 3.8 ng/mL and 6.3 ng/mL, respectively. An analytical coefficient of variation (CV) of 20% corresponded to a NGAL value of 10 ng/mL. The intra-assay and inter-assay imprecision (CV) were between 0.4 and 5.2% and 0.6 and 7.1%, respectively, and the total imprecision (CV) was 3.7%. The linearity was optimal at NGAL concentrations between 37 and 1420 ng/mL (r=1.00; p<0.001). An excellent correlation was observed between values measured with Gentian NGAL and BioPorto NGAL in 74 routine serum samples (r=0.993). The mean percentage bias of the Gentian assay versus the BioPorto assay was +3.1% (95% CI, +1.6% to +4.5%). Conclusions: These results show that Gentian NGAL may be a viable alternative to other commercial immunoassays for both routine and urgent assessment of serum NGAL. Keywords: Neutrophil gelatinase-associated lipocalin, NGAL, Analytical validation, Acute kidney injury
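    Limit-of-blank and limit-of-detection estimates of the kind reported here are commonly computed with the classical parametric recipe in the spirit of CLSI EP17 (the study's exact verification protocol may differ). The replicate values below are hypothetical, not data from the study:

```python
import statistics
from statistics import NormalDist

def lob_lod(blank_results, low_sample_results, alpha=0.05):
    """Classical parametric limit-of-blank / limit-of-detection estimates:
        LoB = mean(blanks) + z * SD(blanks)
        LoD = LoB + z * SD(low-concentration sample)
    with z ~ 1.645 for alpha = 0.05."""
    z = NormalDist().inv_cdf(1.0 - alpha)
    lob = statistics.mean(blank_results) + z * statistics.stdev(blank_results)
    lod = lob + z * statistics.stdev(low_sample_results)
    return lob, lod

# Hypothetical replicate measurements in ng/mL:
blanks = [0.0, 1.0, 2.0, 1.0, 0.0, 2.0, 1.0, 1.0, 2.0, 0.0]
lows = [5.0, 6.0, 7.0, 6.0, 5.0, 7.0, 6.0, 6.0, 7.0, 5.0]
lob, lod = lob_lod(blanks, lows)
```

    Functional sensitivity is then read off as the concentration at which the imprecision profile crosses a chosen CV (20% in this study).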

  6. Validating and Determining the Weight of Items Used for Evaluating Clinical Governance Implementation Based on Analytic Hierarchy Process Model

    Directory of Open Access Journals (Sweden)

    Elaheh Hooshmand

    2015-10-01

    Background: The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. Methods: The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. Results: The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process, and research and development. Conclusion: The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country, and the coordination of this model with other models of quality improvement such as accreditation and patient safety.
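    The AHP ranking step can be sketched with the common geometric-mean approximation and Saaty's consistency ratio. The 3×3 judgement matrix below is purely illustrative (using three of the study's items); it is not the study's actual expert judgements:

```python
import math

def ahp_weights(pairwise):
    """AHP priority weights via the geometric-mean approximation, plus
    Saaty's consistency ratio for the judgement matrix."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    w = [g / sum(gm) for g in gm]
    # lambda_max from the component-wise ratio (A w) / w
    aw = [sum(row[j] * w[j] for j in range(n)) for row in pairwise]
    lam_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
    return w, ci / ri                              # weights, consistency ratio

# Illustrative judgements: "training" 2x as important as "evaluation",
# 4x as important as "risk management" (perfectly consistent here).
weights, cr = ahp_weights([[1.0, 2.0, 4.0],
                           [0.5, 1.0, 2.0],
                           [0.25, 0.5, 1.0]])
```

    In practice a consistency ratio below about 0.1 is taken as acceptable before the resulting weights are used to rank the validated items.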

  7. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    Science.gov (United States)

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
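
The AHP ranking step described above can be sketched numerically. The pairwise judgment values below are hypothetical (the abstract does not report the actual comparison matrices); the priority vector is approximated by the standard row geometric mean method.

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority vector via the row geometric mean method."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative (hypothetical) pairwise judgments for three of the study's
# top-ranked items; the actual judgment values are not given in the abstract.
items = ["training and development", "performance evaluation", "risk management"]
A = [
    [1.0, 2.0, 3.0],    # training vs the others
    [1/2., 1.0, 2.0],   # performance evaluation
    [1/3., 1/2., 1.0],  # risk management
]
w = ahp_priorities(A)
for item, weight in sorted(zip(items, w), key=lambda t: -t[1]):
    print(f"{item}: {weight:.3f}")
```

With a consistent judgment matrix the geometric mean method reproduces the principal eigenvector exactly; for mildly inconsistent judgments it remains a close, commonly used approximation.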

  8. Analytic nuclear scattering theories

    International Nuclear Information System (INIS)

    Di Marzio, F.; University of Melbourne, Parkville, VIC

    1999-01-01

    A wide range of nuclear reactions are examined in an analytical version of the usual distorted wave Born approximation. This new approach provides either semi-analytic or fully analytic descriptions of the nuclear scattering processes. The resulting computational simplifications, when used within the limits of validity, allow very detailed tests of both nuclear interaction models and large-basis models of nuclear structure to be performed.

  9. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions for strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
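
The idea of reducing a full-field map to a handful of descriptors can be sketched with a Fourier decomposition (one of the two techniques the abstract names). The strain field, noise level and descriptor count below are synthetic and purely illustrative.

```python
import numpy as np

def descriptors(field, k=5):
    """Reduce a full-field map to a small set of image descriptors by
    keeping only the k x k lowest-frequency 2-D Fourier coefficients."""
    F = np.fft.fft2(field)
    return np.abs(F[:k, :k]).ravel()

# Synthetic "experiment" vs "model": the same underlying strain pattern,
# with the experimental field carrying measurement noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64] / 64.0
model_field = np.sin(2 * np.pi * x) * np.cos(np.pi * y)          # model prediction
exp_field = model_field + 0.05 * rng.standard_normal((64, 64))   # noisy measurement

d_model = descriptors(model_field)
d_exp = descriptors(exp_field)

# 64*64 = 4096 pixels reduced to 25 descriptors; compare them statistically.
corr = np.corrcoef(d_model, d_exp)[0, 1]
print(f"{d_model.size} descriptors, correlation = {corr:.3f}")
```

A high correlation between the two descriptor sets indicates the model reproduces all the dominant spatial features of the measured field, not just the value at a single hot-spot.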

  10. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid for use in design. These models depend on the geometry of the moving plunger and the properties of the surrounding fluid. Experimental data is used to compare and validate the different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data is compared with simulation data relying solely on analytic formulations, and the general dynamics of the plunger is validated.

  11. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated inputs often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
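
The effect of input correlations on output uncertainty can be illustrated with first-order (delta-method) variance propagation, Var(Y) ≈ gᵀ Σ g, where g is the gradient of the model and Σ the input covariance matrix. This is a generic sketch, not the paper's specific analytic method; the model and numbers are hypothetical.

```python
import numpy as np

def first_order_variance(grad, cov):
    """First-order (delta-method) variance of Y = f(X): grad^T @ cov @ grad."""
    g = np.asarray(grad, dtype=float)
    return float(g @ np.asarray(cov, dtype=float) @ g)

# Hypothetical linear model Y = 2*x1 + 3*x2, so the gradient is constant.
grad = [2.0, 3.0]
sx1, sx2, rho = 0.5, 0.4, 0.6   # assumed input std devs and correlation

cov_corr = [[sx1**2, rho * sx1 * sx2], [rho * sx1 * sx2, sx2**2]]
cov_indep = [[sx1**2, 0.0], [0.0, sx2**2]]

v_corr = first_order_variance(grad, cov_corr)
v_indep = first_order_variance(grad, cov_indep)
print(f"Var(Y) with correlation:      {v_corr:.3f}")
print(f"Var(Y) assuming independence: {v_indep:.3f}")
```

Ignoring a positive correlation between inputs with same-sign sensitivities underestimates the output variance here (2.44 vs 3.88), which is exactly the kind of discrepancy that tells the analyst the correlations must be kept.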

  12. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving the absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the retrieval using a proposed semi-analytical model (SAA). In the SAA model, ap(531) and ag(531) are derived semi-analytically, in contrast to the QAA retrieval procedure, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551). The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model produces a superior performance to the QAA model in absorption retrieval. Use of the SAA model in retrieving absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating the absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  13. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    Science.gov (United States)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited due to gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses, linear, quasi-linear, and nonlinear, are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  14. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    Science.gov (United States)

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  15. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm⁻³; total [citrate³⁻] = 10 mmol dm⁻³) in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  16. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    Science.gov (United States)

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model: the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain analytical energy spectra and wavefunctions. These analytical results agree well with numerical calculations over a wide range of the parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be considered as shifts to the parameters of the Jaynes-Cummings model. This modification shows the validity of the rotating-wave approximation under the assumption of near-resonance and relatively weak coupling. Moreover, the analytical expressions of several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.
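
The kind of numerical check the abstract refers to can be sketched by diagonalizing the anisotropic Rabi Hamiltonian in a truncated Fock basis and comparing with the Jaynes-Cummings limit (counter-rotating coupling g2 = 0). The parameter values are illustrative, not those of the paper.

```python
import numpy as np

def anisotropic_rabi_H(omega, delta, g1, g2, n_max=20):
    """Anisotropic Rabi Hamiltonian in a truncated Fock basis (field ⊗ spin).
    g1 scales the rotating terms, g2 the counter-rotating terms;
    g2 = 0 recovers the Jaynes-Cummings model."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)   # photon annihilation operator
    ad = a.T
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma_+
    sm = sp.T
    H = (omega * np.kron(ad @ a, np.eye(2))
         + 0.5 * delta * np.kron(np.eye(n_max), sz)
         + g1 * (np.kron(a, sp) + np.kron(ad, sm))   # rotating terms
         + g2 * (np.kron(ad, sp) + np.kron(a, sm)))  # counter-rotating terms
    return H

# Near resonance, weak counter-rotating coupling: spectrum stays close to JC,
# with the counter-rotating terms giving a small (Bloch-Siegert-like) shift.
E_jc = np.linalg.eigvalsh(anisotropic_rabi_H(1.0, 1.0, 0.05, 0.0))
E_aniso = np.linalg.eigvalsh(anisotropic_rabi_H(1.0, 1.0, 0.05, 0.01))
print("ground-state shift from counter-rotating terms:", E_aniso[0] - E_jc[0])
```

In the JC limit the state |0, down⟩ is decoupled, so the ground energy is exactly −Δ/2; switching on a small g2 couples it to |1, up⟩ and pushes the ground energy slightly lower, consistent with treating the counter-rotating terms as parameter shifts.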

  17. Analytical calculation of detailed model parameters of cast resin dry-type transformers

    International Nuclear Information System (INIS)

    Eslamian, M.; Vahidi, B.; Hosseinian, S.H.

    2011-01-01

    Highlights: → In this paper the high-frequency behavior of cast resin dry-type transformers was simulated. → Parameters of the detailed model were calculated using an analytical method and compared with FEM results. → A lab transformer was constructed in order to compare theoretical and experimental results. -- Abstract: The non-flammable characteristic of cast resin dry-type transformers makes them suitable for many different applications. This paper presents an analytical method for obtaining the parameters of the detailed model of these transformers. The calculated parameters are compared and verified against the corresponding FEM results and, where necessary, correction factors are introduced to modify the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing the high-voltage winding of a cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.

  18. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on operating system performance of network hosts when subjected to Gigabit network traffic. Under heavy network traffic, the system performance will be negatively affected due to interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. Also, user applications may livelock as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part both models give mathematically-equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, stability condition, CPU utilization for interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems when subjected to light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
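
The livelock behavior described above can be illustrated with a much simpler fluid sketch than the paper's Markov models: interrupt handling has strict priority and consumes a CPU fraction λ·t_int, leaving the remainder for protocol processing. The per-packet costs below are assumed, not measured values.

```python
def throughput(lam, t_int=2e-6, t_proto=8e-6):
    """Deliverable packet throughput (pkts/s) for an interrupt-driven host.

    Simple fluid sketch (not the paper's exact Markov model): interrupt
    handling has strict priority and consumes a CPU fraction lam * t_int;
    whatever remains is available for protocol processing.
    """
    cpu_for_interrupts = lam * t_int
    if cpu_for_interrupts >= 1.0:
        return 0.0                 # livelock: the CPU does nothing but interrupts
    capacity = (1.0 - cpu_for_interrupts) / t_proto
    return min(lam, capacity)

for lam in (50e3, 100e3, 200e3, 400e3, 499e3):
    print(f"arrival {lam:>9,.0f} pkt/s -> delivered {throughput(lam):>9,.0f} pkt/s")
```

Even this crude model reproduces the qualitative curve: throughput tracks the arrival rate up to a saturation point, then declines toward zero as interrupts monopolize the CPU.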

  19. Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET

    Science.gov (United States)

    Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-02-01

    In this paper, a 2-D analytical model of a triple-metal hetero-dielectric DG TFET is presented, combining the concepts of triple-material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both the front- and back-gate electrodes to modulate the barrier at the source/channel and channel/drain interfaces. In addition, the front-gate dielectric consists of high-K HfO2 at the source end and low-K SiO2 at the drain side, whereas the back-gate dielectric is replaced by air to further improve the ON current of the device. The surface potential and electric field of the proposed device are formulated by solving the 2-D Poisson equation with Young's approximation. Based on this electric field expression, the tunneling current is obtained by using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated against TCAD simulation results to establish the accuracy of the proposed model.

  20. A three-dimensional (3D) analytical model for subthreshold characteristics of uniformly doped FinFET

    Science.gov (United States)

    Tripathi, Shweta; Narendar, Vadthiya

    2015-07-01

    In this paper, a three-dimensional (3D) analytical model for the subthreshold characteristics of doped FinFETs is presented. The separation-of-variables technique is used to solve the 3D Poisson equation analytically with appropriate boundary conditions so as to obtain an expression for the channel potential. The potential distribution function thus obtained is then employed in deriving the subthreshold current and subthreshold slope models. The channel potential characteristics have been studied as a function of various device parameters such as gate length, gate oxide thickness and channel doping. The proposed analytical model results have been validated by comparison with simulation data obtained from the 3D device simulator ATLAS™ from Silvaco.

  1. Analytical model of a burst assembly algorithm for the VBR in the OBS networks

    International Nuclear Information System (INIS)

    Shargabi, M.A.A.; Mellah, H.; Abid, A.

    2008-01-01

    This paper presents a proposed analytical model for the number of bursts aggregated in a period of time in OBS networks. The model considers the case of VBR traffic with two different sending rates, the SCR and the PCR. The model is validated using extensive simulations, whose results are in total agreement with the results obtained by the proposed model. (author)

  2. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    Science.gov (United States)

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate the radioactivity in spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated by a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The results of the radioactivity measurement with both detectors using the advanced analytical procedure received the "Accepted" status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    International Nuclear Information System (INIS)

    Li Jin; Liu Hongxia; Li Bin; Cao Lei; Yuan Bo

    2010-01-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and the Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step profile, which can suppress the SCE, HCE and DIBL. Also, the strained-Si and SOI structures can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a 'roll-up' in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations. (semiconductor devices)

  4. Risk analysis by FMEA as an element of analytical validation.

    Science.gov (United States)

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs on authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O × D × S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
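
The RPN ranking step is simple enough to sketch directly. The failure modes and scores below are hypothetical illustrations, not those identified by the cited study.

```python
# Hypothetical failure modes for an NIR screening procedure (illustrative
# scores, not those of the cited study), each scored 1-10 on occurrence (O),
# detection (D) and severity (S).
failure_modes = [
    # (description,                      O, D, S)
    ("wrong sample placed in holder",     4, 6, 8),
    ("operator skips background scan",    3, 5, 6),
    ("spectral library out of date",      2, 7, 9),
    ("instrument drift not detected",     2, 4, 5),
]

# RPN = O x D x S; sort descending so the riskiest modes come first.
ranked = sorted(
    ((desc, o * d * s) for desc, o, d, s in failure_modes),
    key=lambda t: -t[1],
)
for desc, rpn in ranked:
    print(f"RPN {rpn:>3}  {desc}")
```

The modes at the top of the ranking are the ones targeted for corrective action; rerunning the scoring after the fix gives the improvement index (old RPN divided by new RPN) mentioned in the abstract.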

  5. Risk analysis of analytical validations by probabilistic modification of FMEA.

    Science.gov (United States)

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling not only the detection of technical risks, but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies and maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are reinterpreted using this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure modes can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
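
The probabilistic reinterpretation can be sketched as follows: categorical O and D scores are replaced by estimated relative frequencies, so the per-run frequency of an undetected failure becomes a simple product, and the modes combine multiplicatively for the whole procedure. All numbers below are made up for illustration.

```python
import math

# Hypothetical failure modes with estimated relative frequencies per analysis
# run (illustrative numbers): f_occ = P(failure occurs), p_miss = P(it is not
# detected later in the process). Severity keeps its categorical score.
modes = [
    # (description,                f_occ,  p_miss, severity 1-10)
    ("sample mix-up",              0.010,  0.20,   9),
    ("wrong integration settings", 0.050,  0.05,   6),
    ("calibration expired",        0.002,  0.50,   8),
]

for desc, f_occ, p_miss, sev in modes:
    p_undetected = f_occ * p_miss   # per-run frequency of an undetected failure
    print(f"{desc}: P(undetected) = {p_undetected:.4f}, severity {sev}")

# Per-run probability that at least one failure mode goes undetected,
# assuming the modes occur independently.
p_any = 1.0 - math.prod(1.0 - f * m for _, f, m, _ in modes)
print(f"P(any undetected failure per run) = {p_any:.4f}")
```

Unlike the categorical RPN, these quantities have a direct frequency interpretation: here roughly one run in 180 would pass with an undetected failure, which can be compared against an acceptance criterion.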

  6. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    Science.gov (United States)

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used in a partial least squares (PLS) projection to latent structures technique in order to study their chromatographic interaction mechanism on a phenyl column. In the model, 240 different retention times of the analytes, expressed as the Y variable (log k) at different % MeOH mobile-phase concentrations, have been correlated with their theoretically most important structural or molecular descriptors. The goodness-of-fit was estimated by the coefficient of multiple determination r(2) (0.919) and the root mean square error of estimation (RMSEE = 0.1283), with a predictive ability (Q(2)) of 0.901. The model was further validated using cross-validation (CV), by 20 response permutations r(2) (0.0, 0.0146), Q(2) (0.0, -0.136), and by external prediction. The contribution of certain interaction mechanisms between the analytes, the mobile phase and the column, proportional or counterbalancing, is also studied. In evaluating the influence on Y of every variable in a PLS model, the VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Modeling Run Test Validity: A Meta-Analytic Approach

    National Research Council Canada - National Science Library

    Vickers, Ross

    2002-01-01

    .... This study utilized data from 166 samples (N = 5,757) to test the general hypothesis that differences in testing methods could account for the cross-situational variation in validity. Only runs >2 km...

  8. The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.

    Science.gov (United States)

    Kim, Jongphil; Lee, Ji-Hyun

    2017-01-01

    The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated binomial random variable, X. However, model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape for the maximum likelihood estimator (MLE) based on a beta-binomial model approach and the method of moments estimator (MME), in order to gauge when and how much the MLE is biased.
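
The method of moments estimator for the beta-binomial can be sketched in a few lines using the standard first- and second-moment formulas; the count data below are made up for illustration.

```python
def beta_binomial_mme(xs, n):
    """Method-of-moments estimates of (alpha, beta) for X ~ BetaBinomial(n, a, b),
    from the first two sample moments m1 and m2."""
    m1 = sum(xs) / len(xs)
    m2 = sum(x * x for x in xs) / len(xs)
    denom = n * (m2 / m1 - m1 - 1) + m1
    alpha = (n * m1 - m2) / denom
    beta = (n - m1) * (n - m2 / m1) / denom
    return alpha, beta

# Illustrative overdispersed counts out of n = 10 trials each: the spread is
# wider than a plain binomial with the same mean would allow.
xs = [0, 1, 1, 2, 3, 3, 4, 6, 8, 9]
a_hat, b_hat = beta_binomial_mme(xs, 10)
mean_fit = 10 * a_hat / (a_hat + b_hat)   # fitted mean n*a/(a+b)
print(f"alpha = {a_hat:.3f}, beta = {b_hat:.3f}, fitted mean = {mean_fit:.3f}")
```

By construction the MME reproduces the sample mean exactly; the MLE, found by maximizing the beta-binomial likelihood numerically, generally differs from the MME, and the abstract's MSE comparison quantifies when that difference reflects MLE bias.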

  9. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling; O'Brien, James

    2016-11-01

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use the Terry turbine, which is composed of a wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories’ original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity at the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data and

  10. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Science.gov (United States)

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior, even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
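
The basic pooling step behind this kind of meta-analytic validity estimate can be sketched with a bare-bones, Hunter-Schmidt-style calculation: a sample-size-weighted mean correlation, with the observed between-study variance decomposed into sampling error and residual variance. The study data below are fabricated for illustration; the formulas are the textbook approximations, not the exact procedure of the cited paper.

```python
# Bare-bones psychometric meta-analysis sketch with made-up study data:
# (sample size N, observed correlation r).
studies = [(120, 0.30), (340, 0.22), (75, 0.41), (500, 0.25), (210, 0.18)]

total_n = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / total_n            # N-weighted mean r
var_obs = sum(n * (r - r_bar) ** 2 for n, r in studies) / total_n
k = len(studies)
var_err = (1 - r_bar ** 2) ** 2 * k / total_n               # approx. sampling-error variance
var_rho = max(0.0, var_obs - var_err)                       # residual ("true") variance

print(f"weighted mean r = {r_bar:.3f}")
print(f"observed variance = {var_obs:.5f}, sampling-error variance = {var_err:.5f}")
print(f"residual variance = {var_rho:.5f}")
```

When the residual variance is near zero, sampling error alone explains the spread of correlations across studies; a substantial residual suggests moderators worth investigating.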

  11. Consistency of FMEA used in the validation of analytical procedures.

    Science.gov (United States)

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define their own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified failure modes above the 90th percentile of RPN values as needing urgent corrective action, and failure modes falling between the 75th and 90th percentiles of RPN values as needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action, respectively, with two being commonly identified. Of the failure modes needing necessary corrective actions, about a third were commonly identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that an FMEA is always carried out under the supervision of an experienced FMEA facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    Science.gov (United States)

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
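
    The gamma-index evaluation referred to in this abstract combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch follows, using the 0.1 mm DTA quoted for mouse irradiation and an assumed 3% dose criterion; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def gamma_index_1d(x_ref, dose_ref, x_eval, dose_eval, dta=0.1, dd=0.03):
    """1D gamma index with distance-to-agreement `dta` (mm) and a dose
    criterion `dd` expressed as a fraction of the maximum reference dose."""
    dd_abs = dd * dose_ref.max()
    gamma = np.empty_like(dose_eval)
    for i, (xe, de) in enumerate(zip(x_eval, dose_eval)):
        # Generalized distance in the combined (position, dose) space;
        # gamma <= 1 at a point means the evaluated dose passes there.
        dist2 = ((x_ref - xe) / dta) ** 2 + ((dose_ref - de) / dd_abs) ** 2
        gamma[i] = np.sqrt(dist2.min())
    return gamma

# Identical depth-dose profiles -> gamma = 0 everywhere, a perfect pass
x = np.linspace(0.0, 5.0, 51)                 # depth (mm)
profile = np.exp(-((x - 2.5) ** 2) / 0.5)     # toy dose profile
g = gamma_index_1d(x, profile, x, profile, dta=0.1)
```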

  13. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  14. AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES

    International Nuclear Information System (INIS)

    Robinson, Tyler D.; Catling, David C.

    2012-01-01

    We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric-pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
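
    In its radiative region, a gray two-stream model of this kind reduces to the classical radiative-equilibrium profile. The sketch below combines that textbook result with the power-law pressure-optical-depth scaling described above; the parameter values are assumed, and the published model additionally treats sunlight attenuation and convective regions, which this sketch omits:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def gray_radiative_T(p, p0, tau0, n, f_net, d=1.66):
    """Temperature profile of the radiative region of a gray atmosphere:
    sigma * T^4 = (f_net / 2) * (1 + d * tau), with the power-law scaling
    tau = tau0 * (p / p0)**n between pressure and thermal optical depth.
    `d` is the two-stream diffusivity factor (assumed value)."""
    tau = tau0 * (p / p0) ** n
    return ((f_net / (2.0 * SIGMA)) * (1.0 + d * tau)) ** 0.25

# At the top of the atmosphere (tau -> 0) the profile recovers the
# classical "skin" temperature T_eff / 2**0.25.
p = np.array([1.0, 1e4, 1e5])   # Pa
T = gray_radiative_T(p, p0=1e5, tau0=2.0, n=2.0, f_net=240.0)
```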

  15. Analytical model of tilted driver–pickup coils for eddy current nondestructive evaluation

    Science.gov (United States)

    Cao, Bing-Hua; Li, Chao; Fan, Meng-Bao; Ye, Bo; Tian, Gui-Yun

    2018-03-01

    A driver-pickup probe offers better sensitivity and flexibility because each coil can be optimized individually, and this configuration is frequently found in eddy current (EC) array probes. In this work, a tilted non-coaxial driver-pickup probe above a multilayered conducting plate is analytically modeled with a spatial transformation for eddy current nondestructive evaluation. The core of the formulation is to obtain the projection of the magnetic vector potential (MVP) from the driver coil onto the vector along the tilted pickup coil, which is divided into two key steps: the first is to project the MVP along the pickup coil onto a horizontal plane, and the second is to build the relationship between the projected MVP and the MVP along the driver coil. Afterwards, an analytical model for the case of a layered plate is established with the reflection and transmission theory of electromagnetic fields. The calculated values from the resulting model agree well with those from a finite element model (FEM) and experiments, which validates the developed analytical model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61701500, 51677187, and 51465024).

  16. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  17. Vibration Based Diagnosis for Planetary Gearboxes Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Liu Hong

    2016-01-01

    Full Text Available The application of conventional vibration based diagnostic techniques to planetary gearboxes is a challenge because of the complexity of frequency components in the measured spectrum, which is the result of relative motions between the rotary planets and the fixed accelerometer. In practice, since the fault signatures are usually contaminated by noises and vibrations from other mechanical components of gearboxes, the diagnostic efficacy may further deteriorate. Thus, it is essential to develop a novel vibration based scheme to diagnose gear failures for planetary gearboxes. Following a brief literature review, the paper begins with the introduction of an analytical model of planetary gear-sets developed by the authors in previous works, which can predict the distinct behaviors of fault introduced sidebands. This analytical model is easy to implement because the only prerequisite information is the basic geometry of the planetary gear-set. Afterwards, an automated diagnostic scheme is proposed to cope with the challenges associated with the characteristic configuration of planetary gearboxes. The proposed vibration based scheme integrates the analytical model, a denoising algorithm, and frequency domain indicators into one synergistic system for the detection and identification of damaged gear teeth in planetary gearboxes. Its performance is validated with the dynamic simulations and the experimental data from a planetary gearbox test rig.
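
    As the abstract notes, the only prerequisite information is the basic geometry of the gear-set. The characteristic frequencies needed to locate fault sidebands can be sketched with standard fixed-ring formulas; the tooth counts and sun speed below are hypothetical, and this is not the authors' full dynamic model:

```python
def planetary_frequencies(f_sun, z_sun, z_ring, z_planet, n_planets):
    """Characteristic frequencies (Hz) of a planetary gear-set with a
    fixed ring gear, derived from tooth counts and the sun speed only."""
    f_carrier = f_sun * z_sun / (z_sun + z_ring)    # carrier rotation
    f_mesh = z_ring * f_carrier                     # gear mesh frequency
    return {
        "carrier": f_carrier,
        "mesh": f_mesh,
        "sun_fault": n_planets * f_mesh / z_sun,    # local sun-gear fault
        "planet_fault": f_mesh / z_planet,          # single-flank planet fault
        "ring_fault": n_planets * f_mesh / z_ring,  # local ring-gear fault
    }

# Hypothetical gear-set: 20 Hz sun speed, 20/30/80 tooth counts, 4 planets
freqs = planetary_frequencies(f_sun=20.0, z_sun=20, z_ring=80, z_planet=30,
                              n_planets=4)
```

    Damaged teeth then show up as sidebands spaced at these fault frequencies around the mesh frequency and its harmonics, which is what the frequency-domain indicators in the proposed scheme look for.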

  18. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    Science.gov (United States)

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  19. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimental, empirically, and analytically derived results. The geometries examined are a bare right cylinder tank and a right cylinder with a single ring baffle.

  20. Steady-state analytical model of suspended p-type 3C-SiC bridges under consideration of Joule heating

    Science.gov (United States)

    Balakrishnan, Vivekananthan; Dinh, Toan; Phan, Hoang-Phuong; Kozeki, Takahiro; Namazu, Takahiro; Viet Dao, Dzung; Nguyen, Nam-Trung

    2017-07-01

    This paper reports an analytical model and its validation for a released microscale heater made of 3C-SiC thin films. A model for the equivalent electrical and thermal parameters was developed for the two-layer multi-segment heat and electric conduction. The model is based on a 1D energy equation, which considers the temperature-dependent resistivity and allows for the prediction of voltage-current and power-current characteristics of the microheater. The steady-state analytical model was validated by experimental characterization. The results, in particular the nonlinearity caused by temperature dependency, are in good agreement. The low power consumption of the order of 0.18 mW at approximately 310 K indicates the potential use of the structure as thermal sensors in portable applications.
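
    The nonlinearity described here arises from temperature-dependent resistance feeding back on Joule heating. A zero-dimensional steady-state sketch of that feedback loop follows; the parameter values (including the assumed negative temperature coefficient) are illustrative, and the paper's actual model is a 1D, two-layer, multi-segment formulation:

```python
def heater_operating_point(i, r0=1000.0, alpha=-2e-3, g_th=1.8e-5,
                           t_amb=300.0, tol=1e-9, max_iter=200):
    """Fixed-point solution of the steady-state balance
    I^2 * R(T) = G_th * (T - T_amb), with the linearized
    temperature-dependent resistance R(T) = R0 * (1 + alpha*(T - T_amb)).
    All parameter values are illustrative, not fitted to the device."""
    t = t_amb
    r = r0
    for _ in range(max_iter):
        r = r0 * (1.0 + alpha * (t - t_amb))
        t_new = t_amb + i * i * r / g_th
        if abs(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return i * r, t  # (voltage in V, temperature in K)

# Operating point at a 0.42 mA drive current; the V-I curve bends away
# from Ohm's law as the element heats up
v, t = heater_operating_point(0.42e-3)
```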

  1. Experimental validation of TASS/SMR-S critical flow model for the integral reactor SMART

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Si Won; Ra, In Sik; Kim, Kun Yeup [ACT Co., Daejeon (Korea, Republic of); Chung, Young Jong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    An advanced integral PWR, SMART (System- Integrated Modular Advanced ReacTor) is being developed in KAERI. It has a compact size and a relatively small power rating (330MWt) compared to a conventional reactor. Because new concepts are applied to SMART, an experimental and analytical validation is necessary for the safety evaluation of SMART. The analytical safety validation is being accomplished by a safety analysis code for an integral reactor, TASS/SMR-S developed by KAERI. TASS/SMR-S uses a lumped parameter one dimensional node and path modeling for the thermal hydraulic calculation and it uses point kinetics for the reactor power calculation. It has models for a general usage such as a core heat transfer model, a wall heat structure model, a critical flow model, component models, and it also has many SMART specific models such as an once through helical coiled steam generator model, and a condensate heat transfer model. To ensure that the TASS/SMR-S code has the calculation capability for the safety evaluation of SMART, the code should be validated for the specific models with the separate effect test experimental results. In this study, TASS/SMR-S critical flow model is evaluated as compared with SMD (Super Moby Dick) experiment

  2. A novel stress distribution analytical model of O-ring seals under different properties of materials

    International Nuclear Information System (INIS)

    Wu, Di; Wang, Shao Ping; Wang, Xing Jian

    2017-01-01

    The elastomeric O-ring seals have been widely used as sealing elements in hydraulic systems. The sealing performance of O-ring seals is related to the stress distribution. The stress distribution depends on the squeeze rate and internal pressure, and varies with the properties of the O-ring material. Thus, in order to study the sealing performance of O-ring seals, it is necessary to describe the analytical relationship between the stress distribution and the material properties. For this purpose, a novel stress distribution analytical model (SDAM) is proposed in this paper. The model utilizes two stress complex functions to describe the stress distribution of O-ring seals. The proposed SDAM can express the analytical relationship not only between the stress distribution and Young's modulus, but also between the stress distribution and Poisson's ratio. Finally, a comparison between finite element analysis and the SDAM validates that the proposed model can effectively reveal the stress distribution under different O-ring material properties.

  3. A novel stress distribution analytical model of O-ring seals under different properties of materials

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Di; Wang, Shao Ping; Wang, Xing Jian [School of Automation Science and Electrical Engineering, Beihang University, Beijing (China)

    2017-01-15

    The elastomeric O-ring seals have been widely used as sealing elements in hydraulic systems. The sealing performance of O-ring seals is related to the stress distribution. The stress distribution depends on the squeeze rate and internal pressure, and varies with the properties of the O-ring material. Thus, in order to study the sealing performance of O-ring seals, it is necessary to describe the analytical relationship between the stress distribution and the material properties. For this purpose, a novel stress distribution analytical model (SDAM) is proposed in this paper. The model utilizes two stress complex functions to describe the stress distribution of O-ring seals. The proposed SDAM can express the analytical relationship not only between the stress distribution and Young's modulus, but also between the stress distribution and Poisson's ratio. Finally, a comparison between finite element analysis and the SDAM validates that the proposed model can effectively reveal the stress distribution under different O-ring material properties.

  4. Risk analysis of analytical validations by probabilistic modification of FMEA

    DEFF Research Database (Denmark)

    Barends, D.M.; Oldenhof, M.T.; Vredenbregt, M.J.

    2012-01-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling detection not only of technical risks, but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection...... and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequency and maintaining the categorical scoring...... of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeited tablets are re-interpreted by this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence

  5. Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM

    Directory of Open Access Journals (Sweden)

    Łapczyński Mariusz

    2014-08-01

    Full Text Available Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models), or customers intending to resign from the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines, with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). In the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures: accuracy, precision, recall, G-mean, F-measure, and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
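
    Two of the less common quality measures listed, G-mean and decile lift, can be sketched as follows on synthetic data (not the study's datasets):

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity (recall on class 1) and specificity."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))

def lift_at_decile(y_true, y_score, decile=1):
    """Response rate among the top `decile` tenth of scored cases,
    divided by the overall response rate."""
    order = np.argsort(y_score)[::-1]          # highest scores first
    n_top = int(round(len(y_true) * decile / 10))
    return y_true[order][:n_top].mean() / y_true.mean()

# Synthetic example: 2 responders out of 10, both ranked on top
y_true = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_score = np.linspace(1.0, 0.1, 10)
lift1 = lift_at_decile(y_true, y_score, decile=1)
```

    A lift of 5 in the first decile means the top-scored 10% of customers respond five times as often as a randomly chosen customer, which is how such models are typically judged in analytical CRM.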

  6. A closed-form analytical model for predicting 3D boundary layer displacement thickness for the validation of viscous flow solvers

    Science.gov (United States)

    Kumar, V. R. Sanal; Sankar, Vigneshwaran; Chandrasekaran, Nichith; Saravanan, Vignesh; Natarajan, Vishnu; Padmanabhan, Sathyan; Sukumaran, Ajith; Mani, Sivabalan; Rameshkumar, Tharikaa; Nagaraju Doddi, Hema Sai; Vysaprasad, Krithika; Sharan, Sharad; Murugesh, Pavithra; Shankar, S. Ganesh; Nejaamtheen, Mohammed Niyasdeen; Baskaran, Roshan Vignesh; Rahman Mohamed Rafic, Sulthan Ariff; Harisrinivasan, Ukeshkumar; Srinivasan, Vivek

    2018-02-01

    A closed-form analytical model is developed for estimating the 3D boundary-layer displacement thickness of an internal flow system at the Sanal flow choking condition for adiabatic flows obeying the physics of compressible viscous fluids. At this unique condition, the boundary-layer-blockage-induced fluid-throat choking and the adiabatic wall-friction-induced flow choking occur at a single sonic-fluid-throat location. The beauty and novelty of this model is that, without missing the flow physics, it predicts the exact boundary-layer blockage of both 2D and 3D cases at the sonic fluid throat from the known values of the inlet Mach number, the adiabatic index of the gas, and the inlet port diameter of the internal flow system. We found that the 3D blockage factor is 47.33% lower than the 2D blockage factor with air as the working fluid. We concluded that the exact prediction of the boundary-layer displacement thickness at the sonic fluid throat provides a means to correctly pinpoint the causes of errors in viscous flow solvers. The state-of-the-art methodology presented herein will play a pivotal role in future physical and biological sciences for the credible verification, calibration, and validation of viscous flow solvers for high-fidelity 2D/3D numerical simulations of real-world flows. Furthermore, our closed-form analytical model will be useful to solid and hybrid rocket designers for the grain-port-geometry optimization of new-generation single-stage-to-orbit dual-thrust motors with the highest promising propellant loading density within the given envelope, without manifestation of the Sanal flow choking leading to possible shock waves causing catastrophic failures.

  7. A closed-form analytical model for predicting 3D boundary layer displacement thickness for the validation of viscous flow solvers

    Directory of Open Access Journals (Sweden)

    V. R. Sanal Kumar

    2018-02-01

    Full Text Available A closed-form analytical model is developed for estimating the 3D boundary-layer displacement thickness of an internal flow system at the Sanal flow choking condition for adiabatic flows obeying the physics of compressible viscous fluids. At this unique condition, the boundary-layer-blockage-induced fluid-throat choking and the adiabatic wall-friction-induced flow choking occur at a single sonic-fluid-throat location. The beauty and novelty of this model is that, without missing the flow physics, it predicts the exact boundary-layer blockage of both 2D and 3D cases at the sonic fluid throat from the known values of the inlet Mach number, the adiabatic index of the gas, and the inlet port diameter of the internal flow system. We found that the 3D blockage factor is 47.33% lower than the 2D blockage factor with air as the working fluid. We concluded that the exact prediction of the boundary-layer displacement thickness at the sonic fluid throat provides a means to correctly pinpoint the causes of errors in viscous flow solvers. The state-of-the-art methodology presented herein will play a pivotal role in future physical and biological sciences for the credible verification, calibration, and validation of viscous flow solvers for high-fidelity 2D/3D numerical simulations of real-world flows. Furthermore, our closed-form analytical model will be useful to solid and hybrid rocket designers for the grain-port-geometry optimization of new-generation single-stage-to-orbit dual-thrust motors with the highest promising propellant loading density within the given envelope, without manifestation of the Sanal flow choking leading to possible shock waves causing catastrophic failures.

  8. Analytical modelling of hydrogen transport in reactor containments

    International Nuclear Information System (INIS)

    Manno, V.P.

    1983-09-01

    A versatile computational model of hydrogen transport in nuclear plant containment buildings is developed. The background and significance of hydrogen-related nuclear safety issues are discussed. A computer program is constructed that embodies the analytical models. The thermofluid dynamic formulation spans a wide applicability range from rapid two-phase blowdown transients to slow incompressible hydrogen injection. Detailed ancillary models of molecular and turbulent diffusion, mixture transport properties, multi-phase multicomponent thermodynamics and heat sink modelling are addressed. The numerical solution of the continuum equations emphasizes both accuracy and efficiency in the employment of relatively coarse discretization and long time steps. Reducing undesirable numerical diffusion is addressed. Problem geometry options include lumped parameter zones, one dimensional meshs, two dimensional Cartesian or axisymmetric coordinate systems and three dimensional Cartesian or cylindrical regions. An efficient lumped nodal model is included for simulation of events in which spatial resolution is not significant. Several validation calculations are reported

  9. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can degrade an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation.
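
    The AHP component of such a hybrid model derives criterion weights from a pairwise comparison matrix via the principal-eigenvector method, with a consistency check on the judgments. A minimal sketch follows; the comparison matrix and criteria are hypothetical, not taken from the paper:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and Saaty consistency ratio (CR) from an AHP
    pairwise comparison matrix, via the principal eigenvector."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)                 # Perron (largest) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalized priority weights
    # Random-index values for Saaty's consistency ratio
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    ci = (eigvals.real[k] - n) / (n - 1)
    cr = ci / ri if ri else 0.0
    return w, cr

# Three hypothetical vendor-selection criteria (cost, quality, delivery)
# compared pairwise on Saaty's 1-9 scale
m = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(m)
```

    A CR below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.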

  10. An independent verification and validation of the Future Theater Level Model conceptual model

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not been previously reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with develop of the model.

  11. Principles of Single-Laboratory Validation of Analytical Methods for Testing the Chemical Composition of Pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Underlying theoretical and practical approaches towards pesticide formulation analysis are discussed, i.e. general principles, performance characteristics, applicability of validation data, verification of method performance, and adaptation of validated methods by other laboratories. The principles of single laboratory validation of analytical methods for testing the chemical composition of pesticides are outlined. Also the theoretical background is described for performing pesticide formulation analysis as outlined in ISO, CIPAC/AOAC and IUPAC guidelines, including methodological characteristics such as specificity, selectivity, linearity, accuracy, trueness, precision and bias. Appendices I–III hereof give practical and elaborated examples on how to use the Horwitz approach and formulae for estimating the target standard deviation towards acceptable analytical repeatability. The estimation of trueness and the establishment of typical within-laboratory reproducibility are treated in greater detail by means of worked-out examples. (author)
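
    The Horwitz approach mentioned for the appendices predicts a target reproducibility RSD from the analyte mass fraction alone; a sketch follows (the two-thirds factor for within-laboratory repeatability is a common rule of thumb, assumed here rather than quoted from the guideline):

```python
import math

def horwitz_rsd(c):
    """Horwitz predicted between-laboratory reproducibility RSD (%) for
    analyte mass fraction c (e.g. c = 0.25 for 25 % w/w active ingredient)."""
    return 2 ** (1 - 0.5 * math.log10(c))

def target_repeatability(c):
    """Rule of thumb: within-laboratory repeatability of roughly
    two-thirds of the Horwitz reproducibility (assumed factor)."""
    return (2 / 3) * horwitz_rsd(c)

print(round(horwitz_rsd(0.25), 2))  # 2.46 % for a 25 % w/w formulation
```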

  12. Cellular Scanning Strategy for Selective Laser Melting: Capturing Thermal Trends with a Low-Fidelity, Pseudo-Analytical Model

    Directory of Open Access Journals (Sweden)

    Sankhya Mohanty

    2014-01-01

    Full Text Available Simulations of additive manufacturing processes are known to be computationally expensive. The resulting long runtimes prohibit their use in secondary analyses that require several complete simulations, such as optimization studies and sensitivity analysis. In this paper, a low-fidelity pseudo-analytical model is introduced to enable such secondary analyses. The model mimics a finite element model and captures the thermal trends associated with the process. The model has been validated and subsequently applied in a small optimization case study. The pseudo-analytical modelling technique is established as a fast tool for primary modelling investigations.

  13. Analytical Plan for Roman Glasses

    Energy Technology Data Exchange (ETDEWEB)

    Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.; Schwantes, Jon M.; Olszta, Matthew J.; Thevuthasan, Suntharampillai; Heeren, Ronald M.

    2011-01-01

    Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.

  14. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
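
    A weighted least-squares calibration of the kind validated here can be sketched with NumPy. The calibration data and the common 1/x² weighting are illustrative assumptions; the study selected its weighting from the observed variance structure of the analytical data:

```python
import numpy as np

# Hypothetical calibration data (concentration in ng/L vs. peak area)
# with scatter that grows with concentration, i.e. heteroscedastic.
x = np.array([25.0, 50.0, 100.0, 250.0, 500.0, 1000.0])
y = np.array([530.0, 990.0, 2080.0, 5100.0, 10300.0, 20900.0])

# np.polyfit minimizes sum((w_i * r_i)**2), so passing w = 1/x applies
# the common 1/x^2 weighting to the squared residuals, preventing the
# high-concentration points from dominating the fit.
slope, intercept = np.polyfit(x, y, 1, w=1.0 / x)

# Back-calculate the concentration of an unknown from its peak area
area_unknown = 3000.0
conc_unknown = (area_unknown - intercept) / slope
```

    With homoscedastic data an unweighted fit would suffice; when the assumption fails, as in this study, weighting keeps the relative error acceptable at the low end of the curve, near the quantification limit.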

  15. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  16. SEMICONDUCTOR DEVICES: Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    Science.gov (United States)

    Jin, Li; Hongxia, Liu; Bin, Li; Lei, Cao; Bo, Yuan

    2010-08-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step potential, which can suppress SCE, HCE and DIBL. Also, strained-Si and SOI structure can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a "roll-up" in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations.

  17. Analytic model for ultrasound energy receivers and their optimal electric loads

    Science.gov (United States)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for thickness-resonating plate ultrasound energy receivers, which we have derived from the piezoelectric and wave equations and in which we have included dielectric, viscosity and acoustic attenuation losses. Afterwards, we explore the optimal electric load predictions of the zero-reflection and power-maximization approaches present in the literature with different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the KLM model solved numerically, with very good agreement. Finally, we discuss the differences between the zero-reflection and power-maximization optimal electric loads, which start to differ as losses in the receiver increase.
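
The practical difference between the two load-selection approaches can be illustrated with a plain Thévenin sketch: for power maximization the load is the complex conjugate of the receiver's internal impedance. The impedance values below are invented for illustration and are not outputs of the paper's analytic model:

```python
# Sketch: power delivered to an electric load from a receiver reduced to a
# Thevenin source. The internal impedance z_s is an assumed placeholder.
def delivered_power(v_s, z_s, z_l):
    """Average power in load z_l driven by source v_s with internal z_s."""
    i = v_s / (z_s + z_l)                 # complex branch current (peak)
    return 0.5 * abs(i) ** 2 * z_l.real   # P = |I|^2 * R_L / 2

z_s = 50 + 30j                 # assumed lossy receiver impedance [ohm]
v_s = 1.0                      # 1 V peak open-circuit voltage
z_opt = z_s.conjugate()        # conjugate match maximizes delivered power

p_match = delivered_power(v_s, z_s, z_opt)
p_resistive = delivered_power(v_s, z_s, 50 + 0j)
assert p_match > p_resistive   # the reactive part matters once losses appear
```

As losses in the receiver grow, the real part of z_s grows and the conjugate-matched load drifts away from the zero-reflection prediction, which is the divergence the abstract describes.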

  18. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    Science.gov (United States)

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

    Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels and is to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT.
The results provide the analytical evidence to support the implementation of the novel multi-marker test as

  19. Short-Term Predictive Validity of Cluster Analytic and Dimensional Classification of Child Behavioral Adjustment in School

    Science.gov (United States)

    Kim, Sangwon; Kamphaus, Randy W.; Baker, Jean A.

    2006-01-01

    A constructive debate over the classification of child psychopathology can be stimulated by investigating the validity of different classification approaches. We examined and compared the short-term predictive validity of cluster analytic and dimensional classifications of child behavioral adjustment in school using the Behavior Assessment System…

  20. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care, a key tool in accountable care. When considering analytics models, healthcare providers should: make value-based care a priority and act on information from analytics models; create a road map that includes achievable steps, rather than major endeavors; and set long-term expectations, recognizing that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  1. Assessment model validity document - HYDRASTAR. A stochastic continuum program for groundwater flow

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Eriksson, Lars [Equa Simulation AB, Sundbyberg (Sweden)

    2001-12-01

    The present document addresses validation of the stochastic continuum model HYDRASTAR, designed for Monte Carlo simulations of groundwater flow in fractured rocks. Here, validation is defined as a process to demonstrate that a model concept is fit for its purpose. Preferably, validation is carried out by comparison of model predictions with independent field observations and experimental measurements. In addition, other sources can be used to confirm that the model concept gives acceptable results. One method is to compare results with those achieved using other model concepts for the same set of input data. Another method is to compare model results with analytical solutions. The model concept HYDRASTAR has been used in several studies, including performance assessments of hypothetical repositories for spent nuclear fuel. In the performance assessments, the main tasks for HYDRASTAR have been to calculate groundwater travel time distributions, repository flux distributions, path lines and their exit locations. The results have then been used by other model concepts to calculate the near-field release and far-field transport. The aim and framework for the validation process include describing the applicability of the model concept for its purpose in order to build confidence in the concept. Preferably, this is done by comparing simulation results with the corresponding field experiments or field measurements. Here, two comparisons with experimental results are reported. In both cases the agreement was reasonably fair. In the broader and more general context of the validation process, HYDRASTAR results have been compared with other models and analytical solutions. Commonly, the approximate calculations agree well with the medians of the model ensemble results. Additional indications that HYDRASTAR is suitable for its purpose were obtained from the comparisons with results from other model concepts. Several verification studies have been made for

  2. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach is proposed to make this determination. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
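
The conformity check on stochastic realizations can be sketched as a simple fraction-accepted metric. The RMSE criterion, threshold, and data below are illustrative assumptions, not the report's actual five metrics or decision tree:

```python
# Sketch of one acceptance metric for stochastic realizations: the fraction
# whose RMSE against validation (field) data is within a tolerance. The
# criterion and threshold are hypothetical stand-ins for the report's metrics.
def acceptable_fraction(realizations, field_data, rmse_tol):
    def rmse(sim):
        n = len(field_data)
        return (sum((s - f) ** 2 for s, f in zip(sim, field_data)) / n) ** 0.5
    accepted = sum(1 for r in realizations if rmse(r) <= rmse_tol)
    return accepted / len(realizations)

field = [1.0, 2.0, 3.0]                      # hypothetical head observations
reals = [[1.1, 2.0, 2.9], [1.0, 2.1, 3.2], [3.0, 0.5, 1.0]]
frac = acceptable_fraction(reals, field, rmse_tol=0.2)
passes = frac >= 0.5    # decision rule: enough realizations must conform
```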

  3. Unified analytical threshold voltage model for non-uniformly doped dual metal gate fully depleted silicon-on-insulator MOSFETs

    Science.gov (United States)

    Rao, Rathnamala; Katti, Guruprasad; Havaldar, Dnyanesh S.; DasGupta, Nandita; DasGupta, Amitava

    2009-03-01

    The paper describes a unified analytical threshold voltage model for non-uniformly doped, dual metal gate (DMG) fully depleted silicon-on-insulator (FDSOI) MOSFETs based on the solution of the 2D Poisson's equation. The 2D Poisson's equation is solved analytically for appropriate boundary conditions using the separation of variables technique. The solution is then extended to obtain the threshold voltage of the FDSOI MOSFET. The model is able to handle any kind of non-uniform doping, viz. vertical, lateral, as well as laterally asymmetric channel (LAC) profiles in the SOI film, in addition to the DMG structure. The analytical results are validated against numerical simulations using the device simulator MEDICI.

  4. Analytical Modelling of Wireless Power Transfer (WPT) Systems for Electric Vehicle Application

    Energy Technology Data Exchange (ETDEWEB)

    Chinthavali, Madhu Sudhan [ORNL; Campbell, Steven L [ORNL

    2016-01-01

    This paper presents an analytical model for a wireless power transfer system used in an electric vehicle application. The equivalent circuit model for each major component of the system is described, including the input voltage source, resonant network, transformer, and nonlinear diode rectifier load. Based on the circuit model, the primary-side compensation capacitance, equivalent input impedance, and active/reactive power are calculated, which provides a guideline for parameter selection. Moreover, the dc-to-dc voltage gain curve is derived as well. A hardware prototype with a series-parallel resonant stage is built to verify the developed model. The experimental results from the hardware are compared with the model-predicted results to show the validity of the model.
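
The equivalent-circuit calculation for a series (primary) / parallel (secondary) resonant link can be sketched with complex impedances: the secondary is reflected into the primary as (ωM)²/Z_sec. All component values below are illustrative, not those of the paper's hardware prototype:

```python
import math

# Sketch of a series-parallel resonant WPT link reduced to its equivalent
# circuit. Component values are assumed, not the paper's prototype.
def sp_link(v_in, f, l1, c1, l2, c2, m, r_load):
    w = 2 * math.pi * f
    z_c2 = 1 / (1j * w * c2)
    z_par = z_c2 * r_load / (z_c2 + r_load)   # C2 in parallel with the load
    z_sec = 1j * w * l2 + z_par               # total secondary impedance
    z_ref = (w * m) ** 2 / z_sec              # impedance reflected to primary
    z_in = 1j * w * l1 + 1 / (1j * w * c1) + z_ref
    i1 = v_in / z_in                          # primary resonant-tank current
    v_out = 1j * w * m * i1 * z_par / z_sec   # divider on the secondary side
    return z_in, abs(v_out) / abs(v_in)       # input impedance, voltage gain

z_in, gain = sp_link(v_in=100.0, f=85e3, l1=120e-6, c1=29.2e-9,
                     l2=120e-6, c2=29.2e-9, m=30e-6, r_load=10.0)
```

Sweeping `f` in such a sketch traces the voltage gain curve the paper derives, and `z_in` shows whether the source sees an inductive or capacitive load at each frequency.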

  5. Analytical validation of a melanoma diagnostic gene signature using formalin-fixed paraffin-embedded melanocytic lesions.

    Science.gov (United States)

    Warf, M Bryan; Flake, Darl D; Adams, Doug; Gutin, Alexander; Kolquist, Kathryn A; Wenstrup, Richard J; Roa, Benjamin B

    2015-01-01

    These studies were performed to validate the analytical performance of a gene expression signature that differentiates melanoma and nevi, using RNA expression from 14 signature genes and nine normalization genes to generate a melanoma diagnostic score (MDS). Formalin-fixed paraffin-embedded melanocytic lesions were evaluated in these studies. The overall SD of the assay was determined to be 0.69 MDS units. Individual amplicons within the signature had an average amplification efficiency of 92% and an SD of less than 0.5 CT. The MDS was reproducible across a 2000-fold dilution range of input RNA. Melanin, an inhibitor of PCR, does not interfere with the signature. These studies indicate the signature is robust and reproducible and is analytically validated on formalin-fixed paraffin-embedded melanocytic lesions.

  6. Validation of an analytical method for the determination of polycyclic aromatic hydrocarbons by high efficiency liquid chromatography in PM10 and PM2,5 particles

    International Nuclear Information System (INIS)

    Herrera Murillo, Jorge; Chaves Villalobos, Maria del Carmen

    2012-01-01

    An analytical method for determining polycyclic aromatic hydrocarbons (PAHs) in PM10 and PM2.5 particles collected from air was validated using high-performance liquid chromatography (HPLC). The PAHs covered by the methodology are: naphthalene, acenaphthylene, fluorene, acenaphthene, phenanthrene, anthracene, fluoranthene, pyrene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, dibenzo(a,h)anthracene, benzo(g,h,i)perylene and indeno(1,2,3-cd)pyrene. For these compounds, the detection and quantification limits were between 0.02 and 0.1 mg/L. A DIONEX ICS 3000 system was used with two detectors in series, an ultraviolet detector (model VWD-1) and a fluorescence detector (model RF-2000), separating the different absorption and emission signals for proper identification of the individual compounds. For all compounds analyzed, the recovery factors were found not to differ significantly from each other, and the repeatability and reproducibility were found suitable for an analytical method, especially for the lighter PAHs. (author)

  7. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is calculated analytically for the first time. • A magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression for each subdomain is derived. • The proposed method can effectively reduce the duration of the pre-design stages.
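
The iterative permeability calculation for a saturating bridge region can be sketched as a damped fixed-point loop: guess μr, solve the lumped magnetic circuit for the flux density B, update μr from a B-H style curve, and repeat. Both the saturation characteristic and the circuit dimensions below are hypothetical:

```python
# Sketch of the iterative permeability calculation for a saturating magnetic
# bridge. The B-H style curve and the circuit dimensions are assumptions.
MU0 = 4e-7 * 3.141592653589793   # permeability of free space [H/m]

def mu_r_from_b(b):
    """Assumed relative permeability versus flux density for the bridge."""
    return 1 + 5000 / (1 + (b / 1.5) ** 4)

def solve_bridge(mmf, area, length, iters=500, tol=1e-6):
    mu_r = 1000.0                                 # initial guess
    for _ in range(iters):
        reluctance = length / (MU0 * mu_r * area)
        b = mmf / (reluctance * area)             # B = flux / area
        mu_new = mu_r_from_b(b)
        if abs(mu_new - mu_r) < tol:
            break
        mu_r = mu_r + 0.2 * (mu_new - mu_r)       # damped fixed-point update
    return mu_r, b

mu_r, b = solve_bridge(mmf=50.0, area=1e-4, length=0.05)
assert 1.0 < b < 2.5    # the bridge settles in its saturation knee
```

The damping factor keeps the iteration stable where the saturation curve is steep; an undamped update can oscillate around the operating point.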

  8. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    International Nuclear Information System (INIS)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-01-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is calculated analytically for the first time. • A magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression for each subdomain is derived. • The proposed method can effectively reduce the duration of the pre-design stages.

  9. Evaluation and analytical validation of a handheld digital refractometer for urine specific gravity measurement

    Directory of Open Access Journals (Sweden)

    Sara P. Wyness

    2016-08-01

    Objectives: Refractometers are commonly used to determine urine specific gravity (SG) in the assessment of hydration status and urine specimen validity testing. Few comprehensive performance evaluations are available demonstrating refractometer capability from a clinical laboratory perspective. The objective of this study was therefore to conduct an analytical validation of a handheld digital refractometer used for human urine SG testing. Design and methods: A MISCO Palm Abbe™ refractometer was used for all experiments, including device familiarization, carryover, precision, accuracy, linearity, analytical sensitivity, evaluation of potential substances which contribute to SG (i.e., "interference"), and reference interval evaluation. A manual refractometer, a urine osmometer, and a solute score (the sum of urine chloride, creatinine, glucose, potassium, sodium, total protein, and urea nitrogen, all in mg/dL) were used as comparative methods for accuracy assessment. Results: Significant carryover was not observed. A wash step was still included as good laboratory practice. Low imprecision (%CV < 0.01) was demonstrated using low and high QC material. Accuracy studies showed strong correlation to manual refractometry. Linear correlation was also demonstrated between SG, osmolality, and solute score. Linearity of Palm Abbe performance was verified with an observed error of ≤0.1%. Increases in SG were observed with increasing concentrations of albumin, creatinine, glucose, hemoglobin, sodium chloride, and urea. Transference of a previously published urine SG reference interval of 1.0020–1.0300 was validated. Conclusions: The Palm Abbe digital refractometer was a fast, simple, and accurate way to measure urine SG. Analytical validity was confirmed by the present experiments. Keywords: Specific gravity, Osmolality, Digital refractometry, Hydration, Sports medicine, Urine drug testing, Urine adulteration
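
The imprecision figure quoted above is a coefficient of variation across replicate readings. A minimal sketch of the %CV calculation, using made-up replicate SG readings rather than the study's QC data:

```python
import statistics

# Sketch of the imprecision (%CV) calculation used in method validation.
# The replicate readings are hypothetical, not the study's QC material.
def percent_cv(readings):
    """Coefficient of variation, in percent, of replicate measurements."""
    return 100 * statistics.stdev(readings) / statistics.mean(readings)

low_qc = [1.0051, 1.0052, 1.0051, 1.0052, 1.0051]   # invented replicates
cv = percent_cv(low_qc)
assert cv < 0.01    # consistent with the reported "%CV < 0.01" level
```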

  10. Analytical validation of operator actions in case of primary to secondary leakage for VVER-1000/V320

    Energy Technology Data Exchange (ETDEWEB)

    Andreeva, M., E-mail: m_andreeva@inrne.bas.bg; Groudev, P., E-mail: pavlinpg@inrne.bas.bg; Pavlova, M., E-mail: pavlova@inrne.bas.bg

    2015-12-15

    Highlights: • We validate operator actions in case of primary to secondary leakage. • We perform four scenarios related to the SGTR accident for VVER-1000/V320. • The reference power plant for the analyses is Unit 6 at Kozloduy NPP. • The RELAP5/MOD 3.2 computer code is used in performing the analyses. • The analyses confirm the effectiveness of operator actions during PRISE. - Abstract: This paper presents the results of analytical validation of operator actions in case of a “Steam Generator Tube Rupture” (SGTR) for VVER-1000/V320 units at Kozloduy Nuclear Power Plant (KNPP), performed during the development of Symptom Based Emergency Operating Procedures (SB EOPs) for this plant. The purpose of the analyses is to demonstrate the ability to terminate primary to secondary leakage, to indicate an effective strategy for preventing secondary leakage to the environment, and in this way to prevent radiological release to the environment. Following depressurization and cooldown of the reactor coolant system (RCS) with isolation of the affected steam generator (SG), these analyses validate options for post-SGTR cooldown by: • backup filling of the ruptured SG; • using the letdown system in the affected SG; and • opening the Fast Acting Isolation Valve (FAIV) and using the Steam Dump Facility to the Condenser (BRU-K). The results of the thermal-hydraulic analyses have been used to assist KNPP specialists in the analytical validation of EOPs. The RELAP5/MOD3.2 computer code has been used for the analyses with a VVER-1000 Nuclear Power Plant (NPP) model. A model of VVER-1000 based on Unit 6 of Kozloduy NPP has been developed for the thermal-hydraulics code RELAP5/MOD3.2 at the Institute for Nuclear Research and Nuclear Energy – Bulgarian Academy of Sciences (INRNE-BAS). This paper was made possible through the participation of leading specialists from KNPP.

  11. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  12. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  13. Analytical Modeling Of The Steinmetz Coefficient For Single-Phase Transformer Eddy Current Loss Prediction

    Directory of Open Access Journals (Sweden)

    T. Aly Saandy

    2015-08-01

    This article presents an analytical methodology for calculating the Steinmetz coefficient applied to the prediction of eddy current loss in a single-phase transformer. Based on electrical circuit theory, the active power consumed by the core is expressed analytically as a function of the electrical parameters, such as resistivity, and the geometrical dimensions of the core. The proposed modeling approach is established with the parallel-series duality. The required coefficient is identified from empirical Steinmetz data using the experimental active power expression. To verify the relevance of the model, validations by both simulations and measurements were carried out at two different frequencies. The obtained results are in good agreement between the theoretical approach and the practical results.
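
Identifying such a loss coefficient from measured active power can be sketched as a one-parameter least-squares fit, assuming the classical separation P_e = k_e·f²·B²max. The data points below are invented for illustration, not the article's transformer measurements:

```python
# Sketch: fitting an eddy-current loss coefficient k_e to measured active
# power, assuming P_e = k_e * f**2 * b**2. Data points are illustrative.
def fit_ke(samples):
    """One-parameter least-squares fit of k_e in P = k_e * f**2 * b**2."""
    num = sum(p * (f ** 2) * (b ** 2) for f, b, p in samples)
    den = sum(((f ** 2) * (b ** 2)) ** 2 for f, b, _ in samples)
    return num / den

# (frequency [Hz], peak flux density [T], measured eddy loss [W])
measurements = [(50.0, 1.0, 0.50), (60.0, 1.0, 0.72)]
k_e = fit_ke(measurements)
```

With two frequencies, as in the article's validation, the fit is over-determined just enough to check that the f² scaling actually holds in the data.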

  14. Validation of cell voltage and water content in a PEM (polymer electrolyte membrane) fuel cell model using neutron imaging for different operating conditions

    International Nuclear Information System (INIS)

    Salva, J. Antonio; Iranzo, Alfredo; Rosa, Felipe; Tapia, Elvira

    2016-01-01

    This work presents a one-dimensional analytical model developed for a 50 cm² PEM (polymer electrolyte membrane) fuel cell with a five-channel serpentine flow field. The different coupled physical phenomena such as electrochemistry, mass transfer of hydrogen, oxygen and water (two phases) together with heat transfer have been solved simultaneously. The innovation of this work is that the model has been validated with two different variables simultaneously and quantitatively in order to ensure the accuracy of the results. The selected variables are the cell voltage and the water content within the membrane MEA (Membrane Electrode Assembly) and GDL (gas diffusion layers), experimentally measured by means of neutron radiography. The results show good agreement for a comprehensive set of different operating conditions of cell temperature, pressure, reactant relative humidity and cathode stoichiometry. The analytical model has a relative error of less than 3.5% for the value of the cell voltage and the water content within the GDL + MEA for all experiments performed. This result presents a new standard of validation in the state of the art of PEM fuel cell modeling, where two variables are simultaneously and quantitatively validated with experimental results. The developed analytical model has been used to analyze the behavior of the PEM fuel cell under different values of relative humidity. - Highlights: • A one-dimensional analytical model has been developed for a PEM fuel cell. • The model is validated with two different variables simultaneously. • A new standard of validation is proposed.
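
The voltage side of such a model reduces to a polarization curve: reversible voltage minus activation, ohmic and concentration losses. The sketch below uses this standard empirical form with illustrative coefficients, not the paper's fitted values:

```python
import math

# Sketch of a polarization-curve evaluation of the kind a 1-D PEM model
# performs. All coefficients are illustrative assumptions.
def cell_voltage(i, e_rev=1.19, a_tafel=0.06, i0=1e-4, r_ohm=0.12,
                 m=3e-5, n=8.0):
    """i: current density [A/cm^2]; returns the cell voltage [V]."""
    v_act = a_tafel * math.log(i / i0)    # Tafel activation loss
    v_ohm = r_ohm * i                     # membrane + contact ohmic loss
    v_conc = m * math.exp(n * i)          # empirical concentration loss
    return e_rev - v_act - v_ohm - v_conc

v = cell_voltage(0.6)
assert cell_voltage(0.2) > cell_voltage(0.8)   # voltage falls with load
```

Validating against cell voltage alone cannot distinguish these loss terms; adding the measured water content, as the paper does, constrains the mass-transfer part of the model separately.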

  15. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

    Oliveira, Mauro V.; Schirru, Roberto

    2000-01-01

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operation region of the plant. Four clustering techniques were tested to separate the entire operation region into several specific regions. Additional information on signal reliability is supplied by a fuzzy inference system. The models were implemented in C and tested with signals acquired from the Angra I nuclear power plant, from startup to 100% power. (author)

  16. Experimental Validation of a Permeability Model for Enrichment Membranes

    International Nuclear Information System (INIS)

    Orellano, Pablo; Brasnarof, Daniel; Florido Pablo

    2003-01-01

    An experimental loop with a real-scale diffuser, in a single enrichment-stage configuration, was operated with air at different process conditions in order to characterize the membrane permeability. Using these experimental data, an analytical geometry- and morphology-based model was validated. It is concluded that a new set of independent measurements, i.e. enrichment, is necessary in order to fully characterize diffusers, because their internal parameters are not uniquely determined by permeability data alone.
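
Extracting a permeability coefficient from loop data can be sketched as a through-origin least-squares fit, assuming a linear flow-pressure relation Q = K·A·ΔP. The data points and membrane area below are invented; as the abstract notes, permeability data alone do not fully characterize a diffuser:

```python
# Sketch: membrane permeability from pressure-flow pairs, assuming the
# linear relation Q = K * A * dP. Data and area are hypothetical.
def permeability(flow_points, area):
    """Least-squares slope K from (dP [Pa], Q [mol/s]) pairs through origin."""
    num = sum(dp * q for dp, q in flow_points)
    den = sum(dp * dp for dp, _ in flow_points)
    return num / den / area

data = [(1000.0, 2.0e-3), (2000.0, 4.1e-3), (3000.0, 5.9e-3)]
k = permeability(data, area=0.5)
assert 3.9e-6 < k < 4.1e-6    # fitted K in [mol / (s Pa m^2)]
```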

  17. Analytical modeling of Schottky tunneling source impact ionization MOSFET with reduced breakdown voltage

    Directory of Open Access Journals (Sweden)

    Sangeeta Singh

    2016-03-01

    In this paper, we have investigated a novel Schottky tunneling source impact ionization MOSFET (STS-IMOS) to lower the breakdown voltage of the conventional impact ionization MOS (IMOS) and developed an analytical model for it. In STS-IMOS there is a cumulative effect of both impact ionization and source-induced barrier tunneling. The silicide source offers very low parasitic resistance, the outcome of which is an increased voltage drop across the intrinsic region for the same applied bias. This reduces the operating voltage and hence the device exhibits a significant reduction in both breakdown and threshold voltage. STS-IMOS shows high immunity against hot-electron damage, which substantially improves device reliability. The analytical model for the impact ionization current (Iii) is developed based on the integration of the ionization integral (M). Similarly, to obtain the Schottky tunneling current (ITun) expression, the Wentzel–Kramers–Brillouin (WKB) approximation is employed. Analytical models for the threshold voltage and subthreshold slope are optimized against Schottky barrier height (ϕB) variation. The expression for the drain current is computed as a function of gate-to-drain bias via an integral expression. It is validated by comparison with technology computer-aided design (TCAD) simulation results as well. In essence, this analytical framework provides the physical background for a better understanding of STS-IMOS and its performance estimation.
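
The WKB ingredient behind a tunneling current of this kind can be sketched as the transmission factor through a triangular Schottky barrier. Constants are SI values; the barrier height, effective mass and field values are illustrative, not the device's parameters:

```python
import math

# Sketch of the WKB transmission through a triangular Schottky barrier.
# Barrier height, effective mass and field are illustrative assumptions.
HBAR = 1.0545718e-34        # reduced Planck constant [J s]
Q_E = 1.602176634e-19       # elementary charge [C]
M0 = 9.1093837015e-31       # electron rest mass [kg]

def wkb_transmission(phi_b_ev, e_field, m_eff=0.2):
    """Fowler-Nordheim-type WKB factor for barrier phi_b [eV], field [V/m]."""
    phi = phi_b_ev * Q_E                      # barrier height in joules
    m = m_eff * M0                            # effective carrier mass
    exponent = 4 * math.sqrt(2 * m) * phi ** 1.5 / (3 * HBAR * Q_E * e_field)
    return math.exp(-exponent)

t_low = wkb_transmission(0.3, 5e7)
t_high = wkb_transmission(0.3, 2e8)
assert t_low < t_high    # a stronger field thins the barrier
```

The exponential sensitivity to the field is what lets the silicide source's larger intrinsic-region voltage drop boost injection at a lower applied bias.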

  18. Wetting boundary condition for the color-gradient lattice Boltzmann method: Validation with analytical and experimental data

    Science.gov (United States)

    Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.

    2018-06-01

    In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
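
    The Lucas-Washburn solution mentioned above gives the rise height as h(t) = sqrt(γ·r·cosθ·t / (2μ)), neglecting gravity and inertia. A minimal sketch; the fluid properties and tube radius below are generic water-like values, not those of the cited experiments.

```python
import math

def lucas_washburn_height(t, radius, surface_tension, contact_angle_deg, viscosity):
    """Capillary rise height h(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)),
    neglecting gravity and inertia."""
    theta = math.radians(contact_angle_deg)
    return math.sqrt(surface_tension * radius * math.cos(theta) * t / (2.0 * viscosity))

# Water-like fluid in a 0.1 mm tube with a constant 30-degree contact angle
h1 = lucas_washburn_height(1.0, 1e-4, 0.072, 30.0, 1e-3)
h4 = lucas_washburn_height(4.0, 1e-4, 0.072, 30.0, 1e-3)
# h grows as sqrt(t): quadrupling the elapsed time doubles the height
```

    A dynamic contact angle, as required to match the experiments, would make contact_angle_deg a function of the instantaneous flow rate rather than a constant.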

  19. An analytic current-voltage model for quasi-ballistic III-nitride high electron mobility transistors

    Science.gov (United States)

    Li, Kexin; Rakheja, Shaloo

    2018-05-01

    We present an analytic model to describe the DC current-voltage (I-V) relationship in scaled III-nitride high electron mobility transistors (HEMTs) in which transport within the channel is quasi-ballistic in nature. Following Landauer's transport theory and charge calculation based on two-dimensional electrostatics that incorporates negative momenta states from the drain terminal, an analytic expression for current as a function of terminal voltages is developed. The model interprets the non-linearity of access regions in non-self-aligned HEMTs. Effects of Joule heating with temperature-dependent thermal conductivity are incorporated in the model in a self-consistent manner. With a total of 26 input parameters, the analytic model offers reduced empiricism compared to existing GaN HEMT models. To verify the model, experimental I-V data of InAlN/GaN with InGaN back-barrier HEMTs with channel lengths of 42 and 105 nm are considered. Additionally, the model is validated against numerical I-V data obtained from DC hydrodynamic simulations of an unintentionally doped AlGaN-on-GaN HEMT with 50-nm gate length. The model is also verified against pulsed I-V measurements of a 150-nm T-gate GaN HEMT. Excellent agreement between the model and experimental and numerical results for output current, transconductance, and output conductance is demonstrated over a broad range of bias and temperature conditions.

  20. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Science.gov (United States)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization.

  1. A modified analytical model to study the sensing performance of a flexible capacitive tactile sensor array

    International Nuclear Information System (INIS)

    Liang, Guanhao; Wang, Yancheng; Mei, Deqing; Xi, Kailun; Chen, Zichen

    2015-01-01

    This paper presents a modified analytical model to study the sensing performance of a flexible capacitive tactile sensor array, which utilizes solid polydimethylsiloxane (PDMS) film as the dielectric layer. To predict the deformation of the sensing unit and capacitance changes, each sensing unit is simplified into a three-layer plate structure and divided into central, edge and corner regions. The plate structure and the three regions are studied by the general and modified models, respectively. For experimental validation, the capacitive tactile sensor array with 8  ×  8 (= 64) sensing units is fabricated. Experiments are conducted by measuring the capacitance changes versus applied external forces and compared with the general and modified models’ predictions. For the developed tactile sensor array, the sensitivity predicted by the modified analytical model is 1.25%/N, only 0.8% discrepancy from the experimental measurement. Results demonstrate that the modified analytical model can accurately predict the sensing performance of the sensor array and could be utilized for model-based optimal capacitive tactile sensor array design. (paper)
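
    The sensing principle behind each unit is the parallel-plate relation C = ε0·εr·A/d: a normal force compresses the PDMS dielectric, reducing the gap and raising the capacitance. A minimal sketch, with a hypothetical unit size and an assumed linear stiffness (neither is taken from the paper):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def plate_capacitance(area, gap, eps_r):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area / gap

def capacitance_change_pct(area, gap0, eps_r, force, stiffness):
    """Relative capacitance change when a normal force compresses the
    dielectric by dz = F / k (linear-spring approximation)."""
    dz = force / stiffness
    c0 = plate_capacitance(area, gap0, eps_r)
    c1 = plate_capacitance(area, gap0 - dz, eps_r)
    return 100.0 * (c1 - c0) / c0

# Hypothetical 1 mm x 1 mm unit with a 20 um PDMS gap (eps_r ~ 2.65)
# and an assumed 0.5 MN/m effective stiffness, loaded with 1 N
pct = capacitance_change_pct(1e-6, 20e-6, 2.65, 1.0, 5e5)
```

    The modified model in the paper refines exactly this step by treating the central, edge and corner regions of the plate separately instead of assuming uniform compression.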

  2. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid

  3. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere

    Science.gov (United States)

    Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found in the expressions of simple series for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.
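
    The simplest special case of such series solutions is a single charge at the center of the sphere, for which the local (Poisson) model reduces to the Born reaction-field energy, the leading term of the Kirkwood expansion. A sketch under classical local-dielectric assumptions:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q_E = 1.602176634e-19     # elementary charge, C
NA = 6.02214076e23        # Avogadro constant, 1/mol

def born_solvation_energy(charge_e, radius_m, eps_solvent):
    """Born reaction-field (solvation) energy of a point charge at the
    center of a sphere of radius a embedded in a solvent:
    dG = -(q^2 / (8*pi*eps0*a)) * (1 - 1/eps_s), in joules."""
    q = charge_e * Q_E
    return -(q * q / (8.0 * math.pi * EPS0 * radius_m)) * (1.0 - 1.0 / eps_solvent)

# Unit charge in a 2 Angstrom cavity, water-like solvent (eps_s ~ 78.5),
# converted to kJ/mol
dg = born_solvation_energy(1.0, 2e-10, 78.5) * NA / 1000.0
```

    For the 2 Å sphere used here the result is on the order of −340 kJ/mol, the familiar Born estimate for a monovalent ion in water; the paper's series solutions generalize this to many off-center charges and nonlocal kernels.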

  4. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    International Nuclear Information System (INIS)

    Pérez-Andújar, Angélica; Zhang, Rui; Newhauser, Wayne

    2013-01-01

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250-MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w_R, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.

  5. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A.

    2008-12-17

    This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  6. The role of decision analytic modeling in the health economic assessment of spinal intervention.

    Science.gov (United States)

    Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J

    2014-10-15

    Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on data available to date. Given the rising demands for proven value in spine care, the use of decision analytic modeling is rapidly increasing by clinicians and policy makers. This narrative review discusses the general components of decision analytic models, how decision analytic models are populated and the trade-offs entailed, makes recommendations for how users of spine intervention decision models might go about appraising the models, and presents an overview of published spine economic models. A proper, integrated, clinical, and economic critical appraisal is necessary in the evaluation of the strength of evidence provided by a modeling evaluation. As is the case with clinical research, all options for collecting health economic or value data are not without their limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized with respect to study design, models used, reporting, and general quality. There is sparse evidence for populating spine intervention models. Results mostly showed that interventions were cost-effective based on $100,000/quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical to interpret the strengths and weaknesses of health economic models. 
Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or
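
    The $100,000/quality-adjusted life-year threshold cited above is applied through the incremental cost-effectiveness ratio (ICER): the extra cost of an intervention divided by the QALYs it gains over the comparator. A minimal sketch with invented costs and effects:

```python
def icer(cost_new, qaly_new, cost_std, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

def is_cost_effective(cost_new, qaly_new, cost_std, qaly_std, threshold=100_000.0):
    """Compare the ICER against a willingness-to-pay threshold ($/QALY)."""
    return icer(cost_new, qaly_new, cost_std, qaly_std) <= threshold

# Hypothetical: an intervention costs $30,000 more and gains 0.5 QALYs
ratio = icer(80_000, 6.5, 50_000, 6.0)   # 30,000 / 0.5 = 60,000 $/QALY
```

    Decision-analytic models produce exactly these cost and QALY inputs; the appraisal issues discussed in the review concern how credibly those inputs were populated.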

  7. An analytic model for gate-all-around silicon nanowire tunneling field effect transistors

    International Nuclear Information System (INIS)

    Liu Ying; He Jin; Chan Mansun; Ye Yun; Zhao Wei; Wu Wen; Deng Wan-Ling; Wang Wen-Ping; Du Cai-Xia

    2014-01-01

    An analytical model of gate-all-around (GAA) silicon nanowire tunneling field effect transistors (NW-TFETs) is developed based on the surface potential solutions in the channel direction and considering the band-to-band tunneling (BTBT) efficiency. The three-dimensional Poisson equation is solved to obtain the surface potential distributions in the partition regions along the channel direction for the NW-TFET, and a tunneling current model using Kane's expression is developed. The validity of the developed model is shown by the good agreement between the model predictions and the TCAD simulation results. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  8. Polarimetric and angular light-scattering from dense media: Comparison of a vectorial radiative transfer model with analytical, stochastic and experimental approaches

    International Nuclear Information System (INIS)

    Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent

    2013-01-01

    Our work presents computations via a vectorial radiative transfer model of the polarimetric and angular light scattered by a stratified dense medium with small and intermediate optical thickness. We report the validation of this model using analytical results and different computational methods like stochastic algorithms. Moreover, we check the model with experimental data from a specific scatterometer developed at the Onera. The advantages and disadvantages of a radiative approach are discussed. This paper represents a step toward the characterization of particles in dense media involving multiple scattering. -- Highlights: • A vectorial radiative transfer model to simulate the light scattered by stratified layers is developed. • The vectorial radiative transfer equation is solved using an adding–doubling technique. • The results are compared to analytical and stochastic data. • Validation with experimental data from a scatterometer developed at Onera is presented

  9. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A

    2009-05-27

    Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  10. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Science.gov (United States)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer-race structure, and presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, due to their assumption of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling-element loads and a Fourier-series-based compliance approximation. The resulting model is computationally low-cost and provides an accurate description of the rolling-element loads for flexible outer-raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  11. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the nearabsence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  12. High Turbidity Solis Clear Sky Model: Development and Validation

    Directory of Open Access Journals (Sweden)

    Pierre Ineichen

    2018-03-01

    Full Text Available The Solis clear sky model is a spectral scheme based on radiative transfer calculations and the Lambert–Beer relation. Its broadband version is a simplified fast analytical version; it is limited to broadband aerosol optical depths lower than 0.45, which is a weakness when applied in countries with very high turbidity such as China or India. In order to extend the use of the original simplified version of the model for high turbidity values, we developed a new version of the broadband Solis model based on radiative transfer calculations, valid for turbidity values up to 7, for the three components, global, beam, and diffuse, and for the four aerosol types defined by Shettle and Fenn. A validation of low turbidity data acquired in Geneva shows slightly better results than the previous version. On data acquired at sites presenting higher turbidity data, the bias stays within ±4% for the beam and the global irradiances, and the standard deviation around 5% for clean and stable condition data and around 12% for questionable data and variable sky conditions.
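
    The broadband scheme rests on a Lambert–Beer style attenuation of the beam component, I = I0 · exp(−τ_b / sin^b h), where h is the solar elevation and τ_b the fitted beam optical depth. A minimal sketch with the fitted exponent b set to 1 and illustrative optical depths:

```python
import math

def beam_irradiance(i0, tau_b, sun_elevation_deg, b=1.0):
    """Direct (beam) irradiance from a Lambert-Beer style attenuation,
    I = I0 * exp(-tau_b / sin(h)**b), the functional form of the Solis scheme."""
    h = math.radians(sun_elevation_deg)
    return i0 * math.exp(-tau_b / math.sin(h) ** b)

# Clean vs turbid atmosphere at 60-degree solar elevation (I0 ~ 1367 W/m2)
i_clean = beam_irradiance(1367.0, 0.1, 60.0)
i_turbid = beam_irradiance(1367.0, 0.45, 60.0)
```

    The extension described above amounts to refitting the τ_b and b coefficients from radiative transfer runs so that they remain valid for aerosol optical depths up to 7, rather than the original 0.45 limit.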

  13. Analytical validation of a new point-of-care assay for serum amyloid A in horses.

    Science.gov (United States)

    Schwartz, D; Pusterla, N; Jacobsen, S; Christopher, M M

    2018-01-17

    Serum amyloid A (SAA) is a major acute phase protein in horses. A new point-of-care (POC) test for SAA (Stablelab) is available, but studies evaluating its analytical accuracy are lacking. To evaluate the analytical performance of the SAA POC test by 1) determining linearity and precision, 2) comparing results in whole blood with those in serum or plasma, and 3) comparing POC results with those obtained using a previously validated turbidimetric immunoassay (TIA). Assay validation. Analytical validation of the POC test was done in accordance with American Society of Veterinary Clinical Pathology guidelines using residual equine serum/plasma and whole blood samples from the Clinical Pathology Laboratory at the University of California-Davis. A TIA was used as the reference method. We also evaluated the effect of haematocrit (HCT). The POC test was linear for SAA concentrations of up to at least 1000 μg/mL (r = 0.991). Intra-assay CVs were 13, 18 and 15% at high (782 μg/mL), intermediate (116 μg/mL) and low (64 μg/mL) concentrations. Inter-assay (inter-batch) CVs were 45, 14 and 15% at high (1372 μg/mL), intermediate (140 μg/mL) and low (56 μg/mL) concentrations. SAA results in whole blood were significantly lower than those in serum/plasma (P = 0.0002), but were positively correlated (r = 0.908) and not affected by HCT (P = 0.261); proportional negative bias was observed in samples with SAA>500 μg/mL. The difference between methods exceeded the 95% confidence interval of the combined imprecision of both methods (15%). Analytical validation could not be performed in whole blood, the sample most likely to be used stall side. The POC test has acceptable accuracy and precision in equine serum/plasma with SAA concentrations of up to at least 1000 μg/mL. Low inter-batch precision at high concentrations may affect serial measurements, and the use of the same test batch and sample type (serum/plasma or whole blood) is recommended. Comparison of results between the
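
    The intra- and inter-assay figures quoted above are coefficients of variation (CV% = 100·SD/mean) computed from replicate measurements. A minimal sketch with invented replicate readings at an intermediate concentration:

```python
import statistics

def coefficient_of_variation_pct(replicates):
    """Imprecision expressed as CV% = 100 * SD / mean of replicates."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)   # sample standard deviation
    return 100.0 * sd / mean

# Hypothetical replicate SAA readings (ug/mL) at an intermediate level
readings = [110.0, 120.0, 118.0, 112.0, 125.0]
cv = coefficient_of_variation_pct(readings)
```

    Intra-assay CVs use replicates from one run; inter-assay (inter-batch) CVs pool runs across days or test batches, which is why the high-concentration inter-batch figure above is so much larger.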

  14. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 2, structural implementation and validation

    Science.gov (United States)

    Milani, G.; Bertolesi, E.

    2017-07-01

    The simple quasi analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry presented in Part 1 is here implemented at a structural leveland validated. For such implementation, a Rigid Body and Spring Mass model (RBSM) is adopted, relying into a numerical modelling constituted by rigid elements interconnected by homogenized inelastic normal and shear springs placed at the interfaces between adjoining elements. Such approach is also known as HRBSM. The inherit advantage is that it is not necessary to solve a homogenization problem at each load step in each Gauss point, and a direct implementation into a commercial software by means of an external user supplied subroutine is straightforward. In order to have an insight into the capabilities of the present approach to reasonably reproduce masonry behavior at a structural level, non-linear static analyses are conducted on a shear wall, for which experimental and numerical data are available in the technical literature. Quite accurate results are obtained with a very limited computational effort.

  15. Parametric study of a turbocompound diesel engine based on an analytical model

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yin, Yong; Zhao, Yanting; Chen, Zhen

    2016-01-01

    Turbocompounding is an important technique to recover waste heat from engine exhaust and reduce CO_2 emissions. This paper presents a parametric study of a turbocompound diesel engine based on an analytical model. The analytical model was developed to investigate the influence of system parameters on engine fuel consumption. The model is based on thermodynamics knowledge and empirical models, and can consider the impact of each parameter independently. The effects of turbine efficiency, back pressure, exhaust temperature, pressure ratio and engine speed on the recovered energy, pumping loss and engine fuel reductions were studied. Results show that turbine efficiency, exhaust temperature and back pressure have a great influence on the fuel reduction and the optimal power turbine (PT) expansion ratio. However, engine operating speed has little impact on the fuel savings obtained by turbocompounding. The interaction mechanism between the PT recovery power and engine pumping loss is presented in the paper. Due to the nonlinear characteristic of turbine power, there is an optimum value of the PT expansion ratio that achieves the largest power gain. At the end, the fuel saving potential of a high-performance turbocompound engine and the requirements for it are proposed in the paper. - Highlights: • An analytical model for turbocompound engine is developed and validated. • Parametric study is performed to obtain lowest BSFC and optimal expansion ratio. • The influences of each parameter on the fuel saving potentials are presented. • The impact mechanisms of each parameter on the energy tradeoff are disclosed. • It provides an effective tool to guide the preliminary design of turbocompounding.
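
    The power-gain optimum described above can be sketched with a textbook turbine-work expression and an assumed linear pumping-loss penalty; every parameter value below is illustrative, not taken from the paper.

```python
def net_gain(pr, t_exh=900.0, mdot=0.5, cp=1100.0, gamma=1.35,
             eta_t=0.75, k_pump=4.0e4):
    """Net benefit (W) of a power turbine at expansion ratio pr:
    recovered power W_pt = eta * mdot * cp * T_exh * (1 - pr**(-(gamma-1)/gamma))
    minus an assumed pumping-loss penalty growing linearly with back pressure."""
    w_pt = eta_t * mdot * cp * t_exh * (1.0 - pr ** (-(gamma - 1.0) / gamma))
    w_pump = k_pump * (pr - 1.0)   # linearized pumping loss (assumption)
    return w_pt - w_pump

# Turbine work saturates while pumping loss keeps growing,
# so the net gain peaks at an interior expansion ratio
ratios = [1.0 + 0.05 * i for i in range(1, 80)]
best = max(ratios, key=net_gain)
```

    This reproduces the qualitative trade-off in the abstract: the optimum shifts upward when turbine efficiency or exhaust temperature rises, and downward when the pumping-loss penalty grows.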

  16. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Full Text Available Friction stir welding is a solid-state welding technique that utilizes thermomechanical influence of the rotating welding tool on parent material resulting in a monolith joint - weld. On the contact of welding tool and parent material, significant stirring and deformation of parent material appears, and during this process, mechanical energy is partially transformed into heat. Generated heat affects the temperature of the welding tool and parent material, thus the proposed analytical model for the estimation of the amount of generated heat can be verified by temperature: analytically determined heat is used for numerical estimation of the temperature of parent material and this temperature is compared to the experimentally determined temperature. Numerical solution is estimated using the finite difference method - explicit scheme with adaptive grid, considering influence of temperature on material's conductivity, contact conditions between welding tool and parent material, material flow around welding tool, etc. The analytical model shows that 60-100% of mechanical power given to the welding tool is transformed into heat, while the comparison of results shows the maximal relative difference between the analytical and experimental temperature of about 10%.
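
    The explicit finite-difference scheme referred to above can be sketched for the 1D heat equation ∂T/∂t = α ∂²T/∂x² on a uniform grid with constant diffusivity (the paper's scheme additionally uses an adaptive grid and temperature-dependent conductivity):

```python
def explicit_heat_step(temps, alpha, dx, dt):
    """One explicit finite-difference step of dT/dt = alpha * d2T/dx2 with
    fixed-temperature ends; stable only when r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable: reduce dt or enlarge dx"
    return ([temps[0]]
            + [temps[i] + r * (temps[i + 1] - 2.0 * temps[i] + temps[i - 1])
               for i in range(1, len(temps) - 1)]
            + [temps[-1]])

# A hot spot (e.g. under the welding tool) relaxing toward cool ends;
# aluminium-like diffusivity, 1 mm grid spacing, r = 0.42
temps = [20.0, 20.0, 300.0, 20.0, 20.0]
for _ in range(50):
    temps = explicit_heat_step(temps, alpha=8.4e-5, dx=1e-3, dt=5e-3)
```

    The stability bound r ≤ 0.5 is what forces small time steps in explicit schemes and motivates the adaptive grid used in the paper.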

  17. Comparison of the effectiveness of analytical wake models for wind farm with constant and variable hub heights

    International Nuclear Information System (INIS)

    Wang, Longyan; Tan, Andy C.C.; Cholette, Michael; Gu, Yuantong

    2016-01-01

    Highlights: • The effectiveness of three analytical wake models is studied. • The results of the analytical wake models are compared with the CFD simulations. • The results of the CFD simulation are verified by comparison to offshore wind farm observation data. • An onshore wind farm with both constant and variable hub-height turbines is analyzed. • The PARK model is able to predict the total wind farm power production well with a tuned surface roughness value. - Abstract: Extensive power losses of wind farms have been witnessed due to the wake interactions between wind turbines. By applying analytical wake models which describe the wind speed deficits in the wake quantitatively, the power losses can be regained to a large extent through wind farm layout optimization, and this has been extensively reported in the literature. Nevertheless, the effectiveness of the analytical wake models in predicting wind farm power production has rarely been studied and compared for wind farms with both constant and variable wind turbine hub heights. In this study, the effectiveness of three different analytical wake models (PARK model, Larsen model and B-P model) is thoroughly compared over a wide range of wake properties. After validation with observation data from an offshore wind farm, CFD simulations are used to verify the effectiveness of the analytical wake models for an onshore wind farm. The results show that when using the PARK model the surface roughness value (z_0) must be carefully tuned to achieve good performance in predicting the wind farm power production. For the other two analytical wake models, their effectiveness varies depending on the situation of the wind farm (offshore or onshore) and the wind turbine hub heights (constant or variable). It was found that the results of the B-P model agree well with the CFD simulations for the offshore wind farm, but not for the onshore wind farm. The Larsen model is more accurate for the wind farm with variable wind turbine
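
    The PARK model's sensitivity to surface roughness noted above enters through its wake decay constant k = 0.5/ln(z_hub/z_0). A minimal sketch of the Jensen/PARK velocity deficit (turbine parameters are illustrative):

```python
import math

def jensen_wake_deficit(ct, rotor_radius, x, hub_height, z0):
    """Fractional velocity deficit a distance x downstream of a turbine,
    Jensen/PARK model: dv/v = (1 - sqrt(1 - CT)) / (1 + k*x/r0)**2,
    with wake decay constant k = 0.5 / ln(z_hub / z0)."""
    k = 0.5 / math.log(hub_height / z0)
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / rotor_radius) ** 2

# Same turbine 400 m downstream: smooth offshore site (z0 = 0.0002 m)
# vs rough onshore site (z0 = 0.05 m)
d_off = jensen_wake_deficit(0.8, 40.0, 400.0, 80.0, 0.0002)
d_on = jensen_wake_deficit(0.8, 40.0, 400.0, 80.0, 0.05)
```

    Rougher terrain gives a larger k, hence faster wake recovery and a shallower deficit onshore, which is why z_0 must be tuned per site for accurate power predictions.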

  18. Validation of an analytical methodology for the quantitative analysis of petroleum hydrocarbons in marine sediment samples

    Directory of Open Access Journals (Sweden)

    Eloy Yordad Companioni Damas

    2009-01-01

    Full Text Available This work describes the validation of an analytical procedure for the analysis of petroleum hydrocarbons in marine sediment samples. The proposed protocol is able to measure n-alkanes and polycyclic aromatic hydrocarbons (PAH) in samples at concentrations as low as 30 ng/g, with a precision better than 15% for most analytes. The extraction efficiency of fortified sediments varied from 65.1 to 105.6% and 59.7 to 97.8% for n-alkanes and PAH in the ranges C16 - C32 and fluoranthene - benzo(a)pyrene, respectively. The analytical protocol was applied to determine petroleum hydrocarbons in sediments collected from a marine coastal zone.
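
    Extraction efficiency in such validations is reported as the percent recovery of a fortified (spiked) sample. A trivial sketch with a hypothetical spike level:

```python
def recovery_pct(measured, spiked):
    """Extraction efficiency as percent recovery of a fortified sample."""
    return 100.0 * measured / spiked

# Sediment fortified with 100 ng/g of an n-alkane; 82 ng/g measured back
r = recovery_pct(82.0, 100.0)
```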

  19. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The considered endpoints are the last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution. Specific validation criteria based on a ±10% standardized distance in means and variances were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered together. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance of ±1% or less. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
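
    The validation criterion described above can be written as two standardized distances, one for means and one for variances, each required to stay within the tolerance. A minimal sketch with invented samples:

```python
import statistics

def standardized_distances(real, simulated):
    """Relative differences between real and simulated samples,
    separately for the mean and the variance."""
    d_mean = (abs(statistics.mean(simulated) - statistics.mean(real))
              / abs(statistics.mean(real)))
    d_var = (abs(statistics.variance(simulated) - statistics.variance(real))
             / statistics.variance(real))
    return d_mean, d_var

def model_is_valid(real, simulated, tol=0.10):
    """Accept the simulation if both distances lie within the 10% tolerance."""
    d_mean, d_var = standardized_distances(real, simulated)
    return d_mean <= tol and d_var <= tol

# Invented cholesterol-like data: the simulation has a slight systematic
# shift but the same spread, so both criteria pass
real = [5.2, 4.8, 5.5, 5.0, 4.9, 5.3]
sim = [x + 0.05 for x in real]
```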

  20. The analytical calibration model of temperature effects on a silicon piezoresistive pressure sensor

    Directory of Open Access Journals (Sweden)

    Meng Nie

    2017-03-01

    Full Text Available Presently, piezoresistive pressure sensors are in high demand for use in various microelectronic devices. The electrical behavior of these pressure sensors depends strongly on temperature. In this paper, the various factors responsible for the temperature drift of the pressure sensor are analyzed, including the effects of temperature and doping concentration on the pressure-sensitive resistance, package stress, and the effect of temperature on the Young's modulus. Based on this analysis, an analytical calibration model of the sensor output voltage is proposed and validated against experimental data.
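A common first-order form of such a calibration corrects the bridge output for temperature drift of offset and sensitivity; the coefficient names and values below are illustrative assumptions, not the model of this record.

```python
def calibrated_output(v_raw, temp_c, t_ref=25.0,
                      tcs=-2.0e-3, tco=1.5e-4):
    """Correct a piezoresistive bridge output for temperature drift.

    v_raw  : measured bridge output (V)
    temp_c : die temperature (deg C)
    tcs    : temperature coefficient of sensitivity (1/degC) -- assumed value
    tco    : temperature coefficient of offset (V/degC)      -- assumed value
    """
    dt = temp_c - t_ref
    v_offset = tco * dt        # offset drift term
    gain = 1.0 + tcs * dt      # sensitivity drift term
    return (v_raw - v_offset) / gain
```

A drifted reading generated with the same coefficients is recovered exactly, which is the basic check for any such calibration model.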

  1. Statistically qualified neuro-analytic failure detection method and system

    Science.gov (United States)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
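The sequential probability ratio test mentioned for validation can be sketched for the simple case of detecting a Gaussian mean shift in model residuals; the thresholds, fault size `shift`, and Gaussian assumption are illustrative, not the patented algorithm.

```python
import math

def sprt(residuals, sigma, shift, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on model residuals.

    H0: residuals ~ N(0, sigma^2)      (model tracks the process)
    H1: residuals ~ N(shift, sigma^2)  (systematic fault of size `shift`)
    Returns 'H0', 'H1', or 'continue' if neither boundary is crossed.
    """
    a = math.log(beta / (1 - alpha))   # lower decision boundary
    b = math.log((1 - beta) / alpha)   # upper decision boundary
    llr = 0.0
    for r in residuals:
        # log-likelihood ratio increment for a Gaussian mean shift
        llr += (shift / sigma**2) * (r - shift / 2.0)
        if llr <= a:
            return "H0"
        if llr >= b:
            return "H1"
    return "continue"
```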

  2. Application of neuro-fuzzy models to signal validation in the Angra 1 nuclear power plant

    International Nuclear Information System (INIS)

    Oliveira, Mauro Vitor de

    1999-06-01

    This work develops two signal validation models in which the analytical redundancy of the monitored signals from an industrial plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each working in a specific part of the plant's entire operating region. Four clustering techniques were tested for separating the entire operating region into several specific regions. Additional information on signal reliability is supplied by a fuzzy inference system. The models were implemented in the C language and tested with signals acquired from the Angra 1 nuclear power plant, from start-up to 100% power. (author)

  3. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work follows a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts from a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).
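A minimal sketch of fault isolation from analytical redundancy relations: each relation evaluates to roughly zero on healthy readings, and a faulty sensor must appear in every violated relation. The intersection logic and single-fault assumption are illustrative, not the algorithm of this record.

```python
def isolate_faulty_sensors(arrs, readings, tol=1e-3):
    """Logically infer faulty sensors from analytical redundancy relations.

    arrs: list of (residual_fn, involved_sensors); residual_fn maps the
          readings dict to a number that is ~0 when all involved sensors
          are healthy.
    Returns the set of sensors implicated by every violated relation
    (empty set when all relations hold).
    """
    violated = [sensors for fn, sensors in arrs if abs(fn(readings)) > tol]
    if not violated:
        return set()
    suspects = set(violated[0])
    for sensors in violated[1:]:
        suspects &= set(sensors)   # a fault must explain every violation
    return suspects
```

With two redundant sensors s3a and s3b both expected to read s1 + s2, a biased s3a violates exactly the relations that contain it, and the intersection isolates it.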

  4. Validation of the analytical method for sodium dichloroisocyanurate aimed at drinking water disinfection

    International Nuclear Information System (INIS)

    Martinez Alvarez, Luis Octavio; Alejo Cisneros, Pedro; Garcia Pereira, Reynaldo; Campos Valdez, Doraily

    2014-01-01

    Cuba has developed the first effervescent 3.5 mg sodium dichloroisocyanurate tablets, with the active principle being non-therapeutic. This ingredient releases a certain amount of chlorine when dissolved in a litre of water, providing adequate disinfection of drinking water that is ready to drink after 30 min. The aim of this work was to develop and validate an analytical iodometric method applicable to the quality control of effervescent 3.5 mg sodium dichloroisocyanurate tablets.

  5. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  6. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  7. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    Science.gov (United States)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature did not cover spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously performs integral transforms on the concentrations in the stream and in the storage zones, using the single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature, with good agreement between the computed data and the field data. Some illustrative examples are formulated to demonstrate the applications of the present solution. It is shown that solute transport can be greatly affected by the variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.

  8. The development of a decision analytic model of changes in mean deviation in people with glaucoma: the COA model.

    Science.gov (United States)

    Kymes, Steven M; Lambert, Dennis L; Lee, Paul P; Musch, David C; Siegfried, Carla J; Kotak, Sameer V; Stwalley, Dustin L; Fain, Joel; Johnson, Chris; Gordon, Mae O

    2012-07-01

    To create and validate a statistical model predicting progression of primary open-angle glaucoma (POAG), assessed by loss of visual field as measured in mean deviation (MD), using 3 landmark studies of glaucoma progression and treatment. A Markov decision analytic model using patient-level data described longitudinal MD changes over 7 years. Patient-level data from the Collaborative Initial Glaucoma Treatment Study (n = 607), the Ocular Hypertension Treatment Study (OHTS; n = 148; only those who developed POAG in the first 5 years of OHTS), and the Advanced Glaucoma Intervention Study (n = 591) formed the COA model. We developed a Markov model with transition matrices stratified by current MD, age, race, and intraocular pressure categories and used a microsimulation approach to estimate change in MD over 7 years. Internal validation compared model predictions over 7 years to actual MD for COA participants. External validation used a cohort of glaucoma patients drawn from university clinical practices. The outcome was change in visual field as measured in MD in decibels (dB). Regressing the actual MD against the predicted produced an R² of 0.68 for the right eye and 0.63 for the left. The model predicted ending MD within 3 dB of actual results at 7 years for 65% of participants' right eyes and 63% of left eyes. In external validation the model had an R² of 0.79 in the right eye and 0.77 in the left at 5 years. The COA model is a validated tool for clinicians, patients, and health policy makers seeking to understand longitudinal changes in MD in people with glaucoma. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
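A Markov microsimulation of the kind this record describes can be sketched as below; the MD bands and transition probabilities are invented for illustration (the actual COA model stratifies transitions by MD, age, race, and intraocular pressure).

```python
import random

# Hypothetical annual transition probabilities between coarse MD bands (dB);
# the last band is absorbing. These values are NOT from the COA model.
STATES = ["0 to -6", "-6 to -12", "-12 to -20", "worse than -20"]
P = [
    [0.85, 0.12, 0.03, 0.00],
    [0.00, 0.80, 0.17, 0.03],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],
]

def simulate_patient(start=0, years=7, rng=random):
    """Walk one simulated patient through `years` annual transitions."""
    state = start
    for _ in range(years):
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            cum += p
            if u < cum:
                state = j
                break
    return state

def microsimulate(n=10000, seed=1):
    """Distribution over MD bands after 7 years for a cohort of n patients."""
    rng = random.Random(seed)
    counts = [0] * len(STATES)
    for _ in range(n):
        counts[simulate_patient(rng=rng)] += 1
    return [c / n for c in counts]
```

With these numbers the fraction still in the mildest band after 7 years clusters around 0.85^7 ≈ 0.32, as expected for a chain with no recovery transitions.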

  9. Critical evaluation of analytical models for stochastic heating in dual-frequency capacitive discharges

    International Nuclear Information System (INIS)

    Sharma, S; Turner, M M

    2013-01-01

    Dual-frequency capacitive discharges are widespread in the semiconductor industry and are used, for example, in etching of semiconductor materials to manufacture microchips. In low-pressure dual radio-frequency capacitive discharges, stochastic heating is an important phenomenon. Recent theoretical work on this problem using several different approaches has produced results that are broadly in agreement insofar as scaling with the discharge parameters is concerned, but there remains some disagreement in detail concerning the absolute size of the effect for the case of dual-frequency capacitive discharges. In this work, we investigate the dependence of stochastic heating on various discharge parameters with the help of particle-in-cell (PIC) simulation. The dual-frequency analytical models are in fair agreement with PIC results for values of the low-frequency current density amplitude J_lf (or dimensionless control parameter H_lf ~ 5) typical of many modern experiments. However, for higher values of J_lf (or higher H_lf), new physical phenomena (like field reversal, reflection of ions, etc.) appear and the simulation results deviate from existing dual-frequency analytical models. For lower J_lf (or lower H_lf), the simulation results again deviate from the analytical models. This work therefore produces a relatively extensive set of simulation data that may be used to validate theories over a wide range of parameters. (paper)

  10. Validation of multivariate classification methods using analytical fingerprints – concept and case study on organic feed for laying hens

    NARCIS (Netherlands)

    Alewijn, Martin; Voet, van der Hilko; Ruth, van Saskia

    2016-01-01

    Multivariate classification methods based on analytical fingerprints have found many applications in the food and feed area, but practical applications are still scarce due to a lack of a generally accepted validation procedure. This paper proposes a new approach for validation of this type of

  11. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
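The IPW correction discussed above can be sketched for the calibration (observed:expected) measure: treated individuals are excluded and the remaining untreated individuals are re-weighted by their inverse probability of being untreated. This is a schematic under the positivity assumption (treatment probability below 1 for everyone), not the authors' code.

```python
def ipw_observed_expected(events, predictions, treated, p_treat):
    """Observed:expected ratio among untreated individuals, weighted by the
    inverse probability of remaining untreated, so the untreated subset
    stands in for the whole (treatment-naive) target population.

    events      : observed outcomes (0/1)
    predictions : model-predicted risks
    treated     : whether each individual received treatment
    p_treat     : each individual's probability of being treated
    """
    num = den = 0.0
    for y, p, t, pt in zip(events, predictions, treated, p_treat):
        if t:
            continue                  # exclude treated individuals
        w = 1.0 / (1.0 - pt)          # IPW weight (requires pt < 1: positivity)
        num += w * y
        den += w * p
    return num / den
```

A ratio near 1.0 indicates calibration in the large; values below 1.0 are the apparent risk overestimation that untracked treatment use produces.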

  12. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    Energy Technology Data Exchange (ETDEWEB)

    Bharathan, D.; Parsons, B.K.; Althof, J.A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against fresh-water experiments. The condenser model represents the state of the art in direct-contact heat exchange for condensation for OC-OTEC applications. This is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations. 33 refs., 69 figs., 38 tabs.

  13. Modeling and analytical simulation of a smouldering carbonaceous ...

    African Journals Online (AJOL)

    Modeling and analytical simulation of a smouldering carbonaceous rod. A.A. Mohammed, R.O. Olayiwola, M Eseyin, A.A. Wachin. Abstract. Modeling of pyrolysis and combustion in a smouldering fuel bed requires the solution of flow, heat and mass transfer through porous media. This paper presents an analytical method ...

  14. Improved steamflood analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Chandra, S.; Mamora, D.D. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas A and M Univ., TX (United States)]

    2005-11-01

    Predicting the performance of steam flooding can help in the proper execution of enhanced oil recovery (EOR) processes. The Jones model is often used for analytical steam flooding performance prediction, but it does not accurately predict oil production peaks. In this study, an improved steam flood model was developed by modifying 2 of the 3 components of the capture factor in the Jones model. The modifications were based on simulation results from a Society of Petroleum Engineers (SPE) comparative project case model. The production performance of a 5-spot steamflood pattern unit was simulated and compared with results obtained from the Jones model. Three reservoir types were simulated through the use of 3-D Cartesian black oil models. In order to correlate the simulation and the Jones analytical model results for the start and height of the production peak, the dimensionless steam zone size was modified to account for a decrease in oil viscosity during steam flooding and its dependence on the steam injection rate. In addition, the dimensionless volume of displaced oil produced was modified from its square-root format to an exponential form. The modified model improved results for production performance by up to 20 years of simulated steam flooding, compared to the Jones model. Results agreed with simulation results for 13 different cases, including 3 different sets of reservoir and fluid properties. Reservoir engineers will benefit from the improved accuracy of the model. Oil displacement calculations were based on methods proposed in earlier research, in which the oil displacement rate is a function of cumulative oil steam ratio. The cumulative oil steam ratio is a function of overall thermal efficiency. Capture factor component formulae were presented, as well as charts of oil production rates and cumulative oil-steam ratios for various reservoirs. 13 refs., 4 tabs., 29 figs.

  15. Analytical Model for the End-Bearing Capacity of Tapered Piles Using Cavity Expansion Theory

    Directory of Open Access Journals (Sweden)

    Suman Manandhar

    2012-01-01

    Full Text Available On the basis of evidence from model tests showing increased end-bearing capacity of tapered piles in the load-settlement curve, this paper proposes an analytical spherical cavity expansion theory to evaluate the end-bearing capacity. The angle of tapering is incorporated into the proposed model. Results of the proposed model for different types of sand and different relative densities show improved performance compared with conventional straight piles, and the end-bearing capacity increases with increasing tapering angle. The model is then extended to prototype and full-scale pile tests, against which its predictions of end-bearing capacity are validated.

  16. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    Science.gov (United States)

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting and introducing Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near-infrared (NIR) spectroscopy was developed as a PAT tool for on-line control of an API (active pharmaceutical ingredient) manufacturing crystallization step, during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were evaluated as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. Because the variability of the sampling method and the reference method is by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and suppression of a rather difficult sampling and tedious off-line analyses. © 2013 Published by Elsevier B.V.

  17. A hidden analytic structure of the Rabi model

    International Nuclear Information System (INIS)

    Moroz, Alexander

    2014-01-01

    The Rabi model describes the simplest interaction between a cavity mode with a frequency ω_c and a two-level system with a resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ϵ, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials ϕ_k(ϵ) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of ϕ_N(ϵ) of at least the degree N = n + n_t. The value of n_t > 0, which is slowly increasing with n, depends on the required precision. For instance, n_t ≃ 26 for n = 1000 and dimensionless interaction constant κ = 0.2, if double precision is required. Given that the sequence of the lth zeros x_nl of the ϕ_n(ϵ) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1. -- Highlights: •A significantly simplified analytic solution of the Rabi model. •The spectrum is the lattice of discrete

  18. Use of reference materials for validating analytical methods. Applied to the determination of As, Co, Na, Hg, Se and Fe using neutron activation analysis

    International Nuclear Information System (INIS)

    Munoz, L; Andonie, O; Kohnenkamp, I

    2000-01-01

    The main purpose of an analytical laboratory is to provide reliable information on the nature and composition of the materials submitted for analysis. This purpose can only be attained if analytical methodologies that have the attributes of accuracy, precision, specificity and sensitivity, among others, are used. The process by which these attributes are evaluated is called validation of the analytical method. The Chilean Nuclear Energy Commission's Neutron Activation Analysis Laboratory is applying a quality assurance program to ensure the quality of its analytical results, which also aims to attain accreditation for some of its measurements. Validation of the analytical methodologies used is an essential part of applying this program. There are many forms of validation, from comparison with reference techniques to participation in inter-comparison rounds. Certified reference materials were used in this work in order to validate the application of neutron activation analysis in determining As, Co, Na, Hg, Se and Fe in shellfish samples. The use of reference materials was chosen because it is a simple option that easily detects sources of systematic error. Neutron activation analysis is an instrumental analytical method that needs no chemical treatment and that is based on processes which take place in the nuclei of atoms, making matrix effects unimportant, so different biological reference materials can be used. The following certified reference materials were used for validating the method: BCR human hair 397, NRCC dogfish muscle DORM-2, NRCC dogfish liver DOLT-2, NIST oyster tissue 1566, NIES mussel 6 and BCR tuna fish 464. The reference materials were analyzed using the procedure developed for the shellfish samples and the above-mentioned elements were determined. With the results obtained, the parameters of accuracy, precision, detection limit, quantification limit and uncertainty associated with the method were determined for each element.
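A typical screening computation when validating a method against certified reference materials is the per-element recovery and z-score; the acceptance limits below (|z| ≤ 2, 90-110% recovery) are common conventions assumed for illustration, not values from this record.

```python
def validate_against_crm(results, max_abs_z=2.0, recovery=(90.0, 110.0)):
    """Screen measured values against certified reference values.

    results: dict mapping element -> (measured value, certified value,
             combined standard uncertainty of the comparison).
    Returns dict mapping element -> (recovery %, z-score, passed).
    """
    report = {}
    lo, hi = recovery
    for elem, (x, cert, u) in results.items():
        rec = 100.0 * x / cert          # recovery in percent
        z = (x - cert) / u              # bias in units of uncertainty
        report[elem] = (rec, z, abs(z) <= max_abs_z and lo <= rec <= hi)
    return report
```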

  19. Analytical solutions by squeezing to the anisotropic Rabi model in the nonperturbative deep-strong-coupling regime

    Science.gov (United States)

    Zhang, Yu-Yu; Chen, Xiang-You

    2017-12-01

    An unexplored nonperturbative deep-strong-coupling (npDSC) regime achievable in superconducting circuits has been studied in the anisotropic Rabi model by the generalized squeezing rotating-wave approximation. Energy levels are evaluated analytically from the reformulated Hamiltonian and agree well with numerical ones over a wide range of coupling strengths. This improvement is ascribed to deformation effects in the displaced-squeezed state, captured by the squeezed momentum variance, which are omitted in previous displaced states. The atom population dynamics confirms the validity of our approach at npDSC strengths. Our approach offers the possibility to explore interesting phenomena analytically in the npDSC regime in qubit-oscillator experiments.

  20. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    Science.gov (United States)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper has proposed a generalized analytical model for linear generators. The slotted stator pole-shifting and implementation of Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array have been determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution has been derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with Halbach PM has been constructed to validate the model and further improved using piece-wise springs to trigger full range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool in development and optimization of Halbach PM generator. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.

  1. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    Directory of Open Access Journals (Sweden)

    Yimin Tan

    2018-05-01

    Full Text Available Halbach permanent magnet (PM array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper has proposed a generalized analytical model for linear generators. The slotted stator pole-shifting and implementation of Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array have been determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution has been derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with Halbach PM has been constructed to validate the model and further improved using piece-wise springs to trigger full range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool in development and optimization of Halbach PM generator. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.
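The first step of the Halbach model above, Fourier decomposition of the magnetization, can be sketched numerically; the ideal four-segment array profile and normalization below are assumptions for illustration, not the geometry of the cited generator.

```python
import math

def fourier_coefficients(m_fn, wavelength, n_harmonics, samples=4096):
    """Numerically decompose a periodic magnetization profile m_fn(x)
    into (cosine, sine) harmonic coefficients over one spatial wavelength,
    using a simple Riemann-sum approximation of the Fourier integrals."""
    coeffs = []
    dx = wavelength / samples
    for n in range(1, n_harmonics + 1):
        k = 2.0 * math.pi * n / wavelength
        a = sum(m_fn(i * dx) * math.cos(k * i * dx) for i in range(samples)) * 2.0 * dx / wavelength
        b = sum(m_fn(i * dx) * math.sin(k * i * dx) for i in range(samples)) * 2.0 * dx / wavelength
        coeffs.append((a, b))
    return coeffs

def m_y(x, wavelength=1.0, m0=1.0):
    """Vertical magnetization of an ideal 4-segment Halbach array:
    up / sideways / down / sideways over one wavelength."""
    quarter = (x % wavelength) / wavelength
    if quarter < 0.25:
        return m0
    if quarter < 0.5:
        return 0.0
    if quarter < 0.75:
        return -m0
    return 0.0
```

For this piecewise profile the fundamental coefficients work out analytically to a1 = b1 = 2/π, while the second harmonic vanishes, which is a convenient sanity check on the numerics.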

  2. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    Science.gov (United States)

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
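    The prediction scheme can be illustrated with a hedged sketch: retention data for a homologous ketone series (values invented here) fix a linear log k versus carbon number relation, and the effective mobility of a neutral analyte follows from the fraction of time it spends in the micellar phase. The micelle mobility value is an assumption, not taken from the paper.

```python
import numpy as np

# Hypothetical retention factors for an alkyl phenyl ketone homologous
# series (the reference analytes); values are invented for illustration.
carbon = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
logk = np.array([-0.30, 0.05, 0.40, 0.75, 1.10])

b, a = np.polyfit(carbon, logk, 1)        # log k = a + b * C
mu_mc = -4.0e-8                           # micelle mobility, m^2/(V s) (assumed)

def mobility_from_cne(C_eq):
    """Effective mobility of a neutral analyte from its carbon number equivalent."""
    retention = 10 ** (a + b * C_eq)
    # A neutral analyte migrates electrophoretically only while it is
    # solubilized in the micelle, i.e. a fraction k/(1+k) of the time.
    return mu_mc * retention / (1 + retention)

print(mobility_from_cne(10.0))
```

    An octanol-water partition coefficient would enter the same way, via its own calibrated linear relation to log k.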

  3. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    Science.gov (United States)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

    Accident reconstruction is important in the general context of increasing road traffic safety. In the casuistry of traffic accidents, those caused by tire explosions are critical in the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequences of the road event backwards. The authors also validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.
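    A minimal energy-balance sketch of running the event backwards (every parameter value below is hypothetical, and the model's separate momentum-conservation branch is not shown):

```python
import math

# Hypothetical event, reconstructed backwards: the car skids, then
# overturns and rolls to rest.
g = 9.81
mu_skid = 0.6     # tyre-road friction during the skid phase (assumed)
d_skid = 25.0     # skid distance, m (assumed)
f_roll = 0.4      # equivalent drag factor during the rollover phase (assumed)
d_roll = 12.0     # travel distance while rolling over, m (assumed)

# Energy balance: (1/2) m v0^2 = mu m g d_skid + f m g d_roll
# (the vehicle mass cancels)
v0 = math.sqrt(2 * g * (mu_skid * d_skid + f_roll * d_roll))
print(f"speed at tyre burst: {v0:.1f} m/s ({v0 * 3.6:.0f} km/h)")
```

    A real reconstruction would split the rollover into measured rotation and translation phases; this collapses them into one equivalent drag factor.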

  4. Analytical models for low-power rectenna design

    NARCIS (Netherlands)

    Akkermans, J.A.G.; Beurden, van M.C.; Doodeman, G.J.N.; Visser, H.J.

    2005-01-01

    The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed.

  5. Analytically solvable models of reaction-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Zemskov, E P; Kassner, K [Institut fuer Theoretische Physik, Otto-von-Guericke-Universitaet, Universitaetsplatz 2, 39106 Magdeburg (Germany)

    2004-05-01

    We consider a class of analytically solvable models of reaction-diffusion systems. An analytical treatment is possible because the nonlinear reaction term is approximated by a piecewise linear function. As particular examples we choose front and pulse solutions to illustrate the matching procedure in the one-dimensional case.

  6. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    Science.gov (United States)

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to raw materials used or to the migration of phthalates from packaging when plastic (polyvinyl chloride--PVC) is used. 8 phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system with electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution>1.5), a common quantification limit at 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹ as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after a dilution in ethanol whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices.
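    Quantification by external calibration with an internal standard reduces to a straight-line fit of peak-area ratios; a sketch with invented data for one hypothetical phthalate:

```python
import numpy as np

# Hypothetical calibration for one phthalate: analyte/internal-standard
# peak-area ratios at five standard concentrations (values invented).
conc = np.array([0.5, 1.0, 2.0, 3.5, 5.0])          # ug/mL
ratio = np.array([0.24, 0.49, 1.01, 1.74, 2.51])    # illustrative area ratios

slope, intercept = np.polyfit(conc, ratio, 1)
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2             # linearity check

def quantify(sample_ratio, dilution=1.0):
    """Concentration in the injected solution, corrected for any dilution."""
    return (sample_ratio - intercept) / slope * dilution

print(f"R^2 = {r2:.4f}, sample = {quantify(1.30):.2f} ug/mL")
```

    The internal standard cancels injection-volume and detector drift, which is why the ratio rather than the raw analyte area is regressed.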

  7. Analytical method validation for quality control and the study of the 50 mg Propylthiouracil stability

    International Nuclear Information System (INIS)

    Valdes Bendoyro, Maria Olga; Garcia Penna, Caridad Margarita; Fernandez, Juan Lugones; Garcia Borges, Lisandra; Martinez Espinosa, Vivian

    2010-01-01

    A high-performance liquid chromatography analytical method was developed and validated for the quality control and stability studies of 50 mg Propylthiouracil tablets. The method is based on separation of the active principle on a LiChrospher 100 RP-18 (5 μm, 250 x 4 mm) column with UV detection at 272 nm, using a mobile phase composed of a degassed mixture of 0.025 M monobasic potassium phosphate buffer solution at pH 4.6 and acetonitrile in an 80:20 ratio, at a flow rate of 0.5 mL/min. The analytical method was linear, precise, specific and exact over the studied concentration interval.

  8. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce the aliasing and interpolation errors produced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. 
The conventional, FFT-based spatial filtering method used to

  9. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  10. A semi-analytical beam model for the vibration of railway tracks

    Science.gov (United States)

    Kostovasilis, D.; Thompson, D. J.; Hussein, M. F. M.

    2017-04-01

    The high frequency dynamic behaviour of railway tracks, in both vertical and lateral directions, strongly affects the generation of rolling noise as well as other phenomena such as rail corrugation. An improved semi-analytical model of a beam on an elastic foundation is introduced that accounts for the coupling of the vertical and lateral vibration. The model includes the effects of cross-section asymmetry, shear deformation, rotational inertia and restrained warping. Consideration is given to the fact that the loads at the rail head, as well as those exerted by the railpads at the rail foot, may not act through the centroid of the section. The response is evaluated for a harmonic load and the solution is obtained in the wavenumber domain. Results are presented as dispersion curves for free and supported rails and are validated with the aid of a Finite Element (FE) and a waveguide finite element (WFE) model. Closed form expressions are derived for the forced response, and validated against the WFE model. Track mobilities and decay rates are presented to assess the potential implications for rolling noise and the influence of the various sources of vertical-lateral coupling. Comparison is also made with measured data. Overall, the model presented performs very well, especially for the lateral vibration, although it does not contain the high frequency cross-section deformation modes. The most significant effects on the response are shown to be the inclusion of torsion and foundation eccentricity, which mainly affect the lateral response.
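    As a much-reduced illustration of such a dispersion analysis, the classical Euler-Bernoulli beam on a continuous elastic foundation (vertical bending only, none of the paper's vertical-lateral coupling) already exhibits the cut-on behavior of a supported rail; the parameter values are typical but assumed:

```python
import numpy as np

# Vertical bending waves of a rail modelled as an Euler-Bernoulli beam on a
# continuous elastic foundation (railpads).  Free-wave condition:
#   EI k^4 + s = m omega^2
EI = 6.4e6       # bending stiffness, N m^2 (typical heavy rail, assumed)
m = 60.0         # mass per unit length, kg/m (assumed)
s = 1.0e8        # foundation stiffness per unit length, N/m^2 (assumed)

f = np.linspace(1.0, 2000.0, 2000)
omega = 2 * np.pi * f
k4 = (m * omega**2 - s) / EI
k = np.where(k4 > 0, np.abs(k4) ** 0.25, 0.0)   # propagating branch only

# Below the cut-on frequency the supported rail carries no propagating
# bending wave, which is what the dispersion curves display.
f_cuton = np.sqrt(s / m) / (2 * np.pi)
print(f"cut-on frequency ≈ {f_cuton:.0f} Hz")
```

    The paper's semi-analytical model adds shear deformation, rotational inertia, warping and the vertical-lateral coupling terms on top of this skeleton.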

  11. Analytic modeling, simulation and interpretation of broadband beam coupling impedance bench measurements

    Energy Technology Data Exchange (ETDEWEB)

    Niedermayer, U., E-mail: niedermayer@temf.tu-darmstadt.de [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Eidam, L. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Boine-Frankenheim, O. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); GSI Helmholzzentrum für Schwerionenforschung, Planckstraße 1, 64291 Darmstadt (Germany)

    2015-03-11

    First, a generalized theoretical approach towards beam coupling impedances and stretched-wire measurements is introduced. Applied to a circularly symmetric setup, this approach allows one to compare beam and wire impedances. The conversion formulas for TEM scattering parameters from measurements to impedances are thoroughly analyzed and compared to the analytical beam impedance solution. A proof of validity for the distributed impedance formula is given. The interaction of the beam or the TEM wave with dispersive material such as ferrite is discussed. The dependence of the obtained beam impedance on the relativistic velocity β is investigated and found to be material-property dependent. Second, numerical simulations of wakefields and scattering parameters are compared. The applicability of scattering parameter conversion formulas for finite device length is investigated. Laboratory measurement results for a circularly symmetric test setup, i.e. a ferrite ring, are shown and compared to analytic and numeric models. The optimization of the measurement process and error reduction strategies are discussed.
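    The two standard conversion formulas for stretched-wire data can be sketched directly; the line impedance and the S-parameter ratio below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Convert a measured TEM transmission ratio (device under test relative to a
# smooth reference pipe) into a longitudinal coupling impedance estimate.
Zc = 290.0   # characteristic impedance of the wire-in-pipe line, ohm (assumed)

def lumped_formula(s21_ratio):
    """Lumped-element formula, valid for electrically short devices."""
    return 2 * Zc * (1 - s21_ratio) / s21_ratio

def improved_log_formula(s21_ratio):
    """Distributed-impedance ('improved log') formula."""
    return -2 * Zc * np.log(s21_ratio)

s21 = 0.98 * np.exp(-1j * 0.02)    # illustrative measured ratio
print(lumped_formula(s21), improved_log_formula(s21))
```

    For small attenuation and phase shift the two formulas agree; they diverge as the device becomes electrically long, which is the regime the paper's validity analysis addresses.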

  12. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily.

  13. Analytical techniques and method validation for the measurement of selected semivolatile and nonvolatile organofluorochemicals in air.

    Science.gov (United States)

    Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M

    2004-09-01

    The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and for total fluorine in ambient air samples. Due to the potential biphasic nature of most of these fluorochemicals when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute of Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions of the experimental design. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06-6 microg for each of 12 compounds analyzed by LC/MS and 0.3-30 microg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001-0.1 mg/m3 in general for LC/MS analytes and 0.005-0.5 mg/m3 for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m3 for many analytes, with one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m3). 
Total fluorine results may be used

  14. Analytical simulation of the cantilever-type energy harvester

    Directory of Open Access Journals (Sweden)

    Jie Mei

    2016-01-01

    Full Text Available This article describes an analytical model of the cantilever-type energy harvester based on Euler–Bernoulli's beam theory. Starting from the Hamiltonian form of the total energy equation, the bending mode shapes and electromechanical dynamic equations are derived. By solving the constitutive electromechanical dynamic equation, the frequency transfer functions of output voltage and power can be obtained. Through a case study of a unimorph piezoelectric energy harvester, this analytical modeling method has been validated by the finite element method.
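    A lumped single-mode electromechanical sketch (not the paper's distributed Euler-Bernoulli derivation; every parameter value is assumed) shows how such a voltage frequency transfer function is evaluated:

```python
import numpy as np

# Single-mode mass-spring-damper with piezoelectric coupling theta into a
# resistive load R.  Governing equations (base acceleration a0):
#   m x'' + c x' + k x + theta*V = m a0
#   Cp V' + V/R = theta x'
m, c, k = 1e-3, 5e-3, 400.0     # kg, N s/m, N/m  (assumed; fn ≈ 100 Hz)
theta = 1e-4                    # electromechanical coupling, N/V (assumed)
Cp = 20e-9                      # piezo capacitance, F (assumed)
R = 100e3                       # load resistance, ohm (assumed)

def voltage_frf(f, a0=1.0):
    """|V| per unit base acceleration at frequency f (Hz)."""
    w = 2 * np.pi * f
    Zel = R / (1 + 1j * w * R * Cp)                  # electrical load impedance
    mech = k - m * w**2 + 1j * w * c + 1j * w * theta**2 * Zel
    return abs(theta * Zel * 1j * w * m * a0 / mech)

fn = np.sqrt(k / m) / (2 * np.pi)
print(f"fn ≈ {fn:.0f} Hz, |V(fn)| = {voltage_frf(fn):.3f} V per m/s^2")
```

    The coupling term `1j*w*theta**2*Zel` is what shifts the resonance and adds electrical damping as the load changes; the power transfer function is `|V|^2/R`.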

  15. An analytical model for light backscattering by coccoliths and coccospheres of Emiliania huxleyi.

    Science.gov (United States)

    Fournier, Georges; Neukermans, Griet

    2017-06-26

    We present an analytical model for light backscattering by coccoliths and coccolithophores of the marine calcifying phytoplankter Emiliania huxleyi. The model is based on the separation of the effects of diffraction, refraction, and reflection on scattering, a valid assumption for particle sizes typical of coccoliths and coccolithophores. Our model results match closely with results from an exact scattering code that uses complex particle geometry and our model also mimics well abrupt transitions in scattering magnitude. Finally, we apply our model to predict changes in the spectral backscattering coefficient during an Emiliania huxleyi bloom with results that closely match in situ measurements. Because our model captures the key features that control the light backscattering process, it can be generalized to coccoliths and coccolithophores of different morphologies which can be obtained from size-calibrated electron microphotographs. Matlab codes of this model are provided as supplementary material.

  16. Analytical model of impedance in elliptical beam pipes

    CERN Document Server

    Pesah, Arthur Chalom

    2017-01-01

    Beam instabilities are among the main limitations in building higher intensity accelerators. Having a good impedance model for every accelerators is necessary in order to build components that minimize the probability of instabilities caused by the interaction beam-environment and to understand what piece to change in case of intensity increasing. Most of accelerator components have their impedance simulated with finite elements method (using softwares like CST Studio), but simple components such as circular or flat pipes are modeled analytically, with a decreasing computation time and an increasing precision compared to their simulated model. Elliptical beam pipes, while being a simple component present in some accelerators, still misses a good analytical model working for the hole range of velocities and frequencies. In this report, we present a general framework to study the impedance of elliptical pipes analytically. We developed a model for both longitudinal and transverse impedance, first in the case of...

  17. 33 CFR 385.33 - Revisions to models and analytical tools.

    Science.gov (United States)

    2010-07-01

    ... on a case-by-case basis what documentation is appropriate for revisions to models and analytic tools... analytical tools. 385.33 Section 385.33 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE... Incorporating New Information Into the Plan § 385.33 Revisions to models and analytical tools. (a) In carrying...

  18. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Full Text Available Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and
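    The IPW correction can be illustrated on simulated data in the spirit of the study; this is not the authors' code, and all data-generating values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated validation set: true risk depends on one covariate x; a
# risk-halving treatment is given preferentially to higher-risk people.
n = 20000
x = rng.normal(size=n)
risk_untreated = 1 / (1 + np.exp(-(x - 1)))   # model target: risk WITHOUT treatment
p_treat = 1 / (1 + np.exp(-(x - 0.5)))        # treatment probability depends on x
treated = rng.uniform(size=n) < p_treat
risk_actual = np.where(treated, 0.5 * risk_untreated, risk_untreated)
y = rng.uniform(size=n) < risk_actual

pred = risk_untreated   # a perfectly calibrated treatment-naive model

# Naive observed:expected ratio in the full set is biased by treatment use:
oe_naive = y.mean() / pred.mean()

# IPW: weight untreated individuals by 1/(1 - P(treatment|x)) so they
# represent the whole target population, then validate on them only.
w = 1 / (1 - p_treat[~treated])
oe_ipw = np.average(y[~treated], weights=w) / np.average(pred[~treated], weights=w)
print(round(oe_naive, 2), round(oe_ipw, 2))
```

    The naive ratio falls below 1 (the model appears to overestimate risk) while the weighted ratio sits near 1, matching the paper's qualitative result when positivity holds and the treatment model is correct.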

  19. An analytical model of iceberg drift

    Science.gov (United States)

    Eisenman, I.; Wagner, T. J. W.; Dell, R.

    2017-12-01

    Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
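    The small-iceberg asymptotic limit can be sketched as a steady balance of the two quadratic drag terms alone (Coriolis and pressure-gradient terms dropped). The drag coefficients below are assumed values chosen for illustration; the sail-to-keel area ratio supplies the iceberg-density dependence mentioned above.

```python
import math

# Steady drag balance for a small iceberg: air drag on the sail equals
# water drag on the keel, giving u - u_water = gamma * (u_air - u).
rho_a, rho_w, rho_i = 1.2, 1027.0, 850.0   # air, seawater, ice densities, kg/m^3
c_a, c_w = 1.3, 0.9                        # form-drag coefficients (assumed)

# Sail/keel cross-section ratio follows from buoyancy (freeboard fraction):
area_ratio = (rho_w - rho_i) / rho_i

gamma = math.sqrt(rho_a * c_a * area_ratio / (rho_w * c_w))
print(f"wind drift factor gamma ≈ {gamma:.3f}")   # lands near the empirical 2%

# Resulting drift velocity (collinear currents/winds for simplicity):
u_water, u_air = 0.1, 10.0                 # m/s (illustrative)
u_iceberg = (u_water + gamma * u_air) / (1 + gamma)
print(f"iceberg speed ≈ {u_iceberg:.3f} m/s")
```

    Because gamma is small, the iceberg essentially moves with the current plus a small wind-proportional correction, which is the 2% rule; for large icebergs the neglected Coriolis and pressure-gradient terms dominate and the rule fails, as the abstract notes.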

  20. Analytical modeling of pressure transient behavior for coalbed methane transport in anisotropic media

    International Nuclear Information System (INIS)

    Wang, Lei; Wang, Xiaodong

    2014-01-01

    Because coal media are inherently anisotropic, it is meaningful to evaluate pressure transient behavior and flow characteristics within coals. In this article, a complete analytical model called the elliptical flow model is established by combining the theory of elliptical flow in anisotropic media with Fick's laws for the diffusion of coalbed methane. To investigate pressure transient behavior, analytical solutions were first obtained through introducing a series of special functions (Mathieu functions), which are extremely complex and hard to calculate. Thus, a computer program was developed to establish type curves, on which the effects of the parameters, including the anisotropy coefficient, storage coefficient, transfer coefficient and rate constant, were analyzed in detail. Calculated results show that the existence of anisotropy causes great pressure depletion. To validate the new analytical solutions, previous results were compared with the new ones, and good agreement between the solutions obtained in this work and the literature was achieved. Finally, a case study is used to explain the effects of the parameters, including the rock total compressibility coefficient, coal medium porosity and anisotropic permeability, sorption time constant, Langmuir volume and fluid viscosity, on bottom-hole pressure behavior. It is necessary to coordinate these parameters so as to reduce the pressure depletion. (paper)

  1. Finite element and analytical models for twisted and coiled actuator

    Science.gov (United States)

    Tang, Xintian; Liu, Yingxiang; Li, Kai; Chen, Weishan; Zhao, Jianguo

    2018-01-01

    Twisted and coiled actuator (TCA) is a class of recently discovered artificial muscle, which is usually made by twisting and coiling polymer fibers into spring-like structures. It has been widely studied since its discovery due to its impressive output characteristics and bright prospects. However, mathematical models describing its actuation in response to temperature are still not fully developed. It is known that the large tensile stroke results from the untwisting of the twisted fiber when heated. Thus, the recovered torque during untwisting is a key parameter in the mathematical model. This paper presents a simplified model for the recovered torque of TCA. The finite element method is used for evaluating the thermal stress of the twisted fiber. Based on the results of the finite element analyses, the constitutive equations of twisted fibers are simplified to develop an analytic model of the recovered torque. Finally, the model of the recovered torque is used to predict the deformation of TCA under varying temperatures and validated against experimental results. This work will enhance our understanding of the deformation mechanism of TCAs, which will pave the way for closed-loop position control.

  2. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Since such examination configurations are particularly well suited to analytical modelling, a physical model has been developed on portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect and the examination parameters. Moreover, the model makes it possible to associate real signals and simulated signals.

  3. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  4. Experimental investigation and numerical simulation of 3He gas diffusion in simple geometries: implications for analytical models of 3He MR lung morphometry.

    Science.gov (United States)

    Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M

    2010-06-01

    Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced and the basic reliance of the cylinder model and other ADC-based approaches on a Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs.
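    The Gaussian-diffusion assumption criticized above corresponds to a mono-exponential (Stejskal-Tanner) signal decay; a minimal fitting sketch with invented b-values and ADC shows what the assumption buys and where it breaks:

```python
import numpy as np

# Gaussian diffusion implies a mono-exponential decay: S(b) = S0 * exp(-b * ADC).
b = np.array([0.0, 1.6, 3.2, 4.8])    # diffusion weightings, s/cm^2 (illustrative)
S0, ADC_true = 100.0, 0.17            # restricted 3He ADC, cm^2/s (assumed)
S = S0 * np.exp(-b * ADC_true)        # noiseless synthetic signals

# A log-linear least-squares fit recovers the ADC exactly only while the
# decay really is mono-exponential; at large gradient strengths the decay
# becomes non-Gaussian and a single ADC no longer describes it.
slope, intercept = np.polyfit(b, np.log(S), 1)
print(-slope)  # ≈ 0.17
```

    In real high-gradient data, plotting ln(S) against b would reveal curvature, which is precisely the deviation from Gaussian behaviour the paper demonstrates.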

  5. Validation of analytical method for quality control of B12 Vitamin-10 000 injection

    International Nuclear Information System (INIS)

    Botet Garcia, Martha; Garcia Penna, Caridad Margarita; Troche Concepcion, Yenilen; Cannizares Arencibia, Yanara; Moreno Correoso, Barbara

    2009-01-01

    The analytical method reported by the US Pharmacopeia was validated for quality control of injectable Vitamin B12 (10 000 U) by UV spectrophotometry, a simple, low-cost method suitable for quality control of the finished product. The calibration curve was constructed over the 60 to 140% interval, where it was linear with a correlation coefficient of 0.9999; the statistical tests for intercept and slope were non-significant. Recovery was 99.7% over the studied concentration interval, where the Cochran (G) and Student (t) tests were likewise non-significant. The coefficient of variation in the repeatability study was 0.59% for the six replicates assayed, whereas in the intermediate precision analysis the Fisher and Student tests were not significant. The analytical method was linear, precise, specific and exact over the studied concentration interval.
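    The reported validation statistics (linearity, repeatability, accuracy) can be reproduced in outline; all measurement values below are invented for illustration:

```python
import numpy as np

# Linearity: response at 60-140% of the nominal concentration.
conc = np.array([60.0, 80.0, 100.0, 120.0, 140.0])     # % of nominal
absorb = np.array([0.302, 0.401, 0.499, 0.601, 0.698]) # illustrative absorbances

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]                    # correlation coefficient

# Repeatability: six replicate assays at the 100% level (results in %).
replicates = np.array([99.8, 100.3, 99.2, 100.1, 99.6, 100.4])
cv = replicates.std(ddof=1) / replicates.mean() * 100  # coefficient of variation

# Accuracy: mean recovery against the known amount.
recovery = replicates.mean()

print(f"r = {r:.4f}, CV = {cv:.2f}%, recovery = {recovery:.1f}%")
```

    Typical in-house acceptance limits (e.g. r ≥ 0.999, CV ≤ 2%, recovery 98-102%) would then be checked against these figures; the limits here are generic assumptions, not the paper's specifications.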

  6. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    Science.gov (United States)

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
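    Since POI is a proportion of binary outcomes, descriptive statistics and acceptance checks reduce to standard binomial inference; a sketch using a Wilson score interval (the acceptance limit shown is an assumption, not taken from the report):

```python
import math

# POI = proportion of replicates identified, a binomial proportion.
def poi_wilson(identified, replicates, z=1.96):
    """Point estimate and 95% Wilson score interval for the POI."""
    p = identified / replicates
    denom = 1 + z**2 / replicates
    centre = (p + z**2 / (2 * replicates)) / denom
    half = z * math.sqrt(p * (1 - p) / replicates
                         + z**2 / (4 * replicates**2)) / denom
    return p, centre - half, centre + half

# Target material test: 29 of 30 replicates identified.  A hypothetical
# performance requirement might demand the CI lower bound exceed 0.90.
p, lo, hi = poi_wilson(29, 30)
print(f"POI = {p:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

    Plotting POI (with intervals) against the fraction of nontarget material yields the response curves the report describes, analogous to probability-of-detection curves in quantitative validation.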

  7. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes excessive energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
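To make the genetic-algorithm scheduling idea concrete, here is a toy permutation GA that orders jobs on a single node to minimize total completion time. It is an illustrative stand-in, not the paper's Hadoop scheduler: real BDA scheduling would use the cluster performance estimates the abstract mentions as the fitness function.

```python
import random

def total_completion_time(order, durations):
    """Sum of job completion times when jobs run in sequence on one node."""
    t, total = 0, 0
    for j in order:
        t += durations[j]
        total += t
    return total

def ga_schedule(durations, pop_size=30, generations=200, seed=0):
    """Toy GA: elitist selection, one-point order crossover, swap mutation."""
    rng = random.Random(seed)
    n = len(durations)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: total_completion_time(o, durations))
        survivors = pop[: pop_size // 2]           # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            head = a[:cut]                          # order crossover
            child = head + [j for j in b if j not in head]
            if rng.random() < 0.2:                  # swap mutation
                i, k = rng.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda o: total_completion_time(o, durations))
    return best, total_completion_time(best, durations)
```

For total completion time the known optimum is shortest-job-first, which gives the GA an easy correctness check on small instances.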

  8. Elliptic-cylindrical analytical flux-rope model for ICMEs

    Science.gov (United States)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in interplanetary coronal mass ejections (ICMEs). The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux-rope model, building on the concept developed by Hidalgo et al. (2002). Elliptic-cylindrical geometry constitutes the first grade of complexity in a series of models. The model attempts to describe the magnetic flux-rope topology with a distorted cross-section as a possible consequence of interaction with the solar wind. In this model, the flux rope is described entirely in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus consistent with the chosen geometry, with invariance along the axial component and the single assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for individual pairs of indices of the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of ICME events with different in situ signatures. Successful results are limited to cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects, such as axial curvature, expansion and/or interaction, could be incorporated in the future to fully capture the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: a toroidal analytical flux-rope model.

  9. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, S.; Brincker, Rune

    An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model are compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load...

  10. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  11. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    Analytical local model potentials for modeling the interaction in an atom significantly reduce the computational effort in electronic structure calculations. A new four-parameter analytical local model potential is proposed for atoms Li through Lr; the values of the four parameters are shell-independent and are obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total electron energies are obtained by solving the radial Schrödinger equation with the new potential function using Numerov's numerical method. The results show that the new form of potential function is suitable for high-, medium- and low-Z atoms. A comparison of the new potential function with other analytical potential functions shows the greater flexibility and greater accuracy of the present potential. (atomic and molecular physics)

  12. Analytic nearest neighbour model for FCC metals

    International Nuclear Information System (INIS)

    Idiodi, J.O.A.; Garba, E.J.D.; Akinlade, O.

    1991-06-01

    A recently proposed analytic nearest-neighbour model for fcc metals is criticised and two alternative nearest-neighbour models derived from the separable potential method (SPM) are recommended. Results for copper and aluminium illustrate the utility of the recommended models. (author). 20 refs, 5 tabs

  13. Analytical eigenstates for the quantum Rabi model

    International Nuclear Information System (INIS)

    Zhong, Honghua; Xie, Qiongtao; Lee, Chaohong; Batchelor, Murray T

    2013-01-01

    We develop a method to find analytical solutions for the eigenstates of the quantum Rabi model. These include symmetric, anti-symmetric and asymmetric analytic solutions given in terms of the confluent Heun functions. Both regular and exceptional solutions are given in a unified form. In addition, the analytic conditions for determining the energy spectrum are obtained. Our results show that conditions proposed by Braak (2011 Phys. Rev. Lett. 107 100401) are a type of sufficiency condition for determining the regular solutions. The well-known Judd isolated exact solutions appear naturally as truncations of the confluent Heun functions. (paper)

  14. Analytical Models Development of Compact Monopole Vortex Flows

    Directory of Open Access Journals (Sweden)

    Pavlo V. Lukianov

    2017-09-01

    Conclusions. The article presents a series of recent analytical models that describe both the laminar and turbulent dynamics of monopole vortex flows, which have not previously been covered in the standard literature. Further research should be directed toward analytical models of coherent vortical structures in viscous-fluid flows, particularly near curved surfaces, where the "wall law" known in hydromechanics breaks down and heat and mass transfer anomalies occur.

  15. Oxcarbazepine: validation and application of an analytical method

    Directory of Open Access Journals (Sweden)

    Paula Cristina Rezende Enéas

    2010-06-01

    Full Text Available Oxcarbazepine (OXC is an important anticonvulsant and mood stabilizing drug. A pharmacopoeial monograph for OXC is not yet available and therefore the development and validation of a new analytical method for quantification of this drug is essential. In the present study, a UV spectrophotometric method for the determination of OXC was developed. The various parameters, such as linearity, precision, accuracy and specificity, were studied according to International Conference on Harmonization Guidelines. Batches of 150 mg OXC capsules were prepared and analyzed using the validated UV method. The formulations were also evaluated for parameters including drug-excipient compatibility, flowability, uniformity of weight, disintegration time, assay, uniformity of content and the amount of drug dissolved during the first hour.Oxcarbazepina (OXC é um fármaco anticonvulsivante e estabilizante do humor. O desenvolvimento e validação de método analítico para quantificação da OXC são de fundamental importância devido à ausência de monografias farmacopéicas oficiais para esse fármaco. Nesse trabalho, um método espectrofotométrico UV para determinação da OXC foi desenvolvido. O método proposto foi validado seguindo os parâmetros de linearidade, precisão, exatidão e especificidade de acordo com as normas da Conferência Internacional de Harmonização. Cápsulas de OXC 150 mg foram preparadas e analisadas utilizando-se o método analítico validado. As formulações foram avaliadas com relação à compatibilidade fármaco-excipientes, fluidez, determinação de peso, tempo de desintegração, doseamento, uniformidade de conteúdo e quantidade do fármaco dissolvido após 60 minutos.

  16. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune

    1995-01-01

    An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...

  17. Analytical Modeling Approach to Study Harmonic Mitigation in AC Grids with Active Impedance at Selective Frequencies

    Directory of Open Access Journals (Sweden)

    Gonzalo Abad

    2018-05-01

    This paper presents an analytical model for studying harmonic mitigation in AC grids. As is well known, the presence of undesired harmonics in AC grids can be mitigated in several ways; here, a power electronic-based active impedance at selective frequencies (ACISEF) is used, owing to its proven flexibility and adaptability to the changing characteristics of AC grids. The proposed analytical model is specifically conceived to consider both the AC grid itself, with its equivalent electric impedances, and the power electronic-based ACISEF, including its control loops. In addition, the proposed analytical model has practical and useful properties: it is simple to understand and use, it has low computational cost, it adapts easily to different AC grid scenarios, and it provides a sufficiently accurate representation of reality. The benefits of using the proposed analytical model are shown through examples of its usefulness, including an analysis of stability and the identification of sources of instability for a robust design, an analysis of the effectiveness of harmonic mitigation, an analysis to assist in choosing the most suitable active impedance for a given state of the AC grid, an analysis of the interaction between different compensators, and so on. To conclude, experimental validation of a 2.15 kA ACISEF in a real 33 kV AC grid is provided, in which real users (household and industrial loads) and crucial elements such as wind parks and HVDC systems are interconnected nearby.

  18. Dosimetric validation of the anisotropic analytical algorithm for photon dose calculation: fundamental characterization in water

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Vanetti, Eugenio; Clivio, Alessandro; Cozzi, Luca

    2006-01-01

    In July 2005 a new algorithm, the anisotropic analytical algorithm (AAA) for photon dose calculations, was released by Varian Medical Systems for the Eclipse planning system and installed in our institute. It is a convolution/superposition model implemented for the first time in a Varian planning system. It was therefore necessary to perform validation studies at different levels with a wide investigative approach. To validate the basic performance of the AAA, a detailed analysis of data computed by the AAA configuration algorithm was carried out and the data were compared against measurements. To better appraise the performance of the AAA and the capability of its configuration to tailor machine-specific characteristics, data obtained from the pencil beam convolution (PBC) algorithm implemented in Eclipse were also included in the comparison. Since the purpose of the paper is to address the basic performance of the AAA and its configuration procedures, only data relative to measurements in water are reported. Validation was carried out for three beams: 6 MV and 15 MV from a Clinac 2100C/D and 6 MV from a Clinac 6EX. In general, AAA calculations reproduced the measured data very well, with only small average deviations for all the quantities investigated for open and wedged fields. In particular, percentage depth-dose curves showed average differences between calculation and measurement smaller than 1% or 1 mm, and computed profiles in the flattened region matched measurements with deviations smaller than 1% for all beams, field sizes, depths and wedges. Percentage differences in output factors were as small as 1% on average (with a range smaller than ±2%) for all conditions. Additional tests carried out for enhanced dynamic wedges gave comparable results. The basic dosimetric validation of the AAA was therefore considered satisfactory.

  19. Analytical models for the rewetting of hot surfaces

    International Nuclear Information System (INIS)

    Olek, S.

    1988-10-01

    Some aspects of analytical models for the rewetting of hot surfaces are discussed. These include the problems of applying various forms of boundary conditions, the compatibility of boundary conditions with the physics of rewetting problems, recent analytical models, the use of the separation-of-variables method versus the Wiener-Hopf technique, and the use of transformations. The report includes an updated list of rewetting models as well as benchmark solutions in tabular form for several models. It should be emphasized that this report is not meant to cover the topic of rewetting models exhaustively; it merely discusses some points that are less commonly addressed in the literature. 93 refs., 3 figs., 22 tabs

  20. Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds

    Science.gov (United States)

    Akyurek, Bengu Ozge

    Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.

  1. 2D Analytical Modeling of Magnetic Vector Potential in Surface Mounted and Surface Inset Permanent Magnet Machines

    Directory of Open Access Journals (Sweden)

    A. Jabbari

    2017-12-01

    A 2D analytical method for calculating the magnetic vector potential in inner-rotor surface-mounted and surface-inset permanent magnet machines, considering slotting effects, magnetization orientation and winding layout, is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as the Maxwell equations, in quasi-Cartesian coordinates using the sub-domain method and hyperbolic functions. The developed method is applied to the performance computation of two prototype surface-mounted permanent magnet motors and two prototype surface-inset permanent magnet motors. Both radial and parallel magnetization orientations are considered for each type of motor. The results of these models are validated against the finite element method (FEM).

  2. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    Science.gov (United States)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
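The coupled SDEs described above can be illustrated with a much-simplified Euler-Maruyama integration: a linearized, nearest-neighbor relaxation of terrace widths with additive Gaussian white noise. This is an illustrative stand-in, not the BCF equations; the coupling constant and noise amplitude are arbitrary.

```python
import random

def simulate_terraces(w0, K, sigma, dt, steps, seed=1):
    """Euler-Maruyama for dw_i = K (w_{i+1} - 2 w_i + w_{i-1}) dt + sigma dB_i
    with periodic indexing: a linearized sketch of coupled terrace-width
    fluctuations. Returns the widths after `steps` time steps."""
    rng = random.Random(seed)
    w = list(w0)
    n = len(w)
    for _ in range(steps):
        drift = [K * (w[(i + 1) % n] - 2 * w[i] + w[(i - 1) % n])
                 for i in range(n)]
        w = [w[i] + drift[i] * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
             for i in range(n)]
    return w
```

A useful sanity check: the discrete-Laplacian drift conserves the total width exactly (it telescopes around the ring), and with the noise switched off the profile relaxes to uniform terraces.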

  3. An analytical threshold voltage model for a short-channel dual-metal-gate (DMG) recessed-source/drain (Re-S/D) SOI MOSFET

    Science.gov (United States)

    Saramekala, G. K.; Santra, Abirmoya; Dubey, Sarvesh; Jit, Satyabrata; Tiwari, Pramod Kumar

    2013-08-01

    In this paper, an analytical short-channel threshold voltage model is presented for a dual-metal-gate (DMG) fully depleted recessed-source/drain (Re-S/D) SOI MOSFET. For the first time, the advantages of the recessed source/drain and of the dual-metal-gate structure are incorporated simultaneously in a fully depleted SOI MOSFET. Analytical surface potential models at the Si-channel/SiO2 interface and the Si-channel/buried-oxide (BOX) interface have been developed by solving the 2D Poisson's equation in the channel region with appropriate boundary conditions, assuming a parabolic potential profile in the transverse direction of the channel. A threshold voltage model is then derived from the minimum surface potential in the channel. The developed model is analyzed extensively for a variety of device parameters, such as the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the BOX, and the control-to-screen gate length ratio. The validity of the present 2D analytical model is verified with ATLAS™, a 2D device simulator from SILVACO Inc.

  4. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of its intended use. This workshop developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. In making this workshop unique, a case study approach was used to discuss issues related to

  5. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    Science.gov (United States)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
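The analytic-versus-finite-difference validation described above can be shown in miniature (not with the CEA thermodynamics themselves, which are far larger): an analytic derivative of a sample function is checked against a central finite-difference approximation. The function is arbitrary, chosen only for illustration.

```python
import math

def f(x):
    """Sample smooth objective, stand-in for a thermodynamic response."""
    return math.exp(-x) * math.sin(3 * x)

def dfdx(x):
    """Analytic derivative of f, by the product rule."""
    return math.exp(-x) * (3 * math.cos(3 * x) - math.sin(3 * x))

def fd(x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

The two derivatives agree to roughly the square root of machine precision, which is exactly the kind of agreement check the abstract reports; the analytic form also avoids the extra function evaluations that make finite differencing slow inside an optimizer.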

  6. Analytical modeling of worldwide medical radiation use

    International Nuclear Information System (INIS)

    Mettler, F.A. Jr.; Davis, M.; Kelsey, C.A.; Rosenberg, R.; Williams, A.

    1987-01-01

    An analytical model was developed to estimate the availability and frequency of medical radiation use on a worldwide basis. This model includes medical and dental x-ray, nuclear medicine, and radiation therapy. The development of an analytical model is a necessary first step in estimating the radiation dose to the world's population from this source. Since there are no data on the frequency of medical radiation use in more than half the countries in the world, and only fragmentary data in an additional one-fourth of the world's countries, such a model can be used to predict the use of medical radiation in these countries. The model indicates that there are approximately 400,000 medical x-ray machines worldwide and that approximately 1.2 billion diagnostic medical x-ray examinations are performed annually. Dental x-ray examinations are estimated at 315 million annually, and in vivo diagnostic nuclear medicine examinations at approximately 22 million. Approximately 4 million radiation therapy procedures or courses of treatment are undertaken annually.

  7. Analytic investigation of extended Heitler-Matthews model

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Stefan; Veberic, Darko; Engel, Ralph [KIT, IKP (Germany)

    2016-07-01

    Many features of extensive air showers are qualitatively well described by the Heitler cascade model and its extensions. The core of a shower is given by hadrons that interact with air nuclei. After each interaction some of these hadrons decay and feed the electromagnetic shower component. The most important parameters of such hadronic interactions are the inelasticity, the multiplicity, and the ratio of charged to neutral particles. In analytic treatments, however, approximations are needed to include the characteristics of hadron production. We discuss extensions of the simple cascade model that also include the elasticity, and derive the number of produced muons. In a second step we apply this model to calculate the dependence of the shower center of gravity on the model parameters. The depth of the center of gravity is closely related to that of the shower maximum, which is a commonly used composition-sensitive observable.
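In the standard Heitler-Matthews picture, the muon number follows a power law in the primary energy, with an exponent set by the charged and total pion multiplicities. The sketch below uses typical textbook parameter values (critical energy near 20 GeV, multiplicity of order 100-200), which are assumptions for illustration and not numbers from this work.

```python
import math

def muon_number(E0, xi_c=20.0, n_tot=150, charged_fraction=2 / 3):
    """Heitler-Matthews muon number estimate: charged pions re-interact until
    their energy drops to the critical energy xi_c (GeV) and then decay to
    muons, giving N_mu = (E0 / xi_c)**beta with
    beta = ln(n_charged) / ln(n_total). E0 in GeV; parameter values are
    illustrative defaults."""
    n_ch = charged_fraction * n_tot
    beta = math.log(n_ch) / math.log(n_tot)
    return (E0 / xi_c) ** beta
```

Because beta is below 1, the muon number grows slightly slower than linearly with primary energy, which is the key composition-sensitive feature of the model.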

  8. Manufacturing data analytics using a virtual factory representation.

    Science.gov (United States)

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise but also huge data collection and analysis efforts. This paper presents an approach, within the frameworks of the Design Science Research Methodology and prototyping, to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing by reducing the development effort. Manufacturing simulation models are presented both as data analytics applications in their own right and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. A virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.

  9. Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model

    Science.gov (United States)

    Gustafson, David H; Hawkins, Robert P; Brennan, Patricia F; Dinauer, Susan; Johnson, Pauley R; Siegler, Tracy

    2010-01-01

    Objective To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements. Design Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The RIM model with weights was then validated in a prospective study of 25 IHCS implementation cases. Measurements Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes. Results Two of the seven factors, ‘organizational motivation’ and ‘meeting user needs,’ were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome. Limitations The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study. Conclusion The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term. PMID:20962135
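The RIM described above is a weighted additive model: each factor receives a score, and the readiness score is the weight-normalized sum. A minimal sketch of that model form; the factor names and weights below are hypothetical placeholders, not the validated RIM weights.

```python
def rim_score(factor_scores, weights):
    """Weighted additive readiness score: sum of weight * factor score,
    normalized by the total weight. Both arguments are dicts keyed by
    factor name; scores are assumed to lie in [0, 1]."""
    assert set(factor_scores) == set(weights), "factor sets must match"
    total_w = sum(weights.values())
    return sum(weights[k] * factor_scores[k] for k in weights) / total_w

# Hypothetical example: two of the abstract's named factors plus a catch-all.
example_weights = {"organizational motivation": 0.3,
                   "meeting user needs": 0.3,
                   "other factors": 0.4}
example_scores = {"organizational motivation": 1.0,
                  "meeting user needs": 0.5,
                  "other factors": 0.0}
```

Conjoint or self-explicated elicitation, as used in the study, is one way to obtain the weights; the additive form itself is what makes a cut-off score meaningful.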

  10. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is given to the analytical model-based fault-detection and fault...

  11. Development, validation and evaluation of an analytical method for the determination of monomeric and oligomeric procyanidins in apple extracts.

    Science.gov (United States)

    Hollands, Wendy J; Voorspoels, Stefan; Jacobs, Griet; Aaby, Kjersti; Meisland, Ane; Garcia-Villalba, Rocio; Tomas-Barberan, Francisco; Piskula, Mariusz K; Mawson, Deborah; Vovk, Irena; Needs, Paul W; Kroon, Paul A

    2017-04-28

    There is a lack of data for individual oligomeric procyanidins in apples and apple extracts. Our aim was to develop, validate and evaluate an analytical method for the separation, identification and quantification of monomeric and oligomeric flavanols in apple extracts. To achieve this, we prepared two types of flavanol extracts from freeze-dried apples: one was an epicatechin-rich extract containing ∼30% (w/w) monomeric (-)-epicatechin which also contained oligomeric procyanidins (Extract A); the second was an oligomeric procyanidin-rich extract depleted of epicatechin (Extract B). The parameters considered for method optimisation were HPLC columns and conditions, sample heating, mass of extract and dilution volumes. The performance characteristics considered for method validation included standard linearity, method sensitivity, precision and trueness. Eight laboratories participated in the method evaluation. Chromatographic separation of the analytes was best achieved using a HILIC column with a binary mobile phase consisting of acidic acetonitrile and acidic aqueous methanol. The final method showed linearity for epicatechin in the range 5-100 μg/mL with a correlation coefficient >0.999. Intra-day and inter-day precision of the analytes ranged from 2 to 6% and 2 to 13%, respectively. Up to dp3, trueness of the method was >95% but decreased with increasing dp. Within-laboratory precision showed RSD values <5% and <10% for monomers and oligomers, respectively. Between-laboratory precision was 4-15% (Extract A) and 7-30% (Extract B) for monomers and oligomers, respectively. An analytical method for the separation, identification and quantification of procyanidins in an apple extract was developed, validated and assessed. The results of the inter-laboratory evaluation indicate that the method is reliable and reproducible.
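The precision figures quoted above are relative standard deviations (RSD). A minimal sketch of that metric, computed from made-up replicate measurements:

```python
def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements,
    using the sample (n-1) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * var ** 0.5 / mean
```

For instance, replicates of 9, 10 and 11 give an RSD of 10%, well outside the <5% acceptance band the study reports for monomers within a laboratory, while identical replicates give 0%.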

  12. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Science.gov (United States)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of this current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a highest efficiency of 7%.
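    The efficiency figures quoted above (4.6% measured, 7% for the optimized model) can be put in context with the standard ideal thermoelectric-generator efficiency formula, which depends only on the hot- and cold-side temperatures and the material figure of merit ZT. A minimal sketch follows; the temperatures and ZT value are illustrative assumptions, not values from the study:

    ```python
    import math

    def teg_max_efficiency(t_hot, t_cold, zt):
        """Ideal TEG efficiency: Carnot factor times a material factor
        governed by the dimensionless figure of merit ZT."""
        carnot = 1.0 - t_cold / t_hot
        m = math.sqrt(1.0 + zt)
        return carnot * (m - 1.0) / (m + t_cold / t_hot)

    # Illustrative numbers only: 500 K absorber, 300 K water-cooled side, ZT = 1.
    eta = teg_max_efficiency(500.0, 300.0, 1.0)
    print(f"max efficiency: {eta:.3f}")
    ```

    With ZT = 0 the material factor vanishes and the efficiency is zero, which is a quick sanity check on the formula.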

  13. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    Science.gov (United States)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

    Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models that should be scrutinized and implemented to obtain optimum performance in hard turning. Various models of hard turning with cubic boron nitride tools have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces derived from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are covered in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be selected according to user requirements in hard turning.

  14. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The aim of this work is to validate the component functions of model output against physical observations using the area metric. In the theory of high dimensional model representation (HDMR) of independent input variables, conditional expectations are component functions of the model output, and these conditional expectations reflect partial information of the model output. Therefore, validating the conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the cases of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration are applied at single and multiple sites. • Validation and calibration show superiority over existing methods
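    The area metric referred to above compares cumulative distribution functions of model output and observations. A minimal sketch of one common form of this metric (the area between two empirical CDFs, in the spirit of Ferson's area validation metric) is below; the sample data are hypothetical:

    ```python
    import numpy as np

    def area_metric(model_samples, observations):
        """Area between the empirical CDFs of model output and observed
        data: the integral of |F_model - F_obs| over the data range."""
        xs = np.sort(np.concatenate([model_samples, observations]))
        f_model = np.searchsorted(np.sort(model_samples), xs, side="right") / len(model_samples)
        f_obs = np.searchsorted(np.sort(observations), xs, side="right") / len(observations)
        # Piecewise-constant CDFs: sum |ΔF| times the width of each step.
        return float(np.sum(np.abs(f_model - f_obs)[:-1] * np.diff(xs)))

    model = np.array([1.0, 2.0, 3.0, 4.0])
    obs = np.array([1.5, 2.5, 3.5, 4.5])   # same shape, model biased low by 0.5
    print(area_metric(model, obs))          # → 0.5
    ```

    A pure location shift between two identically shaped distributions yields an area metric equal to the shift, which makes the metric easy to interpret in the units of the response.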

  15. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.

    Science.gov (United States)

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). By coupling HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the substances most responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to give the study a sound thermodynamic basis. Validation of the optimized chemical model is achieved by means of conductometric measurements and a synthetic "wine" specially adapted for testing.

  16. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    Directory of Open Access Journals (Sweden)

    Enrico Prenesti

    2012-01-01

    Full Text Available A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). By coupling HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the substances most responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to give the study a sound thermodynamic basis. Validation of the optimized chemical model is achieved by means of conductometric measurements and a synthetic “wine” specially adapted for testing.

  17. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    Science.gov (United States)

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). By coupling HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the substances most responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to give the study a sound thermodynamic basis. Validation of the optimized chemical model is achieved by means of conductometric measurements and a synthetic “wine” specially adapted for testing. PMID:22566762

  18. Proposal and experimental validation of analytical models for seismic and vibration isolation devices in nuclear and non-nuclear facilities

    International Nuclear Information System (INIS)

    Serino, G.; Bonacina, G.; Bettinali, F.

    1993-01-01

    Two analytical-experimental models of high damping laminated rubber bearings (HDLRBs), with different levels of approximation, are presented. Comparison with available experimental data shows that a non-linear hysteretic model, defined by only three rubber parameters, allows a very good simulation of the complete dynamic behavior of the isolation devices. A simpler equivalent linear viscous model reproduces the experimental behavior less exactly, but permits a good prediction of peak response values in the earthquake analysis of an isolated structure, provided the bearing stiffness and damping parameters are properly selected. The models have been used in the preliminary design and subsequent checking of the isolation systems of two different types of gas-insulated electric substations (GIS), in view of possible future installation of isolated GISes in areas of high seismic risk. (author)

  19. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties, and unmodeled dynamics remain due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is a suitable alternative technique for extracting non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well, and are inefficient if they must learn what well-known analytical models already describe. This paper thus argues that feed-forward control based on hybrid models, comprising an analytical model and a learned error model, can significantly improve modeling accuracy. Hybrid modeling here serves to combine the best of the two modeling worlds. The hybrid modeling methodology is described, and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for either platform.
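    The hybrid scheme described above — an analytical model plus a learned error model — can be sketched for computed torque control of a single-joint arm. Everything below is an illustrative toy (a rigid-body model that omits viscous friction, with the residual torque learned by linear least squares), not the authors' implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "True" plant: a 1-DoF arm whose viscous friction (coefficient b) the
    # analytical rigid-body model omits. All parameter values are illustrative.
    m, l, g, b = 1.0, 0.5, 9.81, 0.3

    def true_torque(q, qd, qdd):
        return m * l**2 * qdd + m * g * l * np.sin(q) + b * qd

    def analytical_torque(q, qd, qdd):
        return m * l**2 * qdd + m * g * l * np.sin(q)  # no friction term

    # Collect data, then fit a learned error model on the residual torque.
    q, qd, qdd = (rng.uniform(-1.0, 1.0, 200) for _ in range(3))
    residual = true_torque(q, qd, qdd) - analytical_torque(q, qd, qdd)
    features = np.column_stack([q, qd, qdd])
    coef, *_ = np.linalg.lstsq(features, residual, rcond=None)

    def hybrid_torque(q, qd, qdd):
        # Analytical feed-forward model plus learned correction.
        return analytical_torque(q, qd, qdd) + np.column_stack([q, qd, qdd]) @ coef

    err_analytical = np.sqrt(np.mean((analytical_torque(q, qd, qdd) - true_torque(q, qd, qdd)) ** 2))
    err_hybrid = np.sqrt(np.mean((hybrid_torque(q, qd, qdd) - true_torque(q, qd, qdd)) ** 2))
    print(err_analytical, err_hybrid)
    ```

    Because the residual here is exactly linear in the joint velocity, the error model recovers the missing friction term and the hybrid prediction error collapses toward zero; with real data the learned part would only reduce, not eliminate, the unmodeled dynamics.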

  20. Approximate, analytic solutions of the Bethe equation for charged particle range

    OpenAIRE

    Swift, Damian C.; McNaney, James M.

    2009-01-01

    By either performing a Taylor expansion or making a polynomial approximation, the Bethe equation for charged particle stopping power in matter can be integrated analytically to obtain the range of charged particles in the continuous deceleration approximation. Ranges match reference data to the expected accuracy of the Bethe model. In the non-relativistic limit, the energy deposition rate was also found analytically. The analytic relations can be used to complement and validate numerical solu...
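    The polynomial-approximation route mentioned above can be illustrated on a toy stopping-power law: fit a polynomial to the reciprocal stopping power, then integrate it analytically term by term to obtain the range in the continuous-deceleration approximation. The functional form and constants below are stand-ins, not the actual Bethe formula:

    ```python
    import numpy as np

    # Toy non-relativistic, Bethe-like stopping power (arbitrary units):
    # S(E) = (k/E) * ln(a*E), valid where ln(a*E) > 0.
    k, a = 1.0, 10.0

    def stopping_power(E):
        return k / E * np.log(a * E)

    # CSDA-style range: R(E) = integral of dE'/S(E') from E_min up to E.
    E = np.linspace(1.0, 20.0, 200)
    inv_S = 1.0 / stopping_power(E)

    # The paper's idea in miniature: approximate 1/S by a polynomial,
    # then integrate the polynomial analytically term by term.
    coeffs = np.polyfit(E, inv_S, deg=6)
    anti = np.polyint(coeffs)                       # exact antiderivative
    R_poly = np.polyval(anti, E) - np.polyval(anti, E[0])

    # Reference: cumulative trapezoidal numerical integration.
    steps = 0.5 * (inv_S[1:] + inv_S[:-1]) * np.diff(E)
    R_num = np.concatenate([[0.0], np.cumsum(steps)])

    print(R_poly[-1], R_num[-1])
    ```

    The analytically integrated polynomial tracks the numerical quadrature closely over the fitted energy window, which is the practical appeal of the approach: a closed-form range expression at essentially no loss of accuracy.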

  1. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  2. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  3. Quantum decay model with exact explicit analytical solution

    Science.gov (United States)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  4. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities, and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences; therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading
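    One standard aggregate metric for correlated multivariate output, consistent with the classical hypothesis-testing approach described above, is the squared Mahalanobis distance compared against a chi-square threshold. A minimal sketch with hypothetical numbers (not from the paper):

    ```python
    import numpy as np

    def mahalanobis_metric(pred_mean, obs, cov):
        """Aggregate validation metric for correlated multivariate output:
        squared Mahalanobis distance of the observation from the prediction."""
        d = obs - pred_mean
        return float(d @ np.linalg.inv(cov) @ d)

    # Hypothetical two-response example with correlated uncertainty.
    pred = np.array([10.0, 5.0])
    cov = np.array([[1.0, 0.8],
                    [0.8, 1.0]])   # combined prediction/measurement covariance
    obs = np.array([10.5, 5.4])

    d2 = mahalanobis_metric(pred, obs, cov)
    # Under normality, d2 follows a chi-square distribution with 2 dof;
    # the 95% critical value is about 5.99, so d2 < 5.99 means the model
    # is not rejected at the 5% significance level.
    print(d2, d2 < 5.99)
    ```

    Unlike two separate univariate z-tests, this single statistic accounts for the correlation between the responses, which is exactly the conflict-of-inferences problem the aggregate metrics address.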

  5. Predictive analytics and child protection: constraints and opportunities.

    Science.gov (United States)

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics represents recent increases in data quantity and data diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness.

  6. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly embedded, the behavior of these structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), a department of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin-layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced-vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study demonstrating the usefulness of the codes

  7. Analytical models of lateral power devices with arbitrary vertical doping profiles in the drift region

    International Nuclear Information System (INIS)

    Hua Ting-Ting; Guo Yu-Feng; Yu Ying; Jian Tong; Yao Jia-Fei; Sheu Gene

    2013-01-01

    By solving the 2D Poisson's equation, analytical models are proposed to calculate the surface potential and electric field distributions of lateral power devices with arbitrary vertical doping profiles. The vertical and lateral breakdown voltages are formulated to quantify the breakdown characteristics in the completely-depleted and partially-depleted cases. A new reduced surface field (RESURF) criterion, applicable to various drift doping profiles, is further derived for obtaining the optimal trade-off between breakdown voltage and on-resistance. Based on these models and numerical simulation, the electric field modulation mechanism and the breakdown characteristics of lateral power devices are investigated in detail for uniform, linear, Gaussian, and some discrete doping profiles along the vertical direction in the drift region. The mentioned vertical doping profiles of devices with the same geometric parameters are then optimized, and the results show that the optimal breakdown voltages and effective drift doping concentrations of these devices are identical, equal to those of the uniformly doped device. The analytical results of the proposed models are in good agreement with the numerical results and previous experimental results, confirming the validity of the models presented here.

  8. Analytical models of lateral power devices with arbitrary vertical doping profiles in the drift region

    Science.gov (United States)

    Hua, Ting-Ting; Guo, Yu-Feng; Yu, Ying; Gene, Sheu; Jian, Tong; Yao, Jia-Fei

    2013-05-01

    By solving the 2D Poisson's equation, analytical models are proposed to calculate the surface potential and electric field distributions of lateral power devices with arbitrary vertical doping profiles. The vertical and lateral breakdown voltages are formulated to quantify the breakdown characteristics in the completely-depleted and partially-depleted cases. A new reduced surface field (RESURF) criterion, applicable to various drift doping profiles, is further derived for obtaining the optimal trade-off between breakdown voltage and on-resistance. Based on these models and numerical simulation, the electric field modulation mechanism and the breakdown characteristics of lateral power devices are investigated in detail for uniform, linear, Gaussian, and some discrete doping profiles along the vertical direction in the drift region. The mentioned vertical doping profiles of devices with the same geometric parameters are then optimized, and the results show that the optimal breakdown voltages and effective drift doping concentrations of these devices are identical, equal to those of the uniformly doped device. The analytical results of the proposed models are in good agreement with the numerical results and previous experimental results, confirming the validity of the models presented here.

  9. Analytical support of plant specific SAMG development validation of SAMG using MELCOR 1.8.5

    International Nuclear Information System (INIS)

    Duspiva, Jiri

    2006-01-01

    There are two NPPs in operation in the Czech Republic. Both have already implemented EOPs, developed in collaboration with WESE. A project on SAMG development has started and follows the previous EOP project, again with WESE as the leading organization. The plant-specific SAMGs for the Temelin and Dukovany NPPs are based on the WOG generic SAMGs. The analytical support for plant-specific SAMG development is provided by NRI Rez within the validation process. The basic conditions, and how NRI Rez fulfils them, concern the analysts, the analytical tools and their applications. A more detailed description is given of the approach to applying the MELCOR code to the evaluation of hydrogen risk, the validation of the current set of passive autocatalytic hydrogen recombiners, and the definition of proposals to amend the hydrogen removal system. This kind of parametric calculation requires a very wide set of runs. That is not feasible with the whole-plant model, so decoupling the calculation by storing the mass and energy sources into the containment is the only practical way. An example of this decoupling for the LOCA scenario is shown. It includes seven sources: heat losses from the primary and secondary circuits, fluid blowdown through the cold leg break, fission product blowdown through the cold leg break, fluid blowdown through the break in the reactor pressure vessel bottom head, fission products through the break in the reactor pressure vessel bottom head, melt ejection from the reactor pressure vessel to the cavity, and gas masses and heat losses from the corium in the cavity. The stand-alone containment analysis was tested in two configurations, with and without taking fission products into account. Testing showed very good agreement of all calculations until lower head failure and acceptable agreement after that, although some problematic features appeared. The stand-alone test with fission products was possible only after changes in the source code

  10. Analytical modeling and analysis of magnetic field and torque for novel axial flux eddy current couplers with PM excitation

    Science.gov (United States)

    Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin

    2017-10-01

    Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the computational cost of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions for the magnetic field, eddy currents, electromagnetic force and torque of such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. In addition, a prototype is manufactured and tested for its torque-speed characteristic.

  11. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate human performance modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  12. Design of homogeneous trench-assisted multi-core fibers based on analytical model

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2016-01-01

    We present a design method of homogeneous trench-assisted multicore fibers (TA-MCFs) based on an analytical model utilizing an analytical expression for the mode coupling coefficient between two adjacent cores. The analytical model can also be used for crosstalk (XT) properties analysis, such as ...

  13. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation is sound when the source and target models fulfill both the syntactic and the semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  14. Theory of a spherical electrostatic probe in a continuum plasma: Analytical models

    International Nuclear Information System (INIS)

    Brailsford, A.D.

    1977-01-01

    A simple physical model of the charge distribution surrounding a biased spherical probe in a quiescent plasma, suggested by the theory of Su and Lam, is used to rederive the probe current-voltage characteristic. The result is compared with that of a slightly different version due to Kiel and with the exact numerical results of Baum and Chapkis. It is shown that if the ratio of the probe radius to the Debye length of the plasma is greater than or of the order of unity, the model calculation is in excellent agreement with the exact results when the dimensionless probe voltage φ*_p = |eφ_p/kT| (in standard notation) is greater than 10, for both thick and thin sheaths. The comparison also provides an assessment of the importance of various additional validity criteria encountered in analytical treatments of the problem

  15. Analytical evaluation of atomic form factors: Application to Rayleigh scattering

    Energy Technology Data Exchange (ETDEWEB)

    Safari, L., E-mail: laleh.safari@ist.ac.at [IST Austria (Institute of Science and Technology Austria), Am Campus 1, 3400 Klosterneuburg (Austria); Department of Physics, University of Oulu, Box 3000, FI-90014 Oulu (Finland); Santos, J. P. [Laboratório de Instrumentação, Engenharia Biomédica e Física da Radiação (LIBPhys-UNL), Departamento de Física, Faculdade de Ciências e Tecnologia, FCT, Universidade Nova de Lisboa, 2829-516 Caparica (Portugal); Amaro, P. [Laboratório de Instrumentação, Engenharia Biomédica e Física da Radiação (LIBPhys-UNL), Departamento de Física, Faculdade de Ciências e Tecnologia, FCT, Universidade Nova de Lisboa, 2829-516 Caparica (Portugal); Physikalisches Institut, Universität Heidelberg, D-69120 Heidelberg (Germany); Jänkälä, K. [Department of Physics, University of Oulu, Box 3000, FI-90014 Oulu (Finland); Fratini, F. [Department of Physics, University of Oulu, Box 3000, FI-90014 Oulu (Finland); Institute of Atomic and Subatomic Physics, TU Wien, Stadionallee 2, 1020 Wien (Austria); Departamento de Física, Instituto de Ciências Exatas, Universidade Federal de Minas Gerais, 31270-901 Belo Horizonte, MG (Brazil)

    2015-05-15

    Atomic form factors are widely used for the characterization of targets and specimens, from crystallography to biology. By using recent mathematical results, here we derive an analytical expression for the atomic form factor within the independent particle model constructed from nonrelativistic screened hydrogenic wave functions. The range of validity of this analytical expression is checked by comparing the analytically obtained form factors with the ones obtained within the Hartree-Fock method. As an example, we apply our analytical expression for the atomic form factor to evaluate the differential cross section for Rayleigh scattering off neutral atoms.
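    For the simplest case — a single 1s hydrogenic electron — the nonrelativistic atomic form factor has a well-known closed form that can be checked against direct numerical integration of the electron density. The sketch below shows that textbook result, not the screened multi-electron expression derived in the paper:

    ```python
    import numpy as np

    a0 = 1.0  # Bohr radius in atomic units

    def form_factor_analytic(q, Z=1.0):
        # Textbook closed form for a 1s hydrogenic electron (a = a0/Z):
        # F(q) = [1 + (q*a/2)^2]^-2
        a = a0 / Z
        return (1.0 + (q * a / 2.0) ** 2) ** -2

    def form_factor_numeric(q, Z=1.0, rmax=40.0, n=20000):
        # Check: Fourier transform of the 1s density as a radial integral,
        # F(q) = ∫ 4π r² ρ(r) sin(qr)/(qr) dr, by trapezoidal quadrature.
        a = a0 / Z
        r = np.linspace(1e-8, rmax, n)
        rho = np.exp(-2.0 * r / a) / (np.pi * a**3)   # |psi_1s|^2
        f = 4.0 * np.pi * r**2 * rho * np.sin(q * r) / (q * r)
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

    q = 2.0  # momentum transfer in atomic units
    print(form_factor_analytic(q), form_factor_numeric(q))
    ```

    As q → 0 the form factor tends to the electron number (here 1, the normalization of the density), which is a useful sanity check on any analytic expression of this kind.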

  16. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    Science.gov (United States)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM UV photodetector (PD) based on a dual wide-band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models of the proposed sensor's photocurrent and device properties are developed, incorporating the impact of the DM design on the device's photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design modulates the electric field in the device, which makes it possible to drive the photo-generated carriers without an externally applied voltage. This phenomenon achieves the dual role of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias, ensuring a favorable self-powered MSM-based UV PD. It is found that the proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior ION/IOFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.
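    Particle Swarm Optimization itself is generic; below is a minimal textbook PSO sketch optimizing a stand-in objective (the paper would instead evaluate its analytical photodetector model at each candidate design point). The inertia and acceleration coefficients are common defaults, not values from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):
        # Stand-in objective to minimize; replace with the model to optimize.
        return np.sum(x ** 2, axis=1)

    def pso(objective, dim=2, n_particles=30, iters=200,
            w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))     # positions
        v = np.zeros_like(x)                            # velocities
        pbest, pbest_val = x.copy(), objective(x)       # personal bests
        g = pbest[np.argmin(pbest_val)].copy()          # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = objective(x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, float(objective(g[None, :])[0])

    best_x, best_val = pso(sphere)
    print(best_x, best_val)
    ```

    For a hybrid analytical-model/PSO workflow of the kind the abstract describes, the objective function would score each candidate parameter vector with the closed-form device model, making each swarm evaluation cheap compared with a TCAD run.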

  17. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

    Full Text Available Carvedilol is a nonselective beta blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes variables of particular importance for the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism, or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is nonlinear mixed-effects modeling (NONMEM). The analytical method used in the data collection period is of great importance for the implementation of a population PK analysis of carvedilol, in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results, and monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, so it was necessary to perform certain modifications and validate the method so that the obtained results could serve the population pharmacokinetic analysis. The validation process is the logical terminal phase of analytical procedure development that ensures the applicability of the procedure itself. The goal of validation is to ensure the consistency of the method and the accuracy of results, and to confirm the selection of the analytical method for a given sample.
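    The record above describes population PK modeling in the NONMEM style. A minimal simulation sketch of the core idea: a one-compartment oral-absorption model (Bateman equation) with log-normal inter-individual variability ("eta" terms) on the individual parameters. All parameter values are hypothetical, not the study's carvedilol estimates:

    ```python
    import math
    import random

    def conc_oral_1cpt(t, dose, ka, ke, v, f=1.0):
        """Concentration-time course of a one-compartment model with
        first-order absorption (Bateman equation)."""
        if abs(ka - ke) < 1e-12:          # degenerate ka == ke limit
            return f * dose * ka * t * math.exp(-ka * t) / v
        return (f * dose * ka / (v * (ka - ke))) * (math.exp(-ke * t)
                                                    - math.exp(-ka * t))

    def simulate_population(n, dose, ka, ke, v, omega=0.3, seed=1):
        """Draw individual profiles with log-normal inter-individual
        variability on elimination rate and volume (NONMEM-style etas)."""
        rng = random.Random(seed)
        profiles = []
        for _ in range(n):
            ke_i = ke * math.exp(rng.gauss(0.0, omega))
            v_i = v * math.exp(rng.gauss(0.0, omega))
            profiles.append([conc_oral_1cpt(t, dose, ka, ke_i, v_i)
                             for t in range(0, 25)])   # hourly, 0-24 h
        return profiles

    # Hypothetical typical values: 25 mg dose, ka 1.2 /h, ke 0.15 /h, V 115 L.
    pop = simulate_population(50, dose=25.0, ka=1.2, ke=0.15, v=115.0)
    ```

    In an actual population analysis the etas are estimated from observed concentrations rather than simulated, but the structural model is the same.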

  18. Analytical Model for Sensor Placement on Microprocessors

    National Research Council Canada - National Science Library

    Lee, Kyeong-Jae; Skadron, Kevin; Huang, Wei

    2005-01-01

    In this paper, we present an analytical model that describes the maximum temperature differential between a hot spot and a region of interest based on their distance and processor packaging information...

  19. Semi-analytical models of hydroelastic sloshing impact in tanks of liquefied natural gas vessels.

    Science.gov (United States)

    Ten, I; Malenica, Š; Korobkin, A

    2011-07-28

    The present paper deals with methods for the evaluation of the hydroelastic interactions that appear during violent sloshing impacts inside the tanks of liquefied natural gas carriers. The complexity of both the fluid flow and the structural behaviour (containment system and ship structure) does not allow for a fully consistent direct approach according to the present state of the art. Several simplifications are thus necessary in order to isolate the most dominant physical aspects and to treat them properly. In this paper, semi-analytical modelling was chosen for the hydrodynamic part and finite-element modelling for the structural part. Depending on the impact type, different hydrodynamic models are proposed, and the basic principles of hydroelastic coupling are clearly described and validated with respect to the accuracy and convergence of the numerical results.

  20. Two-dimensional analytical model of a proton exchange membrane fuel cell

    International Nuclear Information System (INIS)

    Liu, Jia Xing; Guo, Hang; Ye, Fang; Ma, Chong Fang

    2017-01-01

    In this study, a two-dimensional full-cell analytical model of a proton exchange membrane fuel cell is developed. The analytical model describes the electrochemical reactions on the anode and cathode catalyst layers, reactant diffusion in the gas diffusion layer, and gas flow in the gas channel. The analytical solution is derived from the basic physical equations. The performance predicted by the model is in good agreement with the experimental data. The results show that polarization mainly occurs on the cathode side of the proton exchange membrane fuel cell, although the anodic overpotential cannot be neglected. The hydrogen and oxygen concentrations decrease along the channel flow direction, and the hydrogen and oxygen concentrations in the catalyst layer decrease with the current density. As predicted by the model, concentration polarization mainly occurs on the cathode side. - Highlights: • A 2D full-cell analytical model of a proton exchange membrane fuel cell is developed. • The analytical solution is deduced from the basic equations. • The anodic overpotential is not small enough to be neglected. • Species concentration distributions in the fuel cell are obtained and analyzed.
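    The record above concerns a 2-D fuel cell model. A much simpler 0-D polarization-curve sketch shows the loss terms the abstract discusses (activation, ohmic and concentration polarization subtracted from the open-circuit voltage); it is not the paper's solution, and the parameter values are typical-order illustrations, not fitted ones:

    ```python
    import math

    def cell_voltage(i, e_ocv=1.2, b=0.06, i0=1e-4, r_ohm=0.2, b_conc=0.05, i_lim=1.4):
        """Cell voltage (V) at current density i (A/cm^2):
        open-circuit voltage minus Tafel activation, ohmic and
        concentration losses. Valid for i0 < i < i_lim."""
        eta_act = b * math.log(i / i0)                 # Tafel activation loss
        eta_ohm = r_ohm * i                            # ohmic loss
        eta_conc = -b_conc * math.log(1.0 - i / i_lim) # mass-transport loss
        return e_ocv - eta_act - eta_ohm - eta_conc

    # Polarization curve from 0.01 to 1.2 A/cm^2.
    vs = [cell_voltage(i / 100.0) for i in range(1, 121)]
    ```

    The concentration term blows up as i approaches i_lim, reproducing the steep voltage drop near the limiting current density.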

  1. Adequacy and validation of an analytical method for the quantification of lead in chamomile tisanes produced in Costa Rica

    International Nuclear Information System (INIS)

    Blanco Barrantes, Jeimy

    2014-01-01

    An analytical methodology is developed and validated to quantify lead in chamomile tisanes. Lead is quantified by flame atomic absorption spectroscopy in three brands of chamomile tisanes sold in Costa Rica to determine their safety and quality against international standards. A sample preparation method is established through a comparison of different extraction procedures; acid digestion has been the extraction method used, reaching an average recovery of 97.1% with a standard deviation of 2.3%. The chosen analytical procedure is then optimized and fully validated. The validation results show that the interval yielding the best calibration curve, in terms of the correlation coefficient and an intercept statistically equal to zero, is 0.2-3.2 μg/mL (r² = 0.9996), corresponding to a range of 20% to 320% of the maximum allowed limit. In addition, the procedure is adequate in terms of accuracy (average recovery 101.1%), precision under repeatability and intermediate precision conditions (RSD max. 9.3%), and limit of quantification (0.2551 μg/mL). The safety criterion of the World Health Organization (WHO) is applied to the lead concentrations in the analyzed products: none of the 9 analyzed samples of products for preparing chamomile tisanes showed lead concentrations above the limit of 10 μg/g suggested for medicinal herbs by the WHO.
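    The record above validates a calibration curve and recovery. A minimal sketch of those two computations with ordinary least squares; the absorbance readings below are hypothetical, only the 0.2-3.2 μg/mL working range is taken from the abstract:

    ```python
    def linear_fit(x, y):
        """Least-squares slope, intercept and coefficient of determination."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return slope, intercept, 1.0 - ss_res / ss_tot

    # Hypothetical absorbance readings over the validated 0.2-3.2 ug/mL range.
    conc = [0.2, 0.7, 1.2, 1.7, 2.2, 2.7, 3.2]
    absorbance = [0.021, 0.071, 0.119, 0.172, 0.221, 0.268, 0.322]
    slope, intercept, r2 = linear_fit(conc, absorbance)

    # Recovery of a spiked sample: back-calculated concentration over nominal.
    nominal, measured_abs = 1.5, 0.151
    recovery_pct = 100.0 * ((measured_abs - intercept) / slope) / nominal
    ```

    Acceptance would then be a threshold on r² and a recovery window (e.g. 90-110%), mirroring the criteria reported in the abstract.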

  2. Analytical modeling of post-tensioned precast beam-to-column connections

    International Nuclear Information System (INIS)

    Kaya, Mustafa; Arslan, A. Samet

    2009-01-01

    In this study, post-tensioned precast beam-to-column connections are tested experimentally at different stress levels and modelled analytically using a 3D nonlinear finite element method; the ANSYS finite element software is used for this purpose. Nonlinear static analysis is used to determine the connection strength, behavior and stiffness when subjected to cyclic inelastic loads simulating ground excitation during an earthquake. The results obtained from the analytical studies are compared with the test results. In terms of stiffness, the initial stiffness of the analytical models was lower than that of the tested specimens. Overall, modelling these types of connection with 3D FEM can give crucial beforehand information and overcome the disadvantages of time-consuming workmanship and the cost of experimental studies.

  3. An analytical model for solute transport through a GCL-based two-layered liner considering biodegradation

    Energy Technology Data Exchange (ETDEWEB)

    Guan, C. [Institute of Hydrology and Water Resources Engineering, Zhejiang University, Hangzhou 310058 (China); Xie, H.J., E-mail: xiehaijian@zju.edu.cn [Institute of Hydrology and Water Resources Engineering, Zhejiang University, Hangzhou 310058 (China); MOE Key Laboratory of Soft Soils and Geoenvironmental Engineering, Zhejiang University, Hangzhou 310058 (China); Wang, Y.Z.; Chen, Y.M.; Jiang, Y.S.; Tang, X.W. [MOE Key Laboratory of Soft Soils and Geoenvironmental Engineering, Zhejiang University, Hangzhou 310058 (China)

    2014-01-01

    An analytical model for solute advection and dispersion in a two-layered liner consisting of a geosynthetic clay liner (GCL) and a soil liner (SL), considering the effect of biodegradation, was proposed. The analytical solution was derived by Laplace transformation and was validated over a range of parameters using the finite-layer-method-based software Pollute v7.0. Results show that if the half-life of the solute in the GCL is larger than 1 year, degradation in the GCL can be neglected for solute transport in GCL/SL. When the half-life in the GCL is less than 1 year, neglecting the effect of degradation in the GCL on solute migration will result in a large difference in the relative base concentration of GCL/SL (e.g., 32% for the case with a half-life of 0.01 year). The 100-year solute base concentration can be reduced by a factor of 2.2 when the hydraulic conductivity of the SL is reduced by an order of magnitude, and by a factor of 155 when the half-life of the contaminant in the SL is reduced by an order of magnitude. The effect of degradation is thus more important for meeting the groundwater protection level than the hydraulic conductivity. The analytical solution can be used for experimental data fitting, verification of complicated numerical models and preliminary design of landfill liner systems. - Highlights: •Degradation of contaminants was considered in modeling solute transport in GCL/SL. •Analytical solutions were derived for assessment of GCL/SL with degradation. •Degradation in GCL can be ignored as half-life is larger than 1 year. •Base concentration is more sensitive to half-life of SL than to permeability of SL.
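    The record above solves advection-dispersion with first-order decay in a two-layer liner. A single-layer illustration of the same physics, using the classical van Genuchten-type analytical solution for a constant-concentration inlet (not the paper's two-layer GCL/SL solution; parameter values are illustrative):

    ```python
    import math

    def breakthrough(x, t, v, d, lam):
        """Relative concentration C/C0 for 1-D advection-dispersion with
        first-order decay rate lam, constant concentration C0 at x = 0.
        v: seepage velocity, d: dispersion coefficient (consistent units)."""
        u = v * math.sqrt(1.0 + 4.0 * lam * d / v ** 2)   # decay-modified velocity
        s = 2.0 * math.sqrt(d * t)
        term1 = math.exp(x * (v - u) / (2.0 * d)) * math.erfc((x - u * t) / s)
        term2 = math.exp(x * (v + u) / (2.0 * d)) * math.erfc((x + u * t) / s)
        return 0.5 * (term1 + term2)

    # Profile at t = 2 (e.g. years), v = 0.5 m/yr, d = 0.1 m^2/yr,
    # solute half-life of 1 yr => lam = ln(2)/1.
    profile = [breakthrough(x / 10.0, 2.0, 0.5, 0.1, math.log(2.0))
               for x in range(0, 11)]
    ```

    Setting lam = 0 recovers the non-reactive Ogata-Banks solution, which is one way to see how much the decay term depresses the base concentration, as the abstract quantifies.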

  4. An analytical model for solute transport through a GCL-based two-layered liner considering biodegradation

    International Nuclear Information System (INIS)

    Guan, C.; Xie, H.J.; Wang, Y.Z.; Chen, Y.M.; Jiang, Y.S.; Tang, X.W.

    2014-01-01

    An analytical model for solute advection and dispersion in a two-layered liner consisting of a geosynthetic clay liner (GCL) and a soil liner (SL), considering the effect of biodegradation, was proposed. The analytical solution was derived by Laplace transformation and was validated over a range of parameters using the finite-layer-method-based software Pollute v7.0. Results show that if the half-life of the solute in the GCL is larger than 1 year, degradation in the GCL can be neglected for solute transport in GCL/SL. When the half-life in the GCL is less than 1 year, neglecting the effect of degradation in the GCL on solute migration will result in a large difference in the relative base concentration of GCL/SL (e.g., 32% for the case with a half-life of 0.01 year). The 100-year solute base concentration can be reduced by a factor of 2.2 when the hydraulic conductivity of the SL is reduced by an order of magnitude, and by a factor of 155 when the half-life of the contaminant in the SL is reduced by an order of magnitude. The effect of degradation is thus more important for meeting the groundwater protection level than the hydraulic conductivity. The analytical solution can be used for experimental data fitting, verification of complicated numerical models and preliminary design of landfill liner systems. - Highlights: •Degradation of contaminants was considered in modeling solute transport in GCL/SL. •Analytical solutions were derived for assessment of GCL/SL with degradation. •Degradation in GCL can be ignored as half-life is larger than 1 year. •Base concentration is more sensitive to half-life of SL than to permeability of SL.

  5. A theoretical study of CsI:Tl columnar scintillator image quality parameters by analytical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalyvas, N., E-mail: nkalyvas@teiath.gr; Valais, I.; Michail, C.; Fountos, G.; Kandarakis, I.; Cavouras, D.

    2015-04-11

    Medical X-ray digital imaging systems, such as mammography, radiography and computed tomography (CT), are built from efficient radiation detectors that transform X-rays into an electronic signal. Scintillators are materials that emit light when excited by X-rays and are incorporated in X-ray medical imaging detectors. Columnar scintillators such as CsI:Tl are very often used for X-ray detection due to their higher performance. The columnar form limits the lateral spread of the optical photons towards the scintillator output, so columnar scintillators demonstrate superior spatial resolution compared to granular ones. The aim of this work is to provide an analytical model for calculating the MTF, the DQE and the emission efficiency of a columnar scintillator. The model parameters were validated against published Monte Carlo data. The model was able to predict the overall performance of CsI:Tl scintillators and suggested an optimum thickness of 300 μm for radiography applications. - Highlights: • An analytical model for calculating MTF, DQE and Detector Optical Gain (DOG) of columnar phosphors was developed. • The model was fitted to published efficiency and MTF Monte Carlo data. • A good fit was observed for 300 µm columnar CsI:Tl thickness. • The performance of the 300 µm column thickness CsI:Tl was better in terms of MTF and DOG for radiographic applications.

  6. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

    Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinder the penetration of PV implementations and complicate system design, so an optimal sizing of these systems proves to be essential. This paper recommends a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with respect to reliability and cost indicators, namely the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is defined analytically for two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. Calculating the optimal tilt angle at yearly, seasonal and monthly frequencies results in higher system efficiency; it is therefore suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.
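    The record above recommends monthly tilt-angle adjustment. A minimal sketch of the common rule of thumb behind such schedules: Cooper's declination formula and a monthly tilt of latitude minus mid-month declination. This is a generic approximation, not the paper's optimization; the Antalya latitude (~36.9° N) is an assumption:

    ```python
    import math

    def declination_deg(day_of_year):
        """Cooper's approximation for the solar declination, in degrees."""
        return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

    def monthly_optimal_tilt(latitude_deg):
        """Rule-of-thumb monthly tilt: latitude minus mid-month declination,
        clipped at horizontal (0 degrees)."""
        mid_month_days = [17, 47, 75, 105, 135, 162, 198, 228, 258, 288, 318, 344]
        return [max(0.0, latitude_deg - declination_deg(d)) for d in mid_month_days]

    tilts = monthly_optimal_tilt(36.9)   # Antalya is at roughly 36.9 N
    ```

    The schedule steepens the panels in winter (low sun) and flattens them in summer, which is the efficiency gain the abstract attributes to regular tilt changes.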

  7. A fast semi-analytical model for the slotted structure of induction motors

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    A fast, semi-analytical model for induction motors (IMs) is presented. In comparison to traditional analytical models for IMs, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous distribution of the magnetic flux density in

  8. An experimentally validated bimorph cantilever model for piezoelectric energy harvesting from base excitations

    International Nuclear Information System (INIS)

    Erturk, A; Inman, D J

    2009-01-01

    Piezoelectric transduction has received great attention for vibration-to-electric energy conversion over the last five years. A typical piezoelectric energy harvester is a unimorph or a bimorph cantilever located on a vibrating host structure, to generate electrical energy from base excitations. Several authors have investigated modeling of cantilevered piezoelectric energy harvesters under base excitation. The existing mathematical modeling approaches range from elementary single-degree-of-freedom models to approximate distributed parameter solutions in the sense of Rayleigh–Ritz discretization as well as analytical solution attempts with certain simplifications. Recently, the authors have presented the closed-form analytical solution for a unimorph cantilever under base excitation based on the Euler–Bernoulli beam assumptions. In this paper, the analytical solution is applied to bimorph cantilever configurations with series and parallel connections of piezoceramic layers. The base excitation is assumed to be translation in the transverse direction with a superimposed small rotation. The closed-form steady state response expressions are obtained for harmonic excitations at arbitrary frequencies, which are then reduced to simple but accurate single-mode expressions for modal excitations. The electromechanical frequency response functions (FRFs) that relate the voltage output and vibration response to translational and rotational base accelerations are identified from the multi-mode and single-mode solutions. Experimental validation of the single-mode coupled voltage output and vibration response expressions is presented for a bimorph cantilever with a tip mass. It is observed that the closed-form single-mode FRFs obtained from the analytical solution can successfully predict the coupled system dynamics for a wide range of electrical load resistance. The performance of the bimorph device is analyzed extensively for the short circuit and open circuit resonance
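    The record above derives distributed-parameter FRFs for a bimorph harvester. A lumped single-mode approximation (not the authors' closed-form solution; parameter values are arbitrary illustrations) shows the electromechanical coupling structure, solving the two coupled equations in the frequency domain:

    ```python
    def harvester_frf(omega, m, c, k, theta, cp, r):
        """Voltage-per-base-acceleration FRF of a lumped single-mode
        piezoelectric harvester:
            m x'' + c x' + k x - theta v = -m a_b
            cp v' + v/r + theta x' = 0
        Returns the complex FRF V/A_b at angular frequency omega."""
        jw = 1j * omega
        z_mech = k - m * omega ** 2 + jw * c      # mechanical impedance term
        y_elec = jw * cp + 1.0 / r                # electrical admittance
        # Eliminate v = -jw*theta*x / y_elec and solve for x, then v.
        x_per_a = -m / (z_mech + jw * theta ** 2 / y_elec)
        return -jw * theta * x_per_a / y_elec

    # Illustrative values: natural frequency sqrt(k/m) ~ 316 rad/s.
    params = dict(m=0.01, c=0.2, k=1000.0, theta=1e-3, cp=2e-8, r=1e4)
    f_res = harvester_frf(316.0, **params)   # near resonance
    f_off = harvester_frf(50.0, **params)    # well below resonance
    ```

    Sweeping `r` at a fixed excitation frequency reproduces the load-resistance dependence the abstract validates, including the shift between short- and open-circuit resonance for strong coupling.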

  9. MOLECULAR VALIDATED MODEL FOR ADSORPTION OF PROTONATED DYE ON LDH

    Directory of Open Access Journals (Sweden)

    B. M. Braga

    Full Text Available Hydrotalcite-like compounds are anionic clays of scientific and technological interest for their use as ion exchange materials, catalysts and modified electrodes. Surface phenomena are important for all these applications. Although conventional analytical methods have enabled progress in understanding the behavior of anionic clays in solution, an evaluation at the atomic scale of the dynamics of their ionic interactions has never been performed. Molecular simulation has become an extremely useful tool to provide this perspective. Our purpose is to validate a simplified model for the adsorption of 5-benzoyl-4-hydroxy-2-methoxy-benzenesulfonic acid (MBSA), a prototype molecule of anionic dyes, onto a hydrotalcite surface. Monte Carlo simulations were performed in the canonical ensemble with MBSA ions and a pore model of hydrotalcite using the UFF and ClayFF force fields. The proposed molecular model has allowed us to reproduce experimental atomic force microscopy data. Influences of protonation during the adsorption process are also presented.

  10. Testing a 1-D Analytical Salt Intrusion Model and the Predictive Equation in Malaysian Estuaries

    Science.gov (United States)

    Gisen, Jacqueline Isabella; Savenije, Hubert H. G.

    2013-04-01

    Little is known about salt intrusion behaviour in Malaysian estuaries. Studies on this topic often require large amounts of data, especially if 2-D or 3-D numerical models are used for analysis. In poor data environments, 1-D analytical models are more appropriate. For this reason, a fully analytical 1-D salt intrusion model, based on the theory of Savenije (2005), was tested in three Malaysian estuaries (Bernam, Selangor and Muar), because it is simple and requires minimal data. Site surveys were conducted in these estuaries during the dry season (June-August) at spring tide using the moving-boat technique. Data on cross-sections, water levels and salinity were collected and then analysed with the salt intrusion model. This paper demonstrates a good fit between the simulated and observed salinity distributions for all three estuaries. Additionally, the calibrated Van der Burgh coefficient K, dispersion coefficient D0, and salt intrusion length L for the estuaries displayed reasonable correlations with those calculated from the predictive equations. This indicates that not only the salt intrusion model but also the predictive model is valid for the case studies in Malaysia. Furthermore, the results from this study describe the current state of the estuaries, with which the water authority in Malaysia can make decisions on limiting water abstraction or dredging. Keywords: salt intrusion, Malaysian estuaries, discharge, predictive model, dispersion
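    The record above applies Savenije's 1-D theory. A minimal sketch of the steady-state form of that model — dispersion decaying with an exponential shape and Van der Burgh's coefficient K mapping dispersion to salinity. Parameter values below are illustrative, not the calibrated Malaysian ones:

    ```python
    import math

    def salinity_profile(x, s0, s_river, k, a, beta):
        """Savenije-type steady-state axial salinity:
            D(x)/D0 = 1 - beta*(exp(x/a) - 1)
            (S - Sr)/(S0 - Sr) = (D/D0)**(1/K)
        with x = 0 at the estuary mouth and a the convergence length."""
        d_ratio = 1.0 - beta * (math.exp(x / a) - 1.0)
        if d_ratio <= 0.0:
            return s_river                 # beyond the intrusion limit
        return s_river + (s0 - s_river) * d_ratio ** (1.0 / k)

    def intrusion_length(a, beta):
        """Distance at which dispersion, hence excess salinity, vanishes."""
        return a * math.log(1.0 / beta + 1.0)

    # Illustrative values: sea salinity 30, river salinity 0.1,
    # K = 0.4, convergence length 20 km, beta = 0.15.
    L = intrusion_length(20000.0, 0.15)
    samples = [salinity_profile(x, 30.0, 0.1, 0.4, 20000.0, 0.15)
               for x in (0.0, 10000.0, 20000.0, 30000.0, 40000.0)]
    ```

    Fitting K, D0 and a to the measured spring-tide salinity transects is essentially what the calibration step in the abstract does.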

  11. INCAS: an analytical model to describe displacement cascades

    Energy Technology Data Exchange (ETDEWEB)

    Jumel, Stephanie E-mail: stephanie.jumel@edf.fr; Claude Van-Duysen, Jean E-mail: jean-claude.van-duysen@edf.fr

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables determination of the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that the INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Isotope Reactor of Oak Ridge National Laboratory.
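    INCAS itself is not public, but cascade models of this kind are routinely compared against the standard Norgett-Robinson-Torrens (NRT) estimate of the number of Frenkel pairs a PKA produces. A minimal sketch of that textbook formula (E_d = 40 eV is the conventional displacement threshold for iron):

    ```python
    def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
        """NRT estimate of stable displacements from a PKA damage energy.

        nu = 0            for T_dam < E_d
        nu = 1            for E_d <= T_dam < 2*E_d/0.8
        nu = 0.8*T_dam/(2*E_d) otherwise
        """
        if damage_energy_ev < e_d_ev:
            return 0
        if damage_energy_ev < 2.0 * e_d_ev / 0.8:
            return 1
        return int(0.8 * damage_energy_ev / (2.0 * e_d_ev))
    ```

    A 10 keV damage-energy PKA in iron thus yields about 100 displacements; a BCA model like INCAS refines this by resolving the sub-cascade structure.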

  12. INCAS: an analytical model to describe displacement cascades

    Science.gov (United States)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables determination of the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that the INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Isotope Reactor of Oak Ridge National Laboratory.

  13. INCAS: an analytical model to describe displacement cascades

    International Nuclear Information System (INIS)

    Jumel, Stephanie; Claude Van-Duysen, Jean

    2004-01-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables determination of the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that the INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Isotope Reactor of Oak Ridge National Laboratory.

  14. Analytical method validation of GC-FID for the simultaneous measurement of hydrocarbons (C2-C4) in their gas mixture

    OpenAIRE

    Oman Zuas; Harry budiman; Muhammad Rizky Mulyana

    2016-01-01

    An accurate gas chromatography with flame ionization detection (GC-FID) method was validated for the simultaneous analysis of light hydrocarbons (C2-C4) in their gas mixture. The validation parameters were evaluated based on the ISO/IEC 17025 definition, including method selectivity, repeatability, accuracy, linearity, limit of detection (LOD), limit of quantitation (LOQ), and ruggedness. Under the optimum analytical conditions, the analysis of the gas mixture revealed that each target comp...
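    The record above evaluates LOD and LOQ among its validation parameters. A minimal sketch of the common ICH-style estimate from calibration data (LOD = 3.3·s/b, LOQ = 10·s/b, with b the slope and s the residual standard deviation of the regression); the concentration/peak-area points below are hypothetical:

    ```python
    import math

    def lod_loq(x, y):
        """Detection and quantitation limits from a calibration line."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        a = my - b * mx
        # Residual standard deviation with n - 2 degrees of freedom.
        s = math.sqrt(sum((yi - (a + b * xi)) ** 2
                          for xi, yi in zip(x, y)) / (n - 2))
        return 3.3 * s / b, 10.0 * s / b

    # Hypothetical calibration points (concentration vs. peak area).
    x_cal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    y_cal = [2.02, 3.98, 6.05, 8.01, 9.97, 12.04]
    lod, loq = lod_loq(x_cal, y_cal)
    ```

    By construction the LOQ/LOD ratio is fixed at 10/3.3, so only one of the two needs independent verification with spiked low-level samples.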

  15. Development and testing of analytical models for the pebble bed type HTRs

    International Nuclear Information System (INIS)

    Huda, M.Q.; Obara, T.

    2008-01-01

    The pebble bed type gas cooled high temperature reactor (HTR) appears to be a good candidate for the next generation of nuclear reactor technology. These reactors have unique characteristics in terms of the randomness in geometry, and require special techniques to analyze their systems. This study includes activities concerning the testing of computational tools and the qualification of models. Indeed, it is essential that validated analytical tools be available to the research community. From this viewpoint, codes like MCNP, ORIGEN and RELAP5, which have been used in the nuclear industry for many years, are selected to identify and develop new capabilities needed to support HTR analysis. The geometrical model of the full reactor is obtained by using the lattice and universe facilities provided by MCNP. The coupled MCNP-ORIGEN code is used to estimate the burnup and the refuelling scheme. Results obtained from the Monte Carlo analysis are interfaced with RELAP5 to analyze the thermal hydraulics and safety characteristics of the reactor. New models and methodologies are developed for several past and present experimental and prototypical facilities based on HTR pebble bed concepts. The calculated results are compared with available experimental data and theoretical evaluations, showing very good agreement. The ultimate goal of the validation of the computer codes for pebble bed HTR applications is to acquire and reinforce the capability of these general-purpose computer codes for performing HTR core design and optimization studies.

  16. Improvement and Validation of an Aerosol Deposition Model in the GAMMA-FP, a Fission Product Analysis Module for VHTRs

    International Nuclear Information System (INIS)

    Yoon, Churl; Lim, Hong Sik

    2013-01-01

    GAMMA-FP (GAs Multicomponent Mixture Analysis-Fission Products module) consists of gaseous and aerosol fission product analysis modules. The aerosol FP module adopts a multi-component, multi-sectional aerosol analysis model developed on the basis of the MAEROS model. As the first stage of FP module development, the MAEROS model was implemented and examined against analytic solutions and experimental data by Yoo et al. An aerosol transport model was then developed, implemented in the GAMMA-FP code, and verified. In this study, the aerosol deposition model in the GAMMA-FP code was improved by adopting recent achievements and was validated against available experimental data. The improved deposition model was successfully validated against the STORM SR-11 deposition test, and the simulation predicted results that matched the experimental data well. In future studies, the aerosol deposition model for flow irregularities will be implemented and validated against the TRANSAT bend effect test.
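    The record above improves an aerosol deposition model. One standard ingredient of any such model is the gravitational settling velocity of a particle in the Stokes regime with the Cunningham slip correction; a self-contained sketch (illustrative of the physics, not the GAMMA-FP implementation):

    ```python
    import math

    def settling_velocity(d_p, rho_p=2000.0, mu=1.8e-5, rho_g=1.2,
                          t=293.15, p=101325.0):
        """Terminal settling velocity (m/s) of a spherical aerosol particle
        of diameter d_p (m) and density rho_p (kg/m^3) in air of viscosity
        mu (Pa*s), Stokes regime with Cunningham slip correction."""
        g = 9.81
        # Approximate mean free path of air, scaled from 66.5 nm at STP.
        mfp = 66.5e-9 * (101325.0 / p) * (t / 293.15)
        kn = 2.0 * mfp / d_p                       # Knudsen number
        cc = 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))
        return (rho_p - rho_g) * d_p ** 2 * g * cc / (18.0 * mu)

    v_settle = settling_velocity(1e-6)   # 1 um particle, ~7e-5 m/s
    ```

    The slip correction matters mostly below ~1 µm; for the multi-sectional MAEROS-type model, a velocity like this is evaluated per size section to build the deposition rate.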

  17. Improvement and Validation of an Aerosol Deposition Model in the GAMMA-FP, a Fission Product Analysis Module for VHTRs

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Churl; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    GAMMA-FP (GAs Multicomponent Mixture Analysis-Fission Products module) consists of gaseous and aerosol fission product analysis modules. The aerosol FP module adopts a multi-component, multi-sectional aerosol analysis model developed on the basis of the MAEROS model. As the first stage of FP module development, the MAEROS model was implemented and examined against analytic solutions and experimental data by Yoo et al. An aerosol transport model was then developed, implemented in the GAMMA-FP code, and verified. In this study, the aerosol deposition model in the GAMMA-FP code was improved by adopting recent achievements and was validated against available experimental data. The improved deposition model was successfully validated against the STORM SR-11 deposition test, and the simulation predicted results that matched the experimental data well. In future studies, the aerosol deposition model for flow irregularities will be implemented and validated against the TRANSAT bend effect test.

  18. Analytic models of plausible gravitational lens potentials

    International Nuclear Information System (INIS)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2009-01-01

    Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either the projected mass or the lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles of integer and half-integer index. We then present formulae describing the gravitational lensing effects of smoothly truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or minus seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally, we remark on the need for finite-mass halo profiles when doing cosmological ray-tracing simulations, and on the need for readily calculable higher-order derivatives of the lens potential when studying catastrophes in strong lenses.
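
    The finite-mass property of such truncated profiles can be checked numerically. The sketch below is an illustration, not the paper's code: it multiplies an NFW-like density (in scaled units) by an assumed smooth truncation factor that produces the r^-5 (n=1) or r^-7 (n=2) outer falloff described above, and shows that the enclosed mass converges while the untruncated profile's keeps growing.

```python
import math

def rho_nfw(r):
    # NFW-like profile in scaled units: rho ∝ 1 / (r (1 + r)^2);
    # its enclosed mass diverges logarithmically at large radius
    return 1.0 / (r * (1.0 + r) ** 2)

def rho_truncated(r, r_t=3.0, n=1):
    # smooth truncation factor (r_t^2 / (r^2 + r_t^2))^n gives an outer
    # falloff of r^-5 for n=1 and r^-7 for n=2 (r_t plays the tidal radius;
    # r_t = 3 scale radii is an arbitrary choice for illustration)
    return rho_nfw(r) * (r_t ** 2 / (r ** 2 + r_t ** 2)) ** n

def enclosed_mass(rho, r_max, steps=40000):
    # trapezoidal integration of 4 pi r^2 rho(r) from a small inner cutoff
    r0, total = 1e-4, 0.0
    h = (r_max - r0) / steps
    for i in range(steps):
        a = r0 + i * h
        b = a + h
        total += 0.5 * h * (4.0 * math.pi * a * a * rho(a)
                            + 4.0 * math.pi * b * b * rho(b))
    return total
```

    Quadrupling the integration radius changes the truncated mass by well under a percent, while the NFW mass continues to grow, which is the divergence the truncated profiles are designed to remove.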

  19. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification is misleading, at best. These terms should be abandoned by the ground-water community.

  20. Coupling Numerical Methods and Analytical Models for Ducted Turbines to Evaluate Designs

    Directory of Open Access Journals (Sweden)

    Bradford Knight

    2018-04-01

    Full Text Available Hydrokinetic turbines extract energy from currents in oceans, rivers, and streams. Ducts can be used to accelerate the flow across the turbine to improve performance. The objective of this work is to couple an analytical model with a Reynolds averaged Navier–Stokes (RANS computational fluid dynamics (CFD solver to evaluate designs. An analytical model is derived for ducted turbines. A steady-state moving reference frame solver is used to analyze both the freestream and ducted turbine. A sliding mesh solver is examined for the freestream turbine. An efficient duct is introduced to accelerate the flow at the turbine. Since the turbine is optimized for operation in the freestream and not within the duct, there is a decrease in efficiency due to duct-turbine interaction. Despite the decrease in efficiency, the power extracted by the turbine is increased. The analytical model under-predicts the flow rejection from the duct that is predicted by CFD since the CFD predicts separation but the analytical model does not. Once the mass flow rate is corrected, the model can be used as a design tool to evaluate how the turbine-duct pair reduces mass flow efficiency. To better understand this phenomenon, the turbine is also analyzed within a tube with the analytical model and CFD. The analytical model shows that the duct’s mass flow efficiency reduces as a function of loading, showing that the system will be more efficient when lightly loaded. Using the conclusions of the analytical model, a more efficient ducted turbine system is designed. The turbine is pitched more heavily and the twist profile is adapted to the radial throat velocity profile.
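
    The loading effect discussed above can be illustrated with a one-dimensional actuator-disc toy model. This is not the paper's RANS-coupled model: the seawater density, the duct mass-flow gain, and its value are assumptions for illustration only.

```python
RHO = 1025.0  # seawater density in kg/m^3 (assumed)

def power_coefficient(a):
    # 1D momentum theory for a bare actuator disc: Cp = 4 a (1 - a)^2,
    # maximised at a = 1/3 (the Betz limit, Cp = 16/27)
    return 4.0 * a * (1.0 - a) ** 2

def disc_power(u_inf, area, a, mass_flow_gain=1.0):
    # mass_flow_gain > 1 mimics a duct accelerating flow through the rotor;
    # the paper's analytical model indicates this gain erodes as the rotor
    # loading increases, so lightly loaded rotors benefit most
    return 0.5 * RHO * area * u_inf ** 3 * power_coefficient(a) * mass_flow_gain
```

    Even with an efficiency penalty from duct-turbine interaction, extracted power rises as long as the effective mass-flow gain outweighs the loss in the power coefficient, which is the trade-off the abstract describes.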

  1. An Analytical Threshold Voltage Model of Fully Depleted (FD) Recessed-Source/Drain (Re-S/D) SOI MOSFETs with Back-Gate Control

    Science.gov (United States)

    Saramekala, Gopi Krishna; Tiwari, Pramod Kumar

    2016-10-01

    This paper presents an analytical threshold voltage model for back-gated fully depleted (FD), recessed-source/drain (Re-S/D) silicon-on-insulator (SOI) metal-oxide-semiconductor field-effect transistors (MOSFETs). Analytical surface potential models have been developed at the front and back surfaces of the channel by solving the two-dimensional (2-D) Poisson's equation in the channel region with appropriate boundary conditions, assuming a parabolic potential profile in the transverse direction of the channel. The strong inversion criterion is applied to the front surface potential as well as to the back one in order to find two separate threshold voltages for the front and back channels of the device, respectively. The device threshold voltage is taken to be that of the surface offering the lower threshold voltage. The developed model was analyzed extensively for a variety of device geometry parameters, such as the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the buried oxide, and the applied bias voltages with back-gate control. The proposed model has been validated by comparing the analytical results with numerical simulation data obtained from ATLAS™, a 2-D device simulator from SILVACO.

  2. Validation of analytical methods for the stability studies of naproxen suppositories for infant and adult use

    International Nuclear Information System (INIS)

    Rodriguez Hernandez, Yaslenis; Suarez Perez, Yania; Garcia Pulpeiro, Oscar

    2011-01-01

    Analytical and validation studies were performed in this paper, with a view to using them in the stability studies of future formulations of naproxen suppositories for children and adults. The most influential factors in naproxen stability were determined: the greatest degradation occurred in acidic medium, in oxidative medium, and under the action of light. A high-performance liquid chromatography-based method was evaluated, which proved adequate to quantify naproxen in suppositories and was selective against degradation products. The quantification limit was 3,480 μg, so it was valid for these studies. Additionally, the parameters specificity for stability, detection limit and quantification limit were evaluated for the direct semi-aqueous acid-base method, which was formerly validated for quality control and showed satisfactory results. Nevertheless, volumetric methods are not regarded as stability indicators; therefore, this method will be used along with the chromatographic methods of choice, thin-layer chromatography and high-performance liquid chromatography, to determine the degradation products.

  3. Development and Validation Dissolution Analytical Method of Nimesulide beta-Cyclodextrin 400 mg Tablet

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Carvalho Pereira

    2016-10-01

    Full Text Available Nimesulide (N-(4-nitro-2-phenoxyphenyl)methanesulfonamide) belongs to the class of non-steroidal anti-inflammatory drugs (NSAIDs) and to category II of the biopharmaceutical classification. The complexation of nimesulide with beta-cyclodextrin is a pharmacological strategy to increase the solubility of the drug. The objective of this study was to develop and validate an analytical methodology for the dissolution of the nimesulide beta-cyclodextrin 400 mg tablet that meets the ANVISA guidelines for drug registration purposes. Once developed, the dissolution methodology was validated according to the parameters of RE no. 899/2003. During method development it was found that the appropriate duration of the dissolution test was 60 minutes, and that the most suitable dissolution medium and volume were 900 mL of a 1% (w/v) aqueous solution of sodium lauryl sulfate. A rotation of 100 rpm with the paddle apparatus was the most appropriate for evaluating the dissolution of the drug. A spectrophotometric methodology at a wavelength of 390 nm was used to quantify the percentage of dissolved drug. In the validation of the methodology, the system suitability, specificity/selectivity, linearity, precision, accuracy and robustness parameters were satisfactory, demonstrating that the developed dissolution methodology was properly executed. DOI: http://dx.doi.org/10.17807/orbital.v8i5.827

  4. Analytic treatment of nuclear spin-lattice relaxation for diffusion in a cone model

    Science.gov (United States)

    Sitnitsky, A. E.

    2011-12-01

    We consider the nuclear spin-lattice relaxation rate resulting from a diffusion equation for rotational wobbling in a cone. We show that the widespread point of view that there are no analytical expressions for the correlation functions of the wobbling-in-a-cone model is invalid, and prove that nuclear spin-lattice relaxation in this model is exactly tractable and amenable to full analytical description. The mechanism of relaxation is assumed to be dipole-dipole interaction of nuclear spins and is treated within the framework of the standard Bloembergen-Purcell-Pound (BPP)-Solomon scheme. We consider the general case of arbitrary orientation of the cone axis relative to the magnetic field. The BPP-Solomon scheme is shown to remain valid for systems with a distribution of cone axes that depends only on the tilt relative to the magnetic field but is otherwise isotropic. We consider the case of random isotropic orientation of the cone axes relative to the magnetic field, as takes place in powders. We also consider the cases of predominant orientation along or opposite to the magnetic field, and of predominant orientation transverse to the magnetic field, which may be relevant for, e.g., liquid crystals. In addition, we treat in detail the model case of the cone axis directed along the magnetic field. The latter provides a direct comparison of the limiting case of our formulas with the textbook formulas for free isotropic rotational diffusion. The dependence of the spin-lattice relaxation rate on the cone half-width yields results similar to those predicted by the model-free approach.
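
    The free isotropic rotational diffusion limit mentioned above is the textbook BPP case, where the spectral density is a Lorentzian. A minimal sketch of that limiting case (generic BPP form only; the dipolar coupling constant is lumped into a single assumed prefactor):

```python
import math

def spectral_density(omega, tau_c):
    # Lorentzian spectral density for free isotropic rotational diffusion
    return tau_c / (1.0 + (omega * tau_c) ** 2)

def r1(omega0, tau_c, coupling=1.0):
    # BPP spin-lattice relaxation rate: 1/T1 ∝ J(omega0) + 4 J(2 omega0)
    return coupling * (spectral_density(omega0, tau_c)
                       + 4.0 * spectral_density(2.0 * omega0, tau_c))
```

    The familiar T1 minimum falls near omega0 * tau_c ≈ 0.62, a quick sanity check for any cone-model result that should reduce to free diffusion in the wide-cone limit.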

  5. Model and Analytic Processes for Export License Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.; Wood, Thomas W.; Daly, Don S.; Brothers, Alan J.; Sanfilippo, Antonio P.; Cook, Diane; Holder, Larry

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent: a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate, conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided according to whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level.

  6. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Directory of Open Access Journals (Sweden)

    René Felix Reinhart

    2017-02-01

    Full Text Available Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.

  7. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
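
    The core idea of adding a learned error model on top of an analytical one can be sketched in a few lines. This is a toy 1-D illustration only: the linear "analytical" plant model and the least-squares residual fit are assumptions standing in for the paper's manipulator models and learners.

```python
def analytical_model(x):
    # idealized plant model, e.g. a nominal gain (assumed for illustration)
    return 2.0 * x

def fit_error_model(xs, ys):
    # fit the residuals y - f_analytical(x) with a least-squares line a*x + b;
    # the learner only has to capture what the analytical model misses
    residuals = [y - analytical_model(x) for x, y in zip(xs, ys)]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_r = sum(residuals) / n
    a = (sum((x - mean_x) * (r - mean_r) for x, r in zip(xs, residuals))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_r - a * mean_x
    return lambda x: a * x + b

def hybrid_model(x, error_model):
    # feed-forward prediction = analytical part + learned correction
    return analytical_model(x) + error_model(x)
```

    On data generated by a slightly different "true" plant, the hybrid prediction error drops to near zero while the purely analytical model retains a systematic bias, which is the accuracy gain the paper argues for.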

  8. An analytical and experimental investigation of natural circulation transients in a model pressurized water reactor

    International Nuclear Information System (INIS)

    Massoud, M.

    1987-01-01

    Natural circulation phenomena in a simulated PWR were investigated experimentally and analytically. The experimental investigation included determination of system characteristics as well as system response to imposed transients under symmetric and asymmetric operation. System characteristics were used to obtain correlations for the heat transfer coefficient in the heat exchangers, the system flow resistance, and the system buoyancy head. Asymmetric transients were imposed to study flow oscillation and possible instability. The analytical investigation encompassed development of a mathematical model for single-phase, steady-state and transient natural circulation, as well as modification of an existing model for two-phase flow analysis of phenomena such as small-break LOCA, high-pressure coolant injection, and pump coastdown. The mathematical model for single-phase analysis was computer coded to simulate the imposed transients. The computer program, entitled ''Symmetric and Asymmetric Analysis of Single-Phase Flow (SAS),'' was employed to simulate the imposed transients. It closely emulated the system behavior throughout the transients and the subsequent steady states. Modifications for two-phase flow analysis included the addition of models for a once-through steam generator and electric heater rods. Both programs run faster than real time. Off-line, they can be used for prediction and training applications, while on-line they serve for simulation and signal validation. The programs can also be used to determine the sensitivity of natural circulation behavior to variation of inputs such as secondary distribution and power transients.

  9. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. Model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. 16 refs

  10. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed ...-linear. With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit ...
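
    The upwind-to-downwind parabolic sweep over linearly superposed wake perturbations can be sketched as follows. This is a deliberately crude stand-in: a Jensen-type top-hat wake replaces the paper's thin-shear-layer solution, and the thrust coefficient, rotor diameter and wake expansion rate are assumed values.

```python
import math

def wake_deficit(x, ct=0.8, d0=80.0, k=0.05):
    # fractional velocity deficit a distance x downstream of one turbine
    # (Jensen/Park top-hat model; d0 = rotor diameter, k = expansion rate)
    dw = d0 + 2.0 * k * x
    return (1.0 - math.sqrt(1.0 - ct)) * (d0 / dw) ** 2

def farm_wind_speeds(u_inf, positions):
    # parabolic sweep: treat deficits as linear perturbations on the ambient
    # flow, starting from the most upwind turbine (positions sorted ascending)
    # and marching successively downstream
    speeds = []
    for i, xi in enumerate(positions):
        deficit = sum(wake_deficit(xi - xj) for xj in positions[:i])
        speeds.append(u_inf * (1.0 - deficit))
    return speeds
```

    For an aligned row the sweep reproduces the expected monotone speed drop into the farm; each turbine only depends on those upwind of it, which is what makes the system parabolic and fast to solve.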

  11. Analytical estimation of effective charges at saturation in Poisson-Boltzmann cell models

    International Nuclear Information System (INIS)

    Trizac, Emmanuel; Aubouy, Miguel; Bocquet, Lyderic

    2003-01-01

    We propose a simple approximation scheme for computing the effective charges of highly charged colloids (spheres, or cylinders of infinite length). Within non-linear Poisson-Boltzmann theory, we start from an expression for the effective charge in the infinite-dilution limit which is asymptotically valid for large salt concentrations; this result is then extended to finite colloidal concentration, approximating the salt partitioning effect which relates the salt content in the suspension to that of a dialysing reservoir. This leads to an analytical expression for the effective charge as a function of colloid volume fraction and salt concentration. These results compare favourably with the effective charges at saturation (i.e., in the limit of large bare charge) computed numerically following the standard prescription proposed by Alexander et al. within the cell model.

  12. Validation of models for the analysis of the transient behavior of metallic fast reactor fuel

    International Nuclear Information System (INIS)

    Kramer, J.M.; Hughes, T.H.; Gruber, E.E.

    1989-01-01

    The Integral Fast Reactor (IFR) concept being developed at Argonne National Laboratory has prompted a renewed interest in U-Pu-Zr metal alloys as a fuel for sodium-cooled fast reactors. Part of the attractiveness of the IFR concept is the improvement in reactor safety margins through inherent features of a metal-fueled LMR core. In order to demonstrate these safety margins it is necessary to have computer codes available to analyze the detailed response of metallic fuel to a wide range of accident initiators. Two of the codes that play a key role in assessing this response are the STARS fission gas behavior code and the FPIN2 fuel pin mechanics code. Verification and validation are two important components in the development of models and computer codes. Verification demonstrates through comparison of calculations with analytical solutions that the methodology and algorithms correctly solve the equations that govern the phenomena being modeled. Validation, on the other hand, demonstrates through comparison with data that the phenomena are being modeled correctly. Both components are necessary in order to have the confidence to extrapolate the calculations to reactor accident conditions. This paper presents the results of recent progress in the validation of models for the analysis of the behavior of metallic fast reactor fuel. 9 refs., 7 figs

  13. Analytical validation of an ultra low-cost mobile phone microplate reader for infectious disease testing.

    Science.gov (United States)

    Wang, Li-Ju; Naudé, Nicole; Demissie, Misganaw; Crivaro, Anne; Kamoun, Malek; Wang, Ping; Li, Lei

    2018-07-01

    Most mobile health (mHealth) diagnostic devices for laboratory tests analyze only one sample at a time, which is not suitable for large-volume serology testing, especially in low-resource settings with a shortage of health professionals. In this study, we developed an ultra-low-cost, clinically accurate mobile phone microplate reader (mReader), and clinically validated this optical device for 12 infectious disease tests. The mReader optically reads 96 samples on a microplate at one time. 771 de-identified patient samples were tested in 12 serology assays for bacterial/viral infections. The mReader and the clinical instrument blindly read and analyzed all tests in parallel. The analytical accuracy and diagnostic performance of the mReader were evaluated across the clinical reportable categories by comparison with clinical laboratory testing results. The mReader exhibited 97.59-99.90% analytical accuracy. We envision that the mReader can benefit underserved areas/populations and low-resource settings in rural clinics/hospitals at a low cost (~$50 USD) with clinical-level analytical quality. It has the potential to improve health access, speed up healthcare delivery, and reduce health disparities and education disparities by providing access to a low-cost spectrophotometer. Copyright © 2018 Elsevier B.V. All rights reserved.
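
    The headline analytical-accuracy figure is, at its core, a category-agreement computation between the device and the reference instrument. A minimal sketch (illustrative only; the category labels are hypothetical):

```python
def percent_agreement(device_calls, reference_calls):
    # percentage of samples where the mReader's reportable category
    # matches the clinical instrument's call
    assert len(device_calls) == len(reference_calls)
    matches = sum(d == r for d, r in zip(device_calls, reference_calls))
    return 100.0 * matches / len(reference_calls)
```

    Run per assay over all samples in that assay, this yields one agreement figure per test, and the 97.59-99.90% range quoted above is the spread of such figures across the 12 assays.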

  14. Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades

    Science.gov (United States)

    Kenyon, Scott J.; Bromley, Benjamin C.

    2017-04-01

    We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, r_max, the size of the largest object, changes with time: r_max ∝ t^(-γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^-(γ+1) and L_d ∝ t^-(γ/2+1). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
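
    The scalings above translate directly into decline curves. A small sketch of the power-law behavior (the normalization constants and the sample value of γ are assumed for illustration):

```python
def disk_mass(t, gamma=0.15, m0=1.0, t0=1.0):
    # improved model: M_d ∝ t^-(gamma + 1); gamma = 0 recovers the
    # standard fixed-r_max cascade with M_d ∝ 1/t
    return m0 * (t / t0) ** (-(gamma + 1.0))

def disk_luminosity(t, gamma=0.15, L0=1.0, t0=1.0):
    # improved model: L_d ∝ t^-(gamma/2 + 1)
    return L0 * (t / t0) ** (-(gamma / 2.0 + 1.0))
```

    The steeper decline means that an evolved disk of a given observed luminosity implies a larger initial mass than the constant-r_max model would suggest, in line with the factor-of-two remark above.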

  15. Validation of an immortalized human (hBMEC) in vitro blood-brain barrier model.

    Science.gov (United States)

    Eigenmann, Daniela Elisabeth; Jähne, Evelyn Andrea; Smieško, Martin; Hamburger, Matthias; Oufir, Mouhssin

    2016-03-01

    We recently established and optimized an immortalized human in vitro blood-brain barrier (BBB) model based on the hBMEC cell line. In the present work, we validated this mono-culture 24-well model with a representative series of drug substances which are known to cross or not to cross the BBB. For each individual compound, a quantitative UHPLC-MS/MS method in Ringer HEPES buffer was developed and validated according to current regulatory guidelines, with respect to selectivity, precision, and reliability. Various biological and analytical challenges were met during method validation, highlighting the importance of careful method development. The positive controls antipyrine, caffeine, diazepam, and propranolol showed mean endothelial permeability coefficients (P_e) in the range of 17-70 × 10^-6 cm/s, indicating moderate to high BBB permeability when compared to the barrier integrity marker sodium fluorescein (mean P_e 3-5 × 10^-6 cm/s). The negative controls atenolol, cimetidine, and vinblastine showed mean P_e values < 10 × 10^-6 cm/s, suggesting low permeability. In silico calculations were in agreement with in vitro data. With the exception of quinidine (P-glycoprotein inhibitor and substrate), BBB permeability of all control compounds was correctly predicted by this new, easy, and fast to set up human in vitro BBB model. Addition of retinoic acid and puromycin did not increase transendothelial electrical resistance (TEER) values of the BBB model.
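
    The endothelial permeability coefficients quoted above come from the conventional transwell two-compartment calculation, in which the contribution of the empty filter insert is subtracted out. A sketch of that arithmetic (the formula is the standard one; the numbers in the usage below are hypothetical):

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    # P_app in cm/s: solute flux (amount/s) normalised by insert area (cm^2)
    # and initial donor concentration (amount/cm^3)
    return dq_dt / (area_cm2 * c0)

def endothelial_permeability(p_total, p_blank):
    # remove the empty-filter contribution: 1/Pe = 1/P_total - 1/P_blank
    return 1.0 / (1.0 / p_total - 1.0 / p_blank)

def low_permeability(pe, threshold=10e-6):
    # working cut-off used in the abstract: mean Pe below ~10e-6 cm/s
    return pe < threshold
```

    For example, a total P of 20e-6 cm/s across cells plus filter, with a blank-filter P of 100e-6 cm/s, corrects to an endothelial P_e of 25e-6 cm/s.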

  16. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
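
    A minimal holdout-validation loop of the kind such procedures build on can be sketched as follows. This is an illustration, not the paper's procedure: a 1-D logistic model is fitted by stochastic gradient ascent on synthetic data, and plain classification accuracy stands in for the paper's fuller set of performance measures.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=300):
    # maximum-likelihood fit of p(y=1|x) = sigmoid(w*x + b) by gradient ascent
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def accuracy(w, b, xs, ys):
    # fraction of validation cases classified correctly at the 0.5 cut-off
    preds = [1 if w * x + b > 0 else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)
```

    The key point of validation is that `accuracy` (or any other performance measure) is computed on data held out from the fit, so that the reported performance estimates generalization rather than goodness of fit.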

  17. Assessment model validity document FARF31

    International Nuclear Information System (INIS)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria

    2004-08-01

    -fractures with flowing water and rock with porosity accessible only by diffusion. The approach furthermore assumes that the properties within the two porosity domains are averaged, and that the transfer between the two domains is also averaged. It is an important validation issue to verify that effective averaging of parameters can be performed and that suitable values can be derived. It can be shown that matrix interaction properties along a flow path can be integrated to an effective value and, if the matrix depth can be considered infinite, effective values may also be derived for the diffusion and sorption parameters. Thus, it is possible to derive effective parameters for sorbing radionuclides incorporating the total matrix effects along a flow path. This is strictly valid only for cases with no dispersion, but gives a good approximation as long as dispersion does not dominate the transport. FARF31 has been tested and compared with analytical solutions and other models, and was found to correspond well within a wide range of input parameters. Support and documentation on how to use FARF31 are two important components in avoiding calculation mistakes and obtaining trustworthy results. The documentation describes handling and updates of the code. Test cases have been constructed which can be used to check updates and serve as templates. The development of the code is kept under source code control to fulfil quality assurance requirements. The model is deemed to be well suited for performance assessments within the SKB framework.

  18. A CIRCULAR-CYLINDRICAL FLUX-ROPE ANALYTICAL MODEL FOR MAGNETIC CLOUDS

    International Nuclear Information System (INIS)

    Nieves-Chinchilla, T.; Linton, M. G.; Hidalgo, M. A.; Vourlidas, A.; Savani, N. P.; Szabo, A.; Farrugia, C.; Yu, W.

    2016-01-01

    We present an analytical model to describe magnetic flux-rope topologies. When these structures are observed embedded in Interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature, they are called Magnetic Clouds (MCs). Our model extends the circular-cylindrical concept of Hidalgo et al. by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of MC geometrical information and orientation. The generalized model provides flexibility for implementation in 3D MHD simulations. Here, we evaluate its performance in the reconstruction of MCs in in situ observations. Four Earth-directed ICME events, observed by the Wind spacecraft, are used to validate the technique. The events are selected from the ICME Wind list with the magnetic obstacle boundaries chosen consistently with the magnetic field and plasma in situ observations and with a new parameter (EPP, the Electron Pitch angle distribution Parameter) which quantifies the bidirectionality of the plasma electrons. The goodness of the fit is evaluated with a single correlation parameter to enable comparative analysis of the events. In general, at first glance, the model fits the selected events very well. However, a detailed analysis of events with signatures of significant compression indicates the need to explore geometries other than the circular-cylindrical. An extension of our current modeling framework to account for such non-circular CMEs will be presented in a forthcoming publication.

  19. A CIRCULAR-CYLINDRICAL FLUX-ROPE ANALYTICAL MODEL FOR MAGNETIC CLOUDS

    Energy Technology Data Exchange (ETDEWEB)

    Nieves-Chinchilla, T. [Catholic University of America, Washington, DC (United States); Linton, M. G. [Space Science Division, Naval Research Laboratory, Washington, DC (United States); Hidalgo, M. A. [Dept. de Fisica, UAH, Alcala de Henares, Madrid (Spain); Vourlidas, A. [The Johns Hopkins University Applied Physics Laboratory, Laurel, MD (United States); Savani, N. P.; Szabo, A. [NASA Goddard Space Flight Center, Greenbelt, MD (United States); Farrugia, C.; Yu, W., E-mail: Teresa.Nieves@nasa.gov [Space Science Center and Department of Physics, University of New Hampshire, Durham, NH (United States)

    2016-05-20

    We present an analytical model to describe magnetic flux-rope topologies. When these structures are observed embedded in Interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature, they are called Magnetic Clouds (MCs). Our model extends the circular-cylindrical concept of Hidalgo et al. by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of MC geometry and orientation. The generalized model provides flexibility for implementation in 3D MHD simulations. Here, we evaluate its performance in reconstructing MCs from in situ observations. Four Earth-directed ICME events, observed by the Wind spacecraft, are used to validate the technique. The events are selected from the ICME Wind list, with the magnetic obstacle boundaries chosen consistently with the magnetic field and plasma in situ observations and with a new parameter (EPP, the Electron Pitch angle distribution Parameter) that quantifies the bidirectionality of the plasma electrons. The goodness of fit is evaluated with a single correlation parameter to enable comparative analysis of the events. At first glance, the model fits the selected events very well. However, a detailed analysis of events with signatures of significant compression indicates the need to explore geometries other than the circular-cylindrical one. An extension of our current modeling framework to account for such non-circular CMEs will be presented in a forthcoming publication.
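    The circular-cylindrical geometry that the model above generalizes is often illustrated with the classic linear force-free (Lundquist) flux-rope solution, in which the axial and azimuthal field components follow the Bessel functions J0 and J1. The sketch below shows only this textbook baseline, not the paper's generalized current-density model; the parameter values are illustrative, and the Bessel functions are evaluated from their integral representation using the standard library alone.

```python
import math

def bessel_j(n, x, steps=2000):
    """Bessel function J_n(x) via its integral representation:
    J_n(x) = (1/pi) * integral_0^pi cos(n*theta - x*sin(theta)) d(theta)."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        theta = i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoidal end-point weights
        total += w * math.cos(n * theta - x * math.sin(theta))
    return total * h / math.pi

def lundquist_field(r, b0=1.0, alpha=2.405):
    """Axial and azimuthal field of a linear force-free flux rope:
    B_z = B0*J0(alpha*r), B_phi = B0*J1(alpha*r), with r normalized
    to the rope radius (illustrative values, not fitted to any event)."""
    return b0 * bessel_j(0, alpha * r), b0 * bessel_j(1, alpha * r)

# On the axis the field is purely axial; at the boundary B_z vanishes
# when alpha is the first zero of J0 (about 2.405).
bz_axis, bphi_axis = lundquist_field(0.0)
bz_edge, bphi_edge = lundquist_field(1.0)
print(bz_axis, bphi_axis, bz_edge, bphi_edge)
```

    Choosing alpha at the first zero of J0 is the usual normalization for an MC cross-section, so that the axial field smoothly drops to zero at the rope boundary while the azimuthal field peaks there.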

  20. Review of analytical models to stream depletion induced by pumping: Guide to model selection

    Science.gov (United States)

    Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der

    2018-06-01

    Stream depletion due to groundwater extraction by wells may impact aquatic ecosystems in streams, cause conflict over water rights, and contaminate water drawn from irrigation wells near polluted streams. A variety of studies have addressed the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has been lacking. This review highlights key differences among existing models of the stream depletion problem and provides guidelines for choosing an appropriate analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations, and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches for solving realistic stream depletion problems involving aquifer heterogeneity and irregular stream channel geometry, and identify several unsolved stream depletion problems.
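    Among the analytical models such a review covers, the oldest and simplest is the Glover-Balmer solution for a well near a fully penetrating stream in a homogeneous, isotropic aquifer. The sketch below evaluates that classical closed form; the parameter values are hypothetical and serve only to show how the depletion fraction grows toward 1 with pumping time.

```python
import math

def glover_depletion_fraction(d, T, S, t):
    """Glover-Balmer solution: fraction of the pumping rate supplied by
    stream depletion at time t, for a well a distance d from a fully
    penetrating stream in a homogeneous, isotropic aquifer.
    d [m], transmissivity T [m^2/day], storativity S [-], t [days]."""
    if t <= 0:
        return 0.0
    return math.erfc(math.sqrt(S * d * d / (4.0 * T * t)))

# Illustrative (hypothetical) parameters: well 100 m from the stream.
for t in (1.0, 10.0, 100.0):
    frac = glover_depletion_fraction(d=100.0, T=500.0, S=1e-4, t=t)
    print(f"t = {t:6.1f} d  depletion fraction = {frac:.3f}")
```

    The erfc argument collects the familiar dimensionless group S d^2 / (4 T t): depletion is small while the drawdown cone has not yet reached the stream, and approaches the full pumping rate at late time.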

  1. An Analytical Tire Model with Flexible Carcass for Combined Slips

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2014-01-01

    Full Text Available The mechanical characteristics of a tire under combined cornering and braking/driving have significant effects on vehicle directional control. The objective of this paper is to present an analytical tire model with a flexible carcass for combined slip situations, which describes tire behavior well and can also be used for studying vehicle dynamics. The tire forces and moments come mainly from the shear stress and sliding friction at the tread-road interface. In order to describe complicated tire characteristics and tire-road friction, some key factors are considered in this model: arbitrary pressure distribution; translational, bending, and twisting compliance of the carcass; dynamic friction coefficient; and anisotropic stiffness properties. The analytical tire model describes tire forces and moments accurately under combined slip conditions, and some important properties induced by the flexible carcass can also be reflected. The structural parameters of a tire can be identified from tire measurements, and the computational results using the analytical model show good agreement with test data.
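    A minimal flavor of combined-slip modeling can be given with the standard isotropic brush model plus friction-circle saturation, a textbook simplification that omits the flexible-carcass effects central to the paper. The stiffness and load values below are illustrative assumptions, not identified parameters.

```python
import math

def brush_combined(kappa, alpha, c_kappa=80000.0, c_alpha=60000.0,
                   mu=1.0, fz=4000.0):
    """Simplified brush-type tire model with combined slip and a friction
    circle. kappa: longitudinal slip ratio, alpha: slip angle [rad].
    c_kappa, c_alpha: slip stiffnesses [N/unit slip]; mu*fz: friction limit.
    Returns (Fx, Fy) in N. A textbook sketch, not a flexible-carcass model."""
    fx_lin = c_kappa * kappa
    fy_lin = -c_alpha * math.tan(alpha)
    f_lin = math.hypot(fx_lin, fy_lin)
    if f_lin <= 1e-12:
        return 0.0, 0.0
    # Saturate the combined force magnitude on the friction circle,
    # keeping the direction of the linear (adhesion) force.
    scale = min(f_lin, mu * fz) / f_lin
    return fx_lin * scale, fy_lin * scale

fx, fy = brush_combined(kappa=0.05, alpha=math.radians(2.0))
print(fx, fy, math.hypot(fx, fy))
```

    At small slips the model is linear in kappa and tan(alpha); once the combined demand exceeds mu*Fz, longitudinal and lateral forces trade off against each other, which is the essential combined-slip coupling the paper's model captures in far greater detail.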

  2. Analytical validation of an ultraviolet-visible procedure for determining lutein concentration and application to lutein-loaded nanoparticles.

    Science.gov (United States)

    Silva, Jéssica Thaís do Prado; Silva, Anderson Clayton da; Geiss, Julia Maria Tonin; de Araújo, Pedro Henrique Hermes; Becker, Daniela; Bracht, Lívia; Leimann, Fernanda Vitória; Bona, Evandro; Guerra, Gustavo Petri; Gonçalves, Odinei Hess

    2017-09-01

    Lutein is a carotenoid presenting known anti-inflammatory and antioxidant properties. Lutein-rich diets have been associated with neurological improvement as well as reduction of the risk of vision loss due to Age-Related Macular Degeneration (AMD). Micro and nanoencapsulation have demonstrated to be effective techniques in protecting lutein against degradation and also in improving its bioavailability. However, actual lutein concentration inside the capsules and encapsulation efficiency are key parameters that must be precisely known when designing in vitro and in vivo tests. In this work an analytical procedure was validated for the determination of the actual lutein content in zein nanoparticles using ultraviolet-visible spectroscopy. Method validation followed the International Conference on Harmonisation (ICH) guidelines which evaluate linearity, detection limit, quantification limit, accuracy and precision. The validated methodology was applied to characterize lutein-loaded nanoparticles. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. On the Validation of a Numerical Model for the Analysis of Soil-Structure Interaction Problems

    Directory of Open Access Journals (Sweden)

    Jorge Luis Palomino Tamayo

    Full Text Available Abstract Modeling and simulation of the mechanical response of structures rely on the use of computational models. Therefore, verification and validation procedures are the primary means of assessing accuracy, confidence, and credibility in modeling. This paper is concerned with the validation of a three-dimensional numerical model, based on the finite element method, suitable for the dynamic analysis of soil-structure interaction problems. The soil mass, structure, structure's foundation, and the appropriate boundary conditions can be represented together in a single model by using a direct approach. Biot's theory of porous media is used to represent the soil mass as a two-phase material, considered to be fully saturated with water; other parts of the system are treated as one-phase materials. Plasticity of the soil mass is the main source of nonlinearity in the problem, and therefore an iterative-incremental algorithm based on the Newton-Raphson procedure is used to solve the nonlinear equilibrium equations. For discretization in time, the Generalized Newmark-β method is used. The soil is represented by a plasticity-based, effective-stress constitutive model suitable for liquefaction. Validation of the present numerical model is done by comparing analytical and centrifuge test results for soil and soil-pile systems with those obtained with the present numerical model. A soil-pile-structure interaction problem is also presented in order to show the potential of the numerical tool.

  4. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

    An input/output subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. Mathematical models of these subsystems are established, taking into account their special properties, in order to avoid planning errors and to allow predictions of system capacity. Here an analytical model is presented for the magnetic discs of an I/O subsystem, using analytical methods for the individual queues or queueing networks. Only I/O subsystems of IBM computer configurations controllable by the MVS operating system are considered. After a description of the hardware and software components of these I/O systems, possible solutions from the literature are presented and discussed with respect to their applicability to IBM I/O subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and in part avoids their disadvantages. (orig./RW) [de

  5. Analytical Model for Hook Anchor Pull-Out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, Jens Peder; Adamsen, Peter

    1995-01-01

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  6. Analytical Model for Hook Anchor Pull-out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, J. P.; Adamsen, P.

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  7. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of a model's performance through comparison of model predictions with independent sets of observations from the system being simulated. The ability to show that model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include establishing the validity of the postulates embodied in the model and testing the assumptions used to select those postulates. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  8. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this is a question that, undoubtedly, many people, businesses, and institutions ponder with regard to their particular domain of modeling. One particular domain wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated groundwater flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevance of environmental modeling to policy and environment-related decision-making in general. A pervasive view among both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do with how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, in developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  9. Analytic expressions for the construction of a fire event PSA model

    International Nuclear Information System (INIS)

    Kang, Dae Il; Kim, Kil Yoo; Kim, Dong San; Hwang, Mee Jeong; Yang, Joon Eon

    2016-01-01

    In this study, the process of converting an internal-event PSA model into a fire-event PSA model is analytically presented and discussed. Many fire PSA models have fire-induced initiating-event fault trees not present in an internal-event PSA model. Fire-induced initiating-event fault tree models are developed to address multiple-initiating-event issues: a single fire event within a fire compartment or fire scenario can cause multiple initiating events. For example, a fire in a turbine building area can cause both a loss of main feed-water and a loss of off-site power initiating event. Up to now, there has been no analytic study on the construction of a fire-event PSA model from an internal-event PSA model with initiating-event fault trees. The results of this study show that additional cutsets can be obtained if the initiating-event fault trees for a fire-event PSA model are not exactly developed.

  10. Analytic expressions for the construction of a fire event PSA model

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Kim, Kil Yoo; Kim, Dong San; Hwang, Mee Jeong; Yang, Joon Eon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this study, the process of converting an internal-event PSA model into a fire-event PSA model is analytically presented and discussed. Many fire PSA models have fire-induced initiating-event fault trees not present in an internal-event PSA model. Fire-induced initiating-event fault tree models are developed to address multiple-initiating-event issues: a single fire event within a fire compartment or fire scenario can cause multiple initiating events. For example, a fire in a turbine building area can cause both a loss of main feed-water and a loss of off-site power initiating event. Up to now, there has been no analytic study on the construction of a fire-event PSA model from an internal-event PSA model with initiating-event fault trees. The results of this study show that additional cutsets can be obtained if the initiating-event fault trees for a fire-event PSA model are not exactly developed.

  11. Analytical solution of dispersion relations for the nuclear optical model

    Energy Technology Data Exchange (ETDEWEB)

    VanderKam, J.M. [Center for Communications Research, Thanet Road, Princeton, NJ 08540 (United States); Weisel, G.J. [Triangle Universities Nuclear Laboratory, and Duke University, Box 90308, Durham, NC 27708-0308 (United States); Penn State Altoona, 3000 Ivyside Park, Altoona, PA 16601-3760 (United States); Tornow, W. [Triangle Universities Nuclear Laboratory, and Duke University, Box 90308, Durham, NC 27708-0308 (United States)

    2000-12-01

    Analytical solutions of dispersion integral relations, linking the real and imaginary parts of the nuclear optical model, have been derived. These are displayed for some widely used forms of the volume- and surface-absorptive nuclear potentials. When the analytical solutions are incorporated into the optical-model search code GENOA, replacing a numerical integration, the code runs three and a half to seven times faster, greatly aiding the analysis of direct-reaction, elastic scattering data. (author)

  12. Construction of analytically solvable models for interacting species. [biological species competition

    Science.gov (United States)

    Rosen, G.

    1976-01-01

    The basic form of a model representation for systems of n interacting biological species is a set of essentially nonlinear autonomous ordinary differential equations. A generic canonical expression for the rate functions in the equations is reported which permits the analytical general solution to be obtained by elementary computation. It is shown that a general analytical solution is directly obtainable for models where the rate functions are prescribed by the generic canonical expression from the outset. Some illustrative examples are given which demonstrate that the generic canonical expression can be used to construct analytically solvable models for two interacting species with limit-cycle dynamics as well as for a three-species interdependence.
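    A simple way to see what analytically tractable structure buys in species-interaction models is the classic two-species Lotka-Volterra system, which possesses a conserved first integral even though its trajectories are not elementary functions. The sketch below is illustrative only (not the paper's generic canonical form): it integrates the system numerically and checks that the invariant is conserved.

```python
import math

# Classic Lotka-Volterra predator-prey system:
#   dx/dt = a*x - b*x*y,   dy/dt = c*x*y - d*y
# It admits the first integral V = c*x - d*ln(x) + b*y - a*ln(y),
# which any exact trajectory must conserve.
A, B, C, D = 1.0, 0.5, 0.4, 1.2

def rhs(x, y):
    return A * x - B * x * y, C * x * y - D * y

def invariant(x, y):
    return C * x - D * math.log(x) + B * y - A * math.log(y)

def rk4_step(x, y, h):
    """One classical Runge-Kutta step for the two-species system."""
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 2.0, 1.0
v0 = invariant(x, y)
for _ in range(10000):
    x, y = rk4_step(x, y, 0.001)
print(f"drift in first integral after t = 10: {abs(invariant(x, y) - v0):.2e}")
```

    The near-zero drift in V is a numerical check of the closed-form conservation law; models built from a canonical expression as in the paper go further, yielding the general solution itself by elementary computation.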

  13. Novel analytical model for the determination of grain size distributions in nanocrystalline materials with low lattice microstrains by X-ray diffractometry

    International Nuclear Information System (INIS)

    Sanchez-Bajo, F.; Ortiz, A.L.; Cumbrera, F.L.

    2006-01-01

    We have developed a novel, analytical model for the determination of grain size distributions in nanocrystalline (nc) materials with low internal stresses by X-ray diffractometry (XRD). The model assumes explicitly that the XRD peaks are pseudo-Voigtian and that the grain size distributions are lognormal, both of which are assumptions amply supported by the experimental evidence. It was found analytically that the grain size dispersion depends on the shape of the XRD peaks only, whereas the grain size median depends on both the shape and width of the XRD peaks. In addition, the theoretical predictions resulting from the model were validated using standard XRD peaks obtained by computer simulation from first principles. Particular emphasis is given to the discussion of the validity limits of the model, and to the analysis of the influence of the characteristics of the grain size distributions on the nature of the XRD peaks. We then show how to calculate the average and apparent grain sizes from the grain size distribution determined with the model, and how this compares with the Scherrer method. Implications for the characterization of (undistorted and distorted) nc-materials are indicated, and a case study of an nc-powder of cubic ZrO2 is presented. The application of the model itself is simple, involving only the fit of a pseudo-Voigt function to a single XRD peak followed by the use of two equations. This suggests that the model may have an important role to play in the characterization of nc-materials.
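    For context, the Scherrer method mentioned above extracts a single apparent size from the broadening of one peak, while a lognormal size distribution relates its median and dispersion to the mean analytically. The sketch below shows only these textbook baselines, not the authors' two model equations; the peak width, wavelength (Cu K-alpha), shape factor K = 0.9, and dispersion sigma are illustrative assumptions.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Classical Scherrer estimate of the apparent grain size from a single
    XRD peak: D = K * lambda / (beta * cos(theta)), with beta the peak
    FWHM in radians and theta the Bragg angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

def lognormal_stats(median, sigma):
    """Mean and standard deviation of a lognormal grain-size distribution
    with the given median and log-scale dispersion sigma."""
    mean = median * math.exp(sigma ** 2 / 2.0)
    var = (math.exp(sigma ** 2) - 1.0) * median ** 2 * math.exp(sigma ** 2)
    return mean, math.sqrt(var)

d = scherrer_size(fwhm_deg=0.5, two_theta_deg=30.0)
mean, std = lognormal_stats(median=d, sigma=0.4)
print(f"Scherrer size {d:.1f} nm; lognormal mean {mean:.1f} nm, std {std:.1f} nm")
```

    The gap between the single Scherrer number and the distribution mean illustrates why a distribution-based analysis, as in the paper, carries more information than one apparent size.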

  14. Novel analytical model for the determination of grain size distributions in nanocrystalline materials with low lattice microstrains by X-ray diffractometry

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Departamento de Electronica e Ingenieria Electromecanica, Escuela de Ingenierias Industriales, Universidad de Extremadura, Avda. de Elvas S/N, Badajoz 06071 (Spain); Ortiz, A.L. [Departamento de Electronica e Ingenieria Electromecanica, Escuela de Ingenierias Industriales, Universidad de Extremadura, Avda. de Elvas S/N, Badajoz 06071 (Spain)]. E-mail: alortiz@unex.es; Cumbrera, F.L. [Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, Badajoz 06071 (Spain)

    2006-01-15

    We have developed a novel, analytical model for the determination of grain size distributions in nanocrystalline (nc) materials with low internal stresses by X-ray diffractometry (XRD). The model assumes explicitly that the XRD peaks are pseudo-Voigtian and that the grain size distributions are lognormal, both of which are assumptions amply supported by the experimental evidence. It was found analytically that the grain size dispersion depends on the shape of the XRD peaks only, whereas the grain size median depends on both the shape and width of the XRD peaks. In addition, the theoretical predictions resulting from the model were validated using standard XRD peaks obtained by computer simulation from first principles. Particular emphasis is given to the discussion of the validity limits of the model, and to the analysis of the influence of the characteristics of the grain size distributions on the nature of the XRD peaks. We then show how to calculate the average and apparent grain sizes from the grain size distribution determined with the model, and how this compares with the Scherrer method. Implications for the characterization of (undistorted and distorted) nc-materials are indicated, and a case study of an nc-powder of cubic ZrO{sub 2} is presented. The application of the model itself is simple, involving only the fit of a pseudo-Voigt function to a single XRD peak followed by the use of two equations. This suggests that the model may have an important role to play in the characterization of nc-materials.

  15. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    In this paper, a review is presented of the various methods which ... to make a direct and objective comparison of specific dynamic properties, measured ..... stiffness matrix is available from the analytical model, is that of reducing or condensing.

  16. Analytical Validation of a New Enzymatic and Automatable Method for d-Xylose Measurement in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Israel Sánchez-Moreno

    2017-01-01

    Full Text Available Hypolactasia, or intestinal lactase deficiency, affects more than half of the world population. Currently, xylose quantification in urine after gaxilose oral administration for the noninvasive diagnosis of hypolactasia is performed with the hand-operated nonautomatable phloroglucinol reaction. This work demonstrates that a new enzymatic xylose quantification method, based on the activity of xylose dehydrogenase from Caulobacter crescentus, represents an excellent alternative to the manual phloroglucinol reaction. The new method is automatable and facilitates the use of the gaxilose test for hypolactasia diagnosis in the clinical practice. The analytical validation of the new technique was performed in three different autoanalyzers, using buffer or urine samples spiked with different xylose concentrations. For the comparison between the phloroglucinol and the enzymatic assays, 224 urine samples of patients to whom the gaxilose test had been prescribed were assayed by both methods. A mean bias of −16.08 mg of xylose was observed when comparing the results obtained by both techniques. After adjusting the cut-off of the enzymatic method to 19.18 mg of xylose, the Kappa coefficient was found to be 0.9531, indicating an excellent level of agreement between both analytical procedures. This new assay represents the first automatable enzymatic technique validated for xylose quantification in urine.
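    The Kappa coefficient reported above for the agreement between the two assays is Cohen's kappa, which corrects the raw proportion of agreement for agreement expected by chance. A minimal sketch follows, with hypothetical positive/negative hypolactasia calls invented for illustration (not the study's 224 samples).

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters/methods over the same samples:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from the marginal label frequencies."""
    assert len(a) == len(b) and a
    n = len(a)
    labels = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1.0 - pe)

# Hypothetical positive/negative calls from the two assays on 10 samples:
phloro = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
enzym  = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(round(cohens_kappa(phloro, enzym), 3))
```

    Values above roughly 0.8, such as the 0.9531 reported in the abstract, are conventionally read as excellent agreement.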

  17. Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.

    Science.gov (United States)

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2010-01-01

    How to estimate the diffusion Ensemble Average Propagator (EAP) from DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber directions. However, the ODF is just one feature of the EAP. Compared with the ODF, the EAP carries the full information about the diffusion process, reflecting the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods for estimating the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI requires many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, and robust analytical EAP reconstruction method that requires almost no assumptions about the data and does not need many samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data, and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.

  18. SIMMER-III analytic thermophysical property model

    International Nuclear Information System (INIS)

    Morita, K; Tobita, Y.; Kondo, Sa.; Fischer, E.A.

    1999-05-01

    An analytic thermophysical property model using general function forms is developed for a reactor safety analysis code, SIMMER-III. The function forms are designed to represent correct behavior of properties of reactor-core materials over wide temperature ranges, especially for the thermal conductivity and the viscosity near the critical point. The most up-to-date and reliable sources for uranium dioxide, mixed-oxide fuel, stainless steel, and sodium available at present are used to determine parameters in the proposed functions. This model is also designed to be consistent with a SIMMER-III model on thermodynamic properties and equations of state for reactor-core materials. (author)

  19. An analytical model for the assessment of airline expansion strategies

    Directory of Open Access Journals (Sweden)

    Mauricio Emboaba Moreira

    2014-01-01

    Full Text Available Purpose: The purpose of this article is to develop an analytical model to assess airline expansion strategies by combining generic business strategy models with airline business models. Methodology and approach: A number of airline business models are examined, as are Porter's (1983) five forces that drive industry competition, complemented by Nalebuff and Brandenburger's (1996) sixth force, and the basic elements of the general environment in which the expansion process takes place. A system of points and weights is developed to create a score among the 904,736 possible combinations considered. The model's outputs are generic expansion strategies with quantitative assessments for each specific combination of input elements. Originality and value: The analytical model developed is original because it combines, for the first time and explicitly, elements of the general environment, the industry environment, airline business models, and the generic expansion strategy types. In addition, it creates a system of scores that may be used to drive the decision process toward the choice of a specific strategic expansion path. Research implications: The analytical model may be adapted to industries other than the airline industry by substituting for the element "airline business model" the corresponding business-model elements of those industries.

  20. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking, and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.

  1. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
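    The permutation test recommended here can be sketched generically: rescore the model after shuffling the outcome labels, and ask how often the shuffled performance matches or beats the observed one. The sketch below uses a plain AUC scorer on invented NTCP-style data rather than a refitted LASSO model, so it illustrates only the permutation logic, not the paper's full double cross-validation.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_pvalue(scores, labels, n_perm=2000, seed=0):
    """Permutation test: how often does shuffling the outcome labels
    (the null of no association) reach the observed AUC or better?"""
    rng = random.Random(seed)
    observed = auc(scores, labels)
    shuffled = labels[:]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if auc(scores, shuffled) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# Hypothetical NTCP-style data: higher score -> complication more likely.
scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.85, 0.9]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
obs, p = permutation_pvalue(scores, labels)
print(f"AUC = {obs:.2f}, permutation p = {p:.3f}")
```

    A small permutation p-value indicates that the apparent performance is unlikely to arise by chance alone; in practice, as the abstract stresses, the scoring inside the loop should include the full model-fitting procedure.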

  2. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  3. Subject-enabled analytics model on measurement statistics in health risk expert system for public health informatics.

    Science.gov (United States)

    Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun

    2017-11-01

    This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing previous studies on the epidemiological measurement of diseases caused by either environmental heavy-metal exposure or clinical complications in hospitals. The validity of the computations was verified against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for more than 230K data sets; both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A Unified Channel Charges Expression for Analytic MOSFET Modeling

    Directory of Open Access Journals (Sweden)

    Hugues Murray

    2012-01-01

    Full Text Available Based on a 1D Poisson's equation resolution, we present an analytic model of inversion charges allowing calculation of the drain current and transconductance in the Metal Oxide Semiconductor Field Effect Transistor. The drain current and transconductance are described by analytical functions including mobility corrections and short channel effects (CLM, DIBL). The comparison with the Pao-Sah integral shows excellent accuracy of the model in all inversion modes, from strong to weak inversion, in submicron MOSFETs. All calculations are encoded in a simple C program and give instantaneous results, providing an efficient tool for microelectronics users.
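The abstract's central point, that a single analytic expression can remain accurate from weak through strong inversion, can be illustrated with the well-known EKV-style continuous interpolation below. This is a generic textbook interpolation with assumed illustrative parameter values, not the charge-based model derived in the paper.

```python
import math

def drain_current(vg, vt0=0.45, n=1.3, i_spec=1e-6, phi_t=0.02585):
    """EKV-style continuous drain-current interpolation (saturation,
    source grounded): exponential in weak inversion, quadratic in strong
    inversion, and smooth in between.

    vt0: threshold voltage, n: slope factor, i_spec: specific current,
    phi_t: thermal voltage at room temperature (all assumed values)."""
    x = (vg - vt0) / (2.0 * n * phi_t)
    return i_spec * math.log(1.0 + math.exp(x)) ** 2
```

For vg well below vt0 the expression reduces to i_spec * exp((vg - vt0)/(n*phi_t)), the familiar subthreshold exponential, while well above vt0 it approaches the square law.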

  5. Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.

    Science.gov (United States)

    Endert, A; Fiaux, P; North, C

    2012-12-01

    Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
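The keyword-weighting mechanism described above can be illustrated with a toy update rule: when the user drags two documents together, the terms those documents share gain weight in the clustering model. This is a simplified illustration of the idea only, not ForceSPIRE's actual algorithm.

```python
def reinforce_shared_terms(weights, doc_a_terms, doc_b_terms, rate=0.5):
    """Boost the weights of terms shared by two documents the user moved
    together, then renormalize so the weights still sum to one."""
    shared = set(doc_a_terms) & set(doc_b_terms)
    boosted = {t: w * (1.0 + rate) if t in shared else w
               for t, w in weights.items()}
    total = sum(boosted.values())
    return {t: w / total for t, w in boosted.items()}
```

Repeated interactions of this kind gradually shift the layout toward the user's own notion of similarity, which is the co-creation effect the study reports.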

  6. An analytical model of a curved beam with a T shaped cross section

    Science.gov (United States)

    Hull, Andrew J.; Perez, Daniel; Cox, Donald L.

    2018-03-01

    This paper derives a comprehensive analytical dynamic model of a closed circular beam that has a T shaped cross section. The new model includes in-plane and out-of-plane vibrations derived using continuous media expressions which produces results that have a valid frequency range above those available from traditional lumped parameter models. The web is modeled using two-dimensional elasticity equations for in-plane motion and the classical flexural plate equation for out-of-plane motion. The flange is modeled using two sets of Donnell shell equations: one for the left side of the flange and one for the right side of the flange. The governing differential equations are solved with unknown wave propagation coefficients multiplied by spatial domain and time domain functions which are inserted into equilibrium and continuity equations at the intersection of the web and flange and into boundary conditions at the edges of the system resulting in 24 algebraic equations. These equations are solved to yield the wave propagation coefficients and this produces a solution to the displacement field in all three dimensions. An example problem is formulated and compared to results from finite element analysis.

  7. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always tied to an experimental case, and it has a residual character: conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model and can also guide the design of subsequent experiments. Three steps are clearly differentiated: Sensitivity analysis, carried out with DSA (differential sensitivity analysis) and MCSA (Monte Carlo sensitivity analysis). Finding the optimal domains of the input parameters, for which a procedure based on Monte Carlo methods and cluster techniques has been developed. Residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation of a thermal simulation model of buildings is presented, studying the behavior of building components in a test cell of LECE at CIEMAT, Spain. (Author) 17 refs
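The time-domain part of the residual-analysis step can be sketched simply: form the residuals between simulated outputs and measurements, then inspect their autocorrelation, since structure left in the residuals signals a systematic model failure rather than noise. A minimal pure-Python sketch (the spectral-analysis counterpart is omitted):

```python
def residual_stats(simulated, measured):
    """Residuals and their lag-1 autocorrelation.

    A lag-1 autocorrelation near zero suggests the remaining error is
    noise-like; a large magnitude points to a systematic model deficiency."""
    r = [s - m for s, m in zip(simulated, measured)]
    mean = sum(r) / len(r)
    centered = [x - mean for x in r]
    var = sum(x * x for x in centered)
    lag1 = sum(centered[i] * centered[i + 1] for i in range(len(r) - 1))
    return r, (lag1 / var if var else 0.0)
```

In practice one would apply this to, e.g., simulated versus measured test-cell temperatures, and treat a strongly correlated residual series as a prompt to revisit the model.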

  8. Analytical model spectrum for electrostatic turbulence in tokamaks

    International Nuclear Information System (INIS)

    Fiedler-Ferrari, N.; Misguich, J.H.

    1990-04-01

    In this work we present an analytical model spectrum, for three-dimensional electrostatic turbulence (homogeneous, stationary and locally isotropic in the plane perpendicular to the magnetic field), constructed by using experimental results from TFR and TEXT Tokamaks, and satisfying basic symmetry and parity conditions. The proposed spectrum seems to be tractable for explicit analytical calculations of transport processes, and consistent with experimental data. Additional experimental measurements in the bulk plasma remain however necessary in order to determine some unknown spectral properties of parallel propagation

  9. Development of a child head analytical dynamic model considering cranial nonuniform thickness and curvature - Applying to children aged 0-1 years old.

    Science.gov (United States)

    Li, Zhigang; Ji, Cheng; Wang, Lishu

    2018-07-01

    Although analytical models have been used to quickly predict head response under impact conditions, existing models generally treat the head as a regular shell with uniform thickness, which cannot account for the actual head geometry with varied cranial thickness and curvature at different locations. The objective of this study is to develop and validate an analytical model incorporating actual cranial thickness and curvature for children aged 0-1 years old and to investigate their effects on child head dynamic responses at different head locations. To develop the new analytical model, the child head was simplified into an irregular fluid-filled shell with non-uniform thickness, and the cranial thickness and curvature at different locations were automatically obtained from CT scans using a procedure developed in this study. The implicit equation of maximum impact force was derived as a function of the elastic modulus, thickness, and radius of curvature of the cranium. The proposed analytical model is compared with cadaver test data of children aged 0-1 years old and is shown to be accurate in predicting head injury metrics. According to this model, obvious differences in injury metrics were observed among subjects of the same age but different cranial thickness and curvature, and the injury metrics at the forehead location are significantly higher than those at other locations because of the larger thickness there. The proposed model shows good biofidelity and can be used to quickly predict the dynamic response at any location of the head for children younger than 1 YO. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  11. Analytical model of surface potential profiles and transfer characteristics for hetero stacked tunnel field-effect transistors

    Science.gov (United States)

    Xu, Hui Fang; Sun, Wen; Han, Xin Feng

    2018-06-01

    An analytical model of surface potential profiles and transfer characteristics for hetero stacked tunnel field-effect transistors (HS-TFETs) is presented for the first time, where hetero stacked materials are composed of two different bandgaps. The bandgap of the underlying layer is smaller than that of the upper layer. Under different device parameters (upper layer thickness, underlying layer thickness, and hetero stacked materials) and temperature, the validity of the model is demonstrated by the agreement of its results with the simulation results. Moreover, the results show that the HS-TFETs can obtain predominant performance with relatively slow changes of subthreshold swing (SS) over a wide drain current range, steep average subthreshold swing, high on-state current, and large on–off state current ratio.

  12. Sharing the Data along with the Responsibility: Examining an Analytic Scale-Based Model for Assessing School Climate.

    Science.gov (United States)

    Shindler, John; Taylor, Clint; Cadenas, Herminia; Jones, Albert

    This study was a pilot effort to examine the efficacy of an analytic trait scale school climate assessment instrument and democratic change system in two urban high schools. Pilot study results indicate that the instrument shows promising soundness in that it exhibited high levels of validity and reliability. In addition, the analytic trait format…

  13. Building analytical three-field cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.R.L. [Universidade Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Moraes, P.H.R.S. [ITA-Instituto Tecnologico de Aeronautica, Sao Jose dos Campos, SP (Brazil); Ferreira, D.A. [Universidade Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Universidade Federal da Paraiba, Departamento de Fisica, Joao Pessoa, PB (Brazil); Neta, D.C.V. [Universidade Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Universidade Estadual da Paraiba, Departamento de Fisica, Campina Grande, PB (Brazil)

    2018-02-15

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called ''extension method''. The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters. (orig.)

  14. Analytical solutions of the electrostatically actuated curled beam problem

    KAUST Repository

    Younis, Mohammad I.

    2014-07-24

    This work presents analytical expressions for the electrostatically actuated initially deformed cantilever beam problem. The formulation is based on the continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data and numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximation. In such cases, multi-mode reduced order models are shown to yield accurate results. © 2014 Springer-Verlag Berlin Heidelberg.

  15. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T_0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM actions are also analytically determined. • The model is validated by several numerical examples
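The maximum-likelihood Weibull estimation step admits a compact numerical sketch: for complete (uncensored) failure times, the shape parameter solves a one-dimensional profile-likelihood equation, and the scale then follows in closed form. The sketch below illustrates that standard estimation step only; the paper's imperfect-PM model and cost optimization add further structure not shown here.

```python
import math

def weibull_mle(times, tol=1e-9):
    """Maximum-likelihood Weibull shape k and scale lam from complete
    failure times, solving the profile-likelihood equation for k by
    bisection (the left side is monotone increasing in k)."""
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / len(logs)

    def g(k):  # root of g gives the MLE shape
        tk = [t ** k for t in times]
        return sum(x * l for x, l in zip(tk, logs)) / sum(tk) - 1.0 / k - mean_log

    lo, hi = 1e-3, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(t ** k for t in times) / len(times)) ** (1.0 / k)
    return k, lam
```

With the shape and scale in hand, the expected number of failures per interval, and hence the expected cost of a candidate maintenance interval, can be evaluated analytically.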

  16. Analytical Network Process (ANP) Model in Tourism Development in Jember

    Directory of Open Access Journals (Sweden)

    Sukidin Sukidin

    2015-04-01

    Full Text Available Abstract: Analytical Network Process (ANP) Model in Tourism Development in Jember. This study reviews tourism development policy in Jember, particularly the policy for developing coffee plantation agro-tourism using the Jember Fashion Carnival (JFC) as event marketing. The research method is soft system methodology combined with the Analytical Network Process (ANP). The study finds that tourism development in Jember still follows a conventional approach, is poorly coordinated, and relies on a single tourism event, the JFC, as the locomotive of Jember's tourism appeal. This conventional development model needs to be redesigned to achieve sustainable tourism in Jember. Keywords: paradigm shift, tourism industry, tourism events, agro-tourism

  17. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  18. An analytical and experimental investigation of natural circulation transients in a model pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Massoud, M

    1987-01-01

    Natural circulation phenomena in a simulated PWR were investigated experimentally and analytically. The experimental investigation included determination of system characteristics as well as system response to imposed transients under symmetric and asymmetric operations. The system characteristics were used to obtain correlations for the heat transfer coefficient in the heat exchangers, the system flow resistance, and the system buoyancy head. Asymmetric transients were imposed to study flow oscillation and possible instability. The analytical investigation encompassed development of a mathematical model for single-phase, steady-state and transient natural circulation as well as modification of an existing model for two-phase flow analysis of phenomena such as small break LOCA, high pressure coolant injection, and pump coast down. The mathematical model for single-phase analysis was computer coded to simulate the imposed transients. The computer program, entitled ''Symmetric and Asymmetric Analysis of Single-Phase Flow (SAS),'' was employed to simulate the imposed transients and closely emulated the system behavior throughout the transients and the subsequent steady state. Modifications for two-phase flow analysis included the addition of models for a once-through steam generator and electric heater rods. Both programs are faster than real time: off-line, they can be used for prediction and training applications, while on-line they serve for simulation and signal validation. The programs can also be used to determine the sensitivity of natural circulation behavior to variation of inputs such as secondary distribution and power transients.

  19. Parameters estimation of the single and double diode photovoltaic models using a Gauss–Seidel algorithm and analytical method: A comparative study

    International Nuclear Information System (INIS)

    Et-torabi, K.; Nassar-eddine, I.; Obbadi, A.; Errami, Y.; Rmaily, R.; Sahnoun, S.; El fajri, A.; Agunaou, M.

    2017-01-01

    Highlights: • Comparative study of two methods: a Gauss–Seidel method and an analytical method. • Five models are implemented to estimate the five parameters of the single diode model. • Two models are used to estimate the seven parameters of the double diode model. • The parameters are estimated under changing environmental conditions. • The aim is to choose the method/model combination most adequate for each PV module technology. - Abstract: In the photovoltaic (PV) panel modeling field, this paper presents a comparative study of two parameter estimation methods: the iterative method called Gauss–Seidel, applied to the single diode model, and the analytical method used on the double diode model. These parameter estimation methods are based on the manufacturers' datasheets. They are also tested on three PV modules of different technologies: multicrystalline (Kyocera KC200GT), monocrystalline (Shell SQ80), and thin film (Shell ST40). For the iterative method, five existing mathematical models, numbered 1 to 5, are used to estimate the parameters of these PV modules under varying environmental conditions; only two of these models are used for the analytical method. Each model is based on the combination of the photocurrent and reverse saturation current expressions in terms of temperature and irradiance. In addition, the results of the models' simulation are compared with the experimental data obtained from the PV modules' datasheets in order to evaluate the accuracy of the models. The simulation shows that the I-V characteristics obtained match the experimental data. In order to validate the reliability of the two methods, both the Absolute Error (AE) and the Root Mean Square Error (RMSE) were calculated. The results suggest that the analytical method can be very useful for monocrystalline and multicrystalline modules, but for the thin film module, the iterative method is the most suitable.
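The iterative flavor of such methods can be illustrated on the single-diode equation itself, which is implicit in the terminal current. The sketch below solves it by direct fixed-point substitution with assumed illustrative parameters (not taken from the paper or from any datasheet).

```python
import math

def diode_current(v, iph=8.21, i0=1e-9, rs=0.005, rp=100.0, n=1.2,
                  vt=0.02585, iters=200):
    """Single-diode model terminal current at voltage v, solved by
    fixed-point substitution of the implicit equation:
        I <- Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp
    Reliable in the current-source region (V well below open circuit)."""
    i = iph  # the photocurrent is a good initial guess
    for _ in range(iters):
        i = (iph - i0 * (math.exp((v + i * rs) / (n * vt)) - 1.0)
             - (v + i * rs) / rp)
    return i
```

Near the open-circuit voltage the exponential term makes this naive substitution diverge; parameter-extraction codes typically switch to Newton's method (or the Lambert-W closed form of the diode equation) in that region.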

  20. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  1. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  2. Analytical model for nonlinear piezoelectric energy harvesting devices

    International Nuclear Information System (INIS)

    Neiss, S; Goldschmidtboeing, F; Kroener, M; Woias, P

    2014-01-01

    In this work we propose analytical expressions for the jump-up and jump-down point of a nonlinear piezoelectric energy harvester. In addition, analytical expressions for the maximum power output at optimal resistive load and the 3 dB-bandwidth are derived. So far, only numerical models have been used to describe the physics of a piezoelectric energy harvester. However, this approach is not suitable to quickly evaluate different geometrical designs or piezoelectric materials in the harvester design process. In addition, the analytical expressions could be used to predict the jump-frequencies of a harvester during operation. In combination with a tuning mechanism, this would allow the design of an efficient control algorithm to ensure that the harvester is always working on the oscillator's high energy attractor. (paper)

  3. Model-based Engineering for the Integration of Manufacturing Systems with Advanced Analytics

    OpenAIRE

    Lechevalier, David; Narayanan, Anantha; Rachuri, Sudarsan; Foufou, Sebti; Lee, Y. Tina

    2016-01-01

    Part 3: Interoperability and Systems Integration; International audience; To employ data analytics effectively and efficiently on manufacturing systems, engineers and data scientists need to collaborate closely to bring their domain knowledge together. In this paper, we introduce a domain-specific modeling approach to integrate a manufacturing system model with advanced analytics, in particular neural networks, to model predictions. Our approach combines a set of meta-models and transformatio...

  4. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
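The workflow described, fitting an analytic functional form to measured scatter data with Mathematica's FindFit, can be mimicked on a toy scale. The sketch below fits a two-parameter exponential falloff (a stand-in for the paper's fourteen-parameter BRDF model, not the model itself) by linearized least squares.

```python
import math

def fit_log_linear(xs, ys):
    """Least-squares fit of y = A * exp(b * x) by linearizing:
    ln y = ln A + b * x, then ordinary least squares on (x, ln y)."""
    n = len(xs)
    ly = [math.log(y) for y in ys]
    sx, sy = sum(xs), sum(ly)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * l for x, l in zip(xs, ly))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ln_a = (sy - b * sx) / n
    return math.exp(ln_a), b
```

As in the paper's procedure, once the parameters are fixed the fitted analytic form can be evaluated cheaply at any angle without further adjustment.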

  5. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

    A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band based, multivariate, sum of squares of residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made
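The two propagation techniques named above can be compared directly on a toy model: a first-order sensitivity (linearization) estimate against a brute-force Monte Carlo sample. A minimal sketch, using the product of two independent normal inputs as the example model:

```python
import math
import random
import statistics

def propagate_mc(f, means, sds, n=20000, seed=1):
    """Monte Carlo propagation: sample independent normal inputs and
    return the sample mean and standard deviation of the output."""
    rng = random.Random(seed)
    out = [f(*[rng.gauss(m, s) for m, s in zip(means, sds)])
           for _ in range(n)]
    return statistics.fmean(out), statistics.stdev(out)

def propagate_linear(f, means, sds, h=1e-6):
    """Sensitivity method: first-order variance formula
    sd_y^2 = sum_i (df/dx_i * sd_i)^2 with forward-difference derivatives."""
    base = f(*means)
    var = 0.0
    for i, (m, s) in enumerate(zip(means, sds)):
        bumped = list(means)
        bumped[i] = m + h
        var += (((f(*bumped) - base) / h) * s) ** 2
    return base, math.sqrt(var)
```

For mildly nonlinear models with small input uncertainties the two estimates agree closely; strongly nonlinear models or large uncertainties are where the Monte Carlo approach earns its cost.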

  6. Analytical and finite element modeling of grounding systems

    Energy Technology Data Exchange (ETDEWEB)

    Luz, Mauricio Valencia Ferreira da [University of Santa Catarina (UFSC), Florianopolis, SC (Brazil)], E-mail: mauricio@grucad.ufsc.br; Dular, Patrick [University of Liege (Belgium). Institut Montefiore], E-mail: Patrick.Dular@ulg.ac.be

    2007-07-01

    Grounding is the art of making an electrical connection to the earth. This paper deals with the analytical and finite element modeling of grounding systems. An electrokinetic formulation using a scalar potential can benefit from floating potentials to define global quantities such as electric voltages and currents. The application concerns a single vertical grounding rod in one-, two- and three-layer soil, whose upper end lies at the soil surface. This problem has been modeled using a 2D axisymmetric electrokinetic formulation. The grounding resistance obtained by the finite element method is compared with the analytical one for one-layer soil. The results of this paper show that the finite element method is a powerful tool in the analysis of grounding systems at low frequencies. (author)
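
    For the one-layer case, a closed-form rod resistance of the kind the finite element result is compared against can be written down directly. A small sketch, assuming Dwight's classical approximation for a single driven rod in uniform soil (the abstract does not state which analytical expression the paper uses):

```python
import math

def rod_resistance(rho, length, radius):
    """Dwight's approximation for the grounding resistance of a single
    vertical rod in uniform (one-layer) soil:
        R = rho / (2*pi*L) * (ln(4L/a) - 1)
    rho: soil resistivity [ohm*m], length L [m], rod radius a [m]."""
    return rho / (2.0 * math.pi * length) * (math.log(4.0 * length / radius) - 1.0)
```

    For a typical 3 m rod of 8 mm radius in 100 ohm*m soil this gives a resistance of roughly 33 ohms, the order of magnitude a finite element model of the same geometry should reproduce.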

  7. A Performance Analytical Strategy for Network-on-Chip Router with Input Buffer Architecture

    Directory of Open Access Journals (Sweden)

    WANG, J.

    2012-11-01

    Full Text Available In this paper, a performance analytical strategy is proposed for a Network-on-Chip router with input buffer architecture. First, an analytical model is developed based on a semi-Markov process. For a non-work-conserving router with small buffer size, the model can be used to analyze the schedule delay and the average service time for each buffer, given the related parameters. Then, the packet average delay in the router is calculated using the model. Finally, we validate the effectiveness of our strategy by simulation. By comparing our analytical results to simulation results, we show that our strategy successfully captures the Network-on-Chip router performance and performs better than the state-of-the-art technology. Therefore, our strategy can be used as an efficient performance analysis tool for Network-on-Chip design.
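
    The validation step, comparing an analytic delay formula against a discrete-event simulation of the same queue, can be illustrated with a much simpler model than the paper's semi-Markov router. A hedged sketch using an M/M/1 queue (a stand-in for illustration, not the model of the paper):

```python
import random

def mm1_mean_delay(lam, mu):
    """Analytic mean sojourn time of a stable M/M/1 queue: W = 1/(mu - lam)."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def simulate_mean_delay(lam, mu, n=200000, seed=7):
    """Discrete-event simulation of the same single-server FIFO queue."""
    rng = random.Random(seed)
    arrival = 0.0
    server_free = 0.0
    total_delay = 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)            # Poisson arrivals
        start = max(arrival, server_free)          # wait if server is busy
        server_free = start + rng.expovariate(mu)  # exponential service
        total_delay += server_free - arrival       # waiting + service time
    return total_delay / n
```

    Agreement between the closed-form value and the simulated average is the same kind of cross-check the paper performs for its router model.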

  8. Impact of Cross-Axis Structural Dynamics on Validation of Linear Models for Space Launch System

    Science.gov (United States)

    Pei, Jing; Derry, Stephen D.; Zhou, Zhiqiang; Newsom, Jerry R.

    2014-01-01

    A feasibility study was performed to examine the advisability of incorporating a set of Programmed Test Inputs (PTIs) during the Space Launch System (SLS) vehicle flight. The intent of these inputs is to provide validation of the preflight models for control system stability margins, aerodynamics, and structural dynamics. In October 2009, the Ares I-X program successfully carried out a series of PTI maneuvers which provided a significant amount of valuable data for post-flight analysis. The resulting data comparisons showed excellent agreement with the preflight linear models across the frequency spectrum of interest. However, unlike Ares I-X, the structural dynamics associated with the SLS boost phase configuration are far more complex and highly coupled in all three axes. This presents a challenge when applying the same system identification technique to SLS. Preliminary simulation results show noticeable mismatches between PTI validation and analytical linear models in the frequency range of the structural dynamics. An alternate approach was examined which demonstrates the potential for better overall characterization of the system frequency response as well as robustness of the control design.

  9. Development and validation of a CFD model predicting the backfill process of a nuclear waste gallery

    International Nuclear Information System (INIS)

    Gopala, Vinay Ramohalli; Lycklama a Nijeholt, Jan-Aiso; Bakker, Paul; Haverkate, Benno

    2011-01-01

    Research highlights: → This work presents the CFD simulation of the backfill process of Supercontainers with nuclear waste emplaced in a disposal gallery. → The cement-based material used for backfill is grout, and the flow of grout is modelled as a Bingham fluid. → The model is verified against an analytical solution and validated against flowability tests for concrete. → Comparison between the backfill plexiglas experiment and the simulation shows a distinct difference in the filling pattern. → The numerical model needs to be further developed to include segregation effects and the thixotropic behavior of grout. - Abstract: Nuclear waste material may be stored in underground tunnels for long-term storage. The example treated in this article is based on the current Belgian disposal concept for High-Level Waste (HLW), in which the nuclear waste material is packed in concrete-shielded packages, called Supercontainers, which are inserted into these tunnels. After placement of the packages in the underground tunnels, the remaining voids between the packages and the tunnel lining are filled with a cement-based material called grout in order to encase the stored containers in the underground spacing. This encasement of the stored containers inside the tunnels is known as the backfill process. A good backfill process is necessary to stabilize the waste gallery against ground settlements. A numerical model of the backfill process can help to improve and optimize the process by ensuring a homogeneous filling with no air voids, and by optimizing the injection positions. The objective of the present work is to develop such a numerical code that can predict the backfill process well and to validate the model against the available experiments and analytical solutions. In the present work the rheology of grout is modelled as a Bingham fluid, implemented in OpenFOAM, a finite volume-based open source computational fluid dynamics (CFD) toolbox.
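
    Bingham rheology is usually regularized before being implemented in a finite-volume code, because the apparent viscosity diverges as the shear rate goes to zero. An illustrative sketch using the Papanastasiou regularization, one common choice (the abstract does not state which regularization, if any, the paper uses):

```python
import math

def bingham_effective_viscosity(shear_rate, yield_stress, plastic_viscosity, m=1000.0):
    """Regularized Bingham model (Papanastasiou form):
        mu_eff = mu_p + tau_y * (1 - exp(-m*gamma)) / gamma
    which stays bounded (mu_p + tau_y*m) as the shear rate gamma -> 0,
    allowing yield-stress fluids such as grout to be handled numerically."""
    if shear_rate == 0.0:
        return plastic_viscosity + yield_stress * m  # analytic limit at gamma = 0
    return plastic_viscosity + yield_stress * (1.0 - math.exp(-m * shear_rate)) / shear_rate
```

    At high shear rates the expression recovers the ideal Bingham law mu_p + tau_y/gamma, while the stiff but finite viscosity at low shear rates mimics the unyielded plug regions of the flowing grout.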

  10. Development and validation of a multi-analyte method for the regulatory control of carotenoids used as feed additives in fish and poultry feed.

    Science.gov (United States)

    Vincent, Ursula; Serano, Federica; von Holst, Christoph

    2017-08-01

    Carotenoids are used in animal nutrition mainly as sensory additives that favourably affect the colour of fish, birds and food of animal origin. Various analytical methods exist for their quantification in compound feed, reflecting the different physico-chemical characteristics of the carotenoids and the corresponding feed additives, which may be natural products or specific formulations containing the target carotenoids produced by chemical synthesis. In this study a multi-analyte method was developed that can be applied to the determination of all 10 carotenoids currently authorised within the European Union for compound feedingstuffs. The method functions regardless of whether the carotenoids have been added to the compound feed via natural products or specific formulations. It comprises three steps: (1) digestion of the feed sample with an enzyme; (2) pressurised liquid extraction; and (3) quantification of the analytes by reversed-phase HPLC coupled to a photodiode array detector in the visible range. The method was single-laboratory validated for poultry and fish feed covering a mass fraction range of the target analytes from 2.5 to 300 mg kg⁻¹. The following method performance characteristics were obtained: the recovery rate varied from 82% to 129%, and precision, expressed as the relative standard deviation of intermediate precision, varied from 1.6% to 15%. Based on the acceptable performance obtained in the validation study, the multi-analyte method is considered fit for the intended purpose.

  11. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
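
    The core of the approach, extracting a principal direction of parameter change, projecting a validation system onto it, and testing whether the resulting coordinate falls inside the identified interval, can be sketched without a linear-algebra library. An illustrative pure-Python version (power iteration stands in for the paper's singular value decomposition, and a one-dimensional interval test stands in for the full coordinate-distance metric):

```python
import math

def principal_direction(samples, iters=200):
    """Dominant principal component of parameter variation via power
    iteration on the sample covariance matrix."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in samples]
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def coordinate(sample, mean, direction):
    """Coordinate of a system along the identified principal direction."""
    return sum((sample[j] - mean[j]) * direction[j] for j in range(len(sample)))

def validates(sample, samples):
    """Accept the validation system if its coordinate lies inside the interval
    spanned by the identified family of systems."""
    mean, v = principal_direction(samples)
    coords = [coordinate(s, mean, v) for s in samples]
    c = coordinate(sample, mean, v)
    return min(coords) <= c <= max(coords)
```

    In the paper's setting the parameter vectors would come from identified modal parameters of the beam structure; here any small numeric tuples exercise the same logic.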

  12. Validation of an analytical method for quality control and stability studies of 0.025% Ketotiphen eyedrops

    International Nuclear Information System (INIS)

    Troche Concepcion, Yenilen; Romero Diaz, Jacqueline Aylema; Garcia Penna, Caridad M

    2010-01-01

    Ketotiphen eyedrops are prescribed to relieve the signs and symptoms of allergic conjunctivitis due to their potent H1 antihistaminic effect, showing some ability to inhibit the release of histamine and other mediators in cases of mastocytosis. The aim of the present paper was to develop and validate an analytical method based on high-performance liquid chromatography for quality control and stability studies of 0.025% Ketotiphen eyedrops. The method was based on separation of the active principle on a Lichrosorb RP-18 (5 μm) column (250 x 4 mm), with UV detection at 296 nm, using as mobile phase a degassed mixture of methanol:phosphate buffer (75:25; pH 8.5) with 1 mL of isopropanol added per 1 000 mL of the mixture, at a flow rate of 1.2 mL/min. The analytical method was linear, precise, specific and accurate over the concentrations studied.
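
    Two of the characteristics checked in such validations, linearity and accuracy, reduce to short calculations: a least-squares calibration line with its coefficient of determination, and a recovery rate. A generic sketch (the numbers in the usage test are invented, not the paper's data):

```python
def linearity(concentrations, responses):
    """Least-squares calibration line and coefficient of determination (R^2),
    the core of the linearity check in analytical method validation."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(responses) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, responses))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(concentrations, responses))
    ss_tot = sum((y - my) ** 2 for y in responses)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

def recovery(found, added):
    """Recovery rate [%] used in the accuracy check."""
    return 100.0 * found / added
```

    A method is typically accepted as linear when R^2 exceeds a preset threshold (often 0.99) over the working range.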

  13. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  14. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between bond strength and reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement-a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, and later modified by Coronelli [Coronelli, D. 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in the earlier analytical model has been proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are derived from a wide range of published experimental investigations related to bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models provide estimates of the bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with other published analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. The analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are also in agreement with the experimentally observed values.

  15. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, operate at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative trend based multiple-scale validation is therefore proposed. First the SDG model is built and qualitative trends are added to the model. Then complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
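
    Positive inference on an SDG, propagating a qualitative deviation (+1 or -1) from a fault node along signed edges until a fixed point is reached, can be sketched compactly. An illustrative version (the graph structure and node names are invented for the example):

```python
def propagate(sdg, fault, steps=10):
    """Positive inference on a Signed Directed Graph: starting from a fault
    deviation (+1 or -1 on one node), propagate qualitative trends along
    signed edges to produce a testing scenario.
    sdg: list of edges (src, dst, sign) with sign in {+1, -1};
    fault: dict mapping the faulty node(s) to their deviation."""
    state = dict(fault)                      # node -> qualitative deviation
    for _ in range(steps):
        updated = dict(state)
        for src, dst, sign in sdg:
            if src in state and dst not in updated:
                updated[dst] = state[src] * sign
        if updated == state:
            break                            # fixed point reached
        state = updated
    return state
```

    The resulting dictionary of signed deviations is one qualitative testing scenario; enumerating fault nodes yields the complete scenario set the paper compares against simulation outputs.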

  16. Lessons learned in the verification, validation and application of a coupled heat and fluid flow code

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1986-01-01

    A summary is given of the author's recent studies in the verification, validation and application of a coupled heat and fluid flow code. Verification has been done against eight analytic and semi-analytic solutions, including solutions involving thermal buoyancy flow and fracture flow. Comprehensive field validation studies over a period of four years are discussed. The studies are divided into three stages: (1) history matching, (2) double-blind prediction and confirmation, and (3) design optimization. At each stage, parameter sensitivity studies are performed. To study the applications of mathematical models, a problem proposed by the International Energy Agency (IEA) is solved using this verified and validated numerical model as well as two simpler models. One of the simpler models is a semi-analytic method assuming the uncoupling of the heat and fluid flow processes; the other is a graphical method based on a large number of approximations. Variations are added to the basic IEA problem to point out the limits of the range of application of each model. A number of lessons learned from the above investigations are listed and discussed.

  17. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
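
    The closed-world style of checking that SHACL enables can be illustrated with a toy validator: each "shape" declares the allowed fields with a datatype and cardinality bounds, and any undeclared field is itself reported as a violation. This sketch is plain Python over flat records, not real SHACL over RDF graphs, and the field names are invented:

```python
def check_shape(record, shape):
    """Validate a flat record against a 'shape' under the Closed World
    Assumption. shape: field -> (datatype, min_count, max_count);
    record: field -> list of values. Returns a list of violations."""
    errors = []
    for field, (dtype, lo, hi) in shape.items():
        values = record.get(field, [])
        if not lo <= len(values) <= hi:
            errors.append(f"{field}: cardinality {len(values)} not in [{lo},{hi}]")
        for v in values:
            if not isinstance(v, dtype):
                errors.append(f"{field}: {v!r} is not {dtype.__name__}")
    for field in record:
        if field not in shape:
            errors.append(f"{field}: undeclared field (closed-world violation)")
    return errors
```

    The last loop is what distinguishes this from OWL-style open-world reasoning: content the shape does not sanction is an error rather than merely unknown, which is why the paper argues SHACL suits clinical model validation.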

  18. Regional technical cooperation model project, IAEA - RER/2/2004 'Quality control and quality assurance for nuclear analytical techniques'

    International Nuclear Information System (INIS)

    Arikan, P.

    2002-01-01

    An analytical laboratory should produce high quality analytical data through the use of analytical measurements that are accurate, reliable and adequate for the intended purpose. This objective can be accomplished in a cost-effective manner under a planned and documented quality system of activities. It is well known that serious deficiencies can occur in laboratory operations when insufficient attention is given to the quality of the work. Achieving this quality requires not only a thorough knowledge of the laboratory's purpose and operation, but also the dedication of the management and operating staff to standards of excellence. Laboratories employing nuclear and nuclear-related analytical techniques are sometimes confronted with performance problems which prevent them from becoming accepted and respected by clients, such as industry, government and regulatory bodies, and from being eligible for contracts. The International Standard ISO 17025 has been produced as the result of extensive experience in the implementation of ISO/IEC Guide 25:1990 and EN 45001:1989, both of which it now replaces. It contains all of the requirements that testing and calibration laboratories must meet if they wish to demonstrate that they operate a quality system that is technically competent and are able to generate technically valid results. The use of ISO 17025 should facilitate cooperation between laboratories and other bodies to assist in the exchange of information and experience, and in the harmonization of standards and procedures. IAEA model project RER/2/004 entitled 'Quality Assurance/Quality Control in Nuclear Analytical Techniques' was initiated in 1999 as a Regional TC project in East European countries to assist Member State laboratories in the region to install a complete quality system according to the ISO/IEC 17025 standard. 12 laboratories from 11 countries plus the Agency's Laboratories in Seibersdorf have been selected as participants to undergo exercises and training with the

  19. Analytical model of the optical vortex microscope.

    Science.gov (United States)

    Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2016-04-20

    This paper presents an analytical model of the optical vortex scanning microscope. In this microscope the Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution of the whole path of the beam in the system (within paraxial approximation)-from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step, the expression for light complex amplitude has the same form with only four coefficients modified. We also derive a simple expression for the vortex trajectory of small vortex displacements.

  20. An analytical model of flagellate hydrodynamics

    DEFF Research Database (Denmark)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders Peter

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left–right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine the radius and pitch of the helical swimming trajectory.

  1. Exact analytical modeling of magnetic vector potential in surface inset permanent magnet DC machines considering magnet segmentation

    Science.gov (United States)

    Jabbari, Ali

    2018-01-01

    Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique to mitigate pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations as well as Maxwell's equations in polar coordinates using the sub-domain method. One of the main contributions of the paper is to derive an expression for the magnetic vector potential in the segmented PM region using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented magnet motors under open-circuit and on-load conditions. The results of these models are validated by the finite element method.

  2. Heat transfer analytical models for the rapid determination of cooling time in crystalline thermoplastic injection molding and experimental validation

    Science.gov (United States)

    Didier, Delaunay; Baptiste, Pignon; Nicolas, Boyard; Vincent, Sobotka

    2018-05-01

    Heat transfer during the cooling of an injected thermoplastic part directly affects the solidification of the polymer and consequently the quality of the part in terms of mechanical properties, geometric tolerance and surface aspect. This paper proposes to mold designers a methodology based on analytical models to quickly provide the time to reach the ejection temperature as a function of the temperature and position of the cooling channels. The resulting cooling time is the first step of the thermal design of the mold. The presented methodology is dedicated to the determination of the solidification time of a semi-crystalline polymer slab. It allows the calculation of the crystallization time of the part and is based on the analytical solution of the Stefan problem in a semi-infinite medium. Crystallization is treated as a phase change with an effective crystallization temperature, which is obtained from Fast Scanning Calorimetry (FSC) results. The crystallization time is then corrected to take the finite thickness of the part into account. To check the accuracy of this approach, the solidification time is also calculated by solving the heat conduction equation coupled to the crystallization kinetics of the polymer. The impact of the nature of the contact between the polymer and the mold is evaluated: the thermal contact resistance (TCR) appears as a significant parameter that needs to be taken into account in the cooling time calculation. The results of the simplified model, including or not the TCR, are compared in the case of a polypropylene (PP) with experiments carried out with an instrumented mold. The methodology is then applied to a part made of PolyEtherEtherKetone (PEEK).
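
    The usual starting point for such cooling-time methodologies is the first Fourier mode of the solution for a slab cooled from both faces. A sketch of that textbook estimate (this is the generic injection-moulding formula, not the paper's Stefan-problem model with crystallization kinetics and TCR; the property values in the test are rough assumptions for a PP part):

```python
import math

def cooling_time(thickness, alpha, t_melt, t_mold, t_eject):
    """First Fourier-mode estimate of the time for the centreline of a slab,
    injected at t_melt into a mold at t_mold, to reach t_eject:
        t = s^2 / (pi^2 * alpha) * ln( (4/pi) * (t_melt - t_mold) / (t_eject - t_mold) )
    thickness s [m], thermal diffusivity alpha [m^2/s], temperatures [deg C].
    Assumes perfect mold contact, i.e. no thermal contact resistance."""
    theta = (4.0 / math.pi) * (t_melt - t_mold) / (t_eject - t_mold)
    return thickness ** 2 / (math.pi ** 2 * alpha) * math.log(theta)
```

    With rough PP values (2 mm wall, alpha = 8e-8 m^2/s, melt 230 C, mold 40 C, ejection 100 C) this predicts a cooling time on the order of seconds; the paper's corrections for crystallization and contact resistance shift this estimate.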

  3. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of the low probability (low concentration) of radionuclide transport extrapolated thousands of years into the future. Thus the models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation in regard to radionuclide transport. 5 refs

  4. Approximate Analytic and Numerical Solutions to Lane-Emden Equation via Fuzzy Modeling Method

    Directory of Open Access Journals (Sweden)

    De-Gang Wang

    2012-01-01

    Full Text Available A novel algorithm, called the variable weight fuzzy marginal linearization (VWFML) method, is proposed. This method can supply approximate analytic and numerical solutions to Lane-Emden equations, and it is easy to implement and to extend for solving other nonlinear differential equations. Numerical examples are included to demonstrate the validity and applicability of the developed technique.
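
    A numerical baseline for such methods is direct integration of the Lane-Emden equation, which for polytropic index n = 1 has the exact solution theta(xi) = sin(xi)/xi to validate against. A self-contained Runge-Kutta sketch (a standard reference integration, not the VWFML method itself):

```python
import math

def lane_emden(n, xi_max=3.0, h=1e-3):
    """Integrate the Lane-Emden equation
        theta'' + (2/xi) theta' + theta^n = 0,  theta(0) = 1, theta'(0) = 0,
    with classical 4th-order Runge-Kutta and return theta(xi_max).
    Integration starts slightly off xi = 0 using the series theta ~ 1 - xi^2/6
    to avoid the singular 2/xi term; max(theta, 0) guards non-integer n."""
    def f(xi, y):
        theta, dtheta = y
        return (dtheta, -2.0 / xi * dtheta - max(theta, 0.0) ** n)

    xi = 1e-6
    y = (1.0 - xi * xi / 6.0, -xi / 3.0)   # series values of theta, theta'
    while xi < xi_max:
        k1 = f(xi, y)
        k2 = f(xi + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = f(xi + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = f(xi + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        xi += h
    return y[0]
```

    The n = 5 case also has a closed form, theta = (1 + xi^2/3)^(-1/2), giving a second easy accuracy check.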

  5. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  6. Analytic solution of the Starobinsky model for inflation

    Energy Technology Data Exchange (ETDEWEB)

    Paliathanasis, Andronikos [Universidad Austral de Chile, Instituto de Ciencias Fisicas y Matematicas, Valdivia (Chile); Durban University of Technology, Institute of Systems Science, Durban (South Africa)

    2017-07-15

    We prove that the field equations of the Starobinsky model for inflation in a Friedmann-Lemaître-Robertson-Walker metric constitute an integrable system. The analytical solution in terms of a Painlevé series for the Starobinsky model is presented for the cases of zero and nonzero spatial curvature. In both cases the leading-order term describes the radiation era provided by the corresponding higher-order theory. (orig.)

  7. Experimental evaluation of analytical penumbra calculation model for wobbled beams

    International Nuclear Information System (INIS)

    Kohno, Ryosuke; Kanematsu, Nobuyuki; Yusa, Ken; Kanai, Tatsuaki

    2004-01-01

    The goal of radiotherapy is not only to apply a high radiation dose to a tumor, but also to avoid side effects in the surrounding healthy tissue. Therefore, it is important for carbon-ion treatment planning to calculate accurately the effects of the lateral penumbra. In this article, for wobbled beams under various irradiation conditions, we focus on the lateral penumbras at several aperture positions of one side leaf of the multileaf collimator. The penumbras predicted by an analytical penumbra calculation model were compared with the measured results. The results calculated by the model for various conditions agreed well with the experimental ones. In conclusion, we found that the analytical penumbra calculation model could predict accurately the measured results for wobbled beams and it was useful for carbon-ion treatment planning to apply the model

  8. Evaluating Modeling Sessions Using the Analytic Hierarchy Process

    NARCIS (Netherlands)

    Ssebuggwawo, D.; Hoppenbrouwers, S.J.B.A.; Proper, H.A.; Persson, A.; Stirna, J.

    2008-01-01

    In this paper, which is methodological in nature, we propose to use an established method from the field of Operations Research, the Analytic Hierarchy Process (AHP), in the integrated, stakeholder-oriented evaluation of enterprise modeling sessions: their language, process, tool (medium), and
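
    The AHP computation underlying such an evaluation is short: derive priority weights from a pairwise-comparison matrix and check the consistency ratio before trusting the judgements. A sketch using the geometric-mean approximation (the principal-eigenvector method is the classical alternative; the example matrix in the test is invented):

```python
import math

def ahp_priorities(matrix):
    """Priority weights from an n x n pairwise-comparison matrix via the
    geometric-mean method, plus the consistency ratio CR (Saaty) used to
    accept or reject the judgements (CR < 0.1 is the usual threshold)."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    w = [g / total for g in gmeans]
    # approximate principal eigenvalue for the consistency index
    lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # random index
    cr = ci / ri if ri else 0.0
    return w, cr
```

    For an evaluation of modeling sessions, the matrix entries would encode stakeholders' pairwise preferences among language, process, and tool criteria.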

  9. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines.
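
    The scheduling core of such a platform, topologically ordering a task dependency graph and running each ready "wave" of independent tasks in parallel, can be sketched with the standard library. A thread pool stands in for the paper's Map-Reduce cluster, and the task names echo the pipeline steps listed above:

```python
import concurrent.futures

def run_pipeline(tasks, deps, workers=4):
    """Execute a modeling pipeline expressed as a task dependency graph.
    tasks: name -> callable; deps: name -> set of prerequisite task names.
    Tasks whose prerequisites are all done run concurrently; a cycle in
    the graph is detected and rejected. Returns the completion order."""
    done, order = set(), []
    pending = dict(deps)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        while pending:
            ready = [t for t, d in pending.items() if d <= done]
            if not ready:
                raise ValueError("cycle in task graph")
            futures = {pool.submit(tasks[t]): t for t in ready}
            for fut in concurrent.futures.as_completed(futures):
                fut.result()                  # re-raise task exceptions
                done.add(futures[fut])
                order.append(futures[fut])
            for t in ready:
                del pending[t]
    return order
```

    In PARAMO the per-task work (cohort construction, feature construction, and so on) dominates, so running independent branches of the graph concurrently is what yields the reported hours-versus-days speedup.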

  10. Collaborative data analytics for smart buildings: opportunities and models

    DEFF Research Database (Denmark)

    Lazarova-Molnar, Sanja; Mohamed, Nader

    2018-01-01

    Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models. This paper discusses the opportunities of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such a collaborative framework for smart buildings. The results show that significant energy savings can be achieved for smart buildings using collaborative data analytics.

  11. Analytic model of heat deposition in spallation neutron target

    International Nuclear Information System (INIS)

    Findlay, D.J.S.

    2015-01-01

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  12. Analytic model of heat deposition in spallation neutron target

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, D.J.S.

    2015-12-11

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  13. Assessment of Analytic Water hammer Pressure Model of FAI/08-70

    International Nuclear Information System (INIS)

    Park, Ju Yeop; Yoo, Seung Hun; Seul, Kwang-Won

    2016-01-01

    In evaluating water hammer effects on safety-related systems, methods developed by US utilities are likely to be adopted in Korea. For example, FAI/08-70 provides specific methods to evaluate the pressure and loading transients on piping due to water hammer. The methods of FAI/08-70 would be applied in Korea whenever a regulatory request arises on the evaluation of water hammer effects due to non-condensable gas accumulation in safety-related systems. Specifically, FAI/08-70 gives an analytic model which can be used to analyze the maximum transient pressure and maximum transient loading on the piping of safety-related systems due to the non-condensable-gas-induced water hammer effect. It therefore seems meaningful to review the FAI/08-70 methods and apply them to a specific case, to see if they give reasonable estimates, before they are applied to domestic nuclear power plants. In the present study, the analytic water hammer pressure model of FAI/08-70 is reviewed in detail and applied to a specific experiment from FAI/08-70 to see if the model gives a reasonable estimate of the peak water hammer pressure. Specifically, we assess experiment 52A of FAI/08-70, which adopts a flushed initial condition with a short rising pipe length and a high-level pipe length of 51 inches. The calculated analytic water hammer pressure peak shows close agreement with the measured experimental data of 52A. However, the theoretical value is slightly less than the experimental value, which implies that the analytic model of FAI/08-70 is not conservative

  14. Assessment of Analytic Water hammer Pressure Model of FAI/08-70

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ju Yeop; Yoo, Seung Hun; Seul, Kwang-Won [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In evaluating water hammer effects on safety-related systems, methods developed by US utilities are likely to be adopted in Korea. For example, FAI/08-70 provides specific methods to evaluate the pressure and loading transients on piping due to water hammer. The methods of FAI/08-70 would be applied in Korea whenever a regulatory request arises on the evaluation of water hammer effects due to non-condensable gas accumulation in safety-related systems. Specifically, FAI/08-70 gives an analytic model which can be used to analyze the maximum transient pressure and maximum transient loading on the piping of safety-related systems due to the non-condensable-gas-induced water hammer effect. It therefore seems meaningful to review the FAI/08-70 methods and apply them to a specific case, to see if they give reasonable estimates, before they are applied to domestic nuclear power plants. In the present study, the analytic water hammer pressure model of FAI/08-70 is reviewed in detail and applied to a specific experiment from FAI/08-70 to see if the model gives a reasonable estimate of the peak water hammer pressure. Specifically, we assess experiment 52A of FAI/08-70, which adopts a flushed initial condition with a short rising pipe length and a high-level pipe length of 51 inches. The calculated analytic water hammer pressure peak shows close agreement with the measured experimental data of 52A. However, the theoretical value is slightly less than the experimental value, which implies that the analytic model of FAI/08-70 is not conservative.
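
The abstract does not reproduce the FAI/08-70 pressure model itself. For orientation only, the classical Joukowsky relation (not the FAI/08-70 model) gives the standard first estimate of the pressure spike when a moving liquid column is suddenly arrested; the fluid properties and velocity below are hypothetical.

```python
def joukowsky_peak(rho, c, dv):
    """Classical Joukowsky estimate of the water-hammer pressure rise,
    dP = rho * c * dv, valid for instantaneous closure in a rigid pipe.
    rho: fluid density [kg/m^3], c: pressure-wave speed [m/s],
    dv: arrested flow velocity [m/s]. Returns dP in Pa."""
    return rho * c * dv

# Hypothetical case: water at ~20 C (rho = 998 kg/m^3), wave speed
# c ~ 1200 m/s, and an arrested column velocity of 2 m/s.
dp = joukowsky_peak(998.0, 1200.0, 2.0)
print(f"{dp/1e5:.1f} bar")   # → 24.0 bar
```

Even this crude estimate shows why water hammer in gas-voided safety lines is a regulatory concern: a modest 2 m/s column produces a spike of roughly 24 bar.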

  15. Maritime Analytics Prototype: Phase 3 Validation

    Science.gov (United States)

    2014-01-01

    …different, so we need a flexible analysis set hierarchy encoded as directories or groups, like a recipe [C.3.1.4n]. Improve the GUI: provide more… Problems zooming and panning on the timeline [C.1.2.1c, C.1.2.4e, C.1.3.1c, C.1.1.4c, C.1.1.4b]. Selected the wrong year and then the vessel… [Scholtz_VAMetrics_2006.pdf] [21] J. Thomas and K. Cook, Illuminating the Path: The Research and Development Agenda for Visual Analytics, IEEE, 2005. [22

  16. Validation of the kinetic model for predicting the composition of chlorinated water discharged from power plant cooling systems

    International Nuclear Information System (INIS)

    Lietzke, M.H.

    1977-01-01

    The purpose of this report is to present a validation of a previously described kinetic model which was developed to predict the composition of chlorinated fresh water discharged from power plant cooling systems. The model was programmed in two versions: as a stand-alone program and as a part of a unified transport model developed from consistent mathematical models to simulate the dispersion of heated water and radioisotopic and chemical effluents from power plant discharges. The results of testing the model using analytical data taken during operation of the once-through cooling system of the Quad Cities Nuclear Station are described. Calculations are also presented on the Three Mile Island Nuclear Station which uses cooling towers

  17. Validation of an analytical method for determining halothane in urine as an instrument for evaluating occupational exposure

    International Nuclear Information System (INIS)

    Gonzalez Chamorro, Rita Maria; Jaime Novas, Arelis; Diaz Padron, Heliodora

    2010-01-01

    Occupational exposure to harmful substances may cause significant changes in the normal physiology of the organism when adequate safety measures are not taken in time in a workplace where the risk is present. Among the chemical risks that may affect workers' health are the inhalable anesthetic agents. With the objective of taking the first steps toward introducing an epidemiological surveillance system for this personnel, an analytical method for determining this anesthetic in urine was validated under the instrumental conditions available in our laboratory. To carry out this validation the following parameters were taken into account: specificity, linearity, precision, accuracy, detection limit and quantification limit; the uncertainty of the method was also calculated. In the validation procedure it was found that the technique is specific and precise, with a detection limit of 0.118 μg/L and a quantification limit of 0.354 μg/L. The global uncertainty was 0.243, and the expanded uncertainty 0.486. The validated method, together with the subsequent introduction of biological exposure limits, will serve as an auxiliary means of diagnosis allowing periodic monitoring of personnel exposure

  18. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies below 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  19. Volumetric LiDAR scanning of a wind turbine wake and comparison with a 3D analytical wake model

    Science.gov (United States)

    Carbajo Fuertes, Fernando; Porté-Agel, Fernando

    2016-04-01

    A correct estimation of the future power production is of capital importance whenever the feasibility of a future wind farm is being studied. This power estimation relies mostly on three aspects: (1) a reliable measurement of the wind resource in the area, (2) a well-established power curve of the future wind turbines and, (3) an accurate characterization of the wake effects; the latter being arguably the most challenging one due to the complexity of the phenomenon and the lack of extensive full-scale data sets that could be used to validate analytical or numerical models. The current project addresses the problem of obtaining a volumetric description of a full-scale wake of a 2 MW wind turbine in terms of velocity deficit and turbulence intensity using three scanning wind LiDARs and two sonic anemometers. The characterization of the upstream flow conditions is done by one scanning LiDAR and two sonic anemometers, which have been used to calculate incoming vertical profiles of horizontal wind speed, wind direction and an approximation to turbulence intensity, as well as the thermal stability of the atmospheric boundary layer. The characterization of the wake is done by two scanning LiDARs working simultaneously and pointing downstream from the base of the wind turbine. The direct LiDAR measurements in terms of radial wind speed can be corrected using the upstream conditions in order to provide good estimations of the horizontal wind speed at any point downstream of the wind turbine. All these data combined allow for the volumetric reconstruction of the wake in terms of velocity deficit as well as turbulence intensity. Finally, the predictions of a 3D analytical model [1] are compared to the 3D LiDAR measurements of the wind turbine wake. The model is derived by applying the laws of conservation of mass and momentum and assuming a Gaussian distribution for the velocity deficit in the wake. This model has already been validated using high-resolution wind-tunnel measurements
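
The 3D analytical model compared against here, derived from mass and momentum conservation with a Gaussian velocity-deficit profile, appears to be the Bastankhah and Porté-Agel formulation; the sketch below assumes that form, with the wake growth rate kstar and the example rotor parameters chosen arbitrarily for illustration.

```python
import math

def gaussian_wake_deficit(x, r, d0, ct, kstar=0.03):
    """Normalized velocity deficit dU/U_inf of a Gaussian wake model
    (Bastankhah & Porte-Agel form, assumed here). x: downstream
    distance [m], r: radial distance from wake centre [m],
    d0: rotor diameter [m], ct: thrust coefficient,
    kstar: wake growth rate (an assumed value, site-dependent)."""
    beta = 0.5 * (1.0 + math.sqrt(1.0 - ct)) / math.sqrt(1.0 - ct)
    eps = 0.2 * math.sqrt(beta)                 # initial wake width
    sigma_d = kstar * x / d0 + eps              # sigma / d0
    peak = 1.0 - math.sqrt(1.0 - ct / (8.0 * sigma_d**2))
    return peak * math.exp(-(r / d0)**2 / (2.0 * sigma_d**2))

# Hypothetical turbine: 80 m rotor, C_T = 0.8, centreline deficit
# five diameters downstream.
print(round(gaussian_wake_deficit(x=5 * 80.0, r=0.0, d0=80.0, ct=0.8), 3))
```

The Gaussian radial profile is what makes the volumetric LiDAR comparison natural: the model predicts the full deficit field, not just a hub-height average.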

  20. Analytical validation of the PAM50-based Prosigna Breast Cancer Prognostic Gene Signature Assay and nCounter Analysis System using formalin-fixed paraffin-embedded breast tumor specimens

    International Nuclear Information System (INIS)

    Nielsen, Torsten; Storhoff, James; Wallden, Brett; Schaper, Carl; Ferree, Sean; Liu, Shuzhen; Gao, Dongxia; Barry, Garrett; Dowidar, Naeem; Maysuria, Malini

    2014-01-01

    NanoString’s Prosigna™ Breast Cancer Prognostic Gene Signature Assay is based on the PAM50 gene expression signature. The test outputs a risk of recurrence (ROR) score, risk category, and intrinsic subtype (Luminal A/B, HER2-enriched, Basal-like). The studies described here were designed to validate the analytical performance of the test on the nCounter Analysis System across multiple laboratories. Analytical precision was measured by testing five breast tumor RNA samples across 3 sites. Reproducibility was measured by testing replicate tissue sections from 43 FFPE breast tumor blocks across 3 sites following independent pathology review at each site. The RNA input range was validated by comparing assay results at the extremes of the specified range to the nominal RNA input level. Interference was evaluated by including non-tumor tissue into the test. The measured standard deviation (SD) was less than 1 ROR unit within the analytical precision study and the measured total SD was 2.9 ROR units within the reproducibility study. The ROR scores for RNA inputs at the extremes of the range were the same as those at the nominal input level. Assay results were stable in the presence of moderate amounts of surrounding non-tumor tissue (<70% by area). The analytical performance of NanoString’s Prosigna assay has been validated using FFPE breast tumor specimens across multiple clinical testing laboratories

  1. Toward an in-situ analytics and diagnostics framework for earth system models

    Science.gov (United States)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    …atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations could be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP) and integrate these to allow for extreme-performance IO, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and seamlessly move from post-processing the output for validation and verification purposes to performing these calculations in situ. We can easily and knowledgeably envision a near-term future where earth system models like ACME and CESM will have to address not only the challenges of the volume of data but also need to consider the velocity of the data. Earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.

  2. Optimization of instrumental neutron activation analysis method by means of the 2^k experimental design technique aiming at the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
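
The main-effect computation underlying a 2^k full factorial design can be sketched in a few lines. The response surface and the factor assignments below are hypothetical, standing in for decay time, counting time and sample-to-detector distance; the code is a generic illustration of the technique, not the study's analysis.

```python
from itertools import product

def factorial_effects(response):
    """Main effects of a 2^k full factorial design.
    `response` maps each factor-setting tuple (entries +1/-1) to the
    measured value. Effect_i = mean(high level i) - mean(low level i)."""
    k = len(next(iter(response)))
    n = len(response)
    effects = []
    for i in range(k):
        hi = sum(y for x, y in response.items() if x[i] == +1)
        lo = sum(y for x, y in response.items() if x[i] == -1)
        effects.append((hi - lo) / (n / 2))
    return effects

# Hypothetical 2^3 design: factors 0, 1, 2 stand for decay time,
# counting time and detector distance; the response is synthetic.
levels = list(product([-1, +1], repeat=3))
y = {x: 10 + 2 * x[0] - 1 * x[1] + 0.5 * x[0] * x[2] for x in levels}
print(factorial_effects(y))   # → [4.0, -2.0, 0.0]
```

Factor 2 shows a zero main effect even though it matters through its interaction with factor 0, which is exactly why the abstract evaluates interaction effects alongside main effects.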

  3. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  4. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available: three processes for transforming the information between the entities: reality/problem entity, conceptual model, and computerized model; model validation, model verification, and model qualification; computer implementation; analysis and modelling; simulation and experimentation. "Substantiation that a…" [4] J.C. Refsgaard, Modelling Guidelines: terminology and guiding principles, Advances in Water Resources, Vol. 27, No. 1, January 2004, pp. 71-82, Elsevier. [5] N. Oreskes et al., Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences, Science, Vol. 263, Number…

  5. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    Science.gov (United States)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
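
Method 2's use of Excel's normal-distribution machinery has a direct counterpart outside Excel. The sketch below, a simplification rather than the paper's exact formula, computes the fraction of reference individuals falling outside central 95% reference limits when the assay adds a normalized bias and extra imprecision (both expressed in multiples of the reference-population SD); the ±1.96 limits and example values are assumptions.

```python
from statistics import NormalDist

def fraction_outside(bias, imprecision, z=1.96):
    """Fraction of results outside the central 95% reference limits
    (at +/- z reference-population SDs) when the measured distribution
    is shifted by `bias` and widened by analytical `imprecision`,
    both normalized to the reference-population SD."""
    total_sd = (1.0 + imprecision**2) ** 0.5
    d = NormalDist(mu=bias, sigma=total_sd)
    return d.cdf(-z) + (1.0 - d.cdf(z))

# Perfect assay: exactly the nominal 5% outside the limits.
print(round(fraction_outside(0.0, 0.0), 3))   # → 0.05
# Bias and imprecision both inflate the fraction outside.
print(fraction_outside(0.25, 0.15) > fraction_outside(0.0, 0.0))
```

An analytical performance specification then amounts to the set of (bias, imprecision) pairs keeping this fraction at the level implied by the IFCC's 120-individual recommendation.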

  6. Validation of an analytical method for the determination of spiramycin, virginiamycin and tylosin in feeding-stuffs by thin-layer chromatography and bio-autography

    NARCIS (Netherlands)

    Vincent, U.; Gizzi, G.; Holst, von C.; Jong, de J.; Michard, J.

    2007-01-01

    An inter-laboratory validation was carried out to determine the performance characteristics of an analytical method based on thin-layer chromatography (TLC) coupled to microbiological detection (bio-autography) for screening feed samples for the presence of spiramycin, tylosin and virginiamycin.

  7. Calculations for Adjusting Endogenous Biomarker Levels During Analytical Recovery Assessments for Ligand-Binding Assay Bioanalytical Method Validation.

    Science.gov (United States)

    Marcelletti, John F; Evans, Cindy L; Saxena, Manju; Lopez, Adriana E

    2015-07-01

    It is often necessary to adjust for detectable endogenous biomarker levels in spiked validation samples (VS) and in selectivity determinations during bioanalytical method validation for ligand-binding assays (LBA) with a matrix like normal human serum (NHS). Described herein are case studies of biomarker analyses using multiplex LBA which highlight the challenges associated with such adjustments when calculating percent analytical recovery (%AR). The LBA test methods were the Meso Scale Discovery V-PLEX® proinflammatory and cytokine panels with NHS as test matrix. The NHS matrix blank exhibited varied endogenous content of the 20 individual cytokines before spiking, ranging from undetectable to readily quantifiable. Addition and subtraction methods for adjusting endogenous cytokine levels in %AR calculations are both used in the bioanalytical field. The two methods were compared in %AR calculations following spiking and analysis of VS for cytokines having detectable endogenous levels in NHS. Calculations for %AR obtained by subtracting quantifiable endogenous biomarker concentrations from the respective total analytical VS values yielded reproducible and credible conclusions. The addition method, in contrast, yielded %AR conclusions that were frequently unreliable and discordant with values obtained with the subtraction adjustment method. It is shown that subtraction of assay signal attributable to matrix is a feasible alternative when endogenous biomarkers levels are below the limit of quantitation, but above the limit of detection. These analyses confirm that the subtraction method is preferable over that using addition to adjust for detectable endogenous biomarker levels when calculating %AR for biomarker LBA.
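
The two adjustment methods can be illustrated with a hypothetical spiked validation sample; the numbers are invented, and the formulas reflect the common definitions of the subtraction and addition adjustments rather than necessarily the study's exact calculations.

```python
def percent_ar_subtraction(total_measured, endogenous, spiked):
    """%AR with the subtraction adjustment: the quantifiable endogenous
    biomarker level is subtracted from the measured total before
    dividing by the nominal spike concentration."""
    return 100.0 * (total_measured - endogenous) / spiked

def percent_ar_addition(total_measured, endogenous, spiked):
    """%AR with the addition adjustment: the endogenous level is added
    to the nominal spike in the denominator instead."""
    return 100.0 * total_measured / (spiked + endogenous)

# Hypothetical cytokine VS: 12 pg/mL endogenous in the matrix blank,
# 100 pg/mL spiked, 110 pg/mL measured total.
print(percent_ar_subtraction(110.0, 12.0, 100.0))   # → 98.0
print(round(percent_ar_addition(110.0, 12.0, 100.0), 1))
```

With a small endogenous level the two agree closely, but as the endogenous fraction grows the addition method's denominator inflates, which is consistent with the abstract's finding that it yields unreliable %AR conclusions.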

  8. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  9. An analytical model for interactive failures

    International Nuclear Information System (INIS)

    Sun Yong; Ma Lin; Mathew, Joseph; Zhang Sheng

    2006-01-01

    In some systems, failures of certain components can interact with each other and accelerate the failure rates of those components. Such failures are defined as interactive failures. Interactive failure is a prevalent cause of failure in complex systems, particularly mechanical systems. The failure risk of an asset will be underestimated if the interactive effect is ignored, so interactive failures need to be considered when failure risk is assessed. However, the literature is silent on previous research work in this field. This paper introduces the concept of interactive failure, develops an analytical model to analyse this type of failure quantitatively, and verifies the model using case studies and experiments

  10. An Analytic Approach to Developing Transport Threshold Models of Neoclassical Tearing Modes in Tokamaks

    International Nuclear Information System (INIS)

    Mikhailovskii, A.B.; Shirokov, M.S.; Konovalov, S.V.; Tsypin, V.S.

    2005-01-01

    Transport threshold models of neoclassical tearing modes in tokamaks are investigated analytically. An analysis is made of the competition between strong transverse heat transport, on the one hand, and longitudinal heat transport, longitudinal heat convection, longitudinal inertial transport, and rotational transport, on the other hand, which leads to the establishment of the perturbed temperature profile in magnetic islands. It is shown that, in all these cases, the temperature profile can be found analytically by using rigorous solutions to the heat conduction equation in the near and far regions of a chain of magnetic islands and then by matching these solutions. Analytic expressions for the temperature profile are used to calculate the contribution of the bootstrap current to the generalized Rutherford equation for the island width evolution with the aim of constructing particular transport threshold models of neoclassical tearing modes. Four transport threshold models, differing in the underlying competing mechanisms, are analyzed: collisional, convective, inertial, and rotational models. The collisional model constructed analytically is shown to coincide exactly with that calculated numerically; the reason is that the analytical temperature profile turns out to be the same as the numerical profile. The results obtained can be useful in developing the next generation of general threshold models. The first steps toward such models have already been made

  11. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized to solve a relativistic average atom model for high-temperature plasmas. A semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function of the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  12. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    …mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were then used to derive the elastic parameters of the pavement layers that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile…

  13. An analytical excitation model for an ionizing plasma

    NARCIS (Netherlands)

    Mullen, van der J.J.A.M.; Sijde, van der B.; Schram, D.C.

    1983-01-01

    From an analytical model for the population of high-lying excited levels in ionizing plasmas it appears that the distribution is a superposition of the equilibrium (Saha) value and an overpopulation. This overpopulation takes the form of a Maxwell distribution for free electrons. Experiments for He

  14. A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.

    Science.gov (United States)

    Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher

    2017-08-01

    The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
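
The Markov cohort simulations these four programs are compared on reduce to repeatedly multiplying a state-occupancy vector by a transition-probability matrix. A minimal Python sketch with a hypothetical three-state model (the states, probabilities, and cycle count are invented for illustration):

```python
def run_cohort(transition, start, cycles):
    """Markov cohort simulation: propagate state occupancy through a
    transition-probability matrix for a fixed number of cycles and
    return the full trace (one occupancy vector per cycle)."""
    states = list(start)
    trace = [states]
    n = len(states)
    for _ in range(cycles):
        states = [sum(states[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
        trace.append(states)
    return trace

# Hypothetical 3-state model: Well, Ill, Dead (absorbing).
P = [[0.85, 0.10, 0.05],
     [0.00, 0.80, 0.20],
     [0.00, 0.00, 1.00]]
trace = run_cohort(P, [1.0, 0.0, 0.0], 2)
print([round(s, 4) for s in trace[-1]])   # occupancy after two cycles
```

The same few lines look very different across the four programs (worksheet formulas in Excel, a tree rollback in TreeAge Pro, matrix operations in R or MATLAB), which is precisely what drives the processing-speed and transparency differences the paper reports.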

  15. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    Science.gov (United States)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine-meander-based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of the MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Mejis-Fokkema capacitance model, simultaneously accounting for the nonlinear electrostatic force, the fringing field effect due to beam thickness, and the etched holes in the beam; the model is then validated against simulated results from the benchmark full-3D FEM solver CoventorWare over a wide range of structural parameter variations, showing good agreement. Secondly, an optimization method is presented to determine the optimum configuration of the switch for achieving minimum pull-in voltage, using the proposed analytical model as the objective function. High-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with low computational cost and complexity. Comparing the applied algorithms, the Dragonfly Algorithm is found to be the most suitable in terms of minimum pull-in voltage and convergence speed. Optimized values are validated against the simulated results of CoventorWare, showing very satisfactory agreement, with a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square perforation holes in a given beam area of an RF MEMS switch. The algorithm dynamically accommodates all the square holes within the given beam area so that the maximum space is utilized. This automated arrangement of perforation holes will further reduce the computational complexity and improve the design accuracy of the complex perforated MEMS switch design.
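
The paper's pull-in model (the modified Mejis-Fokkema capacitance with fringing and perforation corrections) is not given in the abstract. For context, the textbook lumped parallel-plate estimate that such models refine is V_pi = sqrt(8 k g0^3 / (27 eps0 A)); the switch parameters below are hypothetical.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Classical parallel-plate pull-in voltage of a lumped
    spring-plate electrostatic actuator:
    V_pi = sqrt(8 k g0^3 / (27 eps0 A)).
    This is the textbook estimate, not the paper's modified
    Mejis-Fokkema model. k: spring constant [N/m], g0: initial gap [m],
    area: plate overlap area [m^2]."""
    return math.sqrt(8.0 * k * g0**3 / (27.0 * EPS0 * area))

# Hypothetical switch: k = 10 N/m, 3 um gap, 100 um x 100 um plate.
v = pull_in_voltage(10.0, 3e-6, 100e-6 * 100e-6)
print(f"{v:.1f} V")   # ~30 V
```

The formula makes the design levers explicit: perforation holes reduce the effective plate area and stiffness, and a serpentine meander lowers k, which is how the optimized geometry drives the pull-in voltage down.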

  16. Analytical heat transfer modeling of a new radiation calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Obame Ndong, Elysée [Department of Industrial Engineering and Maintenance, University of Sciences and Technology of Masuku (USTM), BP 941 Franceville (Gabon); Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Gallot-Lavallée, Olivier [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Aitken, Frédéric, E-mail: frederic.aitken@g2elab.grenoble-inp.fr [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France)

    2016-06-10

Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensitivity of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with an analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is to evaluate the feasibility and performance of the calorimeter by assessing the measurement of the power losses of some electrical devices by radiation, and the influence of the geometry and materials. Finally, the theoretical sensitivity of the new apparatus is estimated at ±1 mW. On the basis of these results, the calorimeter has been successfully implemented and patented.

  18. Bubbles in inkjet printheads: analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  20. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

James Ambrosek

    2008-12-01

The Tritium Migration Analysis Program, Version 7 (TMAP7) is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program, and of the intermediate version TMAP2000. It has undergone several revisions; the current one adds radioactive decay, multiple-trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to use, the code needed to be verified and validated to ensure that it performs as intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward, or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.

  1. Analytically based forward and inverse models of fluvial landscape evolution during temporally continuous climatic and tectonic variations

    Science.gov (United States)

    Goren, Liran; Petit, Carole

    2017-04-01

    Fluvial channels respond to changing tectonic and climatic conditions by adjusting their patterns of erosion and relief. It is therefore expected that by examining these patterns, we can infer the tectonic and climatic conditions that shaped the channels. However, the potential interference between climatic and tectonic signals complicates this inference. Within the framework of the stream power model that describes incision rate of mountainous bedrock rivers, climate variability has two effects: it influences the erosive power of the river, causing local slope change, and it changes the fluvial response time that controls the rate at which tectonically and climatically induced slope breaks are communicated upstream. Because of this dual role, the fluvial response time during continuous climate change has so far been elusive, which hinders our understanding of environmental signal propagation and preservation in the fluvial topography. An analytic solution of the stream power model during general tectonic and climatic histories gives rise to a new definition of the fluvial response time. The analytic solution offers accurate predictions for landscape evolution that are hard to achieve with classical numerical schemes and thus can be used to validate and evaluate the accuracy of numerical landscape evolution models. The analytic solution together with the new definition of the fluvial response time allow inferring either the tectonic history or the climatic history from river long profiles by using simple linear inversion schemes. Analytic study of landscape evolution during periodic climate change reveals that high frequency (10-100 kyr) climatic oscillations with respect to the response time, such as Milankovitch cycles, are not expected to leave significant fingerprints in the upstream reaches of fluvial channels. 
Linear inversion schemes are applied to the Tinee river tributaries in the southern French Alps, where tributary long profiles are used to recover the
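The stream power model underlying this analysis can be integrated numerically for illustration. A minimal 1D forward sketch, with entirely assumed parameter values (uplift U, erodibility K, and a crude Hack-style drainage-area proxy) rather than the paper's analytic solution:

```python
import numpy as np

def evolve_profile(z, x, area, U, K, m=0.5, n=1.0, dt=100.0, steps=2000):
    """Explicit forward integration of the stream power law
    dz/dt = U - K * A^m * S^n along a 1D river profile.
    Baselevel is fixed at x[0]; all parameter values are illustrative."""
    z = z.copy()
    for _ in range(steps):
        S = np.zeros_like(z)
        S[1:] = (z[1:] - z[:-1]) / (x[1:] - x[:-1])  # slope toward baselevel
        S = np.maximum(S, 0.0)
        dz = (U - K * area**m * S**n) * dt
        dz[0] = 0.0  # fixed baselevel node
        z += dz
    return z

x = np.linspace(0.0, 1e4, 101)          # distance upstream, m
area = 1e8 * (1.0 - x / 1.2e4)          # assumed drainage-area proxy, m^2
z0 = np.zeros_like(x)
z = evolve_profile(z0, x, area, U=1e-3, K=1e-5)  # U in m/yr, K in 1/yr units
```

Runs like this are exactly what the paper's analytic solution can be used to validate, since explicit schemes accumulate numerical diffusion at slope breaks.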

  2. Analytical model of internally coupled ears

    DEFF Research Database (Denmark)

    Vossen, Christine; Christensen-Dalsgaard, Jakob; Leo van Hemmen, J

    2010-01-01

    Lizards and many birds possess a specialized hearing mechanism: internally coupled ears where the tympanic membranes connect through a large mouth cavity so that the vibrations of the tympanic membranes influence each other. This coupling enhances the phase differences and creates amplitude...... additionally provides the opportunity to incorporate the effect of the asymmetrically attached columella, which leads to the activation of higher membrane vibration modes. Incorporating this effect, the analytical model can explain measurements taken from the tympanic membrane of a living lizard, for example...

  3. A Proposed Analytical Model for Integrated Pick-and-Sort Systems

    Directory of Open Access Journals (Sweden)

    Recep KIZILASLAN

    2013-11-01

Full Text Available In this study we present an analytical approach for the integration of order picking and sortation operations, which are the most important, labour-intensive and costly activities for warehouses. The main aim is to investigate order picking and sorting efficiencies under different design issues as a function of order wave size. An integrated analytical model is proposed to estimate the optimum order picking and sortation efficiency. The model, which has been tested by simulations with different illustrative examples, calculates the optimum wave size that resolves the trade-off between picking and sorting operations and maximizes order picking and sortation efficiency. Our model also allows the system designer to predict the order picking and sorting capacity for different system configurations. This study presents an innovative approach for integrated warehouse operations.
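The pick/sort trade-off can be illustrated with a toy cost model (entirely assumed, not the paper's): per-order picking time falls with wave size while sorter congestion grows with it, giving an interior optimum wave size.

```python
import math

def throughput(wave, pick_setup=120.0, pick=10.0, sort=8.0, congestion=0.05):
    """Orders per unit time under an assumed per-order time of
    pick_setup/wave + pick + sort + congestion*wave (illustrative only)."""
    return 1.0 / (pick_setup / wave + pick + sort + congestion * wave)

# Brute-force search over integer wave sizes
best_wave = max(range(1, 201), key=throughput)

# Analytic optimum of the convex per-order cost: w* = sqrt(pick_setup/congestion)
w_star = math.sqrt(120.0 / 0.05)
```

The closed-form `w_star` mirrors the role of the paper's model: predicting the wave size that balances the two operations without simulating every configuration.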

  4. Verification and validation of an actuator disc model

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Laan, van der, Paul Maarten; Troldborg, Niels

    2014-01-01

… take any kind of shape discretization, determine the intersectional elements with the computational grid and use the size of these elements to redistribute the forces proportionally. This method can potentially reduce the need for mesh refinement in the region surrounding the rotor and, therefore, also reduce the computational cost of large wind farm wake simulations. The special case of the actuator disc is successfully validated with an analytical solution for heavily loaded turbines and with a full-rotor computation in computational fluid dynamics. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Cake filtration modeling: Analytical cake filtration model and filter medium characterization

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Michael

    2008-05-15

Cake filtration is a unit operation to separate solids from fluids in industrial processes. The build-up of a filter cake is usually accompanied by a decrease in overall permeability across the filter, leading to an increased pressure drop over the filter. For an incompressible filter cake that builds up on a homogeneous filter cloth, a linear pressure drop profile over time is expected at constant fluid volume flow. However, experiments show curved pressure drop profiles, which are attributed to inhomogeneities of the filter (filter medium and/or residual filter cake). In this work, a mathematical filter model is developed to describe the relationship between time and overall permeability. The model considers a filter with an inhomogeneous permeability and accounts for the fluid mechanics by a one-dimensional formulation of Darcy's law and for the cake build-up by solid continuity. The model can be solved analytically in the time domain. The analytic solution allows for the unambiguous inversion of the model to determine the inhomogeneous permeability from the time-resolved overall permeability, e.g. pressure drop measurements. An error estimation of the method is provided by rewriting the model as a convolution transformation. The method is applied to simulated and experimental pressure drop data of gas filters with textile filter cloths, and various practical situations with non-uniform flow are explored. A routine is developed to generate characteristic filter cycles from semi-continuous filter plant operation. The model is modified to investigate the impact of non-uniform dust concentrations. (author). 34 refs., 40 figs., 1 tab
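For the homogeneous, incompressible baseline case the abstract describes, Darcy's law gives a pressure drop that is linear in time at constant flow. A sketch with assumed parameter values (not from the thesis):

```python
def pressure_drop(t, mu=1.8e-5, u=0.05, R_m=1e9, alpha=1e11, c=0.005):
    """Pressure drop (Pa) over a filter medium plus a growing incompressible
    cake at constant face velocity (1D Darcy). All values illustrative:
    mu    : gas viscosity, Pa s
    u     : filtration face velocity, m/s
    R_m   : filter medium resistance, 1/m
    alpha : specific cake resistance, m/kg
    c     : dust concentration in the gas, kg/m^3
    """
    w = c * u * t                    # areal cake mass deposited, kg/m^2
    return mu * u * (R_m + alpha * w)
```

Curvature in measured profiles, i.e. any deviation from this straight line, is what the thesis inverts to recover the inhomogeneous permeability.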

  6. Analytical and numerical models of uranium ignition assisted by hydride formation

    International Nuclear Information System (INIS)

    Totemeier, T.C.; Hayes, S.L.

    1996-01-01

Analytical and numerical models of uranium ignition assisted by the oxidation of uranium hydride are described. The models were developed to demonstrate that ignition of large uranium ingots could not occur as a result of possible hydride formation during storage. The thermodynamics-based analytical model predicted an overall 17 °C temperature rise of the ingot due to hydride oxidation upon opening of the storage can in air. The numerical model predicted locally higher temperature increases at the surface; the transient temperature increase quickly dissipated. The numerical model was further used to determine the conditions under which hydride oxidation does lead to ignition of uranium metal. Room-temperature ignition occurs only for high hydride fractions in the nominally oxide reaction product and high specific surface areas of the uranium metal.
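The thermodynamics-based bound amounts to a lumped adiabatic energy balance. A sketch in which the heat release per kilogram of hydride oxidized (`dH`) is an assumed, illustrative number rather than a value from the paper:

```python
def bulk_temperature_rise(hydride_mass, ingot_mass, dH=2.0e6, cp=116.0):
    """Lumped adiabatic ingot temperature rise (K), assuming all the
    hydride-oxidation heat goes uniformly into the uranium metal.
    hydride_mass : kg of hydride oxidized (assumed)
    ingot_mass   : kg of uranium metal
    dH           : assumed heat release per kg of hydride, J/kg (illustrative)
    cp           : specific heat of uranium metal, ~116 J/(kg K)
    """
    return hydride_mass * dH / (ingot_mass * cp)

rise = bulk_temperature_rise(0.1, 100.0)
```

Such a bulk estimate is deliberately conservative; the paper's numerical model resolves the locally higher surface transients that this single-lump picture averages away.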

  7. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    Science.gov (United States)

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

In the research presented here, we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between −1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g⁻¹ in sample) for both methyl and isopropyl p-toluenesulfonate. As a proof of concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V.

  8. Analytical modeling of the temporal evolution of hot spot temperatures in silicon solar cells

    Science.gov (United States)

    Wasmer, Sven; Rajsrima, Narong; Geisemeyer, Ino; Fertig, Fabian; Greulich, Johannes Michael; Rein, Stefan

    2018-03-01

We present an approach to predict the equilibrium temperature of hot spots in crystalline silicon solar cells based on the analysis of their temporal evolution right after turning on a reverse bias. To this end, we derive an analytical expression for the time-dependent heat diffusion of a breakdown channel that is assumed to be cylindrical. We validate this expression by means of thermography imaging of hot spots right after turning on a reverse bias. The expression can be used to extract hot spot powers and radii from short-term measurements, targeting application in inline solar cell characterization. The extracted hot spot powers are validated against long-term dark lock-in thermography imaging. Using a look-up table of expected equilibrium temperatures determined by numerical and analytical simulations, we utilize the determined hot spot properties to predict the equilibrium temperatures of about 100 industrial aluminum back-surface-field solar cells, achieving a high correlation coefficient of 0.86 and a mean absolute error of only 3.3 K.
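The paper's cylindrical-channel solution is not reproduced in the abstract; the closely related textbook solution for a continuous point source in an infinite medium shows the same qualitative behaviour, a transient rise saturating at an equilibrium set by power and geometry. Silicon property values below are typical; everything else is assumed:

```python
import math

def hotspot_rise(r, t, P, k=148.0, alpha=8.8e-5):
    """Temperature rise (K) at distance r (m) and time t (s) from a continuous
    point source of power P (W) in an infinite medium -- the classic conduction
    solution, used here only as a qualitative stand-in for the paper's
    cylindrical breakdown channel.
    k     : thermal conductivity of silicon, W/(m K)
    alpha : thermal diffusivity of silicon, m^2/s
    """
    return P / (4.0 * math.pi * k * r) * math.erfc(r / (2.0 * math.sqrt(alpha * t)))
```

As t grows, the erfc factor tends to 1 and the rise approaches P/(4*pi*k*r), which is why short-term transients carry enough information to extrapolate the equilibrium temperature.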

  9. An analytically solvable model for rapid evolution of modular structure.

    Directory of Open Access Journals (Sweden)

    Nadav Kashtan

    2009-04-01

Full Text Available Biological systems often display modularity, in the sense that they can be decomposed into nearly independent subsystems. Recent studies have suggested that modular structure can spontaneously emerge if goals (environments) change over time, such that each new goal shares the same set of sub-problems with previous goals. Such modularly varying goals can also dramatically speed up evolution, relative to evolution under a constant goal. These studies were based on simulations of model systems, such as logic circuits and RNA structure, which are generally not easy to treat analytically. We present here a simple model for evolution under modularly varying goals that can be solved analytically. This model helps to explain some of the fundamental mechanisms that lead to rapid emergence of modular structure under modularly varying goals. In particular, the model suggests a mechanism for the dramatic speedup in evolution observed under such temporally varying goals.

  10. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

    2016-01-01

In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the effort to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation-based methods and numerical models. The results obtained from the proposed model …
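The paper's variable duty-cycle model is not given in the abstract; the standard first-harmonic-approximation (FHA) gain of an LLC tank under pure frequency control, a well-known baseline rather than the paper's contribution, can be sketched as:

```python
import math

def llc_gain(fn, Q, Ln):
    """First-harmonic-approximation voltage gain of an LLC resonant tank
    (frequency control only -- NOT the paper's variable duty-cycle model).
    fn : normalized switching frequency fs/fr
    Q  : quality factor sqrt(Lr/Cr)/Rac
    Ln : inductance ratio Lm/Lr
    """
    a = 1.0 + (1.0 - 1.0 / fn**2) / Ln   # reactive term from Lm in parallel
    b = Q * (fn - 1.0 / fn)              # series-resonant term
    return 1.0 / math.hypot(a, b)
```

At the series resonance (fn = 1) the gain is unity regardless of load, below resonance it exceeds unity, and above resonance it falls; duty-cycle control adds a further degree of freedom on top of this curve.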

  11. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS.

    Directory of Open Access Journals (Sweden)

    Tatsuhiko Sato

Full Text Available By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0," which can instantaneously estimate terrestrial cosmic-ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed with the Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R² values of the fit. The models to be used for estimating radiation doses due to cosmic-ray exposure, cosmic-ray-induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is freely available and easy to use, as implemented in the open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic-ray physics, and radiation research.

  12. A two-dimensional analytical well model with applications to groundwater flow and convective transport modelling in the geosphere

    International Nuclear Information System (INIS)

    Chan, T.; Nakka, B.W.

    1994-12-01

    A two-dimensional analytical well model has been developed to describe steady groundwater flow in an idealized, confined aquifer intersected by a withdrawal well. The aquifer comprises a low-dipping fracture zone. The model is useful for making simple quantitative estimates of the transport of contaminants along groundwater pathways in the fracture zone to the well from an underground source that intercepts the fracture zone. This report documents the mathematical development of the analytical well model. It outlines the assumptions and method used to derive an exact analytical solution, which is verified by two other methods. It presents expressions for calculating quantities such as streamlines (groundwater flow paths), fractional volumetric flow rates, contaminant concentration in well water and minimum convective travel time to the well. In addition, this report presents the results of applying the analytical model to a site-specific conceptual model of the Whiteshell Research Area in southeastern Manitoba, Canada. This hydrogeological model includes the presence of a 20-m-thick, low-dipping (18 deg) fracture zone (LD1) that intercepts the horizon of a hypothetical disposal vault located at a depth of 500 m. A withdrawal well intercepts LD1 between the vault level and the ground surface. Predictions based on parameters and boundary conditions specific to LD1 are presented graphically. The analytical model has specific applications in the SYVAC geosphere model (GEONET) to calculate the fraction of a plume of contaminants moving up the fracture zone that is captured by the well, and to describe the drawdown in the hydraulic head in the fracture zone caused by the withdrawal well. (author). 16 refs., 6 tabs., 35 figs
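The report's dipping-fracture-zone solution is not reproduced in the abstract; its simplest relative, the classical Thiem steady-state drawdown around a withdrawal well in a confined aquifer, can be sketched as follows (all values illustrative):

```python
import math

def thiem_drawdown(r, Q, T, R):
    """Steady-state drawdown (m) at radius r from a fully penetrating
    withdrawal well in a homogeneous confined aquifer (Thiem solution) --
    a simpler cousin of the report's fracture-zone well model.
    Q : pumping rate, m^3/s
    T : aquifer transmissivity, m^2/s
    R : radius of influence, m (drawdown vanishes at r = R)
    """
    return Q / (2.0 * math.pi * T) * math.log(R / r)
```

The logarithmic drawdown cone is the ingredient that, in the report's geometry, is superposed on the regional flow field in the fracture zone to delineate streamlines and the well's capture fraction.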

  13. Analytic Model Predictive Control of Uncertain Nonlinear Systems: A Fuzzy Adaptive Approach

    Directory of Open Access Journals (Sweden)

    Xiuyan Peng

    2015-01-01

Full Text Available A fuzzy adaptive analytic model predictive control method is proposed in this paper for a class of uncertain nonlinear systems. Specifically, invoking standard results on the Moore-Penrose matrix inverse, the mismatch between the input and output dimensions of the system, a problem that arises commonly, is first resolved. Then, building on the analytic model predictive control law combined with a fuzzy adaptive approach, a fuzzy adaptive predictive controller is synthesized for the underlying systems. To further reduce the impact of the fuzzy approximation error on the system and improve its robustness, a robust compensation term is introduced. It is shown that applying the fuzzy adaptive analytic model predictive controller renders the rudder roll stabilization system uniformly ultimately bounded in the H-infinity sense. Finally, simulation results demonstrate the effectiveness of the proposed method.
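The Moore-Penrose step can be illustrated in isolation: when the steady-state gain matrix is non-square (unequal input and output dimensions), the pseudoinverse gives the least-squares control allocation. The matrix and reference below are made-up numbers, not from the paper:

```python
import numpy as np

# Hypothetical non-square gain: 2 inputs driving 3 outputs (the "unmatched" case)
B = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.3, 0.1]])
y_ref = np.array([1.0, 0.5, 0.2])

# Moore-Penrose pseudoinverse yields the least-squares input allocation
u = np.linalg.pinv(B) @ y_ref
residual = np.linalg.norm(B @ u - y_ref)
```

Because the system is over-determined, no input reproduces `y_ref` exactly; `u` minimizes the output error, which is exactly the role the pseudoinverse plays in resolving the dimensional mismatch.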

  14. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  15. Analytical modeling and numerical optimization of the biosurfactants production in solid-state fermentation by Aspergillus fumigatus - doi: 10.4025/actascitechnol.v36i1.17818

    Directory of Open Access Journals (Sweden)

    Gabriel Castiglioni

    2014-01-01

Full Text Available This is an experimental, analytical and numerical study to optimize biosurfactant production in solid-state fermentation of a medium containing rice straw and minced rice bran inoculated with Aspergillus fumigatus. The goal of this work was to analytically model biosurfactant production in solid-state fermentation in a fixed-bed column bioreactor. The least-squares method was used to fit the experimental emulsification activity values to a semi-empirical quadratic model. The control variables were the nutritional conditions, the fermentation time and the aeration. The mathematical model is validated against experimental results and then used to predict the maximum emulsification activity for different nutritional conditions and aerations. Based on the semi-empirical model, the maximum emulsification activity with no additional hydrocarbon sources was 8.16 UE·g⁻¹ at 112 hours. When diesel oil was used, the predicted maximum emulsification activity was 8.10 UE·g⁻¹ at 108 hours.
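The least-squares quadratic fit and its maximum can be sketched generically. The (time, emulsification activity) pairs below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical (fermentation time h, emulsification activity UE/g) pairs
t = np.array([48.0, 72.0, 96.0, 112.0, 136.0, 160.0])
ea = np.array([4.1, 6.0, 7.5, 8.1, 7.6, 6.2])

# Least-squares quadratic EA(t) = a*t^2 + b*t + c
a, b, c = np.polyfit(t, ea, 2)

# Vertex of the (concave) parabola: predicted time of maximum activity
t_opt = -b / (2.0 * a)
ea_max = np.polyval([a, b, c], t_opt)
```

Locating the vertex of the fitted quadratic is how a semi-empirical model of this kind turns scattered measurements into a single predicted optimum fermentation time.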

  16. Simultaneous determination of renal function biomarkers in urine using a validated paper-based microfluidic analytical device.

    Science.gov (United States)

    Rossini, Eduardo Luiz; Milani, Maria Izabel; Carrilho, Emanuel; Pezza, Leonardo; Pezza, Helena Redigolo

    2018-01-02

In this paper, we describe a validated paper-based microfluidic analytical device for the simultaneous quantification of two important biomarkers of renal function in urine. This paper platform provides an inexpensive, simple, and easy-to-use colorimetric method for the quantification of creatinine (CRN) and uric acid (UA) in urine samples. The microfluidic paper-based analytical device (μPAD) consists of a main channel with three identical arms, each containing a circular testing zone and a circular uptake zone. Creatinine detection is based on the Jaffé reaction, in which CRN reacts with picrate to form an orange-red product. Uric acid quantification is based on the reduction of Fe³⁺ to Fe²⁺ by UA, which is detected in a colorimetric reaction using 1,10-phenanthroline. Under optimum conditions, obtained through chemometrics, the concentrations of the analytes showed good linear correlations with the effective intensities, and the method presented satisfactory repeatability. The limits of detection and the linear ranges, respectively, were 15.7 mg L⁻¹ and 50-600 mg L⁻¹ for CRN, and 16.5 mg L⁻¹ and 50-500 mg L⁻¹ for UA. There were no statistically significant differences between the results obtained using the μPAD and a chromatographic comparative method (Student's t-test at 95% confidence level). Copyright © 2017 Elsevier B.V. All rights reserved.
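The calibration-and-LOD workflow behind figures like 15.7 mg L⁻¹ can be sketched generically. The calibration points below are fabricated placeholders with roughly the paper's linear range, not its data:

```python
import numpy as np

# Hypothetical calibration: effective colour intensity vs creatinine, mg/L
conc = np.array([50.0, 150.0, 250.0, 400.0, 600.0])
signal = np.array([0.052, 0.148, 0.251, 0.396, 0.602])

# Ordinary least-squares calibration line
slope, intercept = np.polyfit(conc, signal, 1)

# Residual standard error (ddof=2: two fitted parameters)
resid = signal - (slope * conc + intercept)
sigma = resid.std(ddof=2)

# ICH-style detection and quantification limits
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

The 3.3 and 10 multipliers are the conventional ICH factors; with real replicate blanks one would use the blank standard deviation in place of the residual error.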

  17. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  18. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  19. Improved Analytical Model of a Permanent-Magnet Brushless DC Motor

    NARCIS (Netherlands)

    Kumar, P.; Bauer, P.

    2008-01-01

    In this paper, we develop a comprehensive model of a permanent-magnet brushless DC (BLDC) motor. An analytical model for determining instantaneous air-gap field density is developed. This instantaneous field distribution can be further used to determine the cogging torque, induced back electromotive

  20. Analytical development and optimization of a graphene–solution interface capacitance model

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2014-05-01

    Full Text Available Graphene, a new carbon material with exceptional electronic and mechanical properties, shows great potential for a range of applications and has attracted considerable attention in recent years. The use of graphene in nanoscale devices plays an important role in achieving more accurate and faster devices. Although there are many experimental studies in this area, analytical models are lacking. Our focus is on quantum capacitance, one of the important properties of field-effect transistors (FETs). The quantum capacitance of electrolyte-gated transistors (EGFETs), along with a relevant equivalent circuit, is expressed in terms of Fermi velocity, carrier density, and fundamental physical quantities. The analytical model is compared with experimental data, and the mean absolute percentage error (MAPE) is calculated to be 11.82%. In order to decrease the error, a new function of E composed of α and β parameters is suggested. In a further attempt, the ant colony optimization (ACO) algorithm is implemented to develop a more accurate capacitance model. Based on the given results, the accuracy of the optimized model exceeds 97%, which is within an acceptable range.
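The MAPE figure quoted above is simply the mean of the absolute relative errors, expressed as a percentage. A minimal sketch with made-up capacitance data (the paper's actual measurements are not reproduced here):

```python
def mape(measured, predicted):
    """Mean absolute percentage error, in percent, between paired sequences."""
    terms = [abs((m - p) / m) for m, p in zip(measured, predicted)]
    return 100.0 * sum(terms) / len(terms)

# Made-up capacitance readings vs. model predictions (arbitrary units)
measured = [1.00, 1.20, 1.50, 1.80]
predicted = [0.95, 1.30, 1.40, 1.85]
print(round(mape(measured, predicted), 2))  # -> 5.69
```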

  1. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    OpenAIRE

    Zhang, Guofeng; Zhu, Hanjie

    2015-01-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model: the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore the effects of it on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectrums and wavefunctions. These analytical results agree well with the ...

  2. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

    Ruhr-Universitaet Bochum performed, in a German-funded project, validation of the in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper, selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code's capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the QUENCH experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, whereas the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, which requires the simplest input parameters, provides the best agreement with the experimental data

  3. Organizational Models for Big Data and Analytics

    Directory of Open Access Journals (Sweden)

    Robert L. Grossman

    2014-04-01

    Full Text Available In this article, we introduce a framework for determining how analytics capability should be distributed within an organization. Our framework stresses the importance of building a critical mass of analytics staff, centralizing or decentralizing the analytics staff to support business processes, and establishing an analytics governance structure to ensure that analytics processes are supported by the organization as a whole.

  4. An Analytical Model for Prediction of Magnetic Flux Leakage from Surface Defects in Ferromagnetic Tubes

    Directory of Open Access Journals (Sweden)

    Suresh V.

    2016-02-01

    Full Text Available In this paper, an analytical model is proposed to predict magnetic flux leakage (MFL signals from the surface defects in ferromagnetic tubes. The analytical expression consists of elliptic integrals of first kind based on the magnetic dipole model. The radial (Bz component of leakage fields is computed from the cylindrical holes in ferromagnetic tubes. The effectiveness of the model has been studied by analyzing MFL signals as a function of the defect parameters and lift-off. The model predicted results are verified with experimental results and a good agreement is observed between the analytical and the experimental results. This analytical expression could be used for quick prediction of MFL signals and also input data for defect reconstructions in inverse MFL problem.

  5. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. 
To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  6. Establishment of Approximate Analytical Model of Oil Film Force for Finite Length Tilting Pad Journal Bearings

    Directory of Open Access Journals (Sweden)

    Yongliang Wang

    2015-01-01

    Full Text Available Tilting pad bearings offer unique dynamic stability, enabling successful deployment of high-speed rotating machinery. The model of dynamic stiffness, damping, and added-mass coefficients is often used for rotordynamic analyses, but this method does not suffice to describe the dynamic behaviour due to the nonlinear effects of oil film force under large shaft vibration or vertical rotor conditions. The objective of this paper is to present a nonlinear oil film force model for finite-length tilting pad journal bearings. An approximate analytical oil film force model was established by analysing the dynamic characteristics of the oil film of a single-pad journal bearing using the variable separation method under the dynamic π oil film boundary condition. An oil film force model of a four-tilting-pad journal bearing was then established by using the pad assembly technique and considering the pad tilting angle. The validity of the model was proved by analysing the distribution of oil film pressure and the locus of the journal centre for tilting pad journal bearings, and by comparing the model established in this paper with one established using the finite difference method.

  7. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.
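As an illustration of the kind of semi-analytical benchmark described (not one of the actual MCNP6 suite problems), consider a mono-energetic beam normally incident on a purely absorbing slab: the uncollided transmission has the closed form exp(-Σ_t·t), against which a Monte Carlo estimate can be verified.

```python
import math
import random

def mc_transmission(sigma_t, thickness, n, seed=1):
    """Monte Carlo estimate of the uncollided transmission of a normally
    incident mono-energetic beam through a purely absorbing slab."""
    rng = random.Random(seed)
    # Sample an exponential free-path length per history; count slab crossings.
    crossed = sum(1 for _ in range(n)
                  if -math.log(1.0 - rng.random()) / sigma_t > thickness)
    return crossed / n

sigma_t, thickness = 1.0, 2.0   # macroscopic cross section (1/cm), slab width (cm)
analytic = math.exp(-sigma_t * thickness)
estimate = mc_transmission(sigma_t, thickness, 200_000)
print(f"MC: {estimate:.4f}  analytic: {analytic:.4f}")
```

With 200,000 histories the statistical error is well below 1%, so any larger discrepancy would indicate a coding error rather than Monte Carlo noise.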

  8. Optimization of instrumental neutron activation analysis method by means of 2^k experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample-to-detector distance. The multi-element standard concentration (comparator standard), sample mass and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods at the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
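A 2^k full factorial design of the kind used here, and the main-effect estimate it supports, can be sketched in a few lines. The factor assignment and the response values below are hypothetical illustrations, not IPEN's measurements:

```python
from itertools import product

def two_level_design(k):
    """All 2**k runs of a full factorial design with coded levels -1/+1."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def main_effect(design, response, factor):
    """Mean response at the factor's high level minus at its low level."""
    hi = [y for run, y in zip(design, response) if run[factor] == 1]
    lo = [y for run, y in zip(design, response) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical factors: 0 = decay time, 1 = counting time, 2 = detector distance
design = two_level_design(3)
# Hypothetical mass-fraction results for the 8 runs (mg/kg)
y = [10.1, 10.3, 10.0, 10.4, 11.9, 12.2, 12.0, 12.3]
print(round(main_effect(design, y, 0), 3))  # -> 1.9
```

In this toy data the first factor dominates; interaction effects are computed the same way on the elementwise products of factor columns.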

  9. Molecular modeling of polymer composite-analyte interactions in electronic nose sensors

    Science.gov (United States)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K. S.

    2003-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL electronic nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings with no hydrogens. The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings in the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. Analytes studied are representative of both inorganic and organic compounds. The results are analyzed for the composite microstructure by calculating the radial distribution profiles, as well as for the sensor response by predicting the interaction energies of the analytes with the composites. © 2003 Elsevier Science B.V. All rights reserved.

  10. Comparison of a semi-analytic and a CFD model uranium combustion to experimental data

    International Nuclear Information System (INIS)

    Clarksean, R.

    1998-01-01

    Two numerical models were developed and compared for the analysis of uranium combustion and ignition in a furnace: a semi-analytical solution and a computational fluid dynamics (CFD) numerical solution. Prediction of uranium oxidation rates is important for fuel storage applications, fuel processing, and the development of spent fuel metal waste forms. The semi-analytical model was based on heat transfer correlations, a semi-analytical model of flow over a flat surface, and simple radiative heat transfer from the material surface. The CFD model numerically determined the flow field over the object of interest, calculated the heat and mass transfer to the material of interest, and calculated the radiative heat exchange of the material with the furnace. The semi-analytical model is much less detailed than the CFD model, but yields reasonable results and assists in understanding the physical process; its short computation times allowed the analyst to study numerous scenarios. The CFD model had significantly longer run times and some physical limitations that were not easily modified, but it was better able to yield details of the heat and mass transfer and flow field once code limitations were overcome.

  11. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Science.gov (United States)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  12. A simple analytical model for reactive particle ignition in explosives

    Energy Technology Data Exchange (ETDEWEB)

    Tanguay, Vincent [Defence Research and Development Canada - Valcartier, 2459 Pie XI Blvd. North, Quebec, QC, G3J 1X5 (Canada); Higgins, Andrew J. [Department of Mechanical Engineering, McGill University, 817 Sherbrooke St. West, Montreal, QC, H3A 2K6 (Canada); Zhang, Fan [Defence Research and Development Canada - Suffield, P. O. Box 4000, Stn Main, Medicine Hat, AB, T1A 8K6 (Canada)

    2007-10-15

    A simple analytical model is developed to predict ignition of magnesium particles in nitromethane detonation products. The flow field is simplified by considering the detonation products as a perfect gas expanding in a vacuum in a planar geometry. This simplification allows the flow field to be solved analytically. A single particle is then introduced in this flow field. Its trajectory and heating history are computed. It is found that most of the particle heating occurs in the Taylor wave and in the quiescent flow region behind it, shortly after which the particle cools. By considering only these regions, thereby considerably simplifying the problem, the flow field can be solved analytically with a more realistic equation of state (such as JWL) and a spherical geometry. The model is used to compute the minimum charge diameter for particle ignition to occur. It is found that the critical charge diameter for particle ignition increases with particle size. These results are compared to experimental data and show good agreement. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
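The particle heating history central to this model can be illustrated with a lumped-capacitance sketch, a drastic simplification of the paper's treatment of the Taylor wave and quiescent region. The thermal response time and gas temperature below are hypothetical, not values from the paper:

```python
def particle_temperature_history(T0, T_gas, tau, dt, steps):
    """Lumped-capacitance particle heating, dT/dt = (T_gas - T) / tau,
    integrated with forward Euler; tau = m*cp/(h*A) is the thermal
    response time of the particle in the surrounding gas."""
    T, history = T0, [T0]
    for _ in range(steps):
        T += dt * (T_gas - T) / tau
        history.append(T)
    return history

# Hypothetical magnesium particle: 300 K start, 2500 K products, tau = 50 us
hist = particle_temperature_history(300.0, 2500.0, 50e-6, 1e-6, 200)
print(round(hist[-1]))  # -> 2461, approaching the gas temperature after 200 us
```

A full model would let T_gas follow the expanding-products flow field and switch to cooling once the particle outruns the hot gas, which is exactly the behaviour the abstract describes.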

  13. HTS axial flux induction motor with analytic and FEA modeling

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: alexlee.zn@gmail.com; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-11-15

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic method and finite element method are adopted to model the motor. To make sure the HTS can carry sufficiently large current and work well, the magnetic field distribution in HTS coil is calculated. An effective method to improve the critical current of HTS coil is presented. Then, AC losses in HTS windings in the motor are estimated and tested.

  14. HTS axial flux induction motor with analytic and FEA modeling

    International Nuclear Information System (INIS)

    Li, S.; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-01-01

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic method and finite element method are adopted to model the motor. To make sure the HTS can carry sufficiently large current and work well, the magnetic field distribution in HTS coil is calculated. An effective method to improve the critical current of HTS coil is presented. Then, AC losses in HTS windings in the motor are estimated and tested

  15. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  16. Analytical local electron-electron interaction model potentials for atoms

    International Nuclear Information System (INIS)

    Neugebauer, Johannes; Reiher, Markus; Hinze, Juergen

    2002-01-01

    Analytical local potentials for modeling the electron-electron interaction in an atom reduce significantly the computational effort in electronic structure calculations. The development of such potentials has a long history, but some promising ideas have not yet been taken into account for further improvements. We determine a local electron-electron interaction potential akin to those suggested by Green et al. [Phys. Rev. 184, 1 (1969)], which are widely used in atom-ion scattering calculations, electron-capture processes, and electronic structure calculations. Generalized Yukawa-type model potentials are introduced. This leads, however, to shell-dependent local potentials, because the origin behavior of such potentials is different for different shells as has been explicated analytically [J. Neugebauer, M. Reiher, and J. Hinze, Phys. Rev. A 65, 032518 (2002)]. It is found that the parameters that characterize these local potentials can be interpolated and extrapolated reliably for different nuclear charges and different numbers of electrons. The analytical behavior of the corresponding localized Hartree-Fock potentials at the origin and at long distances is utilized in order to reduce the number of fit parameters. It turns out that the shell-dependent form of Green's potential, which we also derive, yields results of comparable accuracy using only one shell-dependent parameter
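The Green-type potentials referenced above have a screened-Coulomb (generalized Yukawa) form, V(r) = -[(Z - N + 1) + (N - 1)Ω(r)]/r with Ω(r) = 1/(H(e^{r/d} - 1) + 1). A sketch with hypothetical screening parameters H and d (the fitted, shell-dependent values from the paper are not reproduced here) shows the expected limits at the origin and at long range:

```python
import math

def gsz_potential(r, Z, N, H, d):
    """Green-Sellin-Zachor-type model potential in atomic units:
    V(r) = -[(Z - N + 1) + (N - 1)*Omega(r)] / r,
    with screening function Omega(r) = 1 / (H*(exp(r/d) - 1) + 1)."""
    omega = 1.0 / (H * (math.exp(r / d) - 1.0) + 1.0)
    return -((Z - N + 1) + (N - 1) * omega) / r

# Hypothetical screening parameters for a neutral atom with Z = N = 10
Z, N, H, d = 10, 10, 1.65, 0.68
print(round(1e-8 * gsz_potential(1e-8, Z, N, H, d)))  # -> -10: bare nucleus at r -> 0
print(round(50.0 * gsz_potential(50.0, Z, N, H, d)))  # -> -1: fully screened at large r
```

These limits match the origin and long-distance behaviour the abstract exploits to reduce the number of fit parameters.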

  17. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  18. Gas-liquid Two Phase Flow Modelling of Incompressible Fluid and Experimental Validation Studies in Vertical Centrifugal Casting

    International Nuclear Information System (INIS)

    Zhou, J X; Shen, X; Yin, Y J; Guo, Z; Wang, H

    2015-01-01

    In this paper, gas-liquid two-phase flow mathematical models of incompressible fluid were proposed to explore fluid behaviour under centrifugal force in vertical centrifugal casting (VCC). A modified projection level-set method was introduced to solve the mathematical models. To validate the simulation results, two methods were used in this study. In the first method, the simulation result of a basic VCC flow process was compared with its analytic solution. The relationship between the numerical solution and the deterministic analytic solution was presented to verify the correctness of the numerical algorithms. In the second method, systematic water-simulation experiments were developed. In this initial experiment, a special experimental vertical centrifugal device and casting shapes were designed to describe typical mold-filling processes in VCC. A high-speed camera system and data collection devices were used to capture the flow shape during the mold-filling process. Moreover, fluid characteristics at different rotation speeds (40 rpm, 60 rpm and 80 rpm) were discussed to provide a comparative resource for the simulation results. Compared with the simulation results, the proposed mathematical models could be proven, and the experimental design could help us advance the accuracy of simulation and further studies of VCC. (paper)

  19. Learning, Learning Analytics, Activity Visualisation and Open learner Model

    DEFF Research Database (Denmark)

    Bull, Susan; Kickmeier-Rust, Michael; Vatrapu, Ravi

    2013-01-01

    This paper draws on visualisation approaches in learning analytics, considering how classroom visualisations can come together in practice. We suggest an open learner model in situations where many tools and activity visualisations produce more visual information than can be readily interpreted....

  20. A semi-analytic model of magnetized liner inertial fusion

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Ryan D.; Slutz, Stephen A. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2015-05-15

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.
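One ingredient of the model, item (4) above (adiabatic compression and heating of the fuel), can be sketched in isolation. This toy function ignores every loss term in the model (radiation, conduction, end losses, mix), and the preheat temperature and convergence ratio below are illustrative, not values from the paper:

```python
def adiabatic_fuel_temperature(T0_keV, CR, gamma=5.0 / 3.0):
    """Ideal-gas fuel temperature after cylindrical compression by
    convergence ratio CR with no losses: density scales as CR**2 in
    cylindrical geometry, so T = T0 * CR**(2*(gamma - 1))."""
    return T0_keV * CR ** (2.0 * (gamma - 1.0))

# Illustrative only: 0.25 keV preheat compressed by a convergence ratio of 25
print(round(adiabatic_fuel_temperature(0.25, 25.0), 1))  # -> 18.3 (keV)
```

The loss-free scaling makes plain why preheat matters: the final temperature is linear in the preheat temperature, while the loss terms in the full model cap how far it can actually rise.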

  1. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by subdomain technique. • The magnetic scalar potential on rotor surface is modeled as trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  2. Validation for chromatographic and electrophoretic methods

    OpenAIRE

    Ribani, Marcelo; Bottoli, Carla Beatriz Grespan; Collins, Carol H.; Jardim, Isabel Cristina Sales Fontes; Melo, Lúcio Flávio Costa

    2004-01-01

    The validation of an analytical method is fundamental to implementing a quality control system in any analytical laboratory. As the separation techniques, GC, HPLC and CE, are often the principal tools used in such determinations, procedure validation is a necessity. The objective of this review is to describe the main aspects of validation in chromatographic and electrophoretic analysis, showing, in a general way, the similarities and differences between the guidelines established by the dif...

  3. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen...... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  4. Analytical modeling of glucose biosensors based on carbon nanotubes.

    Science.gov (United States)

    Pourasl, Ali H; Ahmadi, Mohammad Taghi; Rahmani, Meisam; Chin, Huei Chaeng; Lim, Cheng Siong; Ismail, Razali; Tan, Michael Loong Peng

    2014-01-15

    In recent years, carbon nanotubes have received widespread attention as promising carbon-based nanoelectronic devices. Due to their exceptional physical, chemical, and electrical properties, namely a high surface-to-volume ratio, their enhanced electron transfer properties, and their high thermal conductivity, carbon nanotubes can be used effectively as electrochemical sensors. The integration of carbon nanotubes with a functional group provides a good and solid support for the immobilization of enzymes. The determination of glucose levels using biosensors, particularly in the medical diagnostics and food industries, is gaining mass appeal. Glucose biosensors detect the glucose molecule by catalyzing glucose to gluconic acid and hydrogen peroxide in the presence of oxygen. This action provides high accuracy and a quick detection rate. In this paper, a single-wall carbon nanotube field-effect transistor biosensor for glucose detection is analytically modeled. In the proposed model, the glucose concentration is presented as a function of gate voltage. Subsequently, the proposed model is compared with existing experimental data. A good consensus between the model and the experimental data is reported. The simulated data demonstrate that the analytical model can be employed with an electrochemical glucose sensor to predict the behavior of the sensing mechanism in biosensors.

  5. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
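
The likelihood-ratio decision rule described above can be sketched with two competing Gaussian error hypotheses. The distributions, bias value, and threshold below are illustrative placeholders, not the paper's reliability models; only the structure (compare a likelihood ratio to a cost-derived threshold) follows the abstract.

```python
import math

# Hedged sketch of a likelihood-ratio validation metric: H0 says the model's
# prediction errors are ~ N(0, sigma) (model valid); H1 allows a bias (model
# invalid). Data fall in the "accept model" decision region when the
# likelihood ratio exceeds a threshold set by the decision costs.
# All numbers here are illustrative assumptions.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(errors, sigma=1.0, bias=2.0):
    l0 = math.prod(normal_pdf(e, 0.0, sigma) for e in errors)   # P(data | valid)
    l1 = math.prod(normal_pdf(e, bias, sigma) for e in errors)  # P(data | biased)
    return l0 / l1

# Threshold 1.0 corresponds to equal priors and symmetric decision costs.
errors = [0.3, -0.5, 0.1, 0.4]   # hypothetical prediction errors
lr = likelihood_ratio(errors)
model_valid = lr > 1.0
```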

  6. SPICE compatible analytical electron mobility model for biaxial strained-Si-MOSFETs

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhry, Amit; Sangwan, S. [UIET, Panjab University, Chandigarh (India); Roy, J. N., E-mail: amit_chaudhry01@yahoo.com [Solar Semiconductro Pvt. Ltd, Hyderabad (India)

    2011-05-15

    This paper describes an analytical model for bulk electron mobility in strained-Si layers as a function of strain. Phonon scattering, columbic scattering and surface roughness scattering are included to analyze the full mobility model. Analytical explicit calculations of all of the parameters to accurately estimate the electron mobility have been made. The results predict an increase in the electron mobility with the application of biaxial strain as also predicted from the basic theory of strain physics of metal oxide semiconductor (MOS) devices. The results have also been compared with numerically reported results and show good agreement. (semiconductor devices)

  7. SPICE compatible analytical electron mobility model for biaxial strained-Si-MOSFETs

    International Nuclear Information System (INIS)

    Chaudhry, Amit; Sangwan, S.; Roy, J. N.

    2011-01-01

    This paper describes an analytical model for bulk electron mobility in strained-Si layers as a function of strain. Phonon scattering, columbic scattering and surface roughness scattering are included to analyze the full mobility model. Analytical explicit calculations of all of the parameters to accurately estimate the electron mobility have been made. The results predict an increase in the electron mobility with the application of biaxial strain as also predicted from the basic theory of strain physics of metal oxide semiconductor (MOS) devices. The results have also been compared with numerically reported results and show good agreement. (semiconductor devices)

  8. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced order models need to be utilized.
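
A single-mode reduction of an electrostatically actuated beam behaves like the textbook one-degree-of-freedom actuator: a spring against a parallel-plate force. The sketch below is that lumped analogue, not the paper's curled/tilted-beam formulas; parameter values are invented for illustration.

```python
# Lumped-parameter sketch of electrostatic actuation: a spring k balances the
# parallel-plate force eps0*A*V^2 / (2*(g - x)^2). The stable equilibrium
# exists only for x < g/3; beyond that the structure pulls in. This mirrors
# the role of a single-mode Galerkin reduction, with illustrative parameters.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def equilibrium_gap(V, k=1.0, A=1e-8, g=2e-6, tol=1e-15):
    """Stable equilibrium deflection x (0 <= x < g/3) by bisection on
    f(x) = k*x - eps0*A*V^2 / (2*(g - x)^2); returns None past pull-in."""
    f = lambda x: k * x - EPS0 * A * V * V / (2.0 * (g - x) ** 2)
    lo, hi = 0.0, g / 3.0
    if f(hi) < 0.0:          # electrostatic force dominates: pull-in occurred
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

x_low = equilibrium_gap(1.0)     # small deflection at low voltage
x_high = equilibrium_gap(10.0)   # None: beyond the pull-in voltage here
```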

  9. Analytical Determining Of The Steinmetz Equivalent Diagram Elements Of Single-Phase Transformer

    Directory of Open Access Journals (Sweden)

    T. Aly Saandy

    2015-08-01

    Full Text Available This article presents an analytical calculation methodology for the Steinmetz Equivalent Diagram Elements, applied to the prediction of eddy current loss in a single-phase transformer. Based on electrical circuit theory, the active and reactive powers consumed by the core are expressed analytically as functions of the electromagnetic parameters, such as resistivity and permeability, and the geometrical dimensions of the core. The proposed modeling approach is established with the parallel-series duality. The equivalent diagram elements empirically determined by Steinmetz are analytically expressed using the expressions for the no-load transformer consumption. To verify the relevance of the model, validations by both simulations at different powers and measurements were carried out to determine the resistance and reactance of the core. The obtained results are in good agreement with the theoretical approach and the practical results.

  10. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found. It is also found that in D = 3 this contribution is finite. (author). 19 refs

  11. An analytical turn-on power loss model for 650-V GaN eHEMTs

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Shen, Zhan

    2018-01-01

    This paper proposes an improved analytical turn-on power loss model for 650-V GaN eHEMTs. The static characteristics, i.e., the parasitic capacitances and transconductance, are firstly modeled. Then the turn-on process is divided into multiple stages and analyzed in detail; as a result, the time-domain solutions to the drain-source voltage and drain current are obtained. Finally, double-pulse tests are conducted to verify the proposed power loss model. This analytical model enables an accurate and fast switching behavior characterization and power loss prediction.

  12. Analytic implementation of the GRAAL model: Application to a R7T7-type glass package in a geological disposal environment

    Energy Technology Data Exchange (ETDEWEB)

    Minet, Y., E-mail: yves.minet@cea.f [CEA Marcoule, DTCD/SECM/LCLT, BP 17171, 30207 Bagnols-sur-Ceze Cedex (France); Bonin, B. [CEA Saclay, DEN/DIR/DS, 91191 Gif-sur-Yvette Cedex (France); Gin, S.; Frugier, P. [CEA Marcoule, DTCD/SECM/LCLT, BP 17171, 30207 Bagnols-sur-Ceze Cedex (France)

    2010-09-30

    The Glass Reactivity with Allowance for the Alteration Layer Model (GRAAL) was proposed in 2008 to describe borosilicate nuclear glass alteration based on coupling an affinity law with the formation and dissolution of a passivating reactive interface. It is examined here in a simplified form in which only the affinity with respect to silicon is taken into account, with a concentration at saturation C_sat, and the precipitation of neoformed phases is described by an affine relation for silicon above a precipitation threshold C'_sat. This simplified 'analytical GRAAL' model is capable of predicting the quantities of altered glass and the silicon and boron concentration variations in analytical or semi-analytical form, and thereby identifying the main characteristic quantities of the system. The model was tested against a series of laboratory experiments lasting from a few days to a few years; its sensitivity to the parameter values was examined, and the model was validated with respect to SON68 glass alteration in initially pure water. It was then applied to the alteration of a glass package in a repository over periods of up to a million years, by means of exploratory calculations comprising a sensitivity study of the internal model parameters and extrapolation to the temperatures expected in a geological repository in order to identify the parameters and mechanisms having the greatest impact on the residual alteration rate. Alteration is controlled by the precipitation of neoformed phases in every case. The transient conditions are of very limited duration with respect to either silicon or boron (no more than 100 years, with less than 0.01% alteration of the package). In the precipitation law used in the model, the residual alteration rate and total package lifetime are determined primarily by two parameters: k' (the precipitation kinetics) and σ' (the precipitate surface area per unit volume in the geological barrier
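
The qualitative structure of the simplified model (an affinity-law dissolution rate throttled by silicon saturation, with a precipitation sink above a threshold) can be integrated numerically as a toy problem. The rate law and all parameter values below are illustrative assumptions in the spirit of the abstract, not the GRAAL equations or the paper's fitted parameters.

```python
# Toy Euler integration of an affinity-law alteration picture: the glass
# dissolves at r0*(1 - C/C_sat) and silicon precipitates at kp*(C - C_ppt)
# once the concentration exceeds the threshold C_ppt. In steady state the
# residual rate is set by the precipitation sink, echoing the abstract's
# conclusion. All values are illustrative, not the paper's.

def alteration(r0=1.0, c_sat=100.0, c_ppt=60.0, kp=0.05, dt=0.01, steps=20000):
    c, altered = 0.0, 0.0
    for _ in range(steps):
        rate = r0 * (1.0 - c / c_sat)        # affinity law (Si saturation)
        sink = kp * max(c - c_ppt, 0.0)      # neoformed-phase precipitation
        c += (rate - sink) * dt
        altered += rate * dt
    return c, altered

c_end, altered = alteration()
# Steady state balances rate and sink: r0*(1 - c/c_sat) = kp*(c - c_ppt),
# giving c = 200/3 here, with a nonzero residual alteration rate.
```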

  13. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require the use of a validated model to practice independently. Validation was done to adapt the model and assess whether it is understood and could be implemented by NQPNs and mentors employed in community health care services.

  14. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes validating OPNET models of key devices for the next generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight in scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  15. Analytical Model for Diffusive Evaporation of Sessile Droplets Coupled with Interfacial Cooling Effect.

    Science.gov (United States)

    Nguyen, Tuan A H; Biggs, Simon R; Nguyen, Anh V

    2018-05-30

    Current analytical models for sessile droplet evaporation do not consider the nonuniform temperature field within the droplet and can overpredict the evaporation by 20%. This deviation can be attributed to a significant temperature drop due to the release of the latent heat of evaporation along the air-liquid interface. We report, for the first time, an analytical solution of the sessile droplet evaporation coupled with this interfacial cooling effect. The two-way coupling model of the quasi-steady thermal diffusion within the droplet and the quasi-steady diffusion-controlled droplet evaporation is conveniently solved in the toroidal coordinate system by applying the method of separation of variables. Our new analytical model for the coupled vapor concentration and temperature fields is in closed form and is applicable for a full range of spherical-cap shape droplets of different contact angles and types of fluids. Our analytical results are uniquely quantified by a dimensionless evaporative cooling number E_o whose magnitude is determined only by the thermophysical properties of the liquid and the atmosphere. Accordingly, the larger the magnitude of E_o, the more significant the effect of the evaporative cooling, which results in stronger suppression on the evaporation rate. The classical isothermal model is recovered if the temperature gradient along the air-liquid interface is negligible (E_o = 0). For substrates with very high thermal conductivities (isothermal substrates), our analytical model predicts a reversal of temperature gradient along the droplet free surface at a contact angle of 119°. Our findings pose interesting challenges but also guidance for experimental investigations.
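
The isothermal limit that the coupled model reduces to at E_o = 0 has a well-known closed form. The sketch below evaluates the diffusion-limited evaporation rate of a spherical-cap droplet using the Hu-Larson polynomial fit for the contact-angle factor, which is a standard stand-in, not the paper's toroidal-coordinate solution; the cooling suppression factor itself is not reproduced, and the property values (roughly water at room temperature) are illustrative.

```python
import math

# Isothermal baseline for diffusion-limited sessile droplet evaporation.
# Contact-angle factor: Hu & Larson's fit f(theta) = 1.3 + 0.27*theta^2,
# valid for 0 < theta < pi/2. The paper's coupled model multiplies this
# baseline by a suppression factor governed by E_o (not modeled here).

def evaporation_rate(R, theta, D=2.5e-5, c_s=2.3e-2, H=0.5):
    """Mass loss rate (kg/s) of a spherical-cap droplet, isothermal limit.
    R: contact radius (m); theta: contact angle (rad); D: vapor diffusivity
    (m^2/s); c_s: saturation vapor concentration (kg/m^3); H: rel. humidity."""
    f_theta = 1.3 + 0.27 * theta ** 2
    return math.pi * R * D * c_s * (1.0 - H) * f_theta

m_dot = evaporation_rate(R=1e-3, theta=math.radians(40))
```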

  16. Bio-analytical method development and validation of Rasagiline by high performance liquid chromatography tandem mass spectrometry detection and its application to pharmacokinetic study

    Directory of Open Access Journals (Sweden)

    Ravi Kumar Konda

    2012-10-01

    Full Text Available The most suitable bio-analytical method based on liquid–liquid extraction has been developed and validated for quantification of Rasagiline in human plasma. Rasagiline-13C3 mesylate was used as an internal standard for Rasagiline. A Zorbax Eclipse Plus C18 (2.1 mm×50 mm, 3.5 μm) column provided chromatographic separation of the analyte followed by detection with mass spectrometry. The method involved simple isocratic chromatographic conditions and mass spectrometric detection in the positive ionization mode using an API-4000 system. The total run time was 3.0 min. The proposed method has been validated with the linear range of 5–12000 pg/mL for Rasagiline. The intra-run and inter-run precision values were within 1.3%–2.9% and 1.6%–2.2% respectively for Rasagiline. The overall recovery for Rasagiline and the Rasagiline-13C3 mesylate analog was 96.9% and 96.7% respectively. This validated method was successfully applied to the bioequivalence and pharmacokinetic study of human volunteers under fasting condition. Keywords: High performance liquid chromatography, Mass spectrometry, Rasagiline, Liquid–liquid extraction

  17. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  18. Analytical Business Model for Sustainable Distributed Retail Enterprises in a Competitive Market

    Directory of Open Access Journals (Sweden)

    Courage Matobobo

    2016-02-01

    Full Text Available Retail enterprises are organizations that sell goods in small quantities to consumers for personal consumption. In distributed retail enterprises, data is administered per branch. It is important for retail enterprises to make use of data generated within the organization to determine consumer patterns and behaviors. Large organizations find it difficult to ascertain customer preferences by merely observing transactions. This has led to quantifiable losses, such as loss of market share to competitors and targeting the wrong market. Although some enterprises have implemented classical business models to address these challenging issues, they still lack analytics-based marketing programs to gain a competitive advantage to deal with likely catastrophic events. This research develops an analytical business (ARANN) model for distributed retail enterprises in a competitive market environment to address the current laxity through the best arrangement of shelf products per branch. The ARANN model is built on association rules, complemented by artificial neural networks to mutually strengthen the results of both. According to the experimental analysis, the ARANN model outperforms the state-of-the-art model, implying improved confidence in business information management within the dynamically changing world economy.
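
The association-rule half of such a model can be sketched with a minimal support/confidence miner over branch transactions. This is a generic illustration of rule mining, not the ARANN algorithm (whose neural-network component is omitted); the basket data are invented.

```python
from itertools import combinations

# Minimal association-rule sketch: mine pair rules A -> B from transactions,
# keeping itemset pairs above a support threshold and rules above a
# confidence threshold. Generic technique, not the paper's ARANN model.

def pair_rules(transactions, min_support=0.4, min_confidence=0.6):
    n = len(transactions)
    counts, rules = {}, {}
    for t in transactions:
        for item in t:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
        for pair in combinations(sorted(t), 2):
            counts[frozenset(pair)] = counts.get(frozenset(pair), 0) + 1
    for key, c in counts.items():
        if len(key) == 2 and c / n >= min_support:
            a, b = sorted(key)
            for x, y in ((a, b), (b, a)):
                conf = c / counts[frozenset([x])]   # conf(x -> y)
                if conf >= min_confidence:
                    rules[(x, y)] = conf
    return rules

baskets = [{"bread", "milk"}, {"bread", "milk", "eggs"},
           {"milk", "eggs"}, {"bread", "milk"}]
rules = pair_rules(baskets)
```

In a shelf-arrangement setting, high-confidence rules like bread -> milk suggest placing those products together per branch.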

  19. Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.

    Science.gov (United States)

    Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro

    2017-07-01

    The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes with studies showing different performance has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, used in the majority of studies generating the current PG cut-points, with a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of analytical performance of test on clinical classifications of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices. 

  20. An analytically resolved model of a potato's thermal processing using Heun functions

    Science.gov (United States)

    Vargas Toro, Agustín.

    2014-05-01

    A potato's thermal processing model is solved analytically. The model is formulated using the equation of heat diffusion in the case of a spherical potato processed in a furnace, and assuming that the potato's thermal conductivity is radially modulated. The model is solved using the method of the Laplace transform, applying the Bromwich integral and the residue theorem. The temperature profile in the potato is presented as an infinite series of Heun functions. All computations are performed with computer algebra software, specifically Maple. Using the numerical values of the thermal parameters of the potato and the geometric and thermal parameters of the processing furnace, the time evolution of the temperature in different regions inside the potato is presented analytically and graphically. The duration of thermal processing required to achieve a specified effect on the potato is computed. It is expected that the obtained analytical results will be important in food engineering and cooking engineering.
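
For constant (rather than radially modulated) conductivity, the same heating problem has the classical Fourier series solution, which makes a useful baseline for the Heun-function result. The sketch below evaluates that constant-conductivity series for a sphere whose surface is held at the furnace temperature; the radius, diffusivity, and temperatures are illustrative assumptions, not values from the paper.

```python
import math

# Classical series solution for a sphere of radius R, initially at T_i, whose
# surface is suddenly held at T_s:
#   (T - T_s)/(T_i - T_s)
#     = (2R/(pi*r)) * sum_{n>=1} ((-1)^(n+1)/n) sin(n*pi*r/R)
#                                * exp(-alpha*(n*pi/R)^2 * t)
# Constant conductivity only; the paper's radially modulated case (Heun
# functions) is NOT modeled here. Parameter values are illustrative.

def sphere_temp(r, t, R=0.035, alpha=1.4e-7, T_i=20.0, T_s=180.0, terms=200):
    """Temperature at radius r (0 < r <= R) and time t (s)."""
    s = sum((-1) ** (n + 1) / n
            * math.sin(n * math.pi * r / R)
            * math.exp(-alpha * (n * math.pi / R) ** 2 * t)
            for n in range(1, terms + 1))
    theta = (2.0 * R / (math.pi * r)) * s    # dimensionless temperature
    return T_s + (T_i - T_s) * theta

T_center = sphere_temp(r=1e-4, t=3600.0)     # near the center after 1 hour
```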

  1. Negations in syllogistic reasoning: evidence for a heuristic-analytic conflict.

    Science.gov (United States)

    Stupple, Edward J N; Waterhouse, Eleanor F

    2009-08-01

    An experiment utilizing response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). Through adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Biases based on surface features such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995) would not be expected to show variance in response latencies, but participant responses should be highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994) participants should exhibit increased response times for negated premises, but not be overly impacted upon by the surface features of the conclusion. Data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.

  2. Validation of an analytical method for the determination of the sodium content in foods

    International Nuclear Information System (INIS)

    Valverde Montero, Ericka; Silva Trejos, Paulina

    2012-01-01

    The analytical methodology for the quantitative determination of sodium in foods by flame atomic absorption spectrometry was validated. Samples of 0.5 g were digested in a microwave oven with 5.0 mL of nitric acid (HNO 3 ) at 65% by mass. The linearity range was from 0.043 mg/L to 0.70 mg/L with a correlation coefficient equal to 0.998. The detection and quantification limits were 0.025 mg/L and 0.043 mg/L, respectively, with a calibration sensitivity of 0.805 Lmg -1 and an analytical sensitivity of 44 Lmg -1 . The precision was evaluated in terms of repeatability, giving a value equal to 2.9% RSD r . The trueness was determined using three NIST ® certified standards: SRM 1846 Infant Formula with a reported sodium value of (2310 ± 130) mg/kg, SRM 8414 Bovine Muscle Powder with a reported sodium value of (0.210 ± 0.008)% and SRM 8415 Whole Egg Powder with a reported sodium value of (0.377 ± 0.034)% by mass. The bias averaged between (-0.010 and 0.009) mg/L. Among the foods selected for the study, whole milk powder, white wheat bread, fresh cheese and mozzarella cheese showed the highest sodium contents, with concentrations ranging from (106 to 452) mg Na/100 g. (author) [es
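
The figures of merit quoted in this record (slope, detection limit, quantification limit) come from a least-squares calibration line. The sketch below shows the standard form of that calculation with the common 3.3s/slope and 10s/slope rules; the calibration points are invented for illustration, chosen only to give a slope near the reported 0.805 L mg-1.

```python
import math

# Calibration figures of merit: fit y = slope*x + intercept by least squares,
# then LOD = 3.3*s_res/slope and LOQ = 10*s_res/slope (ICH-style rules, a
# common convention; the paper may have used a different estimator).
# Calibration data below are hypothetical.

def calibration_merits(conc, signal):
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(conc, signal)]
    s_res = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return slope, 3.3 * s_res / slope, 10 * s_res / slope  # slope, LOD, LOQ

conc = [0.05, 0.1, 0.2, 0.4, 0.7]              # mg/L (hypothetical standards)
signal = [0.041, 0.082, 0.160, 0.322, 0.563]   # absorbance (hypothetical)
slope, lod, loq = calibration_merits(conc, signal)
```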

  3. Development and validation of an analytical method for quality control and the stability of the eyedrops 10 % Phenylephrine and the 1 % Tropicamide

    International Nuclear Information System (INIS)

    Garcia Penna, Caridad Margarita; Botet Garcia, Martha; Troche Concepcion, Yenilen

    2011-01-01

    An analytical high-performance liquid chromatography method applicable to quality control and stability studies of 10 % phenylephrine plus 1 % tropicamide eyedrops was developed and validated. To quantify both active principles simultaneously in the finished product, separation was carried out on a Lichrosorb RP-18 (15 μm) (260 x 4 mm) chromatographic column, with ultraviolet detection at 253 nm, using a mobile phase composed of methanol: distilled water (1:1) with 1.1 g of sodium 1-octanesulfonate per litre and pH adjusted to 3.0 with phosphoric acid, and with quantification against a reference sample using the external standard method. The analytical method developed was linear, precise, specific and accurate in the range of study concentrations established for the quality control and stability study of the finished product, since no analytical methods had previously been designed for these aims

  4. A new DG nanoscale TFET based on MOSFETs by using source gate electrode: 2D simulation and an analytical potential model

    Science.gov (United States)

    Ramezani, Zeinab; Orouji, Ali A.

    2017-08-01

    This paper suggests and investigates a double-gate (DG) MOSFET that emulates a tunnel field effect transistor (M-TFET). We combine this novel concept in a double-gate MOSFET, which behaves as a tunneling field effect transistor through work function engineering. In the proposed structure, in addition to the main gate, we utilize another gate over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We examine the impact of varying the source gate work function and source doping on the device parameters. The simulation results indicate that the M-TFET is well suited to switching applications. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, we use the SILVACO ATLAS device simulator; the analytical results show good agreement with it.

  5. Analytic models for the evolution of semilocal string networks

    International Nuclear Information System (INIS)

    Nunes, A. S.; Martins, C. J. A. P.; Avgoustidis, A.; Urrestilla, J.

    2011-01-01

    We revisit previously developed analytic models for defect evolution and adapt them appropriately for the study of semilocal string networks. We thus confirm the expectation (based on numerical simulations) that linear scaling evolution is the attractor solution for a broad range of model parameters. We discuss in detail the evolution of individual semilocal segments, focusing on the phenomenology of segment growth, and also provide a preliminary comparison with existing numerical simulations.

  6. SU-E-T-378: Evaluation of An Analytical Model for the Inter-Seed Attenuation Effect in 103-Pd Multi-Seed Implant Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Safigholi, H; Soliman, A; Song, W [Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, U of T, Toronto, Ontario (Canada); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States); Han, D [Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, U of T, Toronto, Ontario (Canada); University of California, San Diego, La Jolla, CA (United States)

    2015-06-15

    Purpose: Brachytherapy treatment planning systems based on the TG-43 protocol calculate the dose in water and neglect the heterogeneity effect of seeds in multi-seed implant brachytherapy. In this research, the accuracy of a novel analytical model that we propose for the inter-seed attenuation (ISA) effect for the 103-Pd seed model is evaluated. Methods: In the analytical model, dose perturbation due to the ISA effect for each seed in an LDR multi-seed implant for 103-Pd is calculated by assuming that the seed of interest is active and the other surrounding seeds are inactive. The cumulative dosimetric effect of all seeds is then summed using the superposition principle. The model is based on precomputed Monte Carlo (MC) simulated 3D kernels of the dose perturbations caused by the ISA effect. The cumulative ISA effect due to multiple surrounding seeds is obtained by a simple multiplication of the individual ISA effect of each seed, which is determined by the distance from the seed of interest. This novel algorithm is then compared with full MC water-based simulations (FMCW). Results: The results show that the dose perturbation model we propose is in excellent agreement with the FMCW values for a case with three seeds separated by 1 cm. The average difference between the model and the FMCW simulations was less than 8%±2%. Conclusion: Using the proposed novel analytical ISA effect model, one could expedite the corrections due to the ISA dose perturbation effects during permanent seed 103-Pd brachytherapy planning with minimal increase in time, since the model is based on multiplications and superposition. This model can be applied, in principle, to any other brachytherapy seeds. Further work is necessary to validate this model on more complicated geometries as well.
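
The multiplication-and-superposition step the abstract describes can be sketched directly: each surrounding seed contributes a distance-dependent perturbation factor from a precomputed kernel, and the cumulative correction is their product. The kernel values below are invented placeholders, not the paper's 103-Pd Monte Carlo data.

```python
# Sketch of the cumulative inter-seed attenuation (ISA) correction: look up a
# single-seed dose-perturbation factor by seed separation and multiply the
# factors for all surrounding seeds. Kernel values are hypothetical.

# Hypothetical single-seed perturbation factors vs. seed separation (cm).
ISA_KERNEL = {0.5: 0.93, 1.0: 0.96, 1.5: 0.98, 2.0: 0.99}

def isa_factor(distance_cm):
    """Nearest tabulated factor (a real implementation would interpolate
    the full 3D kernel)."""
    nearest = min(ISA_KERNEL, key=lambda d: abs(d - distance_cm))
    return ISA_KERNEL[nearest]

def cumulative_isa(distances_cm):
    total = 1.0
    for d in distances_cm:
        total *= isa_factor(d)      # superposition as a simple product
    return total

# Seed of interest with two neighbours at 1.0 cm and one at 2.0 cm:
correction = cumulative_isa([1.0, 1.0, 2.0])
```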

  7. Analytical model for shape anisotropy in thin-film nanostructured arrays: Interaction effects

    International Nuclear Information System (INIS)

    Alvarez-Sanchez, R.; Costa-Kraemer, J.L.; Briones, F.

    2006-01-01

    When reducing the size of array elements and interelement separations to the nanoscale, long-range magnetostatic interactions become important. A methodology that extends the study of conventional single-element magnetostatics is presented, adding the effect of stacking nanoelements into close proximity in arrays and the consequent interaction effects. This would be very time consuming to model by micromagnetic simulations that are also very vulnerable to artifacts due to cell or boundary condition selection. The proposed method considers an analytical expression valid for short interelement separations and not very costly to evaluate by computational means. This approach allows the quantitative study of shape anisotropy in non-square-shaped arrays. It is also shown how it can be used to find anisotropy compensation conditions, where an anisotropy due to a magnetic element shape can be compensated by the shape anisotropy due to the array. The obtained results can be used to establish a criterion for the minimum number of elements to be considered for a micromagnetic simulation of an array to be realistic depending on the element size and separation

  8. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982: Development of a Conservative Model Validation Approach for Reliable Analysis. ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  9. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3- to 7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
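The uncertainty-budget arithmetic described above, where the total variation is the quadrature sum of independent pre-analytical and analytical components, can be sketched as follows. The CV values are illustrative, not the study's data:

```python
import math

def pre_analytical_cv(cv_total, cv_analytical):
    """Back out the pre-analytical CV assuming independent variance
    components: CV_total^2 = CV_pre^2 + CV_analytical^2."""
    if cv_total < cv_analytical:
        raise ValueError("total CV cannot be smaller than the analytical CV")
    return math.sqrt(cv_total ** 2 - cv_analytical ** 2)

# Illustrative values: 30% total variation, 10% analytical variation
cv_pre = pre_analytical_cv(cv_total=0.30, cv_analytical=0.10)
print(f"pre-analytical CV ~ {cv_pre:.1%}")
```

With these illustrative numbers the pre-analytical component (about 28%) dominates the budget, mirroring the study's qualitative conclusion.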

  10. Enhanced fluorescence sensitivity by coupling yttrium-analyte complexes and three-way fast high-performance liquid chromatography data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Alcaraz, Mirta R.; Culzoni, María J., E-mail: mculzoni@fbcb.unl.edu.ar; Goicoechea, Héctor C., E-mail: hgoico@fbcb.unl.edu.ar

    2016-01-01

The present study reports a sensitive chromatographic method for the analysis of seven fluoroquinolones (FQs) in environmental water samples, coupling yttrium-analyte complexation and three-way chromatographic data modeling. The method, based on the use of HPLC-FSFD, does not require complex or tedious sample treatments or enrichment processes before the analysis, owing to the significant fluorescence increments of the analytes in the presence of Y{sup 3+}. The enhancement achieved for the FQ signals after Y{sup 3+} addition reaches 103- to 1743-fold. Prediction results corresponding to the application of MCR-ALS to the validation set showed relative error of prediction (REP%) values below 10% in all cases. A recovery study that includes the simultaneous determination of the seven FQs in three different environmental aqueous matrices was conducted; the recovery studies confirm the efficiency and the accuracy of the proposed method. The calculated LOD values are on the order of parts per trillion (below 0.5 ng mL{sup −1} for all the FQs, except enoxacin). It is noteworthy that the method proposed herein, which does not include pre-concentration steps, reaches LOD values of the same order of magnitude as those achieved by more sophisticated methods based on SPE and UHPLC-MS/MS. - Highlights: • Highly sensitive method for the analysis of seven fluoroquinolones. • Coupling of yttrium-analyte complexation and three-way modeling. • Complex or tedious sample treatments or enrichment processes are not required. • Accurate quantitation of fluoroquinolones in real river water samples.

  11. Numerical simulation and experimental validation of the three-dimensional flow field and relative analyte concentration distribution in an atmospheric pressure ion source.

    Science.gov (United States)

    Poehler, Thorsten; Kunte, Robert; Hoenen, Herwart; Jeschke, Peter; Wissdorf, Walter; Brockmann, Klaus J; Benter, Thorsten

    2011-11-01

In this study, the validation and analysis of steady-state numerical simulations of the gas flows within a multi-purpose ion source (MPIS) are presented. The experimental results were obtained with particle image velocimetry (PIV) measurements in a non-scaled MPIS. Two-dimensional time-averaged velocity and turbulent kinetic energy distributions are presented for two dry gas volume flow rates. The numerical results of the validation simulations are in very good agreement with the experimental data. All significant flow features have been correctly predicted within the accuracy of the experiments. For technical reasons, the experiments were conducted at room temperature. Thus, numerical simulations of ionization conditions at two operating points of the MPIS are also presented. It is clearly shown that the dry gas volume flow rate has the most significant impact on the overall flow pattern within the APLI source; the (larger) nebulization gas flow is far less critical. In addition to the approximate solution of the Reynolds-averaged Navier-Stokes equations, a transport equation for the relative analyte concentration has been solved. The results yield information on the three-dimensional analyte distribution within the source. It becomes evident that for ion transport into the MS ion transfer capillary, electromagnetic forces are at least as important as fluid dynamic forces. However, only the fluid dynamics determines the three-dimensional distribution of the analyte gas. Thus, local flow phenomena in close proximity to the spray shield strongly affect the ionization efficiency.

  12. Evaluation of one dimensional analytical models for vegetation canopies

    Science.gov (United States)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  13. IBM SPSS modeler essentials effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  14. ANALYTICAL AND SIMULATION PLANNING MODEL OF URBAN PASSENGER TRANSPORT

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

Full Text Available The article describes the structure of analytical and simulation models for making informed decisions in the planning of urban passenger transport. A UML diagram describing the relationships among the classes of the proposed model was designed. The main agents of the model, developed in the AnyLogic simulation environment, are described. A user interface integrated with a GIS map was designed. Simulation results are also provided, supporting conclusions about the model's operability and the possibility of its use in solving planning problems of urban passenger transport.

  15. Analytic Models for Sunlight Charging of a Rapidly Spinning Satellite

    National Research Council Canada - National Science Library

    Tautz, Maurice

    2003-01-01

    ... photoelectrons can be blocked by local potential barriers. In this report, we discuss two analytic models for sunlight charging of a rapidly spinning spherical satellite, both of which are based on blocked photoelectron currents...

  16. Analytic Approximations for Soliton Solutions of Short-Wave Models for Camassa-Holm and Degasperis-Procesi Equations

    International Nuclear Information System (INIS)

    Yang Pei; Li Zhibin; Chen Yong

    2010-01-01

In this paper, the short-wave model equations are investigated, which are associated with the Camassa-Holm (CH) and Degasperis-Procesi (DP) shallow-water wave equations. Firstly, by means of a transformation of the independent variables and a travelling wave transformation, the partial differential equation is reduced to an ordinary differential equation. Secondly, the equation is solved by the homotopy analysis method. Lastly, by transforming back to the original independent variables, the solution of the original partial differential equation is obtained. The two types of solutions of the short-wave models are obtained in parametric form: a one-cusp soliton for the CH equation and a one-loop soliton for the DP equation. The approximate analytic solutions, expressed by a series of exponential functions, agree well with the exact solutions. This demonstrates the validity and great potential of the homotopy analysis method for complicated nonlinear solitary wave problems. (general)

  17. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... useful directions in which the model could be improved....
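The one-step-prediction idea behind this kind of validation can be illustrated on a scalar linear-Gaussian state space model, where the Kalman filter yields the one-step predictions in closed form: if the model is correct, the standardized residuals should be white noise with unit variance. A minimal sketch with made-up parameters (not Template Model Builder or the Laplace approximation):

```python
import random, math, statistics

random.seed(1)

# Simulate x_t = a x_{t-1} + w_t, y_t = x_t + v_t
a, q, r = 0.8, 0.5, 1.0
x, ys = 0.0, []
for _ in range(5000):
    x = a * x + random.gauss(0.0, math.sqrt(q))
    ys.append(x + random.gauss(0.0, math.sqrt(r)))

# Kalman filter: collect standardized one-step prediction residuals
m, p, resid = 0.0, 1.0, []
for y in ys:
    m_pred, p_pred = a * m, a * a * p + q      # predict state
    s = p_pred + r                             # innovation variance
    resid.append((y - m_pred) / math.sqrt(s))  # standardized residual
    k = p_pred / s                             # Kalman gain, update
    m, p = m_pred + k * (y - m_pred), (1 - k) * p_pred

print(round(statistics.mean(resid), 3), round(statistics.stdev(resid), 3))
```

For the non-linear, non-Gaussian models treated in the paper, the same residuals would come from the Laplace-approximated predictive distributions rather than a Kalman recursion.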

  18. Development and validation of an Eulerian model towards the simulation of fuel injection in internal combustion engines; Developpement et validation d'un modele eulerien en vue de la simulation des jets de carburants dans les moteurs a combustion interne

    Energy Technology Data Exchange (ETDEWEB)

    Truchot, B.

    2005-12-15

The objective of this work is to develop an Eulerian two-phase model to improve the prediction of fuel injection in internal combustion engines, particularly in the dense liquid zone close to the nozzle. Lagrangian models, usually used in engine simulations, are based on the assumption of dispersed two-phase flows with a low liquid volume fraction, which is not fulfilled in the case of direct-injection engine technology. Different Eulerian approaches are available in the literature. The physical phenomena that occur near the nozzle and the characteristics of each model lead to the choice of a two-fluid, two-pressure model. Several open terms appear in the equations of the model: exchanges between the two phases and turbulent correlations. Closures of the exchange terms are based on the spherical-droplet hypothesis, while a RANS approach is adopted to close the turbulent correlations. This model has been integrated in the IFP CFD code, IFP-C3D. Several numerical tests and analytical validations (for single- and two-phase flows) were then carried out in order to check the correct implementation of the equations and the predictivity of the model and closures. Modifications in the turbulence model of the gas required validations in both the gas phase (flow behind a sudden enlargement) and the liquid phase (pure liquid injection). A two-phase mixing layer was then used to validate the whole model. Finally, injection tests were performed under realistic conditions (similar to those encountered in automotive engines) in order to check the feasibility of engine computations using the developed Eulerian approach. These tests also made it possible to check the compatibility of this approach with the specificities of engine simulations (especially mesh movement). (author)

  19. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical...... of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied....... determination of the fatigue life is included. Furthermore, the results obtained in studies of the various parameters that have an influence on the fatigue life, are given. A very good agreement between experimental and analytical results is obtained, when the crack closure model is used in determination...

  1. An analytical model for enantioseparation process in capillary electrophoresis

    Science.gov (United States)

    Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.

    2017-12-01

An analytical model to explain the mobilities of an enantiomer binary mixture in capillary electrophoresis experiments is proposed. The model consists of a set of kinetic equations describing the evolution of the populations of molecules involved in the enantioseparation process in capillary electrophoresis (CE). These equations take into account the asymmetric driven migration of the enantiomer molecules, the chiral selector, and the temporary diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its respective distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of chiral selector concentration, [ C ] , can be measured. The behaviour of Δμ versus [ C ] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with those obtained by the model. All these results are also corroborated by kinetic Monte Carlo simulations.
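For reference, the Wren-Rowe behaviour that Δμ is compared against follows from fast exchange between the free enantiomer and its selector complex. A small sketch with hypothetical mobilities and binding constants (not values fitted to the chlorpheniramine/β-CD system):

```python
def apparent_mobility(mu_free, mu_complex, K, C):
    """Wren-Rowe fast-exchange average of free (mu_free) and complexed
    (mu_complex) mobilities, with binding constant K and selector
    concentration C."""
    return (mu_free + mu_complex * K * C) / (1.0 + K * C)

def delta_mu(mu_free, mu_complex, K1, K2, C):
    """Apparent mobility difference between the two enantiomers."""
    return abs(apparent_mobility(mu_free, mu_complex, K1, C)
               - apparent_mobility(mu_free, mu_complex, K2, C))

# Hypothetical parameters: mobilities in m^2/(V s), K in 1/M, C in M
mu_f, mu_c = 2.0e-8, 0.5e-8
K1, K2 = 800.0, 600.0

# Classic Wren-Rowe result: Delta mu is maximal at C = 1/sqrt(K1*K2)
C_opt = (K1 * K2) ** -0.5
print(f"optimal selector concentration ~ {C_opt * 1e3:.2f} mM")
```

The separation window vanishes at C = 0 and at large C, with a maximum at an intermediate selector concentration, which is exactly the Δμ-versus-[C] behaviour the kinetic model is tested against.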

  2. Coulombian Model for 3D Analytical Calculation of the Torque Exerted on Cuboidal Permanent Magnets with Arbitrarily Oriented Polarizations

    OpenAIRE

    Allag , Hicham; Yonnet , Jean-Paul; Latreche , Mohamed E. H.; Bouchekara , Houssem

    2011-01-01

    International audience; The paper proposes improved analytical expressions of the torque on cuboidal permanent magnets. Expressions are valid for any relative magnet position and for any polarization direction. The analytical calculation is made by replacing polarizations by distributions of magnetic charges on the magnet poles (Coulombian approach). The torque exerted on the second magnet is calculated by Lorentz force formulas for any arbitrary position. The three components of the torque a...

  3. Analytical thermal modelling of multilayered active embedded chips into high density electronic board

    Directory of Open Access Journals (Sweden)

    Monier-Vinard Eric

    2013-01-01

Full Text Available The recent Printed Wiring Board embedding technology is an attractive packaging alternative that allows a very high degree of miniaturization by stacking multiple layers of embedded chips. This disruptive technology will further increase thermal management challenges by concentrating heat dissipation at the heart of the organic substrate structure. In order to allow the electronic designer to analyze, early in the design, the limits of the power dissipation, depending on the embedded chip location inside the board, as well as the thermal interactions with other buried chips or surface-mounted electronic components, an analytical thermal modelling approach was established. The presented work describes the comparison of the analytical model results with numerical models of various embedded chip configurations. The thermal behaviour predictions of the analytical model, found to be within ±10% relative error, demonstrate its relevance for modelling high-density electronic boards. Moreover, the approach offers a practical solution for studying the potential gain of conducting part of the heat flow from the components towards a set of localized cooled board pads.
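A much-reduced version of the underlying idea, steady-state conduction from a buried die through the board stack treated as series thermal resistances, can be sketched as follows. This is 1-D only, with hypothetical layer data; the paper's analytical model resolves the full multilayer board in 3-D:

```python
def junction_rise(power_w, layers):
    """1-D series conduction through the layers above an embedded die.
    Each layer is (thickness_m, conductivity_W_mK, area_m2); returns
    the temperature rise (K) and total resistance (K/W)."""
    r_total = sum(t / (k * a) for t, k, a in layers)
    return power_w * r_total, r_total

# Hypothetical 5 mm x 5 mm heat path from die to a cooled board pad
layers = [(50e-6, 2.0, 25e-6),     # die-attach adhesive
          (200e-6, 0.3, 25e-6),    # FR-4 dielectric (dominant term)
          (35e-6, 390.0, 25e-6)]   # copper layer
dT, r = junction_rise(0.5, layers)
print(f"{r:.1f} K/W, rise {dT:.1f} K at 0.5 W")
```

Even this crude stack shows why the low-conductivity dielectric dominates the budget, which is the effect the localized cooled pads are meant to bypass.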

  4. Development and validation of HPLC analytical method for quantitative determination of metronidazole in human plasma

    International Nuclear Information System (INIS)

    Safdar, K.A.; Shyum, S.B.; Usman, S.

    2016-01-01

The objective of the present study was to develop a simple, rapid and sensitive reversed-phase high performance liquid chromatographic (RP-HPLC) analytical method with a UV detection system for the quantitative determination of metronidazole in human plasma. The chromatographic separation was performed using a C18 RP column (250 mm × 4.6 mm, 5 μm) as stationary phase and 0.01 M potassium dihydrogen phosphate buffered at pH 3.0 and acetonitrile (83:17, v/v) as mobile phase at a flow rate of 1.0 ml/min. The UV detection was carried out at 320 nm. The method was validated as per the US FDA guideline for bioanalytical method validation and was found to be selective, without interferences from mobile phase components, impurities, or the biological matrix. The method was found to be linear over the concentration range of 0.2812 μg/ml to 18.0 μg/ml (r2 = 0.9987) with adequate levels of accuracy and precision. The samples were found to be stable under various recommended laboratory and storage conditions. Therefore, the method can be used with an adequate level of confidence and assurance for bioavailability, bioequivalence and other pharmacokinetic studies of metronidazole in humans. (author)
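The linearity figure quoted above (r2 = 0.9987 over 0.2812-18.0 μg/ml) is the outcome of an ordinary least-squares calibration fit; a sketch with hypothetical peak-area data and the usual ±15% back-calculation acceptance criterion from bioanalytical validation guidance:

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares slope, intercept and r^2."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical calibration: peak area vs. spiked concentration (ug/mL)
conc = [0.2812, 0.5625, 1.125, 2.25, 4.5, 9.0, 18.0]
area = [14.3, 28.9, 57.1, 114.8, 229.0, 461.2, 915.5]
slope, intercept, r2 = linear_fit(conc, area)

# Back-calculated standards should fall within +/-15% of nominal
back = [(a - intercept) / slope for a in area]
ok = all(abs(b / c - 1.0) <= 0.15 for b, c in zip(back, conc))
print(round(r2, 4), ok)
```

The same fit supplies the slope used later for back-calculating unknown plasma concentrations from measured peak areas.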

  5. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real-time using the coupled code system THOR/S3R. This code system models all the fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method and cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  6. JOINT ANALYSIS OF X-RAY AND SUNYAEV-ZEL'DOVICH OBSERVATIONS OF GALAXY CLUSTERS USING AN ANALYTIC MODEL OF THE INTRACLUSTER MEDIUM

    International Nuclear Information System (INIS)

    Hasler, Nicole; Bulbul, Esra; Bonamente, Massimiliano; Landry, David; Carlstrom, John E.; Culverhouse, Thomas L.; Gralla, Megan; Greer, Christopher; Hennessy, Ryan; Leitch, Erik M.; Mantz, Adam; Marrone, Daniel P.; Plagge, Thomas; Hawkins, David; Lamb, James W.; Muchovej, Stephen; Joy, Marshall; Kolodziejczak, Jeffery; Miller, Amber; Mroczkowski, Tony

    2012-01-01

    We perform a joint analysis of X-ray and Sunyaev-Zel'dovich effect data using an analytic model that describes the gas properties of galaxy clusters. The joint analysis allows the measurement of the cluster gas mass fraction profile and Hubble constant independent of cosmological parameters. Weak cosmological priors are used to calculate the overdensity radius within which the gas mass fractions are reported. Such an analysis can provide direct constraints on the evolution of the cluster gas mass fraction with redshift. We validate the model and the joint analysis on high signal-to-noise data from the Chandra X-ray Observatory and the Sunyaev-Zel'dovich Array for two clusters, A2631 and A2204.

  7. Theoretical and Experimental Study of Optical Coherence Tomography (OCT) Signals Using an Analytical Transport Model

    International Nuclear Information System (INIS)

    Vazquez Villa, A.; Delgado Atencio, J. A.; Vazquez y Montiel, S.; Cunill Rodriguez, M.; Martinez Rodriguez, A. E.; Ramos, J. Castro; Villanueva, A.

    2010-01-01

Optical coherence tomography (OCT) is a non-invasive, low-coherence interferometric technique that provides cross-sectional images of turbid media. OCT is based on the classical Michelson interferometer, where the mirror of the reference arm oscillates and the signal arm contains a biological sample. In this work, we theoretically analyzed the heterodyne optical signal adopting the so-called extended Huygens-Fresnel principle (EHFP). We used simulated OCT images with known optical properties to test an algorithm we developed to recover the scattering coefficient, and recovered it with a relative error of less than 5% for noisy signals. In addition, we applied this algorithm to OCT images from phantoms of known optical properties; in this case, the curves were indistinguishable. The validity of the analytical model as applied to our system should be examined further.

  8. Foam for Enhanced Oil Recovery : Modeling and Analytical Solutions

    NARCIS (Netherlands)

    Ashoori, E.

    2012-01-01

    Foam increases sweep in miscible- and immiscible-gas enhanced oil recovery by decreasing the mobility of gas enormously. This thesis is concerned with the simulations and analytical solutions for foam flow for the purpose of modeling foam EOR in a reservoir. For the ultimate goal of upscaling our

  9. Analytical model of SiPM time resolution and order statistics with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2015-01-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented
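The order-statistics view of SiPM timing can be illustrated with a Monte Carlo sketch: photon detection times drawn from an exponential scintillation decay, smeared by single-photon jitter, with the event timestamp taken from the first detected photon (the 1st order statistic). Crosstalk, the paper's main subject, is deliberately omitted, and all parameters are illustrative:

```python
import random, math, statistics

random.seed(7)

def event_time(n_photons, tau=40.0, sigma=0.1):
    """Timestamp of one scintillation event: n_photons detection times
    from an exponential decay (tau, ns) plus Gaussian single-photon
    jitter (sigma, ns); the event time is the earliest detection."""
    times = [random.expovariate(1.0 / tau) + random.gauss(0.0, sigma)
             for _ in range(n_photons)]
    return min(times)

# Timing resolution (FWHM of the event-time spread) improves with the
# number of detected photons, as order statistics predict.
for n in (100, 1000):
    ts = [event_time(n) for _ in range(2000)]
    print(n, "photons -> FWHM", round(statistics.stdev(ts) * 2.355, 3), "ns")
```

Adding crosstalk would inject correlated extra "photons", breaking the independent-and-identically-distributed assumption this sketch relies on, which is precisely the complication the paper's analytical model addresses.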

  11. An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality

    Science.gov (United States)

    Yin, Qingli

    2013-01-01

Taking into account the characteristics of college experimental teaching, through investigation and analysis, evaluation indices and an Analytical Hierarchy Process (AHP) model of experimental teaching quality have been established following the analytical hierarchy process method, and the evaluation indices have been given reasonable weights. An…
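The AHP machinery behind such a model, priority weights from a pairwise-comparison matrix plus a consistency check, can be sketched as follows. The 3-criterion matrix is hypothetical, not the paper's evaluation indices:

```python
def ahp_weights(M, iters=200):
    """Priority weights from a pairwise-comparison matrix via power
    iteration on the principal eigenvector (Saaty's AHP), plus the
    consistency ratio CR = CI / RI."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n               # lambda_max
    ci = (lam - n) / (n - 1)                        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty random index
    return w, ci / ri

# Hypothetical criteria: lab skills vs. report quality vs. exam score
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w, cr = ahp_weights(M)
print([round(x, 3) for x in w], "CR =", round(cr, 3))
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments; the derived weights then feed the weighted evaluation score.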

  12. Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection

    Science.gov (United States)

    Cross, Robert W.; Boisen, Matthew L.; Millett, Molly M.; Nelson, Diana S.; Oottamasathien, Darin; Hartnett, Jessica N.; Jones, Abigal B.; Goba, Augustine; Momoh, Mambu; Fullah, Mohamed; Bornholdt, Zachary A.; Fusco, Marnie L.; Abelson, Dafna M.; Oda, Shunichiro; Brown, Bethany L.; Pham, Ha; Rowland, Megan M.; Agans, Krystle N.; Geisbert, Joan B.; Heinrich, Megan L.; Kulakosky, Peter C.; Shaffer, Jeffrey G.; Schieffelin, John S.; Kargbo, Brima; Gbetuwa, Momoh; Gevao, Sahr M.; Wilson, Russell B.; Saphire, Erica Ollmann; Pitts, Kelly R.; Khan, Sheik Humarr; Grant, Donald S.; Geisbert, Thomas W.; Branco, Luis M.; Garry, Robert F.

    2016-01-01

    Background. Ebola virus disease (EVD) is a severe viral illness caused by Ebola virus (EBOV). The 2013–2016 EVD outbreak in West Africa is the largest recorded, with >11 000 deaths. Development of the ReEBOV Antigen Rapid Test (ReEBOV RDT) was expedited to provide a point-of-care test for suspected EVD cases. Methods. Recombinant EBOV viral protein 40 antigen was used to derive polyclonal antibodies for RDT and enzyme-linked immunosorbent assay development. ReEBOV RDT limits of detection (LOD), specificity, and interference were analytically validated on the basis of Food and Drug Administration (FDA) guidance. Results. The ReEBOV RDT specificity estimate was 95% for donor serum panels and 97% for donor whole-blood specimens. The RDT demonstrated sensitivity to 3 species of Ebolavirus (Zaire ebolavirus, Sudan ebolavirus, and Bundibugyo ebolavirus) associated with human disease, with no cross-reactivity by pathogens associated with non-EBOV febrile illness, including malaria parasites. Interference testing exhibited no reactivity by medications in common use. The LOD for antigen was 4.7 ng/test in serum and 9.4 ng/test in whole blood. Quantitative reverse transcription–polymerase chain reaction testing of nonhuman primate samples determined the range to be equivalent to 3.0 × 105–9.0 × 108 genomes/mL. Conclusions. The analytical validation presented here contributed to the ReEBOV RDT being the first antigen-based assay to receive FDA and World Health Organization emergency use authorization for this EVD outbreak, in February 2015. PMID:27587634

  13. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    Science.gov (United States)

    Trombetti, Tomaso

    density of the table input and output were estimated using the Bartlett's spectral estimation method. The experimentally-estimated table acceleration transfer functions obtained for different working conditions are correlated with their analytical counterparts. As a result of this comprehensive correlation study, a thorough understanding of the shaking table dynamics and its sensitivities to control and payload parameters is obtained. Moreover, the correlation study leads to a calibrated analytical model of the shaking table of high predictive ability. It is concluded that, in its present conditions, the Rice shaking table is able to reproduce, with a high degree of accuracy, model earthquake accelerations time histories in the frequency bandwidth from 0 to 75 Hz. Furthermore, the exhaustive analysis performed indicates that the table transfer function is not significantly affected by the presence of a large (in terms of weight) payload with a fundamental frequency up to 20 Hz. Payloads having a higher fundamental frequency do affect significantly the shaking table performance and require a modification of the table control gain setting that can be easily obtained using the predictive analytical model of the shaking table. The complete description of a structural dynamic experiment performed using the Rice shaking table facility is also reported herein. The object of this experimentation was twofold: (1) to verify the testing capability of the shaking table and, (2) to experimentally validate a simplified theory developed by the author, which predicts the maximum rotational response developed by seismic isolated building structures characterized by non-coincident centers of mass and rigidity, when subjected to strong earthquake ground motions.

  14. Validating clustering of molecular dynamics simulations using polymer models

    Directory of Open Access Journals (Sweden)

    Phillips Joshua L

    2011-11-01

    Abstract Background Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine whether the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results We develop a novel series of models using basic polymer theory that have intuitive, clearly defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well suited for clustering polymer structures, to our models and to MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models exhibiting well-behaved dynamics, including meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our

  15. An analytical model of flagellate hydrodynamics

    International Nuclear Information System (INIS)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left–right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming trajectory. We find that the longitudinal flagellum is responsible for the average translational motion whereas the transversal flagellum governs the rotational motion. Finally, we show that the transversal flagellum can lead to strong feeding currents to localized capture sites on the cell surface. (paper)

  16. Core monitoring with analytical model adaption

    International Nuclear Information System (INIS)

    Linford, R.B.; Martin, C.L.; Parkos, G.R.; Rahnema, F.; Williams, R.D.

    1992-01-01

    The monitoring of BWR cores has evolved rapidly due to more capable computer systems, improved analytical models and new types of core instrumentation. Coupling of first-principles diffusion theory models, such as those applied to design, to the core instrumentation has been achieved by GE with an adaptive methodology in the 3D Monicore system. The adaptive methods allow definition of 'leakage parameters' which are incorporated directly into the diffusion models to enhance monitoring accuracy and predictions. These improved models for core monitoring allow substitution of traversing in-core probe (TIP) and local power range monitor (LPRM) readings with calculations, so that monitoring continues with no loss of accuracy or reduction of thermal limits. Experience in small BWR cores has shown that with one out of three TIP machines failed there was no operating limitation or impact from the substitute calculations. Other capabilities exist in 3D Monicore to align TIPs more accurately and accommodate other types of system measurements or anomalies. 3D Monicore also includes an accurate predictive capability which uses the adaptive results from previous monitoring calculations and is used to plan and optimize reactor maneuvers/operations to improve operating efficiency and reduce support requirements.

  17. A factor analytic investigation of the Tripartite model of affect in a clinical sample of young Australians

    Directory of Open Access Journals (Sweden)

    Cosgrave Elizabeth M

    2008-09-01

    Abstract Background The Mood and Anxiety Symptom Questionnaire (MASQ) was designed to specifically measure the Tripartite model of affect and is proposed to offer a delineation between the core components of anxiety and depression. Factor analytic data from adult clinical samples have shown mixed results; however, no studies employing confirmatory factor analysis (CFA) have supported the predicted structure of distinct Depression, Anxiety and General Distress factors. The Tripartite model has not been validated in a clinical sample of older adolescents and young adults. The aim of the present study was to examine the validity of the Tripartite model using scale-level data from the MASQ and correlational and confirmatory factor analysis techniques. Methods 137 young people (M = 17.78 years, SD = 2.63) referred to a specialist mental health service for adolescents and young adults completed the MASQ and a diagnostic interview. Results All MASQ scales were highly inter-correlated, with the lowest correlation between the depression- and anxiety-specific scales (r = .59). This pattern of correlations was observed for all participants rating for an Axis-I disorder but not for participants without a current disorder (r = .18). Confirmatory factor analyses were conducted to evaluate the model fit of a number of solutions. The predicted Tripartite structure was not supported. A 2-factor model demonstrated superior model fit and parsimony compared to 1- or 3-factor models. These broad factors represented Depression and Anxiety and were highly correlated (r = .88). Conclusion The present data lend support to the notion that the Tripartite model does not adequately explain the relationship between anxiety and depression in all clinical populations. Indeed, in the present study this model was found to be inappropriate for a help-seeking community sample of older adolescents and young adults.

  18. A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications

    Science.gov (United States)

    Robin, J.; Tanter, M.; Pernot, M.

    2017-09-01

    Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device's gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity from multiple scattering in a scattering medium. It was validated numerically and experimentally using a 2D-TRC, and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were confirmed by numerical simulations.

  19. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
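    The manufactured-solutions approach to code verification recommended above can be illustrated with a minimal sketch: choose an exact solution, derive its forcing term analytically, and confirm that the discretization error shrinks at the scheme's theoretical rate. The 1D Poisson solver below is an illustrative example of ours, not taken from the paper.

```python
import math

def solve_poisson(n):
    """Solve -u'' = f on [0,1], u(0)=u(1)=0, with central differences.
    Manufactured solution: u(x) = sin(pi x)  =>  f(x) = pi^2 sin(pi x)."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = h^2 f
    a = [-1.0] * (n - 1)              # sub-diagonal
    b = [2.0] * (n - 1)               # diagonal
    c = [-1.0] * (n - 1)              # super-diagonal
    d = [f[i + 1] * h * h for i in range(n - 1)]
    for i in range(1, n - 1):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)               # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    # max-norm error against the manufactured solution
    return max(abs(ui - math.sin(math.pi * x[i + 1])) for i, ui in enumerate(u))

e1, e2 = solve_poisson(16), solve_poisson(32)
print(round(e1 / e2, 1))  # ~4: the error falls as h^2 for a second-order scheme
```

    Halving the grid spacing cuts the error by a factor of four; a ratio far from four would flag a coding error, which is precisely the diagnostic power of a manufactured solution.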

  20. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  1. Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.

    Science.gov (United States)

    Dasbach, Erik J; Elbasha, Elamin H

    2017-07-01

    Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.

  2. Magnetic saturation in semi-analytical harmonic modeling for electric machine analysis

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.

    2016-01-01

    A semi-analytical method based on the harmonic modeling (HM) technique is presented for the analysis of the magneto-static field distribution in the slotted structure of rotating electric machines. In contrast to the existing literature, the proposed model does not require the assumption of infinite

  3. A comparison of galaxy group luminosity functions from semi-analytic models

    NARCIS (Netherlands)

    Snaith, Owain N.; Gibson, Brad K.; Brook, Chris B.; Courty, Stéphanie; Sánchez-Blázquez, Patricia; Kawata, Daisuke; Knebe, Alexander; Sales, Laura V.

    Semi-analytic models (SAMs) are currently one of the primary tools with which we model statistically significant ensembles of galaxies. The underlying physical prescriptions inherent to each SAM are, in many cases, different from one another. Several SAMs have been applied to the dark matter merger

  4. An analytical model for the performance of geographical multi-hop broadcast

    NARCIS (Netherlands)

    Klein Wolterink, W.; Heijenk, G.; Berg, J.L. van den

    2012-01-01

    In this paper we present an analytical model accurately describing the behaviour of a multi-hop broadcast protocol. Our model covers the scenario in which a message is forwarded over a straight road and inter-node distances are distributed exponentially. Intermediate forwarders draw a small random

  5. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Science.gov (United States)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most are scattered over disputed, sensitive areas of interest. Accurate methods for obtaining reef bathymetry are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, via the relationship between spectral information and water depth. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, the semi-analytical optimization model derived from the theoretical interpretation models is studied, using a genetic algorithm to optimize the model; an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The model thus solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  6. Experimental, numerical, and analytical studies on the seismic response of steel-plate concrete (SC) composite shear walls

    Science.gov (United States)

    Epackachi, Siamak

    The seismic performance of rectangular steel-plate concrete (SC) composite shear walls is assessed for application to buildings and mission-critical infrastructure. The SC walls considered in this study were composed of two steel faceplates and infill concrete. The steel faceplates were connected together and to the infill concrete using tie rods and headed studs, respectively. The research focused on the in-plane behavior of flexure- and flexure-shear-critical SC walls. An experimental program was executed in the NEES laboratory at the University at Buffalo and was followed by numerical and analytical studies. In the experimental program, four large-size specimens were tested under displacement-controlled cyclic loading. The design variables considered in the testing program included wall thickness, reinforcement ratio, and slenderness ratio. The aspect ratio (height-to-length) of the four walls was 1.0. Each SC wall was installed on top of a re-usable foundation block. A bolted baseplate to RC foundation connection was used for all four walls. The walls were identified to be flexure- and flexure-shear critical. The progression of damage in the four walls was identical, namely, cracking and crushing of the infill concrete at the toes of the walls, outward buckling and yielding of the steel faceplates near the base of the wall, and tearing of the faceplates at their junctions with the baseplate. A robust finite element model was developed in LS-DYNA for nonlinear cyclic analysis of the flexure- and flexure-shear-critical SC walls. The DYNA model was validated using the results of the cyclic tests of the four SC walls. The validated and benchmarked models were then used to conduct a parametric study, which investigated the effects of wall aspect ratio, reinforcement ratio, wall thickness, and uniaxial concrete compressive strength on the in-plane response of SC walls. Simplified analytical models, suitable for preliminary analysis and design of SC walls, were

  7. A splitting technique for analytical modelling of two-phase multicomponent flow in porous media

    DEFF Research Database (Denmark)

    Pires, A.P.; Bedrikovetsky, P.G.; Shapiro, Alexander

    2006-01-01

    In this paper we discuss one-dimensional models for two-phase Enhanced Oil Recovery (EOR) floods (oil displacement by gases, polymers, carbonized water, hot water, etc.). The main result presented here is the splitting of the EOR mathematical model into thermodynamical and hydrodynamical parts...... formation water for chemical flooding can be calculated from the reduced auxiliary system. Reduction of the number of equations allows the generation of new analytical models for EOR. The analytical model for displacement of oil by a polymer slug with water drive is presented....

  8. An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application

    Science.gov (United States)

    Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth

    2016-01-01

    Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
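    The pairwise-comparison mechanics behind an AHP framework like the one above can be sketched in a few lines. The criteria and the Saaty-scale judgments below are hypothetical, and the row geometric-mean method is a common approximation to the principal eigenvector rather than the only way to derive weights.

```python
import math

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for three school-quality
# criteria: teaching, leadership, facilities.  A[i][j] = importance of i over j.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]

def ahp_weights(A):
    """Approximate the principal eigenvector by the row geometric-mean method."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(A, w):
    """Saaty's CR: lambda_max from A.w, CI = (lambda_max - n)/(n - 1), RI(3) = 0.58."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    return ((lam - n) / (n - 1)) / 0.58

w = ahp_weights(A)
print([round(x, 3) for x in w])          # weights sum to 1, teaching dominates
print(round(consistency_ratio(A, w), 3))  # below 0.1 => judgments acceptably consistent
```

    A consistency ratio under 0.1 is the usual acceptance threshold for the inspectors' pairwise judgments; larger values mean the comparison matrix should be revisited.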

  9. A Literature Survey and Experimental Evaluation of the State-of-the-Art in Uplift Modeling: A Stepping Stone Toward the Development of Prescriptive Analytics.

    Science.gov (United States)

    Devriendt, Floris; Moldovan, Darie; Verbeke, Wouter

    2018-03-01

    Prescriptive analytics extends predictive analytics by estimating an outcome as a function of control variables, making it possible to establish the level of control variables required to realize a desired outcome. Uplift modeling is at the heart of prescriptive analytics and aims at estimating the net difference in an outcome resulting from a specific action or treatment that is applied. In this article, a structured and detailed literature survey on uplift modeling is provided by identifying and contrasting various groups of approaches. In addition, evaluation metrics for assessing the performance of uplift models are reviewed. An experimental evaluation on four real-world data sets provides further insight into their use. Uplift random forests are found to be consistently among the best-performing techniques in terms of the Qini and Gini measures, although considerable variability in performance across the various data sets of the experiments is observed. In addition, uplift models are frequently observed to be unstable and to display strong variability in performance across different folds in the cross-validation experimental setup. This potentially threatens their actual use for business applications. Moreover, it is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model. Specifically, existing evaluation metrics do not facilitate a comparison of uplift models and predictive models, and they evaluate performance either at an arbitrary cutoff or over the full spectrum of potential cutoffs. In conclusion, we highlight the instability of uplift models and the need for an application-oriented approach to assessing uplift models as prime topics for further research.
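    The core idea of uplift modeling, estimating the net effect of a treatment rather than the raw response, can be shown with a toy calculation. The campaign records below are fabricated purely for illustration.

```python
from collections import defaultdict

# Toy campaign records: (customer_segment, treated, responded).  Fabricated data.
records = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", True, False),
    ("A", False, True), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
    ("B", False, True), ("B", False, True), ("B", False, False), ("B", False, False),
]

def uplift_by_segment(records):
    """Uplift = P(response | treated) - P(response | control), per segment."""
    counts = defaultdict(lambda: [0, 0, 0, 0])  # [t_resp, t_n, c_resp, c_n]
    for seg, treated, resp in records:
        c = counts[seg]
        if treated:
            c[0] += resp
            c[1] += 1
        else:
            c[2] += resp
            c[3] += 1
    return {seg: c[0] / c[1] - c[2] / c[3] for seg, c in counts.items()}

print(uplift_by_segment(records))  # A: +0.25, B: -0.25
```

    Segment A gains from the treatment (uplift +0.25) while segment B is hurt by it (uplift -0.25), the "sleeping dogs" case that a pure response model, which only ranks by P(response), cannot distinguish.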

  10. Analytical model for local scour prediction around hydrokinetic turbine foundations

    Science.gov (United States)

    Musa, M.; Heisel, M.; Hill, C.; Guala, M.

    2017-12-01

    Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents from mostly tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi

  11. The Greenhouse effect within an analytic model of the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Dehnen, Heinz [Konstanz Univ. (Germany). Fachbereich Physik

    2009-01-15

    Within a simplified atmospheric model the greenhouse effect is treated by analytical methods starting from physical first principles. The influence of solar radiation, absorption cross sections of the greenhouse molecules, and cloud formation on the earth's temperature is shown and discussed explicitly through mathematical formulae, in contrast to climate simulations. The application of our analytical results to the production of 20 × 10⁹ t of CO₂ per year yields an increase of the earth's surface temperature of 2.3 × 10⁻² °C per year, in agreement with other estimations. (orig.)
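    As a point of comparison, the familiar textbook single-layer energy-balance model (not Dehnen's model; the infrared emissivity below is an assumed round number) reproduces the basic greenhouse warming analytically: the surface temperature is Ts = Te · (2/(2 − ε))^¼, where Te is the effective emission temperature.

```python
# Textbook single-layer ("leaky greenhouse") energy balance - an illustrative
# sketch, not the model of the paper above.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def effective_temperature():
    """Te from the planetary radiation balance: S0(1-albedo)/4 = sigma Te^4."""
    return ((S0 * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25

def surface_temperature(eps):
    """Ts = Te * (2/(2-eps))**0.25, eps = atmospheric infrared emissivity."""
    return effective_temperature() * (2.0 / (2.0 - eps)) ** 0.25

Te = effective_temperature()
print(round(Te, 1), round(surface_temperature(0.78), 1))  # ~255 K vs ~288 K
```

    With an assumed emissivity of 0.78 the model lifts the surface about 33 K above the 255 K emission temperature, close to the observed global mean of roughly 288 K, which is the qualitative effect the paper derives from first principles.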

  12. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohorts for one of the three tested models (area under the receiver operating characteristic curve (AUC) of the cohort differences model: 0.85), signaling that the validation leans towards transferability. Two out of three models had a lower AUC at validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
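    The cohort differences model reduces to training a membership classifier and reading off its AUC: values near 0.5 mean the cohorts are statistically alike (the validation tests reproducibility), values near 1.0 mean they differ (it tests transferability). A stdlib sketch with fabricated classifier scores, using the rank-based (Mann-Whitney) form of the AUC:

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: probability that a validation-cohort score outranks
    a training-cohort score (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Fabricated membership-classifier outputs: P(patient is in the validation cohort)
similar_train = [0.42, 0.55, 0.48, 0.51, 0.60, 0.45]
similar_valid = [0.50, 0.47, 0.58, 0.44, 0.53, 0.49]
shifted_valid = [0.81, 0.77, 0.90, 0.72, 0.85, 0.88]

print(auc(similar_valid, similar_train))  # 0.5: indistinguishable cohorts -> reproducibility
print(auc(shifted_valid, similar_train))  # 1.0: separable cohorts -> transferability
```

    In the study above, a cohort differences AUC of 0.85 placed one of the three validations firmly on the transferability side of this spectrum.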

  13. Noble gas encapsulation into carbon nanotubes: Predictions from analytical model and DFT studies

    Energy Technology Data Exchange (ETDEWEB)

    Balasubramani, Sree Ganesh; Singh, Devendra; Swathi, R. S., E-mail: swathi@iisertvm.ac.in [School of Chemistry, Indian Institute of Science Education and Research Thiruvananthapuram (IISER-TVM), Kerala 695016 (India)

    2014-11-14

    The energetics for the interaction of the noble gas atoms with carbon nanotubes (CNTs) are investigated using an analytical model and density functional theory calculations. Encapsulation of the noble gas atoms He, Ne, Ar, Kr, and Xe into CNTs of various chiralities is studied in detail using an analytical model developed earlier by Hill and co-workers. The constrained motion of the noble gas atoms along the axes of the CNTs as well as the off-axis motion are discussed. Analyses of the forces, interaction energies, and acceptance and suction energies for the encapsulation enable us to predict the optimal CNTs that can encapsulate each of the noble gas atoms. We find that CNTs of radii 2.98–4.20 Å (chiral indices (5,4), (6,4), (9,1), (6,6), and (9,3)) can efficiently encapsulate the He, Ne, Ar, Kr, and Xe atoms, respectively. Endohedral adsorption of all the noble gas atoms is preferred over exohedral adsorption on various CNTs. The results obtained using the analytical model are subsequently compared with calculations performed with dispersion-including density functional theory at the M06-2X level using a triple-zeta basis set, and good qualitative agreement is found. The analytical model is, however, computationally cheap, as its equations can be programmed numerically and results obtained in far less time.
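    The attraction-repulsion balance underlying such encapsulation models can be illustrated with a bare 12-6 Lennard-Jones pair potential. The parameters below are arbitrary placeholders, and this is a single pair interaction, not Hill's continuum integration of the potential over the tube surface.

```python
# Illustrative 12-6 Lennard-Jones sketch (hypothetical parameters):
# V(r) = 4*eps*((s/r)**12 - (s/r)**6) is minimised at r = 2**(1/6) * s,
# the balance point between short-range repulsion and dispersion attraction.
EPS = 1.0   # well depth (arbitrary units)
SIG = 3.0   # LJ length parameter (hypothetical, angstrom-scale)

def lj(r):
    """Pair energy at separation r."""
    sr6 = (SIG / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)

# Scan separations and locate the energy minimum numerically.
grid = [2.5 + 0.001 * i for i in range(2500)]
r_min = min(grid, key=lj)
print(round(r_min, 2), round(2 ** (1 / 6) * SIG, 2))  # both ~3.37
```

    The optimal-radius predictions quoted in the abstract come from the same kind of minimisation, with the pair potential replaced by its integral over all carbon atoms of the tube wall.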

  14. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
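    The variance-reduction logic behind pooling many random increments into one composite sample can be demonstrated with a small simulation. The lognormal stockpile below is fabricated; the principle is that the spread of the composite result falls roughly as 1/√n with the number of increments, so enough increments make a single analytical result representative of the true mean.

```python
import random
import statistics

random.seed(42)
# Fabricated heterogeneous stockpile: contaminant concentrations (mg/kg) drawn
# from a skewed lognormal - mostly clean soil with occasional hot spots.
stockpile = [random.lognormvariate(2.0, 1.0) for _ in range(100_000)]
true_mean = statistics.fmean(stockpile)

def composite_sample(n_increments):
    """One composite: pool n random increments, return their mean concentration."""
    return statistics.fmean(random.choices(stockpile, k=n_increments))

def spread(n_increments, repeats=200):
    """Std. dev. of the composite result across repeated sampling campaigns."""
    return statistics.pstdev(composite_sample(n_increments) for _ in range(repeats))

# More increments per composite -> results cluster tighter around the true mean.
print(round(true_mean, 1), round(spread(10), 2), round(spread(100), 2))
```

    The tenfold increase in increments cuts the campaign-to-campaign spread by roughly √10, which is the theoretical basis the strategy used to fix the necessary number of increments per composite sample.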

  15. Analytical one-dimensional frequency response and stability model for PWR nuclear power plants

    International Nuclear Information System (INIS)

    Hoeld, A.

    1975-01-01

    A dynamic model for PWR nuclear power plants is presented. The plant is assumed to consist of a one-dimensional single-channel core, a counterflow once-through steam generator (represented by two nodes corresponding to the non-boiling and boiling regions) and the necessary connecting coolant lines. The model describes analytically the frequency response of important plant parameters to perturbations in reactivity, subcooling or mass flow (at the entrances to the reactor core and/or the secondary steam generator side), and to perturbations in steam load or system pressure (on the secondary side of the steam generator). From the corresponding open-loop behaviour, the degree of stability of the underlying system can then be assessed by applying the Nyquist criterion. Based on this theoretical model, a computer code named ADYPMO has been established. From the frequency response of such a system, the corresponding transient behaviour with respect to a stepwise or any other perturbation signal can also be calculated by applying an appropriate retransformation method, e.g. the digital code FRETI. To demonstrate this procedure, a transient curve measured during the pre-operational test period at the PWR nuclear power plant KKS Stade was recalculated using the combination ADYPMO-FRETI. Good agreement between the theoretical calculations and the experimental results gives insight into the validity and efficiency of the underlying theoretical model and the applied retransformation method. (Auth.)
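    The open-loop Nyquist-type stability check used above can be sketched on a toy transfer function. The lag-plus-delay model and its parameter values below are illustrative assumptions, not the ADYPMO plant model: the point is only the mechanism of finding the phase crossover frequency and reading off the gain margin.

```python
import math

# Illustrative open-loop transfer function (assumed, not the paper's model):
# G(s) = K * exp(-s*tau_d) / (1 + s*tau), a first-order lag with transport delay.
K, TAU, TAU_D = 2.0, 1.0, 0.3   # assumed parameter values

def gain(w):
    """|G(jw)|: the pure delay has unit magnitude, the lag rolls off."""
    return K / math.sqrt(1.0 + (w * TAU) ** 2)

def phase(w):
    """Unwrapped phase of G(jw): -w*tau_d from the delay, -atan(w*tau) from the lag."""
    return -w * TAU_D - math.atan(w * TAU)

def phase_crossover(lo=1e-6, hi=1e3, tol=1e-9):
    """Bisect for the frequency where the open-loop phase first reaches -180 deg
    (the phase is monotonically decreasing for this model)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phase(mid) > -math.pi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w_pc = phase_crossover()
gm = 1.0 / gain(w_pc)   # gain margin: closed loop is stable if gm > 1
print(round(w_pc, 3), round(gm, 2))
```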

  16. Surveillance of emerging drugs of abuse in Hong Kong: validation of an analytical tool.

    Science.gov (United States)

    Tang, Magdalene H Y; Ching, C K; Tse, M L; Ng, Carol; Lee, Caroline; Chong, Y K; Wong, Watson; Mak, Tony W L

    2015-04-01

    To validate a locally developed chromatography-based method to monitor emerging drugs of abuse whilst performing regular drug testing in abusers. Cross-sectional study. Eleven regional hospitals, seven social service units, and a tertiary level clinical toxicology laboratory in Hong Kong. A total of 972 drug abusers and high-risk individuals were recruited from acute, rehabilitation, and high-risk settings between 1 November 2011 and 31 July 2013. A subset of the participants was of South Asian ethnicity. In total, 2000 urine or hair specimens were collected. Proof of concept that surveillance of emerging drugs of abuse can be performed whilst conducting routine drug of abuse testing in patients. The method was successfully applied to 2000 samples with three emerging drugs of abuse detected in five samples: PMMA (paramethoxymethamphetamine), TFMPP [1-(3-trifluoromethylphenyl)piperazine], and methcathinone. The method also detected conventional drugs of abuse, with codeine, methadone, heroin, methamphetamine, and ketamine being the most frequently detected drugs. Other findings included the observation that South Asians had significantly higher rates of using opiates such as heroin, methadone, and codeine; and that ketamine and cocaine had significantly higher detection rates in acute subjects compared with the rehabilitation population. This locally developed analytical method is a valid tool for simultaneous surveillance of emerging drugs of abuse and routine drug monitoring of patients at minimal additional cost and effort. Continued, proactive surveillance and early identification of emerging drugs will facilitate prompt clinical, social, and legislative management.

  17. Analytical model for double split ring resonators with arbitrary ring width

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Jensen, Thomas; Krozer, Viktor

    2008-01-01

    For the first time, the analytical model for a double split ring resonator with unequal width rings is developed. The proposed models for the resonators with equal and unequal widths are based on an impedance matrix representation and provide the prediction of performance in a wide frequency range...

  18. An accurate analytical solution of a zero-dimensional greenhouse model for global warming

    International Nuclear Information System (INIS)

    Foong, S K

    2006-01-01

    In introducing the complex subject of global warming, books and papers usually use the zero-dimensional greenhouse model. When the ratio of the infrared radiation energy of the Earth's surface that is lost to outer space to the non-reflected average solar radiation energy is small, the model admits an accurate approximate analytical solution: the resulting energy balance equation of the model is a quartic equation that can be solved analytically, thus providing an alternative solution and instructional strategy. A search through the literature fails to find such an analytical solution, suggesting that the solution may be new. In this paper, we review the model, derive the approximation and obtain its solution. The dependence of the temperature of the Earth's surface and of the atmosphere on seven parameters is made explicit. A simple and convenient formula for global warming (or cooling) in terms of the percentage change of the parameters is derived. The dependence of the surface temperature on the parameters is illustrated by several representative graphs
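    The flavour of such a zero-dimensional energy balance can be shown with the textbook single-layer greenhouse (an assumption here, not the paper's exact seven-parameter model): an atmosphere that absorbs all surface infrared and re-emits half of it downward gives Ts^4 = 2*Te^4, and the quarter-power law then yields the percentage-change behaviour the abstract mentions.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant, W m^-2
ALBEDO = 0.30            # planetary albedo

# No-atmosphere balance: (1 - albedo) * S / 4 = sigma * Te^4
Te = ((1.0 - ALBEDO) * S / (4.0 * SIGMA)) ** 0.25

# Single-layer greenhouse (textbook assumption): Ts^4 = 2 * Te^4
Ts = 2.0 ** 0.25 * Te

print(round(Te, 1), round(Ts, 1))   # roughly 255 K and 303 K

# The quarter-power law implies a p% change in S shifts Ts by about p/4 %:
Ts_pert = 2.0 ** 0.25 * ((1.0 - ALBEDO) * 1.01 * S / (4.0 * SIGMA)) ** 0.25
print(round(100.0 * (Ts_pert / Ts - 1.0), 3))   # about 0.25 %
```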

  19. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.
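    The conversion to a success-oriented framework is, at its logical core, a complementation of the fault tree: De Morgan's law applied recursively turns OR gates into AND gates (and vice versa) and negates the basic events. A minimal sketch, with a hypothetical tree representation invented here for illustration:

```python
# A fault tree as nested tuples: ("AND"|"OR", child, child, ...), with plain
# strings as basic events. The complementary "success tree" is obtained by
# De Morgan's law: swap the gate type and negate every basic event.
def to_success_tree(node):
    if isinstance(node, str):
        return "NOT " + node
    gate, *children = node
    dual = "OR" if gate == "AND" else "AND"
    return (dual, *[to_success_tree(c) for c in children])

# Hypothetical top event: the system fails if the pump fails OR both valves fail.
fault = ("OR", "pump_fails", ("AND", "valve_A_fails", "valve_B_fails"))
success = to_success_tree(fault)
print(success)
# -> ('AND', 'NOT pump_fails', ('OR', 'NOT valve_A_fails', 'NOT valve_B_fails'))
```

    Read back in words, the success tree says the system succeeds when the pump works AND at least one valve works, which is the form plant personnel can check directly against design specifications.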

  20. Development of a decision analytic model to support decision making and risk communication about thrombolytic treatment.

    Science.gov (United States)

    McMeekin, Peter; Flynn, Darren; Ford, Gary A; Rodgers, Helen; Gray, Jo; Thomson, Richard G

    2015-11-11

    Individualised prediction of outcomes can support clinical and shared decision making. This paper describes the building of such a model to predict outcomes with and without intravenous thrombolysis treatment following ischaemic stroke. A decision analytic model (DAM) was constructed to establish the likely balance of benefits and risks of treating acute ischaemic stroke with thrombolysis. Probability of independence (modified Rankin score, mRS ≤ 2), dependence (mRS 3 to 5) and death at three months post-stroke was based on a calibrated version of the Stroke-Thrombolytic Predictive Instrument using data from routinely treated stroke patients in the Safe Implementation of Treatments in Stroke (SITS-UK) registry. Predictions in untreated patients were validated using data from the Virtual International Stroke Trials Archive (VISTA). The probability of symptomatic intracerebral haemorrhage in treated patients was incorporated using a scoring model from Safe Implementation of Thrombolysis in Stroke-Monitoring Study (SITS-MOST) data. The model predicts probabilities of haemorrhage, death, independence and dependence at 3 months, with and without thrombolysis, as a function of 13 patient characteristics. Calibration (and inclusion of additional predictors) of the Stroke-Thrombolytic Predictive Instrument (S-TPI) addressed issues of under- and over-prediction. Validation with VISTA data confirmed that assumptions about treatment effect were justified. The C-statistics for independence and death in treated patients in the DAM were 0.793 and 0.771 respectively, and 0.776 for independence in untreated patients from VISTA. We have produced a DAM that provides an estimation of the likely benefits and risks of thrombolysis for individual patients, which has subsequently been embedded in a computerised decision aid to support better decision-making and informed consent.
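    The C-statistic quoted above is the probability that a randomly chosen patient who had the outcome is ranked above a randomly chosen patient who did not, with ties counting half. A minimal sketch of that computation, using made-up predicted probabilities rather than S-TPI outputs:

```python
from itertools import product

def c_statistic(scores_pos, scores_neg):
    """C-statistic (equivalently, ROC AUC): fraction of positive/negative
    pairs in which the positive case gets the higher score; ties count 0.5."""
    pairs = list(product(scores_pos, scores_neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical predicted probabilities of independence (mRS <= 2) for
# patients who were (pos) and were not (neg) independent at 3 months.
pos = [0.91, 0.84, 0.77, 0.70, 0.52]
neg = [0.66, 0.45, 0.38, 0.30]
print(round(c_statistic(pos, neg), 3))   # -> 0.95
```

    A value of 0.5 is chance-level discrimination and 1.0 is perfect ranking, so the reported 0.77 to 0.79 indicates good, though not perfect, separation of outcomes.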