WorldWideScience

Sample records for standard analytical models

  1. Nanometrology, Standardization and Regulation of Nanomaterials in Brazil: A Proposal for an Analytical-Prospective Model

    Directory of Open Access Journals (Sweden)

    Ana Rusmerg Giménez Ledesma

    2013-05-01

    The main objective of this paper is to propose an analytical-prospective model as a tool to support decision-making processes concerning metrology, standardization and regulation of nanomaterials in Brazil, based on international references and ongoing initiatives in the world. In the context of nanotechnology development in Brazil, the motivation for carrying out this research was to identify potential benefits of metrology, standardization and regulation of nanomaterials production, from the perspective of future adoption of the model by the main stakeholders involved in developing these areas in Brazil. The main results can be summarized as follows: (i) an overview of international studies on metrology, standardization and regulation of nanomaterials, and nanoparticles in particular; (ii) the analytical-prospective model; and (iii) the survey questionnaire and the roadmapping tool for metrology, standardization and regulation of nanomaterials in Brazil, based on international references and ongoing initiatives in the world.

  2. Reactor Section standard analytical methods. Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Sowden, D.

    1954-07-01

    The Standard Analytical Methods manual was prepared for the purpose of consolidating and standardizing all current analytical methods and procedures used in the Reactor Section for routine chemical analyses. All procedures are established in accordance with accepted practice and the general analytical methods specified by the Engineering Department. These procedures are specifically adapted to the requirements of the water treatment process and related operations. The methods included in this manual are organized alphabetically within the following five sections, which correspond to the various phases of the analytical control program in which these analyses are to be used: water analyses, essential material analyses, cotton plug analyses, boiler water analyses, and miscellaneous control analyses.

  3. R2SM: a package for the analytic computation of the R2 Rational terms in the Standard Model of the Electroweak interactions

    Energy Technology Data Exchange (ETDEWEB)

    Garzelli, M.V. [INFN (Italy); Granada Univ. (Spain). Dept. de Fisica Teorica y del Cosmos y CAFPE; Malamos, I. [Radboud Universiteit Nijmegen, Department of Theoretical High Energy Physics, Institute for Mathematics, Astrophysics and Particle Physics, Nijmegen (Netherlands)

    2011-03-15

    The analytical package written in FORM presented in this paper allows the computation of the complete set of Feynman Rules producing the Rational terms of kind R2 contributing to the virtual part of NLO corrections in the Standard Model of the Electroweak interactions. Building block topologies filled by means of generic scalars, vectors and fermions, allowing one to build these Feynman Rules in terms of specific elementary particles, are explicitly given in the R_ξ gauge class, together with the automatic dressing procedure to obtain the Feynman Rules from them. The results in more specific gauges, like the 't Hooft-Feynman one, follow as particular cases, in both the HV and the FDH dimensional regularization schemes. As a check on our formulas, the gauge independence of the total Rational contribution (R1 + R2) to renormalized S-matrix elements is verified by considering the specific example of the H → γγ decay process at 1-loop. This package can be of interest for people aiming at a better understanding of the nature of the Rational terms. It is organized in a modular way, allowing a further use of some of its files even in different contexts. Furthermore, it can be considered as a first seed in the effort towards a complete automation of the process of the analytical calculation of the R2 effective vertices, given the Lagrangian of a generic gauge theory of particle interactions. (orig.)

  4. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision, and to verify that a measurement system is operating satisfactorily.
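
    As a rough illustration of the idea (not the RAL software described above), the sketch below estimates systematic bias and precision from control-standard results: measured values are regressed against the certified values to model the bias, and the residual scatter estimates the precision. All numbers are hypothetical.

        # Minimal sketch: estimating bias and precision of a measurement system
        # from control standards with known values. Hypothetical data; not the
        # RAL quality-control software described in the record above.
        import numpy as np

        known = np.array([1.0, 1.0, 5.0, 5.0, 10.0, 10.0, 20.0, 20.0])      # certified levels
        measured = np.array([1.04, 0.98, 5.11, 5.06, 10.3, 10.1, 20.5, 20.2])

        # Systematic bias modeled as a straight line: measured = a + b * known
        b, a = np.polyfit(known, measured, 1)

        # Precision estimated from the residual standard deviation about the fit
        residuals = measured - (a + b * known)
        precision = residuals.std(ddof=2)   # two fitted parameters

        print(f"bias model: measured = {a:.3f} + {b:.3f} * known")
        print(f"precision (residual standard deviation): {precision:.3f}")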

  5. Establishment of a Standard Analytical Model of Distribution Network with Distributed Generators and Development of Multi Evaluation Method for Network Configuration Candidates

    Science.gov (United States)

    Hayashi, Yasuhiro; Kawasaki, Shoji; Matsuki, Junya; Matsuda, Hiroaki; Sakai, Shigekazu; Miyazaki, Teru; Kobayashi, Naoki

    Since a distribution network has many sectionalizing switches, the number of candidate radial network configurations, determined by the open or closed state of each switch, is enormous. Recently, the number of distributed generators, such as photovoltaic and wind turbine generation systems, connected to the distribution network has increased drastically. A distribution network with distributed generators must be operated while maintaining reliability of power supply and power quality. Therefore, the many possible configurations of such a network must be evaluated from multiple viewpoints, such as distribution loss, total harmonic distortion, voltage imbalance and so on. In this paper, the authors propose a multi evaluation method to assess the distribution network configuration candidates that satisfy voltage and line-current constraints from three viewpoints: (1) distribution loss, (2) total harmonic distortion and (3) voltage imbalance. After establishing a standard analytical model of a three-sectionalized, three-connected distribution network configuration with distributed generators based on practical data, the multi evaluation of the established model is carried out using the proposed method based on EMTP (Electro-Magnetic Transients Program).
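
    A minimal sketch of the screening logic described in the record above is given below (not the authors' EMTP-based tool): each candidate configuration is first checked against voltage and line-current limits and then ranked on the three viewpoints with an illustrative weighted score. All names, numbers and weights are hypothetical.

        # Minimal sketch: constraint screening and multi-viewpoint ranking of
        # network configuration candidates. Hypothetical data; the paper's actual
        # evaluation uses EMTP simulations of the standard analytical model.
        candidates = [
            {"name": "config A", "v_max": 1.03, "v_min": 0.97, "i_max": 0.92, "loss_kW": 152.0, "thd_pct": 2.1, "vuf_pct": 0.8},
            {"name": "config B", "v_max": 1.04, "v_min": 0.95, "i_max": 0.88, "loss_kW": 140.0, "thd_pct": 2.5, "vuf_pct": 1.1},
            {"name": "config C", "v_max": 1.06, "v_min": 0.96, "i_max": 1.05, "loss_kW": 133.0, "thd_pct": 1.9, "vuf_pct": 0.7},
        ]

        V_MIN, V_MAX = 0.95, 1.05   # allowed voltage band [pu]
        I_LIMIT = 1.00              # line current limit [pu]

        def feasible(c):
            """Keep only candidates that respect the voltage and current constraints."""
            return V_MIN <= c["v_min"] and c["v_max"] <= V_MAX and c["i_max"] <= I_LIMIT

        def score(c, weights=(0.5, 0.3, 0.2)):
            """Lower is better: weighted sum of normalized loss, THD and voltage imbalance."""
            w_loss, w_thd, w_vuf = weights
            return w_loss * c["loss_kW"] / 150.0 + w_thd * c["thd_pct"] / 3.0 + w_vuf * c["vuf_pct"] / 2.0

        for c in sorted(filter(feasible, candidates), key=score):
            print(c["name"], round(score(c), 3))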

  6. Analytic Modeling of Insurgencies

    Science.gov (United States)

    2014-08-01

    ...literature, e.g., [9], [24]. Modeling public behavior takes into account the difference between the attitude of an individual (his “heart”) and his...

  7. Applying standards to systematize learning analytics in serious games

    NARCIS (Netherlands)

    Serrano-Laguna, Angel; Martinez-Ortiz, Ivan; Haag, Jason; Regan, Damon; Johnson, Andy; Fernandez-Manjon, Baltasar

    2016-01-01

    Learning Analytics is an emerging field focused on analyzing learners’ interactions with educational content. One of the key open issues in learning analytics is the standardization of the data collected. This is a particularly challenging issue in serious games, which generate a diverse range of

  8. Analytical standards for accountability of uranium hexafluoride - 1972

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    An analytical standard for the accountability of uranium hexafluoride is presented that includes procedures for subsampling, determination of uranium, determination of metallic impurities and isotopic analysis by gas and thermal ionization mass spectrometry

  9. Synergistic relationships between Analytical Chemistry and written standards

    International Nuclear Information System (INIS)

    Valcárcel, Miguel; Lucena, Rafael

    2013-01-01

    Highlights: •Analytical Chemistry is influenced by international written standards. •Different relationships can be established between them. •Synergies can be generated when these standards are conveniently managed. -- Abstract: This paper describes the mutual impact of Analytical Chemistry and several international written standards (norms and guides) related to knowledge management (CEN-CWA 14924:2004), social responsibility (ISO 26000:2010), management of occupational health and safety (OHSAS 18001/2), environmental management (ISO 14001:2004), quality management systems (ISO 9001:2008) and requirements for the competence of testing and calibration laboratories (ISO 17025:2004). The intensity of this impact, based on a two-way influence, is quite different depending on the standard considered. In any case, a new and fruitful approach to Analytical Chemistry based on these relationships can be derived.

  10. Synergistic relationships between Analytical Chemistry and written standards

    Energy Technology Data Exchange (ETDEWEB)

    Valcárcel, Miguel, E-mail: qa1vacam@uco.es; Lucena, Rafael

    2013-07-25

    Highlights: •Analytical Chemistry is influenced by international written standards. •Different relationships can be established between them. •Synergies can be generated when these standards are conveniently managed. -- Abstract: This paper describes the mutual impact of Analytical Chemistry and several international written standards (norms and guides) related to knowledge management (CEN-CWA 14924:2004), social responsibility (ISO 26000:2010), management of occupational health and safety (OHSAS 18001/2), environmental management (ISO 14001:2004), quality management systems (ISO 9001:2008) and requirements for the competence of testing and calibration laboratories (ISO 17025:2004). The intensity of this impact, based on a two-way influence, is quite different depending on the standard considered. In any case, a new and fruitful approach to Analytical Chemistry based on these relationships can be derived.

  11. Beyond the standard model

    International Nuclear Information System (INIS)

    Wilczek, F.

    1993-01-01

    The standard model of particle physics is highly successful, although it is obviously not a complete or final theory. In this presentation the author argues that the structure of the standard model gives some quite concrete, compelling hints regarding what lies beyond. Essentially, this presentation is a record of the author's own judgement of what the central clues for physics beyond the standard model are, and also it is an attempt at some pedagogy. 14 refs., 6 figs

  12. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  13. The Standard Model course

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    Suggested Readings: Aspects of Quantum Chromodynamics / A. Pich, arXiv:hep-ph/0001118; The Standard Model of Electroweak Interactions / A. Pich, arXiv:hep-ph/0502010; The Standard Model of Particle Physics / A. Pich. The Standard Model of Elementary Particle Physics will be described. A detailed discussion of the particle content, structure and symmetries of the theory will be given, together with an overview of the most important experimental facts which have established this theoretical framework as the Standard Theory of particle interactions.

  14. Synthetic salt cake standards for analytical laboratory quality control

    International Nuclear Information System (INIS)

    Schilling, A.E.; Miller, A.G.

    1980-01-01

    The validation of analytical results in the characterization of Hanford Nuclear Defense Waste requires the preparation of synthetic waste for standard reference materials. Two independent synthetic salt cake standards have been prepared to monitor laboratory quality control for the chemical characterization of high-level salt cake and sludge waste in support of Rockwell Hanford Operations' High-Level Waste Management Program. Each synthetic salt cake standard contains 15 characterized chemical species and was subjected to an extensive verification/characterization program in two phases. Phase I consisted of an initial verification of each analyte in salt cake form in order to determine the current analytical capability for chemical analysis. Phase II consisted of a final characterization of those chemical species in solution form where conflicting verification data were observed. The 95 percent confidence interval on the mean for the following analytes within each standard is provided: sodium, nitrate, nitrite, phosphate, carbonate, sulfate, hydroxide, chromate, chloride, fluoride, aluminum, plutonium-239/240, strontium-90, cesium-137, and water.
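
    For reference, the 95 percent confidence interval quoted above is the usual t-based interval on the mean of replicate analyses; a minimal sketch with made-up replicate values for a single analyte follows.

        # Minimal sketch: 95% confidence interval on the mean of replicate analyses
        # of one analyte in a synthetic standard. Replicate values are hypothetical.
        import statistics
        from scipy import stats

        replicates = [102.1, 99.8, 101.3, 100.5, 98.9, 101.0]   # e.g. mg/g of nitrate
        n = len(replicates)
        mean = statistics.mean(replicates)
        sem = statistics.stdev(replicates) / n ** 0.5
        half_width = stats.t.ppf(0.975, df=n - 1) * sem

        print(f"mean = {mean:.2f}, 95% CI = [{mean - half_width:.2f}, {mean + half_width:.2f}]")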

  15. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  16. Beyond the standard model

    International Nuclear Information System (INIS)

    Pleitez, V.

    1994-01-01

    The search for physics beyond the standard model is discussed in a general way, together with some topics on supersymmetry theories. Recent possibilities arising in the leptonic sector are also addressed. Finally, models with SU(3)_c × SU(2)_L × U(1)_Y symmetry are considered as alternatives for extensions of the standard model of elementary particles. 36 refs., 1 fig., 4 tabs

  17. Beyond the standard model

    International Nuclear Information System (INIS)

    Gaillard, M.K.

    1990-04-01

    The unresolved issues of the standard model are reviewed, with emphasis on the gauge hierarchy problem. A possible mechanism for generating a hierarchy in the context of superstring theory is described. 24 refs

  18. Standardized processing of MALDI imaging raw data for enhancement of weak analyte signals in mouse models of gastric cancer and Alzheimer's disease.

    Science.gov (United States)

    Schwartz, Matthias; Meyer, Björn; Wirnitzer, Bernhard; Hopf, Carsten

    2015-03-01

    Conventional mass spectrometry image preprocessing methods used for denoising, such as the Savitzky-Golay smoothing or discrete wavelet transformation, typically do not only remove noise but also weak signals. Recently, memory-efficient principal component analysis (PCA) in conjunction with random projections (RP) has been proposed for reversible compression and analysis of large mass spectrometry imaging datasets. It considers single-pixel spectra in their local context and consequently offers the prospect of using information from the spectra of adjacent pixels for denoising or signal enhancement. However, little systematic analysis of key RP-PCA parameters has been reported so far, and the utility and validity of this method for context-dependent enhancement of known medically or pharmacologically relevant weak analyte signals in linear-mode matrix-assisted laser desorption/ionization (MALDI) mass spectra has not been explored yet. Here, we investigate MALDI imaging datasets from mouse models of Alzheimer's disease and gastric cancer to systematically assess the importance of selecting the right number of random projections k and of principal components (PCs) L for reconstructing reproducibly denoised images after compression. We provide detailed quantitative data for comparison of RP-PCA-denoising with the Savitzky-Golay and wavelet-based denoising in these mouse models as a resource for the mass spectrometry imaging community. Most importantly, we demonstrate that RP-PCA preprocessing can enhance signals of low-intensity amyloid-β peptide isoforms such as Aβ1-26 even in sparsely distributed Alzheimer's β-amyloid plaques and that it enables enhanced imaging of multiply acetylated histone H4 isoforms in response to pharmacological histone deacetylase inhibition in vivo. We conclude that RP-PCA denoising may be a useful preprocessing step in biomarker discovery workflows.
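
    The following sketch illustrates the general idea behind random-projection-assisted PCA denoising of an imaging dataset (compress the pixel spectra with a random projection, keep the leading principal components, reconstruct a low-rank approximation). It is a schematic illustration with arbitrary array sizes and placeholder values of k and L, not the authors' pipeline.

        # Schematic RP-PCA denoising of an imaging dataset (pixels x m/z channels).
        # Illustrative only; array shapes, k and L are arbitrary placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.poisson(2.0, size=(500, 2000)).astype(float)   # pixels x m/z channels

        k = 100   # number of random projections
        L = 15    # number of principal components kept

        # 1) Random projection to a k-dimensional subspace (memory-efficient compression)
        R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
        Y = X @ R                                               # (pixels x k)

        # 2) PCA on the compressed data
        U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
        scores = U[:, :L] * s[:L]                               # pixel scores on leading PCs

        # 3) Low-rank (denoised) reconstruction in the original channel space by
        #    regressing the centred channels on the retained scores
        coeffs, *_ = np.linalg.lstsq(scores, X - X.mean(axis=0), rcond=None)
        X_denoised = scores @ coeffs + X.mean(axis=0)

        print(X.shape, "->", X_denoised.shape)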

  19. Synergistic relationships between Analytical Chemistry and written standards.

    Science.gov (United States)

    Valcárcel, Miguel; Lucena, Rafael

    2013-07-25

    This paper describes the mutual impact of Analytical Chemistry and several international written standards (norms and guides) related to knowledge management (CEN-CWA 14924:2004), social responsibility (ISO 26000:2010), management of occupational health and safety (OHSAS 18001/2), environmental management (ISO 14001:2004), quality management systems (ISO 9001:2008) and requirements for the competence of testing and calibration laboratories (ISO 17025:2004). The intensity of this impact, based on a two-way influence, is quite different depending on the standard considered. In any case, a new and fruitful approach to Analytical Chemistry based on these relationships can be derived.

  20. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  1. Beyond the standard model

    International Nuclear Information System (INIS)

    Cuypers, F.

    1997-05-01

    These lecture notes are intended as a pedagogical introduction to several popular extensions of the standard model of strong and electroweak interactions. The topics include the Higgs sector, the left-right symmetric model, grand unification and supersymmetry. Phenomenological consequences and search procedures are emphasized. (author) figs., tabs., 18 refs

  2. Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, M.E.

    1997-05-01

    These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e+e− colliders.

  3. Beyond the Standard Model

    International Nuclear Information System (INIS)

    Peskin, M.E.

    1997-05-01

    These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e+e− colliders.

  4. Conference: STANDARD MODEL @ LHC

    CERN Multimedia

    2012-01-01

    STANDARD MODEL @ LHC, Niels Bohr International Academy and Discovery Center, HCØ Institute (Auditorium 2), Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark, 10-13 April 2012. This four day meeting will bring together both experimental and theoretical aspects of Standard Model phenomenology at the LHC. The very latest results from the LHC experiments will be under discussion. Topics covered will be split into the following categories: QCD (Hard, Soft & PDFs); Vector Boson production; Higgs searches; Top Quark Physics; Flavour physics.

  5. The Standard Model

    Science.gov (United States)

    Burgess, Cliff; Moore, Guy

    2012-04-01

    List of illustrations; List of tables; Preface; Acknowledgments; Part I. Theoretical Framework: 1. Field theory review; 2. The standard model: general features; 3. Cross sections and lifetimes; Part II. Applications: Leptons: 4. Elementary boson decays; 5. Leptonic weak interactions: decays; 6. Leptonic weak interactions: collisions; 7. Effective Lagrangians; Part III. Applications: Hadrons: 8. Hadrons and QCD; 9. Hadronic interactions; Part IV. Beyond the Standard Model: 10. Neutrino masses; 11. Open questions, proposed solutions; Appendix A. Experimental values for the parameters; Appendix B. Symmetries and group theory review; Appendix C. Lorentz group and the Dirac algebra; Appendix D. ξ-gauge Feynman rules; Appendix E. Metric convention conversion table; Select bibliography; Index.

  6. Beyond the Standard Model

    CERN Document Server

    Csáki, Csaba

    2015-01-01

    We introduce aspects of physics beyond the Standard Model focusing on supersymmetry, extra dimensions, and a composite Higgs as solutions to the Hierarchy problem. Lectures given at the 2013 European School of High Energy Physics, Parádfürdő, Hungary, 5-18 June 2013.

  7. Beyond the Standard Model

    CERN Multimedia

    CERN. Geneva

    2005-01-01

    The necessity for new physics beyond the Standard Model will be motivated. Theoretical problems will be exposed and possible solutions will be described. The goal is to present the exciting new physics ideas that will be tested in the near future. Supersymmetry, grand unification, extra dimensions and string theory will be presented.

  8. Solving signal instability to maintain the second-order advantage in the resolution and determination of multi-analytes in complex systems by modeling liquid chromatography-mass spectrometry data using alternating trilinear decomposition method assisted with piecewise direct standardization.

    Science.gov (United States)

    Gu, Hui-Wen; Wu, Hai-Long; Yin, Xiao-Li; Li, Shan-Shan; Liu, Ya-Juan; Xia, Hui; Xie, Li-Xia; Yu, Ru-Qin; Yang, Peng-Yuan; Lu, Hao-Jie

    2015-08-14

    The application of calibration transfer methods has been successful in combination with near-infrared spectroscopy or other tools for the prediction of chemical composition. One of the developed methods that can provide accurate performance is the piecewise direct standardization (PDS) method, which in this paper is applied for the first time to transfer, from one day to another, the second-order calibration model based on the alternating trilinear decomposition (ATLD) method built for the interference-free resolution and determination of multi-analytes in complex systems by liquid chromatography-mass spectrometry (LC-MS) in full scan mode. This is an example of LC-MS analysis in which interferences have been found, making the use of second-order calibration necessary because of its capacity for modeling this phenomenon, which means that analytes of interest can be resolved and quantified even in the presence of overlapped peaks and unknown interferences. Once the second-order calibration model based on the ATLD method was built, the calibration transfer was conducted to compensate for the signal instability of the LC-MS instrument over time. This reduces the heavy workload of complete recalibration that would otherwise be necessary for later accurate determinations. The root-mean-square error of prediction (RMSEP) and average recovery were used to evaluate the performance of the proposed strategy. Results showed that the number of calibration samples used on the real LC-MS data was reduced from 11 to 3 by using the PDS method, while producing comparable RMSEP values and recovery values that were statistically the same (F-test, 95% confidence level) as those obtained with 11 calibration samples. This methodology is in accordance with the highly recommended green analytical chemistry principles, since it can reduce the experimental effort and cost with regard to the use of a new calibration model built in modified conditions.
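
    As background on the transfer step, piecewise direct standardization relates each response channel measured on the "new" day to a small local window of channels measured on the calibration day, using a handful of transfer samples run on both days. The sketch below shows this core idea on synthetic data; it is not the authors' ATLD/LC-MS code, and the window size, noise level and data are illustrative.

        # Schematic piecewise direct standardization (PDS): build a banded transfer
        # matrix from a few transfer samples measured on both days, then map new-day
        # responses back to the calibration-day response space. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(1)
        n_transfer, n_channels, window = 3, 200, 5

        S_master = rng.random((n_transfer, n_channels))            # calibration-day responses
        drift = 1.0 + 0.05 * np.sin(np.linspace(0, 3, n_channels)) # simulated instrument drift
        S_slave = S_master * drift + 0.01 * rng.standard_normal(S_master.shape)

        F = np.zeros((n_channels, n_channels))                     # banded transfer matrix
        half = window // 2
        for j in range(n_channels):
            lo, hi = max(0, j - half), min(n_channels, j + half + 1)
            # local regression: slave window -> master channel j
            b, *_ = np.linalg.lstsq(S_slave[:, lo:hi], S_master[:, j], rcond=None)
            F[lo:hi, j] = b

        new_day_response = rng.random(n_channels) * drift
        standardized = new_day_response @ F                        # mapped to calibration-day space
        print(standardized.shape)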

  9. Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Lykken, Joseph D.; /Fermilab

    2010-05-01

    'BSM physics' is a phrase used in several ways. It can refer to physical phenomena established experimentally but not accommodated by the Standard Model, in particular dark matter and neutrino oscillations (technically also anything that has to do with gravity, since gravity is not part of the Standard Model). 'Beyond the Standard Model' can also refer to possible deeper explanations of phenomena that are accommodated by the Standard Model but only with ad hoc parameterizations, such as Yukawa couplings and the strong CP angle. More generally, BSM can be taken to refer to any possible extension of the Standard Model, whether or not the extension solves any particular set of puzzles left unresolved in the SM. In this general sense one sees reference to the BSM 'theory space' of all possible SM extensions, this being a parameter space of coupling constants for new interactions, new charges or other quantum numbers, and parameters describing possible new degrees of freedom or new symmetries. Despite decades of model-building it seems unlikely that we have mapped out most of, or even the most interesting parts of, this theory space. Indeed we do not even know what is the dimensionality of this parameter space, or what fraction of it is already ruled out by experiment. Since Nature is only implementing at most one point in this BSM theory space (at least in our neighborhood of space and time), it might seem an impossible task to map back from a finite number of experimental discoveries and measurements to a unique BSM explanation. Fortunately for theorists the inevitable limitations of experiments themselves, in terms of resolutions, rates, and energy scales, mean that in practice there are only a finite number of BSM model 'equivalence classes' competing at any given time to explain any given set of results. BSM phenomenology is a two-way street: not only do experimental results test or constrain BSM models, they also suggest

  10. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron (LEP) accelerator near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  11. Standard Model physics

    CERN Multimedia

    Altarelli, Guido

    1999-01-01

    Introduction: structure of gauge theories. The QED and QCD examples. Chiral theories. The electroweak theory. Spontaneous symmetry breaking. The Higgs mechanism. Gauge boson and fermion masses. Yukawa couplings. Charged current couplings. The Cabibbo-Kobayashi-Maskawa matrix and CP violation. Neutral current couplings. The Glashow-Iliopoulos-Maiani mechanism. Gauge boson and Higgs couplings. Radiative corrections and loops. Cancellation of the chiral anomaly. Limits on the Higgs mass. Problems of the Standard Model. Outlook.

  12. Standard model and beyond

    International Nuclear Information System (INIS)

    Quigg, C.

    1984-09-01

    The SU(3)_c ⊗ SU(2)_L ⊗ U(1)_Y gauge theory of interactions among quarks and leptons is briefly described, and some recent notable successes of the theory are mentioned. Some shortcomings in our ability to apply the theory are noted, and the incompleteness of the standard model is exhibited. Experimental hints that Nature may be richer in structure than the minimal theory are discussed. 23 references

  13. Analytical Model for Sensor Placement on Microprocessors

    National Research Council Canada - National Science Library

    Lee, Kyeong-Jae; Skadron, Kevin; Huang, Wei

    2005-01-01

    .... In this paper, we present an analytical model that describes the maximum temperature differential between a hot spot and a region of interest based on their distance and processor packaging information...

  14. Analytical model of internally coupled ears

    DEFF Research Database (Denmark)

    Vossen, Christine; Christensen-Dalsgaard, Jakob; Leo van Hemmen, J

    2010-01-01

    differences in the tympanic membrane vibrations. Both cues show strong directionality. The work presented herein sets out the derivation of a three dimensional analytical model of internally coupled ears that allows for calculation of a complete vibration profile of the membranes. The analytical model...... additionally provides the opportunity to incorporate the effect of the asymmetrically attached columella, which leads to the activation of higher membrane vibration modes. Incorporating this effect, the analytical model can explain measurements taken from the tympanic membrane of a living lizard, for example......, data demonstrating an asymmetrical spatial pattern of membrane vibration. As the analytical calculations show, the internally coupled ears increase the directional response, appearing in large directional internal amplitude differences (iAD) and in large internal time differences (iTD). Numerical...

  15. Analytical modelling of soccer heading

    Indian Academy of Sciences (India)

    Heading occurs frequently in soccer games and studies have shown that repetitive heading of the soccer ball could result in degeneration of brain cells and lead to mild traumatic brain injury. This study proposes a two degree-of-freedom linear mathematical model to study the impact of the soccer ball on the brain. The model ...

  16. Analytic nearest neighbour model for FCC metals

    International Nuclear Information System (INIS)

    Idiodi, J.O.A.; Garba, E.J.D.; Akinlade, O.

    1991-06-01

    A recently proposed analytic nearest-neighbour model for fcc metals is criticised and two alternative nearest-neighbour models derived from the separable potential method (SPM) are recommended. Results for copper and aluminium illustrate the utility of the recommended models. (author). 20 refs, 5 tabs

  17. Quasi standard model physics

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1986-01-01

    Possible small extensions of the standard model are considered, which are motivated by the strong CP problem and by the baryon asymmetry of the Universe. Phenomenological arguments are given which suggest that imposing a PQ symmetry to solve the strong CP problem is only tenable if the scale of the PQ breakdown is much above M_W. Furthermore, an attempt is made to connect the scale of the PQ breakdown to that of the breakdown of lepton number. It is argued that in these theories the same intermediate scale may be responsible for the baryon number of the Universe, provided the Kuzmin-Rubakov-Shaposhnikov (B+L) erasing mechanism is operative. (orig.)

  18. Standard-model bundles

    CERN Document Server

    Donagi, Ron; Pantev, Tony; Waldram, Dan; Donagi, Ron; Ovrut, Burt; Pantev, Tony; Waldram, Dan

    2002-01-01

    We describe a family of genus one fibered Calabi-Yau threefolds with fundamental group ${\mathbb Z}/2$. On each Calabi-Yau $Z$ in the family we exhibit a positive dimensional family of Mumford stable bundles whose symmetry group is the Standard Model group $SU(3)\times SU(2)\times U(1)$ and which have $c_{3} = 6$. We also show that for each bundle $V$ in our family, $c_{2}(Z) - c_{2}(V)$ is the class of an effective curve on $Z$. These conditions ensure that $Z$ and $V$ can be used for a phenomenologically relevant compactification of Heterotic M-theory.

  19. Analytical eigenstates for the quantum Rabi model

    International Nuclear Information System (INIS)

    Zhong, Honghua; Xie, Qiongtao; Lee, Chaohong; Batchelor, Murray T

    2013-01-01

    We develop a method to find analytical solutions for the eigenstates of the quantum Rabi model. These include symmetric, anti-symmetric and asymmetric analytic solutions given in terms of the confluent Heun functions. Both regular and exceptional solutions are given in a unified form. In addition, the analytic conditions for determining the energy spectrum are obtained. Our results show that conditions proposed by Braak (2011 Phys. Rev. Lett. 107 100401) are a type of sufficiency condition for determining the regular solutions. The well-known Judd isolated exact solutions appear naturally as truncations of the confluent Heun functions. (paper)
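
    For orientation, the quantum Rabi Hamiltonian referred to above is commonly written (one standard convention, with ħ = 1; this expression is our addition for context, not text from the record) as

        H = \omega\, a^{\dagger} a + \frac{\Delta}{2}\, \sigma_z + g\, \sigma_x \,(a + a^{\dagger}),

    where a is the annihilation operator of the bosonic mode with frequency ω, Δ is the two-level splitting, and g the coupling strength; the analytic eigenstates described above are expressed through confluent Heun functions of these parameters.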

  20. The standard model

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1994-03-01

    In these lectures, my aim is to provide a survey of the standard model with emphasis on its renormalizability and electroweak radiative corrections. Since this is a school, I will try to be somewhat pedagogical by providing examples of loop calculations. In that way, I hope to illustrate some of the commonly employed tools of particle physics. With those goals in mind, I have organized my presentations as follows: In Section 2, renormalization is discussed from an applied perspective. The technique of dimensional regularization is described and used to define running couplings and masses. The utility of the renormalization group for computing leading logs is illustrated for the muon anomalous magnetic moment. In Section 3 electroweak radiative corrections are discussed. Standard model predictions are surveyed and used to constrain the top quark mass. The S, T, and U parameters are introduced and employed to probe for ''new physics''. The effect of Z' bosons on low energy phenomenology is described. In Section 4, a detailed illustration of electroweak radiative corrections is given for atomic parity violation. Finally, in Section 5, I conclude with an outlook for the future

  1. Feedbacks Between Numerical and Analytical Models in Hydrogeology

    Science.gov (United States)

    Zlotnik, V. A.; Cardenas, M. B.; Toundykov, D.; Cohn, S.

    2012-12-01

    Hydrogeology is a relatively young discipline which combines elements of Earth science and engineering. Mature fundamental disciplines (e.g., physics, chemistry, fluid mechanics) have a centuries-long history of mathematical modeling even prior to the discovery of Darcy's law. Thus, in hydrogeology, relatively few classic analytical models (such as those by Theis, Polubarinova-Kochina, Philip, Toth, Henry, Dagan, Neuman) were developed by the early 1970s. The advent of computers and practical demands refocused mathematical models towards numerical techniques. With more diverse but less mathematically-oriented training, most hydrogeologists shifted from analytical methods to the use of standardized computational software. Spatial variability in internal properties and external boundary conditions and geometry, and the added complexity of chemical and biological processes will remain major challenges for analytical modeling. Possibly, analytical techniques will play a subordinate role to numerical approaches in many applications. On the other hand, the rise of analytical element modeling of groundwater flow is a strong alternative to numerical models when data demand and computational efficiency are considered. The hallmarks of analytical models - transparency and accuracy - will remain indispensable for scientific exploration of complex phenomena and for benchmarking numerical models. Therefore, there will always be feedbacks and complementarities between numerical and analytical techniques, as well as a certain ideological schism among various views of modeling. We illustrate the idea of feedbacks by reviewing the evolution of Jozsef Toth's analytical model of gravity-driven flow systems. Toth's (1963) approach was to reduce the flow domain to a rectangle, which allowed for a closed-form solution of the governing equations. Succeeding numerical finite-element models by Freeze and Witherspoon (1966-1968) explored the effects of geometry and heterogeneity on regional groundwater flow
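
    As a reminder of the structure of that classical problem (our summary of the textbook setup, not text from the abstract), Toth solved Laplace's equation for the hydraulic head h(x, z) in a vertical rectangular section,

        \nabla^2 h = 0, \qquad 0 < x < L, \quad 0 < z < z_0,

    with no-flow conditions on the two vertical sides and on the base, and a prescribed head along the top representing a gently sloping (and, in later work, undulating) water table; this boundary-value problem admits a closed-form Fourier-series solution.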

  2. An analytical model of flagellate hydrodynamics

    DEFF Research Database (Denmark)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders Peter

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical...

  3. Semi-analytic modelling of subsidence

    NARCIS (Netherlands)

    Fokker, P.A.; Orlic, B.

    2006-01-01

    This paper presents a forward model for subsidence prediction caused by extraction of hydrocarbons. The model uses combinations of analytic solutions to the visco-elastic equations, which approximate the boundary conditions. There are only a few unknown parameters to be estimated, and, consequently,

  4. An analytic uranium sources model

    International Nuclear Information System (INIS)

    Singer, C.E.

    2001-01-01

    This document presents a method for estimating uranium resources as a continuous function of extraction costs and describing the uncertainty in the resulting fit. The estimated functions provide convenient extrapolations of currently available data on uranium extraction cost and can be used to predict the effect of resource depletion on future uranium supply costs. As such, they are a useful input for economic models of the nuclear energy sector. The method described here pays careful attention to minimizing built-in biases in the fitting procedure and defines ways to describe the uncertainty in the resulting fits in order to render the procedure and its results useful to the widest possible variety of potential users. (author)
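
    As a toy illustration of fitting cumulative resources as a continuous function of extraction cost and reporting the uncertainty of the fit (not the author's actual estimation procedure; the data and functional form below are invented), one could write:

        # Toy sketch: fit cumulative uranium resources R(c) as a smooth function of
        # extraction cost c and report parameter uncertainties. Data are invented.
        import numpy as np
        from scipy.optimize import curve_fit

        cost = np.array([40.0, 80.0, 130.0, 260.0])      # $/kgU cost categories
        cumulative = np.array([1.1, 3.5, 6.1, 13.0])     # Mt U recoverable below each cost

        def model(c, a, p):
            """Power-law resource-cost relation R(c) = a * c**p (one simple choice)."""
            return a * np.power(c, p)

        params, cov = curve_fit(model, cost, cumulative, p0=(0.1, 1.0))
        perr = np.sqrt(np.diag(cov))
        print("a = %.3g +/- %.2g, p = %.3g +/- %.2g" % (params[0], perr[0], params[1], perr[1]))
        print("extrapolated resources below 400 $/kgU: %.1f Mt" % model(400.0, *params))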

  5. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Since such examination configurations are particularly well suited to analytical modelling, a physical model has been developed for portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect and the examination parameters. Moreover, the model makes it possible to compare real signals with simulated signals.

  6. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  7. Structure of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Langacker, Paul [Pennsylvania Univ., PA (United States). Dept. of Physics

    1996-07-01

    This lecture presents the structure of the standard model, approaching the following aspects: the standard model Lagrangian, spontaneous symmetry breaking, gauge interactions, covering charged currents, quantum electrodynamics, the neutral current and gauge self-interactions, and problems with the standard model, such as gauge, fermion, Higgs and hierarchy, strong CP and graviton problems.

  8. Modeling and Analytical Simulation of a Smouldering ...

    African Journals Online (AJOL)

    ADOWIE PERE

    ABSTRACT: Modeling of pyrolysis and combustion in a smouldering fuel bed requires the solution of flow, heat and mass transfer through porous media. ... Analytical Solution: We solve equations (10)–(14) using the parameter-expanding method (details can be found in He, 2006) and ...

  9. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    In order to study further the promising free piston Stirling engine architecture, there is a need for an analytical thermodynamic model which could be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities in the engine. An analytical model which encompasses the critical flaws of the regenerator and furthermore the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of the experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  10. Analytical model for Stirling cycle machine design

    International Nuclear Information System (INIS)

    Formosa, F.; Despesse, G.

    2010-01-01

    In order to study further the promising free piston Stirling engine architecture, there is a need for an analytical thermodynamic model which could be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities in the engine. An analytical model which encompasses the critical flaws of the regenerator and furthermore the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of the experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined.

  11. MATLAB/Simulink analytic radar modeling environment

    Science.gov (United States)

    Esken, Bruce L.; Clayton, Brian L.

    2001-09-01

    Analytic radar models are simulations based on abstract representations of the radar, the RF environment that radar signals are propagated, and the reflections produced by targets, clutter and multipath. These models have traditionally been developed in FORTRAN and have evolved over the last 20 years into efficient and well-accepted codes. However, current models are limited in two primary areas. First, by the nature of algorithm based analytical models, they can be difficult to understand by non-programmers and equally difficult to modify or extend. Second, there is strong interest in re-using these models to support higher-level weapon system and mission level simulations. To address these issues, a model development approach has been demonstrated which utilizes the MATLAB/Simulink graphical development environment. Because the MATLAB/Simulink environment graphically represents model algorithms - thus providing visibility into the model - algorithms can be easily analyzed and modified by engineers and analysts with limited software skills. In addition, software tools have been created that provide for the automatic code generation of C++ objects. These objects are created with well-defined interfaces enabling them to be used by modeling architectures external to the MATLAB/Simulink environment. The approach utilized is generic and can be extended to other engineering fields.

  12. SIMMER-III analytic thermophysical property model

    International Nuclear Information System (INIS)

    Morita, K; Tobita, Y.; Kondo, Sa.; Fischer, E.A.

    1999-05-01

    An analytic thermophysical property model using general function forms is developed for a reactor safety analysis code, SIMMER-III. The function forms are designed to represent correct behavior of properties of reactor-core materials over wide temperature ranges, especially for the thermal conductivity and the viscosity near the critical point. The most up-to-date and reliable sources for uranium dioxide, mixed-oxide fuel, stainless steel, and sodium available at present are used to determine parameters in the proposed functions. This model is also designed to be consistent with a SIMMER-III model on thermodynamic properties and equations of state for reactor-core materials. (author)

  13. New analytic results for speciation times in neutral models.

    Science.gov (United States)

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic way, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic trees. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is, in particular, useful for the model with extinction, since simulations of birth-death processes which are conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
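
    To make the setting concrete, the sketch below simulates speciation times under a pure-birth (Yule) process run until n species exist; it is a simulation illustration only, not the paper's analytic formulas, and the values of λ and n are arbitrary.

        # Simulate the speciation times of a Yule (pure-birth) process until n species
        # exist. Illustrative only; the paper derives closed-form densities instead.
        import random

        def yule_speciation_times(n, lam=1.0, seed=42):
            """Return the n-1 speciation times of a Yule process started from one lineage."""
            rng = random.Random(seed)
            t, k, times = 0.0, 1, []
            while k < n:
                # with k lineages, the waiting time to the next speciation is Exp(k * lam)
                t += rng.expovariate(k * lam)
                times.append(t)
                k += 1
            return times

        times = yule_speciation_times(n=10, lam=1.0)
        print([round(t, 3) for t in times])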

  14. Organizational Models for Big Data and Analytics

    Directory of Open Access Journals (Sweden)

    Robert L. Grossman

    2014-04-01

    In this article, we introduce a framework for determining how analytics capability should be distributed within an organization. Our framework stresses the importance of building a critical mass of analytics staff, centralizing or decentralizing the analytics staff to support business processes, and establishing an analytics governance structure to ensure that analytics processes are supported by the organization as a whole.

  15. Analytical modeling of parametrically modulated transmon qubits

    Science.gov (United States)

    Didier, Nicolas; Sete, Eyob A.; da Silva, Marcus P.; Rigetti, Chad

    2018-02-01

    Building a scalable quantum computer requires developing appropriate models to understand and verify its complex quantum dynamics. We focus on superconducting quantum processors based on transmons for which full numerical simulations are already challenging at the level of qubytes. It is thus highly desirable to develop accurate methods of modeling qubit networks that do not rely solely on numerical computations. Using systematic perturbation theory to large orders in the transmon regime, we derive precise analytic expressions of the transmon parameters. We apply our results to the case of parametrically modulated transmons to study recently implemented, parametrically activated entangling gates.

  16. An analytical model for interactive failures

    International Nuclear Information System (INIS)

    Sun Yong; Ma Lin; Mathew, Joseph; Zhang Sheng

    2006-01-01

    In some systems, failures of certain components can interact with each other, and accelerate the failure rates of these components. These failures are defined as interactive failures. Interactive failure is a prevalent cause of failure associated with complex systems, particularly in mechanical systems. The failure risk of an asset will be underestimated if the interactive effect is ignored. When failure risk is assessed, interactive failures of an asset need to be considered. However, the literature is silent on previous research work in this field. This paper introduces the concepts of interactive failure, develops an analytical model to analyse this type of failure quantitatively, and verifies the model using case studies and experiments.
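
    One simple way to formalize the idea (our hedged reading of "interactive failures", not necessarily the exact model of the paper) is to let each component's effective hazard rate be its independent hazard plus contributions from the hazards of influencing components, scaled by interaction coefficients. A minimal sketch:

        # Hedged sketch: effective hazard rates with interactive effects, using
        # lambda_eff[i] = lambda_ind[i] + sum_j theta[i][j] * lambda_ind[j].
        # The matrix theta encodes how failures of component j accelerate component i.
        # Values are hypothetical; this is one plausible formalization, not the paper's.
        import numpy as np

        lambda_ind = np.array([0.002, 0.001, 0.004])     # independent hazard rates [1/h]
        theta = np.array([[0.0, 0.3, 0.1],               # influence of others on component 0
                          [0.2, 0.0, 0.0],               # ... on component 1
                          [0.0, 0.5, 0.0]])              # ... on component 2

        lambda_eff = lambda_ind + theta @ lambda_ind
        for i, (ind, eff) in enumerate(zip(lambda_ind, lambda_eff)):
            print(f"component {i}: independent {ind:.4f} -> with interaction {eff:.4f} [1/h]")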

  17. Analytical model of the optical vortex microscope.

    Science.gov (United States)

    Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2016-04-20

    This paper presents an analytical model of the optical vortex scanning microscope. In this microscope the Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution of the whole path of the beam in the system (within the paraxial approximation), from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step, the expression for the light complex amplitude has the same form, with only four coefficients modified. We also derive a simple expression for the vortex trajectory for small vortex displacements.

  18. Building analytical three-field cosmological models

    Science.gov (United States)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.

  19. Beyond Standard Model Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bellantoni, L.

    2009-11-01

    There are many recent results from searches for fundamental new physics using the TeVatron, the SLAC b-factory and HERA. This talk quickly reviewed searches for pair-produced stop, for gauge-mediated SUSY breaking, for Higgs bosons in the MSSM and NMSSM models, for leptoquarks, and v-hadrons. There is a SUSY model which accommodates the recent astrophysical experimental results that suggest that dark matter annihilation is occurring in the center of our galaxy, and a relevant experimental result. Finally, model-independent searches at D0, CDF, and H1 are discussed.

  20. Chapter 1: Standard Model processes

    OpenAIRE

    Becher, Thomas

    2017-01-01

    This chapter documents the production rates and typical distributions for a number of benchmark Standard Model processes, and discusses new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  1. Analytic models of plausible gravitational lens potentials

    International Nuclear Information System (INIS)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2009-01-01

    Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly-truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily-calculable higher order derivatives of the lens potential when studying catastrophes in strong lenses.
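
    For context, the untruncated "universal" (NFW) dark matter density profile that these truncated profiles modify is, in standard notation (our addition, not text from the record),

        \rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2},

    and the smoothly truncated versions discussed above multiply this by a factor of the form \left[\tau^2/(\tau^2 + (r/r_s)^2)\right]^m with m = 1 or 2, which produces the quoted r^{-5} or r^{-7} fall-off beyond the tidal radius while keeping the total mass finite.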

  2. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  3. Extended Standard Hough Transform for Analytical Line Recognition

    OpenAIRE

    Abdoulaye SERE; Oumarou SIE; Eric ANDRES

    2013-01-01

    This paper presents a new method which extends the Standard Hough Transform for the recognition of naive or standard lines in a noisy picture. The proposed idea conserves the power of the Standard Hough Transform, particularly the limited size of the parameter space and the recognition of vertical lines. The dual of a segment and the dual of a pixel have been proposed, leading to a new definition of the preimage. Many alternatives of approximation could be established for the sinusoid curves of t...
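
    For context, the Standard Hough Transform that the above method extends maps each feature pixel to a sinusoid in a bounded (theta, rho) parameter space and accumulates votes; peaks in the accumulator correspond to detected lines. The sketch below is a minimal generic Python implementation of that baseline (the bin counts and the toy image are arbitrary choices for illustration); it is not the extended transform proposed in the paper.

        import numpy as np

        def standard_hough_transform(binary_image, n_theta=180, n_rho=200):
            """Accumulate votes in (theta, rho) space for each foreground pixel."""
            h, w = binary_image.shape
            diag = np.hypot(h, w)
            thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
            rhos = np.linspace(-diag, diag, n_rho)
            accumulator = np.zeros((n_rho, n_theta), dtype=int)
            ys, xs = np.nonzero(binary_image)
            for x, y in zip(xs, ys):
                # Each pixel votes along the sinusoid rho = x*cos(theta) + y*sin(theta).
                rho_vals = x * np.cos(thetas) + y * np.sin(thetas)
                rho_idx = np.digitize(rho_vals, rhos) - 1
                accumulator[rho_idx, np.arange(n_theta)] += 1
            return accumulator, thetas, rhos

        # Toy example: a diagonal line plus salt noise.
        img = np.zeros((50, 50), dtype=int)
        img[np.arange(50), np.arange(50)] = 1
        rng = np.random.default_rng(0)
        img[rng.integers(0, 50, 30), rng.integers(0, 50, 30)] = 1
        acc, thetas, rhos = standard_hough_transform(img)
        i_rho, i_theta = np.unravel_index(acc.argmax(), acc.shape)
        print(f"strongest line: theta = {np.degrees(thetas[i_theta]):.1f} deg, rho = {rhos[i_rho]:.1f}")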

  4. Preparation of standard hair material and development of analytical methodology

    International Nuclear Information System (INIS)

    Gangadharan, S.; Walvekar, A.P.; Ali, M.M.; Thantry, S.S.; Verma, R.; Devi, R.

    1995-01-01

    The concept of the use of human scalp hair as a first level indicator of exposure to inorganic pollutants has been established by us earlier. Efforts towards the preparation of a hair reference material are described. The analytical approaches for the determination of total mercury by cold vapour AAS and INAA and of methylmercury by extraction combined with gas chromatography coupled to an ECD are summarized with results on some of the samples analyzed, including the stability of values over a period of time of storage. (author)

  5. Physics beyond the Standard Model

    CERN Document Server

    Valle, José W F

    1991-01-01

    We discuss some of the signatures associated with extensions of the Standard Model related to the neutrino and electroweak symmetry breaking sectors, with and without supersymmetry. The topics include a basic discussion of the theory of neutrino mass and the corresponding extensions of the Standard Model that incorporate massive neutrinos; an overview of the present observational status of neutrino mass searches, with emphasis on solar neutrinos, as well as the cosmological data on the amplitude of primordial density fluctuations; the implications of neutrino mass in cosmological nucleosynthesis, non-accelerator, as well as in high energy particle collider experiments. Turning to the electroweak breaking sector, we discuss the physics potential for Higgs boson searches at LEP200, including Majoron extensions of the Standard Model, and the physics of invisibly decaying Higgs bosons. We discuss the minimal supersymmetric Standard Model phenomenology, as well as some of the laboratory signatures that would be as...

  6. Physics Beyond the Standard Model

    CERN Document Server

    Ellis, John

    2009-01-01

    The Standard Model is in good shape, apart possibly from g_\mu - 2 and some niggling doubts about the electroweak data. Something like a Higgs boson is required to provide particle masses, but theorists are actively considering alternatives. The problems of flavour, unification and quantum gravity will require physics beyond the Standard Model, and astrophysics and cosmology also provide reasons to expect physics beyond the Standard Model, in particular to provide the dark matter and explain the origin of the matter in the Universe. Personally, I find supersymmetry to be the most attractive option for new physics at the TeV scale. The LHC should establish the origin of particle masses, has good prospects for discovering dark matter, and might also cast light on unification and even quantum gravity. Important roles may also be played by lower-energy experiments, astrophysics and cosmology in the searches for new physics beyond the Standard Model.

  7. Beyond the standard model; Au-dela du modele standard

    Energy Technology Data Exchange (ETDEWEB)

    Cuypers, F. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-05-01

    These lecture notes are intended as a pedagogical introduction to several popular extensions of the standard model of strong and electroweak interactions. The topics include the Higgs sector, the left-right symmetric model, grand unification and supersymmetry. Phenomenological consequences and search procedures are emphasized. (author) figs., tabs., 18 refs.

  8. Standardizing Physiologic Assessment Data to Enable Big Data Analytics.

    Science.gov (United States)

    Matney, Susan A; Settergren, Theresa Tess; Carrington, Jane M; Richesson, Rachel L; Sheide, Amy; Westra, Bonnie L

    2016-07-18

    Disparate data must be represented in a common format to enable comparison across multiple institutions and facilitate big data science. Nursing assessments represent a rich source of information. However, a lack of agreement regarding essential concepts and standardized terminology prevents their use for big data science in the current state. The purpose of this study was to align a minimum set of physiological nursing assessment data elements with national standardized coding systems. Six institutions shared their 100 most common electronic health record nursing assessment data elements. From these, a set of distinct elements was mapped to the nationally recognized Logical Observation Identifiers Names and Codes (LOINC®) and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT®) standards. We identified 137 observation names (55% new to LOINC), and 348 observation values (20% new to SNOMED CT) organized into 16 panels (72% new to LOINC). This reference set can support the exchange of nursing information, facilitate multi-site research, and provide a framework for nursing data analysis. © The Author(s) 2016.

  9. Approximate analytical modeling of leptospirosis infection

    Science.gov (United States)

    Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani

    2017-11-01

    Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with feces or urine, or through bites of infected rodents, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.

  10. Simple Analytic Models of Gravitational Collapse

    Energy Technology Data Exchange (ETDEWEB)

    Adler, R.

    2005-02-09

    Most general relativity textbooks devote considerable space to the simplest example of a black hole containing a singularity, the Schwarzschild geometry. However only a few discuss the dynamical process of gravitational collapse, by which black holes and singularities form. We present here two types of analytic models for this process, which we believe are the simplest available; the first involves collapsing spherical shells of light, analyzed mainly in Eddington-Finkelstein coordinates; the second involves collapsing spheres filled with a perfect fluid, analyzed mainly in Painleve-Gullstrand coordinates. Our main goal is pedagogical simplicity and algebraic completeness, but we also present some results that we believe are new, such as the collapse of a light shell in Kruskal-Szekeres coordinates.

  11. Graph and Analytical Models for Emergency Evacuation

    Directory of Open Access Journals (Sweden)

    Erol Gelenbe

    2013-02-01

    Full Text Available Cyber-Physical-Human Systems (CPHS) combine sensing, communication and control to obtain desirable outcomes in physical environments for human beings, such as buildings or vehicles. A particularly important application area is emergency management. Recent work on the design and optimisation of emergency management schemes has relied essentially on discrete event simulation, which is challenged by the substantial amount of programming or reprogramming of the simulation tools and by the scalability and computing time needed to obtain useful performance estimates. This paper instead proposes an approach that offers fast estimates based on graph models and probability models. We show that graph models can offer insight into the critical areas in an emergency evacuation and that they can suggest locations where sensor systems are particularly important and may require hardening. On the other hand, we also show that analytical models based on queueing theory can provide useful estimates of evacuation times and for routing optimisation. The results are illustrated with regard to the evacuation of a real-life building.
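
    As a toy illustration of the queueing-theory side of such an approach, the sketch below (Python) treats a single exit as an M/M/1 queue and returns the expected time an evacuee spends waiting for and passing through it. The arrival and service rates are hypothetical, and reducing an exit to a single M/M/1 station is my simplification for illustration, not the network model used in the paper.

        def mm1_evacuation_time(arrival_rate, service_rate):
            """Expected time (waiting + passage) per evacuee at one exit, M/M/1 model.

            arrival_rate: evacuees reaching the exit per second (lambda)
            service_rate: evacuees passing through the exit per second (mu)
            """
            if arrival_rate >= service_rate:
                raise ValueError("unstable queue: arrival rate must be below service rate")
            return 1.0 / (service_rate - arrival_rate)

        # Hypothetical numbers: 1.5 persons/s arrive at a door that passes 2 persons/s.
        print(f"expected time per evacuee: {mm1_evacuation_time(1.5, 2.0):.1f} s")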

  12. An analytical model of flagellate hydrodynamics

    International Nuclear Information System (INIS)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left–right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming trajectory. We find that the longitudinal flagellum is responsible for the average translational motion whereas the transversal flagellum governs the rotational motion. Finally, we show that the transversal flagellum can lead to strong feeding currents to localized capture sites on the cell surface. (paper)

  13. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    Science.gov (United States)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  14. About the standard solar model

    International Nuclear Information System (INIS)

    Cahen, S.

    1986-07-01

    A discussion of the still controversial solar helium content is presented, based on a comparison of recent standard solar models. Our latest model yields a helium mass fraction of ∼0.276, 6.4 SNU for ³⁷Cl and 126 SNU for ⁷¹Ga.

  15. The standard model and colliders

    International Nuclear Information System (INIS)

    Hinchliffe, I.

    1987-03-01

    Some topics in the standard model of strong and electroweak interactions are discussed, as well as how these topics are relevant for the high energy colliders which will become operational in the next few years. The radiative corrections in the Glashow-Weinberg-Salam model are discussed, stressing how these corrections may be measured at LEP and the SLC. CP violation is discussed briefly, followed by a discussion of the Higgs boson; the searches relevant to hadron colliders are then described. Some of the problems which the standard model does not solve are discussed, and the energy ranges accessible to the new colliders are indicated.

  16. Dynamics of the standard model

    CERN Document Server

    Donoghue, John F; Holstein, Barry R

    2014-01-01

    Describing the fundamental theory of particle physics and its applications, this book provides a detailed account of the Standard Model, focusing on techniques that can produce information about real observed phenomena. The book begins with a pedagogic account of the Standard Model, introducing essential techniques such as effective field theory and path integral methods. It then focuses on the use of the Standard Model in the calculation of physical properties of particles. Rigorous methods are emphasized, but other useful models are also described. This second edition has been updated to include recent theoretical and experimental advances, such as the discovery of the Higgs boson. A new chapter is devoted to the theoretical and experimental understanding of neutrinos, and major advances in CP violation and electroweak physics have been given a modern treatment. This book is valuable to graduate students and researchers in particle physics, nuclear physics and related fields.

  17. From basic survival analytic theory to a non-standard application

    CERN Document Server

    Zimmermann, Georg

    2017-01-01

    Georg Zimmermann provides a mathematically rigorous treatment of basic survival analytic methods. His emphasis is also placed on various questions and problems, especially with regard to life expectancy calculations arising from a particular real-life dataset on patients with epilepsy. The author shows both the step-by-step analyses of that dataset and the theory the analyses are based on. He demonstrates that one may face serious and sometimes unexpected problems, even when conducting very basic analyses. Moreover, the reader learns that a practically relevant research question may look rather simple at first sight. Nevertheless, compared to standard textbooks, a more detailed account of the theory underlying life expectancy calculations is needed in order to provide a mathematically rigorous framework. Contents: Regression Models for Survival Data; Model Checking Procedures; Life Expectancy. Target Groups: Researchers, lecturers, and students in the fields of mathematics and statistics; academics and experts work...

  18. The standard model and beyond

    CERN Document Server

    Langacker, Paul

    2017-01-01

    This new edition of The Standard Model and Beyond presents an advanced introduction to the physics and formalism of the standard model and other non-abelian gauge theories. It provides a solid background for understanding supersymmetry, string theory, extra dimensions, dynamical symmetry breaking, and cosmology. In addition to updating all of the experimental and phenomenological results from the first edition, it contains a new chapter on collider physics; expanded discussions of Higgs, neutrino, and dark matter physics; and many new problems. The book first reviews calculational techniques in field theory and the status of quantum electrodynamics. It then focuses on global and local symmetries and the construction of non-abelian gauge theories. The structure and tests of quantum chromodynamics, collider physics, the electroweak interactions and theory, and the physics of neutrino mass and mixing are thoroughly explored. The final chapter discusses the motivations for extending the standard model and examin...

  19. Standard model of knowledge representation

    Science.gov (United States)

    Yin, Wensheng

    2016-09-01

    Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic network, computer programming language, database, mathematical model, graphics language, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. Based on ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict the traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, it can express process knowledge, and at the same time it has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to solve problems of imprecise and inconsistent knowledge.

  20. Extensions of the Standard Model

    CERN Document Server

    Zwirner, Fabio

    1996-01-01

    Rapporteur talk at the International Europhysics Conference on High Energy Physics, Brussels (Belgium), July 27-August 2, 1995. This talk begins with a brief general introduction to the extensions of the Standard Model, reviewing the ideology of effective field theories and its practical implications. The central part deals with candidate extensions near the Fermi scale, focusing on some phenomenological aspects of the Minimal Supersymmetric Standard Model. The final part discusses some possible low-energy implications of further extensions near the Planck scale, namely superstring theories.

  1. Physics beyond the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Valle, J.W.F. [Valencia Univ. (Spain). Dept. de Fisica Teorica]. E-mail: valle@flamenco.uv.es

    1996-07-01

    We discuss some of the signatures associated with extensions of the Standard Model related to the neutrino and electroweak symmetry breaking sectors, with and without supersymmetry. The topics include a basic discussion of the theory of neutrino mass and the corresponding extensions of the Standard Model that incorporate massive neutrinos; an overview of the present observational status of neutrino mass searches, with emphasis on solar neutrinos, as well as cosmological data on the amplitude of primordial density fluctuations; the implications of neutrino mass in cosmological nucleosynthesis, non-accelerator, as well as in high energy particle collider experiments. Turning to the electroweak breaking sector, we discuss the physics potential for Higgs boson searches at LEP200, including Majoron extensions of the Standard Model, and the physics of invisibly decaying Higgs bosons. We discuss the minimal supersymmetric Standard Model phenomenology, as well as some of the laboratory signatures that would be associated with models with R parity violation, especially in Z and scalar boson decays. (author)

  2. Analytic modeling of axisymmetric disruption halo currents

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Kellman, A.G.

    1999-01-01

    Currents which can flow in plasma facing components during disruptions pose a challenge to the design of next generation tokamaks. Induced toroidal eddy currents and both induced and conducted poloidal "halo" currents can produce design-limiting electromagnetic loads. While induction of toroidal and poloidal currents in passive structures is a well-understood phenomenon, the driving terms and scalings for poloidal currents flowing on open field lines during disruptions are less well established. A model of halo current evolution is presented in which the current is induced in the halo by decay of the plasma current and change in enclosed toroidal flux while being convected into the halo from the core by plasma motion. Fundamental physical processes and scalings are described in a simplified analytic version of the model. The peak axisymmetric halo current is found to depend on halo and core plasma characteristics during the current quench, including machine and plasma dimensions, resistivities, safety factor, and vertical stability growth rate. Two extreme regimes in poloidal halo current amplitude are identified depending on the minimum halo safety factor reached during the disruption. A 'type I' disruption is characterized by a minimum safety factor that remains relatively high (typically 2 - 3, comparable to the predisruption safety factor), and a relatively low poloidal halo current. A 'type II' disruption is characterized by a minimum safety factor comparable to unity and a relatively high poloidal halo current. Model predictions for these two regimes are found to agree well with halo current measurements from vertical displacement event disruptions in DIII-D [T. S. Taylor, K. H. Burrell, D. R. Baker, G. L. Jackson, R. J. La Haye, M. A. Mahdavi, R. Prater, T. C. Simonen, and A. D. Turnbull, "Results from the DIII-D Scientific Research Program," in Proceedings of the 17th IAEA Fusion Energy Conference, Yokohama, 1998, to be published in

  3. Custom v. Standardized Risk Models

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-05-01

    Full Text Available We discuss when and why custom multi-factor risk models are warranted and give source code for computing some risk factors. Pension/mutual funds do not require customization but standardization. However, using standardized risk models in quant trading with much shorter holding horizons is suboptimal: (1) longer horizon risk factors (value, growth, etc.) increase noise trades and trading costs; (2) arbitrary risk factors can neutralize alpha; (3) “standardized” industries are artificial and insufficiently granular; (4) normalization of style risk factors is lost for the trading universe; (5) diversifying risk models lowers P&L correlations, reduces turnover and market impact, and increases capacity. We discuss various aspects of custom risk model building.
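
    To make the multi-factor structure referred to above concrete, the sketch below (Python) assembles the textbook factor-model covariance Sigma = B F B^T + D from a loading matrix B, a factor covariance F and a diagonal specific-risk matrix D. The dimensions and random inputs are hypothetical, and this generic construction is not the source code distributed with the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        n_assets, n_factors = 500, 10                        # hypothetical universe size

        B = rng.normal(size=(n_assets, n_factors))           # factor loadings
        F = np.cov(rng.normal(size=(n_factors, 252)))        # factor covariance (toy data)
        D = np.diag(rng.uniform(0.01, 0.04, size=n_assets))  # diagonal specific risk

        # Multi-factor model covariance: Sigma = B F B^T + D
        Sigma = B @ F @ B.T + D

        # Risk of a hypothetical equal-weight portfolio.
        w = np.full(n_assets, 1.0 / n_assets)
        print(f"portfolio variance (toy units): {w @ Sigma @ w:.6f}")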

  4. Using Learning Analytics to Enhance Student Learning in Online Courses Based on Quality Matters Standards

    Science.gov (United States)

    Martin, Florence; Ndoye, Abdou; Wilkins, Patricia

    2016-01-01

    Quality Matters is recognized as a rigorous set of standards that guide the designer or instructor to design quality online courses. We explore how Quality Matters standards guide the identification and analysis of learning analytics data to monitor and improve online learning. Descriptive data were collected for frequency of use, time spent, and…

  5. Challenges in the development of analytical soil compaction models

    DEFF Research Database (Denmark)

    Keller, Thomas; Lamandé, Mathieu

    2010-01-01

    Soil compaction can cause a number of environmental and agronomic problems (e.g. flooding, erosion, leaching of agrochemicals to recipient waters, emission of greenhouse gases to the atmosphere, crop yield losses), resulting in significant economic damage to society and agriculture. Strategies...... and recommendations for the prevention of soil compaction often rely on simulation models. This paper highlights some issues that need further consideration in order to improve soil compaction modelling, with the focus on analytical models. We discuss the different issues based on comparisons between experimental...... to stress propagation, an anomaly that needs further attention. We found large differences between soil stress-strain behaviour obtained from in situ measurements during wheeling experiments and those measured on cylindrical soil samples in standard laboratory tests. We concluded that the main reason...

  6. Standard Model at LHC 2016

    CERN Document Server

    2016-01-01

    The meeting aims to bring together experimentalists and theorists to discuss the phenomenology, observational results and theoretical tools for Standard Model physics at the LHC. The agenda is divided into four working groups: Electroweak physics Higgs physics QCD (hard, soft & PDFs) Top & flavour physics

  7. The standard model and beyond

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1989-05-01

    In these lectures, my aim is to present a status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows. I survey the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also commented on. In addition, I have included an appendix on dimensional regularization and a simple example which employs that technique. I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, extra Z' bosons, and compositeness are discussed. An overview of the physics of tau decays is also included. I discuss weak neutral current phenomenology and the extraction of sin² θ_W from experiment. The results presented there are based on a global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, implications for grand unified theories (GUTs), extra Z' gauge bosons, and atomic parity violation. The potential for further experimental progress is also commented on. Finally, I depart from the narrowest version of the standard model and discuss effects of neutrino masses, mixings, and electromagnetic moments. 32 refs., 3 figs., 5 tabs

  8. Beyond the Standard Model course

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The necessity for new physics beyond the Standard Model will be motivated. Theoretical problems will be exposed and possible solutions will be described. The goal is to present the exciting new physics ideas that will be tested in the near future, at LHC and elsewhere. Supersymmetry, grand unification, extra dimensions and a glimpse of string theory will be presented.

  9. Analytical standards production for the analysis of pomegranate anthocyanins by HPLC

    Directory of Open Access Journals (Sweden)

    Manuela Cristina Pessanha de Araújo Santiago

    2014-03-01

    Full Text Available Pomegranate (Punica granatum L.) is a fruit with a long medicinal history, especially due to its phenolic compounds content, such as the anthocyanins, which are reported as one of the most important natural antioxidants. The analysis of the anthocyanins by high performance liquid chromatography (HPLC) can be considered as an important tool to evaluate the quality of pomegranate juice. For research laboratories the major challenge in using HPLC for quantitative analyses is the acquisition of high purity analytical standards, since these are expensive and in some cases not even commercially available. The aim of this study was to obtain analytical standards for the qualitative and quantitative analysis of the anthocyanins from pomegranate. Five vegetable matrices (pomegranate flower, jambolan, jabuticaba, blackberry and strawberry fruits) were used to isolate each of the six anthocyanins present in pomegranate fruit, using analytical-scale HPLC with non-destructive detection, so that they could subsequently be used as analytical standards. Furthermore, their identities were confirmed by high resolution mass spectrometry. The proposed procedure showed that it is possible to obtain analytical standards of anthocyanins with a high purity grade (98.0 to 99.9%) from natural sources, which was proved to be an economic strategy for the production of standards by laboratories according to their research requirements.

  10. Modular modelling with Physiome standards

    Science.gov (United States)

    Nickerson, David P.; Nielsen, Poul M. F.; Hunter, Peter J.

    2016-01-01

    Key points: The complexity of computational models is increasing, supported by research in modelling tools and frameworks. But relatively little thought has gone into design principles for complex models. We propose a set of design principles for complex model construction with the Physiome standard modelling protocol CellML. By following the principles, models are generated that are extensible and are themselves suitable for reuse in larger models of increasing complexity. We illustrate these principles with examples including an architectural prototype linking, for the first time, electrophysiology, thermodynamically compliant metabolism, signal transduction, gene regulation and synthetic biology. The design principles complement other Physiome research projects, facilitating the application of virtual experiment protocols and model analysis techniques to assist the modelling community in creating libraries of composable, characterised and simulatable quantitative descriptions of physiology. Abstract: The ability to produce and customise complex computational models has great potential to have a positive impact on human health. As the field develops towards whole-cell models and linking such models in multi-scale frameworks to encompass tissue, organ, or organism levels, reuse of previous modelling efforts will become increasingly necessary. Any modelling group wishing to reuse existing computational models as modules for their own work faces many challenges in the context of construction, storage, retrieval, documentation and analysis of such modules. Physiome standards, frameworks and tools seek to address several of these challenges, especially for models expressed in the modular protocol CellML. Aside from providing a general ability to produce modules, there has been relatively little research work on architectural principles of CellML models that will enable reuse at larger scales. To complement and support the existing tools and frameworks, we develop a set

  11. Standard errors and confidence intervals for correlations corrected for indirect range restriction: A simulation study comparing analytic and bootstrap methods.

    Science.gov (United States)

    Kennet-Cohen, Tamar; Kleper, Dvir; Turvall, Elliot

    2018-02-01

    A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies. © 2017 The British Psychological Society.
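
    The bootstrap side of the comparison above can be sketched generically: resample the restricted sample with replacement, apply the chosen range-restriction correction to each resample, and take the empirical standard deviation and percentile interval of the corrected correlations. The Python sketch below assumes the correction is supplied as a user-provided function (the paper's own formula is not reproduced here, and its simulations use SAS, not Python).

        import numpy as np

        def bootstrap_corrected_correlation(x, y, correct, n_boot=2000, seed=0):
            """Bootstrap SE and 95% percentile CI for a corrected correlation.

            x, y    : arrays observed in the restricted (selected) sample
            correct : function mapping a restricted-sample correlation to the
                      corrected estimate (e.g. an indirect range-restriction formula)
            """
            rng = np.random.default_rng(seed)
            n = len(x)
            corrected = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, n)                  # resample with replacement
                corrected[b] = correct(np.corrcoef(x[idx], y[idx])[0, 1])
            return corrected.std(ddof=1), np.percentile(corrected, [2.5, 97.5])

        # Toy usage with a placeholder (identity) correction function.
        rng = np.random.default_rng(1)
        x = rng.normal(size=200)
        y = 0.5 * x + rng.normal(size=200)
        se, ci = bootstrap_corrected_correlation(x, y, correct=lambda r: r)
        print(f"bootstrap SE = {se:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")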

  12. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    Science.gov (United States)

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…

  13. Standardization guide for construction and use of MORT-type analytic trees

    Energy Technology Data Exchange (ETDEWEB)

    Buys, J.R.

    1992-02-01

    Since the introduction of MORT (Management Oversight and Risk Tree) technology as a tool for evaluating the success or failure of safety management systems, there has been a proliferation of analytic trees throughout the US Department of Energy (DOE) and its contractor organizations. Standard "fault tree" symbols have generally been used in logic diagram or tree construction, but new or revised symbols have also been adopted by various analysts. Additionally, a variety of numbering systems have been used for event identification. The consequent lack of standardization has caused some difficulties in interpreting the trees and following their logic. This guide seeks to correct this problem by providing a standardized system for construction and use of analytic trees. Future publications of the DOE System Safety Development Center (SSDC) will adhere to this guide. It is recommended that other DOE organizations and contractors also adopt this system to achieve intra-DOE uniformity in analytic tree construction.

  14. D-brane Standard Model

    CERN Document Server

    Antoniadis, Ignatios; Tomaras, T N

    2001-01-01

    The minimal embedding of the Standard Model in type I string theory is described. The SU(3) color and SU(2) weak interactions arise from two different collections of branes. The correct prediction of the weak angle is obtained for a string scale of 6-8 TeV. Two Higgs doublets are necessary and proton stability is guaranteed. It predicts two massive vector bosons with masses at the TeV scale, as well as a new superweak interaction.

  15. The standard model and beyond

    International Nuclear Information System (INIS)

    Gaillard, M.K.

    1989-05-01

    The field of elementary particle, or high energy, physics seeks to identify the most elementary constituents of nature and to study the forces that govern their interactions. Increasing the energy of a probe in a laboratory experiment increases its power as an effective microscope for discerning increasingly smaller structures of matter. Thus we have learned that matter is composed of molecules that are in turn composed of atoms, that the atom consists of a nucleus surrounded by a cloud of electrons, and that the atomic nucleus is a collection of protons and neutrons. The more powerful probes provided by high energy particle accelerators have taught us that a nucleon is itself made of objects called quarks. The forces among quarks and electrons are understood within a general theoretical framework called the "standard model," that accounts for all interactions observed in high energy laboratory experiments to date. These are commonly categorized as the "strong," "weak" and "electromagnetic" interactions. In this lecture I will describe the standard model, and point out some of its limitations. Probing for deeper structures in quarks and electrons defines the present frontier of particle physics. I will discuss some speculative ideas about extensions of the standard model and/or yet more fundamental forces that may underlie our present picture. 11 figs., 1 tab

  16. Extensions of the standard model

    International Nuclear Information System (INIS)

    Ramond, P.

    1983-01-01

    In these lectures we focus on several issues that arise in theoretical extensions of the standard model. First we describe the kinds of fermions that can be added to the standard model without affecting known phenomenology. We focus in particular on three types: the vector-like completion of the existing fermions as would be predicted by a Kaluza-Klein type theory, which we find cannot be realistically achieved without some chiral symmetry; fermions which are vector-like by themselves, such as do appear in supersymmetric extensions, and finally anomaly-free chiral sets of fermions. We note that a chiral symmetry, such as the Peccei-Quinn symmetry can be used to produce a vector-like theory which, at scales less than M_W, appears to be chiral. Next, we turn to the analysis of the second hierarchy problem which arises in Grand Unified extensions of the standard model, and plays a crucial role in proton decay of supersymmetric extensions. We review the known mechanisms for avoiding this problem and present a new one which seems to lead to the (family) triplication of the gauge group. Finally, this being a summer school, we present a list of homework problems. 44 references

  17. Consistency Across Standards or Standards in a New Business Model

    Science.gov (United States)

    Russo, Dane M.

    2010-01-01

    Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, danger of over-prescriptive standards, a balance is needed (between prescriptive and general standards), enabling versus inhibiting, characteristics of success-oriented standards, characteristics of success-oriented standards, and conclusions. Additional slides include NASA Procedural Requirements 8705.2B identifies human rating standards and requirements, draft health and medical standards for human rating, what's been done, government oversight models, examples of consistency from anthropometry, examples of inconsistency from air quality and appendices of government and non-governmental human factors standards.

  18. Institutional model for supporting standardization

    International Nuclear Information System (INIS)

    Sanford, M.O.; Jackson, K.J.

    1993-01-01

    Restoring the nuclear option for utilities requires standardized designs. This premise is widely accepted by all parties involved in ALWR development activities. Achieving and maintaining standardization, however, demands new perspectives on the roles and responsibilities for the various commercial organizations involved in nuclear power. Some efforts are needed to define a workable model for a long-term support structure that will allow the benefits of standardization to be realized. The Nuclear Power Oversight Committee (NPOC) has developed a strategic plan that lays out the steps necessary to enable the nuclear industry to be in a position to order a new nuclear power plant by the mid 1990's. One of the key elements of the plan is the "industry commitment to standardization through design certification, combined license, first-of-a-kind engineering, construction, operation, and maintenance of nuclear power plants." This commitment is a result of the recognition by utilities of the substantial advantages of standardization. Among these are economic benefits, licensing benefits from being treated as one of a family, sharing risks across a broader ownership group, sharing operating experiences, enhancing public safety, and a more coherent market force. Utilities controlled the construction of the past generation of nuclear units in a largely autonomous fashion, procuring equipment and designs from a vendor, engineering services from an architect/engineer, and construction from a construction management firm. This, in addition to forcing the utility to assume virtually all of the risks associated with the project, typically resulted in highly customized designs based on preferences of the individual utility. However, the benefits of standardization can be realized only through cooperative choices and decision making by the utilities and through working as partners with reactor vendors, architect/engineers, and construction firms.

  19. Modeling and analytical simulation of a smouldering carbonaceous ...

    African Journals Online (AJOL)

    A.A. Mohammed, R.O. Olayiwola, M. Eseyin, A.A. Wachin

    Modeling of pyrolysis and combustion in a smouldering fuel bed requires the solution of flow, heat and mass transfer through porous media. This paper presents an analytical method ...

  20. Solution standards for quality control of nuclear-material analytical measurements

    International Nuclear Information System (INIS)

    Clark, J.P.

    1981-01-01

    Analytical chemistry measurement control depends upon reliable solution standards. At the Savannah River Plant Control Laboratory over a thousand analytical measurements are made daily for process control, product specification, accountability, and nuclear safety. Large quantities of solution standards are required for a measurement quality control program covering the many different analytical chemistry methods. Savannah River Plant-produced uranium, plutonium, neptunium, and americium metals or oxides are dissolved to prepare stock solutions for working or Quality Control Standards (QCS). Because extensive analytical effort is required to characterize or confirm these solutions, they are prepared in large quantities. These stock solutions are diluted and blended with different chemicals and/or each other to synthesize QCS that match the matrices of different process streams. The target uncertainty of a standard's reference value is 10% of the limit of error of the methods used for routine measurements. Standard Reference Materials from NBS are used according to special procedures to calibrate the methods used in measuring the uranium and plutonium standards so traceability can be established. Special precautions are required to minimize the effects of temperature, radiolysis, and evaporation. Standard reference values are periodically corrected to eliminate systematic errors caused by evaporation or decay products. Measurement control is achieved by requiring analysts to analyze a blind QCS each shift a measurement system is used on plant samples. Computer evaluation determines whether or not a measurement is within the ±3σ control limits. Monthly evaluations of the QCS measurements are made to determine current bias correction factors for accountability measurements and detect significant changes in the bias and precision statistics. The evaluations are also used to plan activities for improving the reliability of the analytical chemistry measurements.
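
    A minimal sketch of the kind of ±3σ check described above: given a history of measurements of a blind quality-control standard, flag any new result outside the mean ± 3 standard deviations. The numbers are hypothetical and the snippet (Python) only illustrates the control-limit logic, not the plant's actual evaluation software.

        import numpy as np

        def control_limits(history, k=3.0):
            """Return (lower, upper) control limits from past QC measurements."""
            mean, sd = np.mean(history), np.std(history, ddof=1)
            return mean - k * sd, mean + k * sd

        # Hypothetical QC history for a standard (arbitrary concentration units).
        history = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00])
        low, high = control_limits(history)

        new_result = 10.12
        print(f"limits = ({low:.3f}, {high:.3f}); "
              f"result {new_result} in control: {low <= new_result <= high}")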

  1. Analytically solvable models of reaction-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Zemskov, E P; Kassner, K [Institut fuer Theoretische Physik, Otto-von-Guericke-Universitaet, Universitaetsplatz 2, 39106 Magdeburg (Germany)

    2004-05-01

    We consider a class of analytically solvable models of reaction-diffusion systems. An analytical treatment is possible because the nonlinear reaction term is approximated by a piecewise linear function. As particular examples we choose front and pulse solutions to illustrate the matching procedure in the one-dimensional case.
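
    To show what a piecewise linear reaction term looks like in practice, the Python sketch below integrates the one-dimensional reaction-diffusion equation u_t = u_xx + f(u) with the McKean-type caricature f(u) = -u + H(u - a), where H is the Heaviside step; a travelling front develops. The specific caricature, parameter values and explicit finite-difference scheme are my illustrative assumptions; the paper's analytical matching construction is not reproduced here.

        import numpy as np

        # Illustrative parameters: threshold a, domain length L, grid and time step.
        a, L, nx = 0.25, 100.0, 400
        dx = L / nx
        dt, nt = 0.01, 3000                      # explicit scheme: dt < dx**2 / 2
        x = np.arange(nx) * dx

        def reaction(u):
            """Piecewise linear (McKean-type) reaction term f(u) = -u + H(u - a)."""
            return -u + (u > a).astype(float)

        u = np.where(x < 20.0, 1.0, 0.0)         # initial condition: front near x = 20
        for _ in range(nt):
            u_ext = np.concatenate(([u[0]], u, [u[-1]]))        # no-flux ghost cells
            lap = (u_ext[2:] - 2.0 * u + u_ext[:-2]) / dx**2
            u = u + dt * (lap + reaction(u))

        front = x[np.argmin(np.abs(u - 0.5))]
        print(f"front position after t = {nt * dt:.0f}: x ~ {front:.1f} (started near x = 20)")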

  2. Analytical modeling of masonry infilled steel frames

    International Nuclear Information System (INIS)

    Flanagan, R.D.; Jones, W.D.; Bennett, R.M.

    1991-01-01

    A comprehensive program is underway at the Oak Ridge Y-12 Plant to evaluate the seismic capacity of unreinforced hollow clay tile infilled steel frames. This program has three major parts. First, preliminary numerical analyses are conducted to predict behavior, initial cracking loads, ultimate capacity loads, and to identify important parameters. Second, in-situ and laboratory tests are performed to obtain constitutive parameters and confirm predicted behavior. Finally, the analytical techniques are refined based on experimental results. This paper summarizes the findings of the preliminary numerical analyses. A review of current analytical methods was conducted and a subset of these methods was applied to known experimental results. Parametric studies were used to find the sensitivity of the behavior to various parameters. Both in-plane and out-of-plane loads were examined. Two types of out-of-plane behavior were examined, the inertial forces resulting from the mass of the infill panel and the out-of-plane forces resulting from interstory drift. Cracking loads were estimated using linear elastic analysis and an elliptical failure criterion. Calculated natural frequencies were correlated with low amplitude vibration testing. Ultimate behavior under inertial loads was estimated using a modified yield line procedure accounting for membrane stresses. The initial stiffness and ultimate capacity under in-plane loadings were predicted using finite element analyses. Results were compared to experimental data and to failure loads obtained using plastic collapse theory

  3. The standard model and beyond

    CERN Document Server

    Vergados, J D

    2017-01-01

    This book contains a systematic and pedagogical exposition of recent developments in particle physics and cosmology. It starts with two introductory chapters on group theory and the Dirac theory. Then it proceeds with the formulation of the Standard Model (SM) of Particle Physics, particle content and symmetries, fully exploiting the first chapters. It discusses the concept of gauge symmetries and emphasizes their role in particle physics. It then analyses the Higgs mechanism and the spontaneous symmetry breaking (SSB). It explains how the particles (gauge bosons and fermions) after SSB acquire a mass and get admixed. The various forms of charged currents are discussed in detail as well as how the parameters of the SM, which cannot be determined by the theory, are fixed by experiment, including the recent LHC data and the Higgs discovery. Quantum chromodynamics is discussed and various low energy approximations to it are presented. The Feynman diagrams are introduced and applied, in a way understandable by fir...

  4. Analytic Models for Sunlight Charging of a Rapidly Spinning Satellite

    National Research Council Canada - National Science Library

    Tautz, Maurice

    2003-01-01

    ... photoelectrons can be blocked by local potential barriers. In this report, we discuss two analytic models for sunlight charging of a rapidly spinning spherical satellite, both of which are based on blocked photoelectron currents...

  5. Analytical Models Development of Compact Monopole Vortex Flows

    Directory of Open Access Journals (Sweden)

    Pavlo V. Lukianov

    2017-09-01

    Conclusions. The article contains a series of the latest analytical models that describe both the laminar and turbulent dynamics of monopole vortex flows, which have not been reflected in traditional publications up to the present. Further research must be directed towards the search for analytical models of the coherent vortical structures in flows of viscous fluids, particularly near curved surfaces, where the “wall law” known in hydromechanics is violated and heat and mass transfer anomalies take place.

  6. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, S.; Brincker, Rune

    An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load displacement-curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside...... the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-lineary on local elongation corresponding to a linear softening relation for the fictitious crack. For different beam size results from the analytical model...... is compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load...

  7. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune

    1995-01-01

    An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations...... are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods...... for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...

  8. Non-commutative standard model: model building

    CERN Document Server

    Chaichian, Masud; Presnajder, P

    2003-01-01

    A non-commutative version of the usual electro-weak theory is constructed. We discuss how to overcome the two major problems: (1) although we can have non-commutative U(n) (which we denote by U_*(n)) gauge theory we cannot have non-commutative SU(n) and (2) the charges in non-commutative QED are quantized to just 0, ±1. We show how the latter problem with charge quantization, as well as with the gauge group, can be resolved by taking the U_*(3) x U_*(2) x U_*(1) gauge group and reducing the extra U(1) factors in an appropriate way. Then we proceed with building the non-commutative version of the standard model by specifying the proper representations for the entire particle content of the theory, the gauge bosons, the fermions and Higgs. We also present the full action for the non-commutative standard model (NCSM). In addition, among several peculiar features of our model, we address the inherent CP violation and new neutrino interactions. (orig.)

  9. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book, offering a modeling technique based on Lagrange's energy method, includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  10. Analytical modeling of nonradial expansion plumes

    Science.gov (United States)

    Boyd, Iain D.

    1990-01-01

    The 'Modified Simons' model presented allows the nonradial nature of axisymmetric rocket and thruster plume flowfields having a large exit Mach number and/or a large nozzle exit half-angle to be successfully predicted. The model is applied to monatomic and polyatomic gas (N, Ar, tetrafluoromethane) expansions; the nonradial density decay observed experimentally is successfully predicted.

  11. A simulation-based analytic model of radio galaxies

    Science.gov (United States)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  12. National Cancer Institute Biospecimen Evidence-Based Practices: a novel approach to pre-analytical standardization.

    Science.gov (United States)

    Engel, Kelly B; Vaught, Jim; Moore, Helen M

    2014-04-01

    Variable biospecimen collection, processing, and storage practices may introduce variability in biospecimen quality and analytical results. This risk can be minimized within a facility through the use of standardized procedures; however, analysis of biospecimens from different facilities may be confounded by differences in procedures and inferred biospecimen quality. Thus, a global approach to standardization of biospecimen handling procedures and their validation is needed. Here we present the first in a series of procedural guidelines that were developed and annotated with published findings in the field of human biospecimen science. The series of documents will be known as NCI Biospecimen Evidence-Based Practices, or BEBPs. Pertinent literature was identified via the National Cancer Institute (NCI) Biospecimen Research Database (brd.nci.nih.gov) and findings were organized by specific biospecimen pre-analytical factors and analytes of interest (DNA, RNA, protein, morphology). Meta-analysis results were presented as annotated summaries, which highlight concordant and discordant findings and the threshold and magnitude of effects when applicable. The detailed and adaptable format of the document is intended to support the development and execution of evidence-based standard operating procedures (SOPs) for human biospecimen collection, processing, and storage operations.

  13. Analytical Model for Hook Anchor Pull-Out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, Jens Peder; Adamsen, Peter

    1995-01-01

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...

  14. Analytical Model for Hook Anchor Pull-out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, J. P.; Adamsen, P.

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...

  15. Establishing the isolated Standard Model

    International Nuclear Information System (INIS)

    Wells, James D.; Zhang, Zhengkang; Zhao, Yue

    2017-02-01

    The goal of this article is to initiate a discussion on what it takes to claim ''there is no new physics at the weak scale,'' namely that the Standard Model (SM) is ''isolated.'' The lack of discovery of beyond the SM (BSM) physics suggests that this may be the case. But to truly establish this statement requires proving all ''connected'' BSM theories are false, which presents a significant challenge. We propose a general approach to quantitatively assess the current status and future prospects of establishing the isolated SM (ISM), which we give a reasonable definition of. We consider broad elements of BSM theories, and show many examples where current experimental results are not sufficient to verify the ISM. In some cases, there is a clear roadmap for the future experimental program, which we outline, while in other cases, further efforts - both theoretical and experimental - are needed in order to robustly claim the establishment of the ISM in the absence of new physics discoveries.

  16. Experiments beyond the standard model

    International Nuclear Information System (INIS)

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references

  17. Vacuum Stability of Standard Model^{++}

    CERN Document Server

    Anchordoqui, Luis A.; Goldberg, Haim; Huang, Xing; Lust, Dieter; Taylor, Tomasz R.; Vlcek, Brian

    2013-01-01

    The latest results of the ATLAS and CMS experiments point to a preferred narrow Higgs mass range (m_h \simeq 124 - 126 GeV) in which the effective potential of the Standard Model (SM) develops a vacuum instability at a scale of 10^{9}-10^{11} GeV, with the precise scale depending on the precise value of the top quark mass and the strong coupling constant. Motivated by this experimental situation, we present here a detailed investigation of the stability of the SM^{++} vacuum, which is characterized by a simple extension of the SM obtained by adding to the scalar sector a complex SU(2) singlet that has the quantum numbers of the right-handed neutrino, H", and to the gauge sector a U(1) that is broken by the vacuum expectation value of H". We derive the complete set of renormalization group equations at one loop. We then pursue a numerical study of the system to determine the triviality and vacuum stability bounds, using a scan of 10^4 random sets of points to fix the initial conditions. We show that, if there...

  18. Establishing the isolated standard model

    Science.gov (United States)

    Wells, James D.; Zhang, Zhengkang; Zhao, Yue

    2017-07-01

    The goal of this article is to initiate a discussion on what it takes to claim "there is no new physics at the weak scale," namely that the Standard Model (SM) is "isolated." The lack of discovery of beyond the SM (BSM) physics suggests that this may be the case. But to truly establish this statement requires proving all "connected" BSM theories are false, which presents a significant challenge. We propose a general approach to quantitatively assess the current status and future prospects of establishing the isolated SM (ISM), which we give a reasonable definition of. We consider broad elements of BSM theories, and show many examples where current experimental results are not sufficient to verify the ISM. In some cases, there is a clear roadmap for the future experimental program, which we outline, while in other cases, further efforts—both theoretical and experimental—are needed in order to robustly claim the establishment of the ISM in the absence of new physics discoveries.

  19. Finite analytic method for modeling variably saturated flows.

    Science.gov (United States)

    Zhang, Zaiyong; Wang, Wenke; Gong, Chengcheng; Yeh, Tian-Chyi Jim; Wang, Zhoufeng; Wang, Yu-Li; Chen, Li

    2018-04-15

    This paper develops a finite analytic method (FAM) for solving the two-dimensional Richards' equation. The FAM incorporates the analytic solution in local elements to formulate the algebraic representation of the partial differential equation of unsaturated flow so as to effectively control both numerical oscillation and dispersion. The FAM model is then verified using four examples, in which the numerical solutions are compared with analytical solutions, solutions from VSAFT2, and observational data from a field experiment. These numerical experiments show that the method is not only accurate but also efficient, when compared with other numerical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Analytical model of impedance in elliptical beam pipes

    CERN Document Server

    Pesah, Arthur Chalom

    2017-01-01

    Beam instabilities are among the main limitations in building higher-intensity accelerators. A good impedance model for every accelerator is necessary in order to build components that minimize the probability of instabilities caused by the beam-environment interaction, and to understand which piece to change when the intensity is increased. Most accelerator components have their impedance simulated with the finite-element method (using software such as CST Studio), but simple components such as circular or flat pipes are modeled analytically, with lower computation time and higher precision than their simulated counterparts. Elliptical beam pipes, while being a simple component present in some accelerators, still lack a good analytical model valid over the whole range of velocities and frequencies. In this report, we present a general framework to study the impedance of elliptical pipes analytically. We developed a model for both longitudinal and transverse impedance, first in the case of...

  1. Analytical study of anisotropic compact star models

    Science.gov (United States)

    Ivanov, B. V.

    2017-11-01

    A simple classification is given of the anisotropic relativistic star models, resembling the one of charged isotropic solutions. On the ground of this database, and taking into account the conditions for physically realistic star models, a method is proposed for generating all such solutions. It is based on the energy density and the radial pressure as seeding functions. Numerous relations between the realistic conditions are found and the need for a graphic proof is reduced just to one pair of inequalities. This general formalism is illustrated with an example of a class of solutions with linear equation of state and simple energy density. It is found that the solutions depend on three free constants and concrete examples are given. Some other popular models are studied with the same method.
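
    As general background (standard relations, not taken from the paper itself), any such static, spherically symmetric anisotropic configuration must satisfy the anisotropic generalization of the Tolman-Oppenheimer-Volkoff equation of hydrostatic equilibrium, written here in geometrized units (G = c = 1) with radial pressure p_r and tangential pressure p_t:

      \frac{dp_r}{dr} = -\frac{(\rho + p_r)\,\bigl(m(r) + 4\pi r^{3} p_r\bigr)}{r\,\bigl(r - 2m(r)\bigr)} + \frac{2\,(p_t - p_r)}{r},
      \qquad m(r) = 4\pi \int_0^r \rho(r')\, r'^{2}\, dr'.

    The isotropic TOV equation is recovered for p_t = p_r; choosing the energy density \rho(r) and the radial pressure p_r(r) as seeding functions, as described above, then fixes the anisotropy term through this balance.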

  2. Analytical study of anisotropic compact star models

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, B.V. [Bulgarian Academy of Science, Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria)

    2017-11-15

    A simple classification is given of the anisotropic relativistic star models, resembling the one of charged isotropic solutions. On the ground of this database, and taking into account the conditions for physically realistic star models, a method is proposed for generating all such solutions. It is based on the energy density and the radial pressure as seeding functions. Numerous relations between the realistic conditions are found and the need for a graphic proof is reduced just to one pair of inequalities. This general formalism is illustrated with an example of a class of solutions with linear equation of state and simple energy density. It is found that the solutions depend on three free constants and concrete examples are given. Some other popular models are studied with the same method. (orig.)

  3. Meta-analytic structural equation modelling

    CERN Document Server

    Jak, Suzanne

    2015-01-01

    This book explains how to employ MASEM, the combination of meta-analysis (MA) and structural equation modelling (SEM). It shows how by using MASEM, a single model can be tested to explain the relationships between a set of variables in several studies. This book gives an introduction to MASEM, with a focus on the state of the art approach: the two stage approach of Cheung and Cheung & Chan. Both, the fixed and the random approach to MASEM are illustrated with two applications to real data. All steps that have to be taken to perform the analyses are discussed extensively. All data and syntax files are available online, so that readers can imitate all analyses. By using SEM for meta-analysis, this book shows how to benefit from all available information from all available studies, even if few or none of the studies report about all relationships that feature in the full model of interest.
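
    As a rough illustration of the first stage of the two-stage approach mentioned above (pooling correlation matrices across studies before fitting the structural model), the sketch below performs a simple fixed-effect, sample-size-weighted pooling in Python. This is a deliberate simplification: full MASEM pools the matrices by multivariate (GLS) estimation with homogeneity tests, as implemented in dedicated software such as Cheung's metaSEM package for R. The study data below are invented.

      import numpy as np

      # Invented example: correlation matrices (same 3 variables) and sample sizes
      # from three hypothetical studies.
      studies = [
          (np.array([[1.00, 0.30, 0.20],
                     [0.30, 1.00, 0.40],
                     [0.20, 0.40, 1.00]]), 120),
          (np.array([[1.00, 0.25, 0.15],
                     [0.25, 1.00, 0.35],
                     [0.15, 0.35, 1.00]]), 200),
          (np.array([[1.00, 0.35, 0.25],
                     [0.35, 1.00, 0.45],
                     [0.25, 0.45, 1.00]]), 80),
      ]

      # Stage 1 (simplified): fixed-effect pooling, weighting each matrix by its sample size.
      total_n = sum(n for _, n in studies)
      pooled = sum(n * r for r, n in studies) / total_n
      print(np.round(pooled, 3))

      # Stage 2 (not shown) fits the structural equation model to the pooled matrix,
      # treating total_n as the effective sample size.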

  4. Analytical modeling of inverted annular film boiling

    International Nuclear Information System (INIS)

    Analytis, G.T.; Yadigaroglu, G.

    1987-01-01

    By employing a two-fluid formulation similar to the one used in the most recent LWR accident analysis codes, a model for the Inverted Annular Film Boiling region is developed. The conservation equations, together with appropriate closure relations are solved numerically. Successful comparisons are made between model predictions and heat transfer coefficient distributions measured in a series of single-tube reflooding experiments. Generally, the model predicts correctly the dependence of the heat transfer coefficient on liquid subcooling and flow rate; for some cases, however, heat transfer is still under-predicted, and an enhancement of the heat exchange from the liquid-vapour interface to the bulk of the liquid is required. The importance of the initial conditions at the quench front is also discussed. (orig.)

  5. Analytical modeling of inverted annular film boiling

    International Nuclear Information System (INIS)

    Analytis, G.T.; Yadigaroglu, G.

    1985-01-01

    By employing a two-fluid formulation similar to the one used in the most recent LWR accident analysis codes, a model for the Inverted Annular Film Boiling region is developed. The conservation equations, together with appropriate constitutive relations are solved numerically and successful comparisons are made between model predictions and heat transfer coefficient distributions measured in a series of single-tube reflooding experiments. The model predicts generally correctly the dependence of the heat transfer coefficient on liquid subcooling and flow rate, through, for some cases, heat transfer is still under-predicted, and an enhancement of the heat exchange from the liquid-vapour interface to the bulk of the liquid is required

  6. Enabling analytical and Modeling Tools for Enhanced Disease Surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Dawn K. Manley

    2003-04-01

    Early detection, identification, and warning are essential to minimize casualties from a biological attack. For covert attacks, sick people are likely to provide the first indication of an attack. An enhanced medical surveillance system that synthesizes distributed health indicator information and rapidly analyzes the information can dramatically increase the number of lives saved. Current surveillance methods to detect both biological attacks and natural outbreaks are hindered by factors such as distributed ownership of information, incompatible data storage and analysis programs, and patient privacy concerns. Moreover, because data are not widely shared, few data mining algorithms have been tested on and applied to diverse health indicator data. This project addressed both integration of multiple data sources and development and integration of analytical tools for rapid detection of disease outbreaks. As a first prototype, we developed an application to query and display distributed patient records. This application incorporated need-to-know access control and incorporated data from standard commercial databases. We developed and tested two different algorithms for outbreak recognition. The first is a pattern recognition technique that searches for space-time data clusters that may signal a disease outbreak. The second is a genetic algorithm to design and train neural networks (GANN) that we applied toward disease forecasting. We tested these algorithms against influenza, respiratory illness, and Dengue Fever data. Through this LDRD in combination with other internal funding, we delivered a distributed simulation capability to synthesize disparate information and models for earlier recognition and improved decision-making in the event of a biological attack. The architecture incorporates user feedback and control so that a user's decision inputs can impact the scenario outcome as well as integrated security and role-based access-control for communicating

  7. Unjamming in models with analytic pairwise potentials

    NARCIS (Netherlands)

    Kooij, S.; Lerner, E.

    Canonical models for studying the unjamming scenario in systems of soft repulsive particles assume pairwise potentials with a sharp cutoff in the interaction range. The sharp cutoff renders the potential nonanalytic but makes it possible to describe many properties of the solid in terms of the

  8. Analytical and numerical modeling for flexible pipes

    Science.gov (United States)

    Wang, Wei; Chen, Geng

    2011-12-01

    The unbonded flexible pipe of eight layers, in which all the layers except the carcass layer are assumed to have isotropic properties, has been analyzed. Specifically, the carcass layer shows the orthotropic characteristics. The effective elastic moduli of the carcass layer have been developed in terms of the influence of deformation to stiffness. With consideration of the effective elastic moduli, the structure can be properly analyzed. Also the relative movements of tendons and relative displacements of wires in helical armour layer have been investigated. A three-dimensional nonlinear finite element model has been presented to predict the response of flexible pipes under axial force and torque. Further, the friction and contact of interlayer have been considered. Comparison between the finite element model and experimental results obtained in literature has been given and discussed, which might provide practical and technical support for the application of unbonded flexible pipes.

  9. Haskell financial data modeling and predictive analytics

    CERN Document Server

    Ryzhov, Pavel

    2013-01-01

    This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner.This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high frequency finance is essential.

  10. Analytical model for screening potential CO2 repositories

    Science.gov (United States)

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  11. Multicenter validation of the analytical accuracy of Salmonella PCR: towards an international standard

    DEFF Research Database (Denmark)

    Malorny, B.; Hoorfar, Jeffrey; Bunge, C.

    2003-01-01

    As part of a major international project for the validation and standardization of PCR for detection of five major food-borne pathogens, four primer sets specific for Salmonella species were evaluated in-house for their analytical accuracy (selectivity and detection limit) in identifying 43 Salmonella strains. Evaluation of selectivity by using 364 strains showed that the inclusivity was 99.6% and the exclusivity was 100% for the invA primer set. To indicate possible PCR inhibitors derived from the sample DNA, an internal amplification control (IAC), which was coamplified with the invA target gene, was constructed. The invA-based assay is proposed as an international standard. This study addresses the increasing demand of quality assurance laboratories for standard diagnostic methods and presents findings that can facilitate the international comparison and exchange of epidemiological data.

  12. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the soundness and accuracy of the proposed model.
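
    For orientation, the coupled problem described above combines a distributed-parameter line equation with a dynamic arc equation. In textbook form (not necessarily the exact formulation used in the paper), the line voltage obeys the telegrapher's wave equation and the arc is often represented by a Mayr-type conductance model:

      \frac{\partial^{2} v}{\partial x^{2}} = LC\,\frac{\partial^{2} v}{\partial t^{2}} + (RC + GL)\,\frac{\partial v}{\partial t} + RG\,v,
      \qquad
      \frac{1}{g}\,\frac{dg}{dt} = \frac{1}{\tau}\left(\frac{v\,i}{P_{0}} - 1\right),

    where R, L, G, C are the per-unit-length line parameters, g is the arc conductance, \tau the arc time constant, and P_0 the arc cooling power. The nonlinearity of the arc equation is what makes a joint analytical solution with the wave equation nontrivial.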

  13. Analytical solutions of jam pattern formation on a ring for a class of optimal velocity traffic models

    DEFF Research Database (Denmark)

    Gaididei, Yuri Borisovich; Berkemer, Rainer; Caputo, Jean Guy

    2009-01-01

    are found analytically. Their velocity and amplitude are determined from a perturbation approach based on collective coordinates with the discrete modified Korteweg-de Vries equation as the zero order equation. This contains the standard OV model as a special case. The analytical results are in excellent...

  14. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  15. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.
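
    For context, the two balances described above can be compared with the standard textbook photon-counting relation for a dusty, uniform-density nebula (written here in its common form, not necessarily the notation of the paper): the ionizing photon rate Q(r) crossing radius r decreases through recombinations and dust absorption,

      \frac{dQ}{dr} = -4\pi r^{2} n^{2} \alpha_{B} - n\,\sigma_{d}\, Q(r),

    where n is the gas density, \alpha_B the case-B recombination coefficient, and \sigma_d the dust absorption cross-section per nucleon. Setting \sigma_d = 0 and Q(R_S) = 0 recovers the dust-free Strömgren relation Q_\star = (4\pi/3) R_S^{3} n^{2} \alpha_B, which is the natural reference point for the dusty, stratified solutions discussed above.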

  16. Control system architecture: The standard and non-standard models

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), B. Kuiper asserted that the system architecture issue was resolved and presented a ''standard model''. The ''standard model'' consists of a local area network (Ethernet or FDDI) providing communication between front end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions including reflected memory and hierarchical architectures driven by requirements for widely dispersed, large channel count or tightly coupled systems. This paper describes the performance characteristics and features of the ''standard model'' to determine if the requirements of ''non-standard'' architectures can be met. Several possible extensions to the ''standard model'' are suggested, including software as well as hardware architectural features.

  17. Analytical model for minority games with evolutionary learning

    Science.gov (United States)

    Campos, Daniel; Méndez, Vicenç; Llebot, Josep E.; Hernández, Germán A.

    2010-06-01

    In a recent work [D. Campos, J.E. Llebot, V. Méndez, Theor. Popul. Biol. 74 (2009) 16] we have introduced a biological version of the Evolutionary Minority Game that tries to reproduce the intraspecific competition for limited resources in an ecosystem. In comparison with the complex decision-making mechanisms used in standard Minority Games, only two extremely simple strategies ( juveniles and adults) are accessible to the agents. Complexity is introduced instead through an evolutionary learning rule that allows younger agents to learn taking better decisions. We find that this game shows many of the typical properties found for Evolutionary Minority Games, like self-segregation behavior or the existence of an oscillation phase for a certain range of the parameter values. However, an analytical treatment becomes much easier in our case, taking advantage of the simple strategies considered. Using a model consisting of a simple dynamical system, the phase diagram of the game (which differentiates three phases: adults crowd, juveniles crowd and oscillations) is reproduced.
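
    The sketch below is not the evolutionary juveniles/adults model of the paper; it only illustrates the underlying minority-game mechanic referred to above, in which agents repeatedly choose between two options and those on the minority side win. All parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)

      n_agents = 501            # odd, so a strict minority always exists
      n_rounds = 200
      # Trivial "strategies": each agent has a fixed probability of choosing option A.
      p_choose_a = rng.uniform(0.4, 0.6, size=n_agents)

      wins = np.zeros(n_agents)
      attendance = []           # how many agents chose A in each round

      for _ in range(n_rounds):
          choices = rng.random(n_agents) < p_choose_a    # True = A, False = B
          n_a = choices.sum()
          attendance.append(n_a)
          minority_is_a = n_a < (n_agents - n_a)
          wins += (choices == minority_is_a)             # minority side wins the round

      print("mean attendance of A:", np.mean(attendance))
      print("std of attendance (coordination measure):", np.std(attendance))
      print("best agent's win rate:", wins.max() / n_rounds)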

  18. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    An analytical local model potential for modeling the interaction in an atom reduces the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr, and the values of the four parameters are shell-independent and obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of electrons are obtained by solving the radial Schrödinger equation with the new form of potential function by Numerov's numerical method. The results show that our new form of potential function is suitable for high, medium and low Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present new potential function. (atomic and molecular physics)
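
    Numerov's method itself is standard and easy to sketch. The Python fragment below integrates the radial equation u''(r) = f(r) u(r) outward for a plain Coulomb potential in atomic units as a check case (hydrogen 1s); the four-parameter model potential of the paper is not reproduced here, so the potential, grid, and energy are illustrative assumptions.

      import numpy as np

      # Radial Schroedinger equation u''(r) = f(r) u(r), atomic units,
      # with a bare Coulomb potential V(r) = -Z/r as a known test case.
      Z, l, E = 1.0, 0, -0.5                      # hydrogen 1s energy
      r = np.linspace(1e-6, 15.0, 3000)
      h = r[1] - r[0]
      f = l * (l + 1) / r**2 + 2.0 * (-Z / r - E)

      u = np.zeros_like(r)
      u[0], u[1] = r[0]**(l + 1), r[1]**(l + 1)   # small-r behaviour u ~ r^(l+1)

      # Numerov recurrence (local truncation error O(h^6))
      for n in range(1, len(r) - 1):
          u[n + 1] = ((2.0 + 5.0 * h**2 * f[n] / 6.0) * u[n]
                      - (1.0 - h**2 * f[n - 1] / 12.0) * u[n - 1]) \
                     / (1.0 - h**2 * f[n + 1] / 12.0)

      u /= np.sqrt(np.sum(u**2) * h)              # crude normalisation
      i = np.searchsorted(r, 1.0)
      print("u(1) numerical:", round(u[i], 4), " exact 2*exp(-1):", round(2 * np.exp(-1.0), 4))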

  19. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    Science.gov (United States)

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and the average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. The movSD and movSO approaches are therefore effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased imprecision adds only marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
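
    A minimal sketch of the movSD idea described above: a rolling standard deviation over consecutive patient results is compared against a control limit derived from a stable baseline period, and an alert is raised when the limit is exceeded. The window length, the limit, and the simulated data are illustrative assumptions, not the parameters of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated patient results: stable period, then an increase in analytical imprecision.
      baseline = rng.normal(100.0, 5.0, size=2000)
      degraded = rng.normal(100.0, 8.0, size=500)
      results = np.concatenate([baseline, degraded])

      window = 100                         # consecutive patient results per window

      def moving_sd(x, w):
          out = np.full(len(x), np.nan)
          for i in range(w, len(x) + 1):
              out[i - 1] = np.std(x[i - w:i], ddof=1)
          return out

      movsd = moving_sd(results, window)

      # Control limit taken from the baseline period, e.g. the 99th percentile of its movSD values.
      limit = np.nanpercentile(movsd[:len(baseline)], 99)
      alarms = np.where(movsd > limit)[0]
      after_shift = alarms[alarms >= len(baseline)]
      print("control limit:", round(limit, 2))
      print("first alarm", after_shift[0] - len(baseline), "results after the imprecision increase")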

  20. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
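
    As a toy illustration of why input correlations matter (a linear special case, not the general method of the paper): for Y = a X_1 + b X_2 with input standard deviations s_1, s_2 and correlation rho, the exact output variance is Var(Y) = a^2 s_1^2 + b^2 s_2^2 + 2 a b rho s_1 s_2, so assuming independence drops the last term. The sketch below checks the analytic value against Monte Carlo sampling.

      import numpy as np

      a, b = 2.0, -1.0
      s1, s2 = 1.0, 0.5          # input standard deviations
      rho = 0.7                  # input correlation

      var_analytic = a**2 * s1**2 + b**2 * s2**2 + 2 * a * b * rho * s1 * s2
      var_if_independent = a**2 * s1**2 + b**2 * s2**2

      rng = np.random.default_rng(0)
      cov = [[s1**2, rho * s1 * s2],
             [rho * s1 * s2, s2**2]]
      x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
      y = a * x[:, 0] + b * x[:, 1]

      print("analytic Var(Y) with correlation:", var_analytic)       # 2.85
      print("Var(Y) assuming independence:   ", var_if_independent)  # 4.25
      print("Monte Carlo estimate:           ", round(y.var(ddof=1), 3))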

  1. An alternative to the standard model

    International Nuclear Information System (INIS)

    Baek, Seungwon; Ko, Pyungwon; Park, Wan-Il

    2014-01-01

    We present an extension of the standard model to a dark sector with an unbroken local dark U(1)_X symmetry. Including various singlet portal interactions provided by the standard model Higgs, right-handed neutrinos and kinetic mixing, we show that the model can address most of the phenomenological issues (inflation, neutrino mass and mixing, baryon number asymmetry, dark matter, direct/indirect dark matter searches, some small-scale puzzles of standard collisionless cold dark matter, vacuum stability of the standard model Higgs potential, dark radiation) and be regarded as an alternative to the standard model. The Higgs signal strength is equal to one as in the standard model for the unbroken U(1)_X case with scalar dark matter, but it could be less than one independent of decay channels if the dark matter is a dark sector fermion or if U(1)_X is spontaneously broken, because of mixing with a new neutral scalar boson in the models.

  2. The Cycle of Warfare - Analysis of an Analytical Model

    DEFF Research Database (Denmark)

    Jensen, Mikkel Storm

    2016-01-01

    The abstract has the title: “The Cycle of Warfare - Analysis of an Analytical Model” The Cycle of Warfare is an analytical model designed to illustrate the coherence between the organization, doctrine and technology of a military entity and the influence of the surrounding society as expressed...... both retrospectively and predictively. As a tool for historians the model can help to identify decisive factors in developments and outcomes. As a tool for intelligence analysts, it can be used predictively to identify likely possible outcomes or unknown elements in analysed entities....

  3. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  4. Elliptic-cylindrical analytical flux-rope model for ICMEs

    Science.gov (United States)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in Interplanetary Coronal Mass Ejections. The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux rope model and under the concept developed by Hidalgo et al. (2002). Elliptic-cylindrical geometry establishes the first level of complexity in a series of models. The model attempts to describe the magnetic flux rope topology with a distorted cross-section as a possible consequence of the interaction with the solar wind. In this model, the flux rope is completely described in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus consistently with the geometry chosen, invariance along the axial component, and with the only assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for the individual cases of different pairs of indexes for the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of in situ ICME events with different in situ signatures. The successful results are limited to some cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects such as axial curvature, expansion and/or interaction could be incorporated in the future to fully understand the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: a toroidal flux-rope analytical model.

  5. Creatinine Assay Attainment of Analytical Performance Goals Following Implementation of IDMS Standardization

    Directory of Open Access Journals (Sweden)

    Elizabeth Sunmin Lee

    2017-02-01

    Full Text Available Background: The international initiative to standardize creatinine (Cr) assays by tracing reference materials to Isotope Dilution Mass Spectrometry (IDMS) assigned values was implemented to reduce interlaboratory variability and improve assay accuracy. Objective: The aims of this study were to examine whether IDMS standardization has improved Cr assay accuracy (bias), interlaboratory variability (precision), total error (TE), and attainment of recommended analytical performance goals. Methods: External Quality Assessment (EQA) data (n = 66 challenge vials) from Ontario, Canada, were analyzed. The bias, precision, TE, and the number of EQA challenge vials meeting performance goals were determined by assay manufacturer before (n = 32) and after (n = 34) IDMS implementation. Results: The challenge vials with the worst bias and precision were spiked with known common interfering substances (glucose and bilirubin). IDMS standardization improved assay bias (10.4%-1.6%, P < .001), but precision remained unchanged (5.0%-4.7%, P = .5), with performance goals not consistently being met. Precision and TE goals based on biologic variation were attained by only 29% to 69% and 32% to 62% of challenge vials. Conclusions: While IDMS standardization has improved Cr assay accuracy and thus reduced TE, significant interlaboratory variability remains. Contemporary Cr assays do not currently meet the standards required to allow for accurate and consistent estimated glomerular filtration rate assessment and chronic kidney disease diagnosis across laboratories. Further improvements in Cr assay performance are needed.
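
    As a small illustration of the metrics discussed above, the sketch below computes bias, precision (CV), and total error (TE) for one hypothetical EQA challenge, using the common TE = |bias| + 1.65 x CV convention. The target value, the peer-group results, and the acceptance limit are invented for illustration and are not taken from the study.

      import numpy as np

      target = 100.0                                    # assigned creatinine value (umol/L), invented
      results = np.array([103.1, 101.8, 104.0, 102.5, 103.3, 101.9])   # invented peer results

      mean = results.mean()
      bias_pct = 100.0 * (mean - target) / target       # accuracy
      cv_pct = 100.0 * results.std(ddof=1) / mean       # (interlaboratory) precision
      te_pct = abs(bias_pct) + 1.65 * cv_pct            # one common total-error convention

      te_goal_pct = 8.9                                 # illustrative performance goal
      print(f"bias {bias_pct:+.1f}%  CV {cv_pct:.1f}%  TE {te_pct:.1f}%")
      print("meets TE goal" if te_pct <= te_goal_pct else "fails TE goal")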

  6. Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ronald [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Neymark, Joel [J. Neymark & Associates; Kennedy, Mike D. [Mike D. Kennedy, Inc.; Gall, J. [AAON, Inc.; Henninger, R. [GARD Analytics, Inc.; Hong, T. [Lawrence Berkeley National Laboratory; Knebel, D. [AAON, Inc.; McDowell, T. [Thermal Energy System Specialists, LLC; Witte, M. [GARD Analytics, Inc.; Yan, D. [Tsinghua University; Zhou, X. [Tsinghua University

    2017-08-07

    This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC Equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.

  7. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly-embedded, the behavior of those structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI and various analytical models and approaches have been proposed. Based on the studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are soil-spring lumped-mass code (SANLUM), finite element code (SANSSI), thin layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes

  8. Quality model for semantic IS standards

    NARCIS (Netherlands)

    Folmer, Erwin Johan Albert

    2011-01-01

    Semantic IS (Information Systems) standards are essential for achieving interoperability between organizations. However, a recent survey suggests that the full benefits of standards are not being achieved, due to quality issues. This paper presents a quality model for semantic IS standards that should ...

  9. An analytical model for the assessment of airline expansion strategies

    Directory of Open Access Journals (Sweden)

    Mauricio Emboaba Moreira

    2014-01-01

    Full Text Available Purpose: The purpose of this article is to develop an analytical model to assess airline expansion strategies by combining generic business strategy models with airline business models. Methodology and approach: A number of airline business models are examined, as are Porter’s (1983) five industry forces that drive competition, complemented by Nalebuff/Brandenburger’s (1996) sixth force, and the basic elements of the general environment in which the expansion process takes place. A system of points and weights is developed to create a score among the 904,736 possible combinations considered. The model’s outputs are generic expansion strategies with quantitative assessments for each specific combination of elements inputted. Originality and value: The analytical model developed is original because it explicitly combines, for the first time, elements of the general environment, the industry environment, airline business models and the generic expansion strategy types. Besides, it creates a system of scores that may be used to drive the decision process toward the choice of a specific strategic expansion path. Research implications: The analytical model may be adapted to industries other than the airline industry by replacing the element “airline business model” with the corresponding elements of those industries’ specific business models.

  10. An Analytical Model for Learning: An Applied Approach.

    Science.gov (United States)

    Kassebaum, Peter Arthur

    A mediated-learning package, geared toward non-traditional students, was developed for use in the College of Marin's cultural anthropology courses. An analytical model for learning was used in the development of the package, utilizing concepts related to learning objectives, programmed instruction, Gestalt psychology, cognitive psychology, and…

  11. Analytical Network Process (ANP) Model in Tourism Development in Jember

    Directory of Open Access Journals (Sweden)

    Sukidin Sukidin

    2015-04-01

    Full Text Available Abstract: Analytical Network Process (ANP) Model in Tourism Development in Jember. The purpose of this study is to review the policy of tourism development in Jember, especially development policies for coffee plantation agro-tourism using the Jember Fashion Carnival (JFC) as event marketing. The research method used is soft system methodology with the Analytical Network Process. The results show that tourism development in Jember is still carried out using a conventional approach, is not well coordinated, and relies mainly on a single tourism event (attraction), namely the JFC, as the locomotive of Jember's tourism appeal. This conventional development model needs to be redesigned to achieve sustainable tourism development in Jember. Keywords: paradigm shift, tourism industry, event tourism, agro-tourism

  12. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  13. Synthesis and Analytical Centrifugation of Magnetic Model Colloids

    NARCIS (Netherlands)

    Luigjes, B.

    2012-01-01

    This thesis is a study of the preparation and thermodynamic properties of magnetic colloids. First, two types of magnetic model colloids are investigated: composite colloids and single-domain nanoparticles. Thermodynamics of magnetic colloids is studied using analytical centrifugation, including a

  14. Learning, Learning Analytics, Activity Visualisation and Open learner Model

    DEFF Research Database (Denmark)

    Bull, Susan; Kickmeier-Rust, Michael; Vatrapu, Ravi

    2013-01-01

    This paper draws on visualisation approaches in learning analytics, considering how classroom visualisations can come together in practice. We suggest an open learner model in situations where many tools and activity visualisations produce more visual information than can be readily interpreted....

  15. Foam for Enhanced Oil Recovery : Modeling and Analytical Solutions

    NARCIS (Netherlands)

    Ashoori, E.

    2012-01-01

    Foam increases sweep in miscible- and immiscible-gas enhanced oil recovery by decreasing the mobility of gas enormously. This thesis is concerned with the simulations and analytical solutions for foam flow for the purpose of modeling foam EOR in a reservoir. For the ultimate goal of upscaling our

  16. Does leaf chemistry differentially affect breakdown in tropical vs temperate streams? Importance of standardized analytical techniques to measure leaf chemistry

    Science.gov (United States)

    Marcelo Ardón; Catherine M. Pringle; Susan L. Eggert

    2009-01-01

    Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by incompatibility among studies and across sites of analytical methods used to measure leaf chemistry. We used standardized analytical techniques to measure chemistry and breakdown rate of leaves from common riparian tree species at 2 sites, 1...

  17. A New Analytical Model for Wind-Turbine Wakes

    Science.gov (United States)

    Bastankhah, Majid; Porté-Agel, Fernando

    2013-04-01

    The intention of this study is to propose and validate a simple and efficient analytical model for the prediction of the wake velocity downwind of a stand-alone wind-turbine. Extensive efforts have been carried out to model the wake region analytically. One of the most popular models, proposed by Jensen, assumes a top-hat distribution of the velocity deficit at any plane perpendicular to the wake. That model has been extensively used in the literature and commercial softwares, but it has two important limitations that should be pointed out: (a) Even though this model is supposed to satisfy momentum conservation, in reality mass conservation is only used to derive it; (b) the assumption of a top-hat distribution of the velocity deficit is expected to underestimate that deficit in the center of the wake, and overestimate it near the edge of the wake. In order to overcome the above-mentioned limitations, here we propose an alternative analytical model that satisfies both mass and momentum conservation, and assumes a Gaussian distribution of the velocity deficit. For this purpose, we apply momentum and mass conservation to two different control volumes which have been previously used in the context of analytical modeling of wakes. The velocity profiles obtained with our proposed model are in good agreement with large-eddy simulation data and experimental measurements. By contrast, the top hat models, as expected, clearly underestimate the velocity deficit at the center of the wake region and overestimate it near the edge of the wake.
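
    For context, the sketch below evaluates the classical Jensen top-hat deficit referred to above alongside a Gaussian-profile deficit of the kind proposed in this line of work (self-similar Gaussian shape in the radial direction, amplitude set by momentum conservation). The thrust coefficient, wake-growth rates, and the exact Gaussian parametrisation are illustrative assumptions, not the paper's calibrated values.

      import numpy as np

      CT = 0.8          # thrust coefficient (assumed)
      D = 80.0          # rotor diameter [m]
      k_jensen = 0.05   # Jensen (top-hat) wake growth rate
      k_star = 0.035    # Gaussian wake growth rate
      x = 7.0 * D       # downwind distance
      r = np.linspace(0.0, 120.0, 7)   # radial positions [m]

      # Jensen model: uniform deficit inside a linearly expanding wake
      wake_radius = D / 2 + k_jensen * x
      jensen = np.where(r <= wake_radius,
                        (1 - np.sqrt(1 - CT)) / (1 + 2 * k_jensen * x / D) ** 2,
                        0.0)

      # Gaussian-profile model: wake width sigma grows linearly, amplitude from momentum balance
      beta = 0.5 * (1 + np.sqrt(1 - CT)) / np.sqrt(1 - CT)
      sigma = (k_star * x / D + 0.2 * np.sqrt(beta)) * D
      gauss = (1 - np.sqrt(1 - CT / (8 * (sigma / D) ** 2))) * np.exp(-r**2 / (2 * sigma**2))

      for ri, dj, dg in zip(r, jensen, gauss):
          print(f"r = {ri:6.1f} m   top-hat deficit: {dj:.3f}   Gaussian deficit: {dg:.3f}")

    As the abstract argues, the top-hat profile spreads the same momentum deficit uniformly, so it sits below the Gaussian value at the wake centre and above it near the wake edge.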

  18. Cultural models of linguistic standardization

    Directory of Open Access Journals (Sweden)

    Dirk Geeraerts

    2016-02-01

    Full Text Available In line with well-known trends in cultural theory (see Burke et al., 2000, Cognitive Linguistics has stressed the idea that we think about social reality in terms of models – ‘cultural models’ or ‘folk theories’: from Holland & Quinn (1987 over Lakoff (1996 and Palmer (1996 to Dirven et al. (2001a, 2001b, Cognitive linguists have demonstrated how the technical apparatus of Cognitive Linguistics can be used to analyze how our conception of social reality is shaped by underlying patterns of thought. But if language is a social and cultural reality, what are the models that shape our conception of language? Specifically, what are the models that shape our thinking about language as a social phenomenon? What are the paradigms that we use to think about language, not primarily in terms of linguistic structure (as in Reddy 1979, but in terms of linguistic variation: models about the way in which language varieties are distributed over a language community and about the way in which such distribution should be evaluated?In this paper, I will argue that two basic models may be identified: a rationalist and a romantic one. I will chart the ways in which they interact, describe how they are transformed in the course of time, and explore how the models can be used in the analysis of actual linguistic variation.

  19. Analytical model for nonlinear piezoelectric energy harvesting devices

    International Nuclear Information System (INIS)

    Neiss, S.; Goldschmidtboeing, F.; Kroener, M.; Woias, P.

    2014-01-01

    In this work we propose analytical expressions for the jump-up and jump-down point of a nonlinear piezoelectric energy harvester. In addition, analytical expressions for the maximum power output at optimal resistive load and the 3 dB-bandwidth are derived. So far, only numerical models have been used to describe the physics of a piezoelectric energy harvester. However, this approach is not suitable to quickly evaluate different geometrical designs or piezoelectric materials in the harvester design process. In addition, the analytical expressions could be used to predict the jump-frequencies of a harvester during operation. In combination with a tuning mechanism, this would allow the design of an efficient control algorithm to ensure that the harvester is always working on the oscillator's high energy attractor. (paper)

  20. Analytic solution of the Starobinsky model for inflation

    Energy Technology Data Exchange (ETDEWEB)

    Paliathanasis, Andronikos [Universidad Austral de Chile, Instituto de Ciencias Fisicas y Matematicas, Valdivia (Chile); Durban University of Technology, Institute of Systems Science, Durban (South Africa)

    2017-07-15

    We prove that the field equations of the Starobinsky model for inflation in a Friedmann-Lemaitre-Robertson-Walker metric constitute an integrable system. The analytical solution in terms of a Painleve series for the Starobinsky model is presented for the case of zero and nonzero spatial curvature. In both cases the leading-order term describes the radiation era provided by the corresponding higher-order theory. (orig.)
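
    For reference, the Starobinsky model referred to above is the R + R^2 modification of Einstein gravity; in standard notation its action and its well-known slow-roll predictions after N e-folds of inflation (textbook results, not derived in the abstract) are

      S = \frac{1}{16\pi G} \int d^{4}x \, \sqrt{-g}\, \Bigl( R + \frac{R^{2}}{6 M^{2}} \Bigr),
      \qquad n_{s} \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^{2}},

    where M sets the inflationary scale, n_s is the scalar spectral index, and r the tensor-to-scalar ratio.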

  1. Standard Model, Higgs Boson and What Next?

    Indian Academy of Sciences (India)

    IAS Admin

    The Standard Model is now known to be the basis of almost ALL of known physics except gravity. It is the dynamical theory of electromagnetism and the strong and weak nuclear forces. The Standard Model has been constructed by generalizing the century-old electrodynamics of ...

  2. Modeling in the Common Core State Standards

    Science.gov (United States)

    Tam, Kai Chung

    2011-01-01

    The inclusion of modeling and applications into the mathematics curriculum has proven to be a challenging task over the last fifty years. The Common Core State Standards (CCSS) has made mathematical modeling both one of its Standards for Mathematical Practice and one of its Conceptual Categories. This article discusses the need for mathematical…

  3. Beyond the Standard Model: Working group report

    Indian Academy of Sciences (India)

    tion within the 'Beyond the Standard Model' working group of WHEPP-6. These problems addressed various extensions of the Standard Model (SM) currently under consideration in the particle physics phenomenology community. Smaller subgroups were formed to focus on each of these problems. The progress till the end ...

  4. Competency model and standards for media education

    Directory of Open Access Journals (Sweden)

    Gerhard TULODZIECKI

    2012-12-01

    Full Text Available In Germany, educational standards for key school subjects have been developed as a consequence of the results of international comparative studies like PISA. Subsequently, supporters of interdisciplinary fields such as media education have also started calling for goals in the form of competency models and standards. In this context a competency standard model for media education will be developed with regard to the discussion about media competence and media education. In doing so the development of a competency model and the formulation of standards is described consequently as a decision making process. In this process decisions have to be made on competence areas and competence aspects to structure the model, on criteria to differentiate certain levels of competence, on the number of competence levels, on the abstraction level of standard formulations and on the tasks to test the standards. It is shown that the discussion on media education as well as on competencies and standards provides different possibilities of structuring, emphasizing and designing a competence standard model. Against this background we describe and give reasons for our decisions and our competency standards model. At the same time our contribution is meant to initiate further developments, testing and discussion.

  5. A revisited standard solar model

    International Nuclear Information System (INIS)

    Casse, M.; Cahen, S.; Doom, C.

    1985-09-01

    Recent models of the Sun, including our own, based on canonical physics and featuring modern reaction rates and radiative opacities are presented. They lead to a presolar helium abundance of approximately 0.28 by mass, at variance with the value of 0.25 proposed by Bahcall et al. (1982, 1985), but in better agreement with the value found in the Orion nebula. Most models predict a neutrino counting rate greater than 6 SNU in the chlorine-argon detector, which is at least 3 times higher than the observed rate. The primordial helium abundance derived from the solar one, on the basis of recent models of helium production from the birth of the Galaxy to the birth of the sun, Ysub(P) approximately 0.26, is significantly higher than the value inferred from observations of extragalactic metal-poor nebulae (Y approximately 0.23). This indicates that the stellar production of helium is probably underestimated by the models considered

  6. Beyond the supersymmetric standard model

    International Nuclear Information System (INIS)

    Hall, L.J.

    1988-02-01

    The possibility of baryon number violation at the weak scale and an alternative primordial nucleosynthesis scheme arising from the decay of gravitinos are discussed. The minimal low energy supergravity model is defined and a few of its features are described. Renormalization group scaling and flavor physics are mentioned.

  7. Beyond the supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Hall, L.J.

    1988-02-01

    The possibility of baryon number violation at the weak scale and an alternative primordial nucleosynthesis scheme arising from the decay of gravitinos are discussed. The minimal low energy supergravity model is defined and a few of its features are described. Renormalization group scaling and flavor physics are mentioned.

  8. Collaborative data analytics for smart buildings: opportunities and models

    DEFF Research Database (Denmark)

    Lazarova-Molnar, Sanja; Mohamed, Nader

    2018-01-01

    Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate...... models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. In this paper, we study the importance...... of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis...

  9. Analytical heat transfer modeling of a new radiation calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Obame Ndong, Elysée [Department of Industrial Engineering and Maintenance, University of Sciences and Technology of Masuku (USTM), BP 941 Franceville (Gabon); Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Gallot-Lavallée, Olivier [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Aitken, Frédéric, E-mail: frederic.aitken@g2elab.grenoble-inp.fr [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France)

    2016-06-10

    Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensitivity of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with an analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is the evaluation of the feasibility and performance of the calorimeter by assessing the measurement of power losses of some electrical devices by radiation, and the influence of the geometry and materials. Finally, a theoretical sensitivity of the new apparatus is estimated at ±1 mW. From these results the calorimeter has been successfully implemented and patented.

  10. Analytical Modeling for the Grating Eddy Current Displacement Sensors

    Directory of Open Access Journals (Sweden)

    Lv Chunfeng

    2015-02-01

    Full Text Available As a new type of displacement sensor, the grating eddy current displacement sensor (GECDS) combines a traditional eddy current sensor and a grating structure in one device. The GECDS performs wide-range displacement measurement without loss of precision. This paper proposes an analytical modeling approach for the GECDS. The solution model is established in the Cartesian coordinate system, and the solving domain is limited to finite extents by using the truncated region eigenfunction expansion method. Based on the second order vector potential, expressions for the electromagnetic field as well as the coil impedance related to the displacement can be expressed in closed form. Theoretical results are then confirmed by experiments, which prove the suitability and effectiveness of the analytical modeling approach.

  11. Analytical heat transfer modeling of a new radiation calorimeter

    International Nuclear Information System (INIS)

    Obame Ndong, Elysée; Gallot-Lavallée, Olivier; Aitken, Frédéric

    2016-01-01

    Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensitivity of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with an analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is the evaluation of the feasibility and performance of the calorimeter by assessing the measurement of power losses of some electrical devices by radiation, and the influence of the geometry and materials. Finally, a theoretical sensitivity of the new apparatus is estimated at ±1 mW. From these results the calorimeter has been successfully implemented and patented.

  12. Roll levelling semi-analytical model for process optimization

    Science.gov (United States)

    Silvestre, E.; Garcia, D.; Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.

    2016-08-01

    Roll levelling is a primary manufacturing process used to remove residual stresses and imperfections of metal strips in order to make them suitable for subsequent forming operations. In recent years the importance of this process has been evidenced by the appearance of Ultra High Strength Steels with strength > 900 MPa. The optimal setting of the machine as well as a robust machine design has become critical for the correct processing of these materials. Finite Element Method (FEM) analysis is the widely used technique for both aspects. However, in this case, the FEM simulation times are above the admissible ones in both machine development and process optimization. In the present work, a semi-analytical model based on a discrete bending theory is presented. This model is able to calculate the critical levelling parameters, i.e. force, plastification rate and residual stresses, in a few seconds. First the semi-analytical model is presented. Next, some experimental industrial cases are analyzed by both the semi-analytical model and the conventional FEM model. Finally, results and computation times of both methods are compared.

  13. Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.

    Science.gov (United States)

    Endert, A; Fiaux, P; North, C

    2012-12-01

    Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.

  14. ADVAN-style analytical solutions for common pharmacokinetic models.

    Science.gov (United States)

    Abuhelwa, Ahmad Y; Foster, David J R; Upton, Richard N

    2015-01-01

    The analytical solutions to compartmental pharmacokinetic models are well known, but have not been presented in a form that easily allows for complex dosing regimen and changes in covariate/parameter values that may occur at discrete times within and/or between dosing intervals. Laplace transforms were used to derive ADVAN-style analytical solutions for 1, 2, and 3 compartment pharmacokinetic linear models of intravenous and first-order absorption drug administration. The equations calculate the change in drug amounts in each compartment of the model over a time interval (t; t = t2 - t1) accounting for any dose or covariate events acting in the time interval. The equations were coded in the R language and used to simulate the time-course of drug amounts in each compartment of the systems. The equations were validated against commercial software [NONMEM (Beal, Sheiner, Boeckmann, & Bauer, 2009)] output to assess their capability to handle both complex dosage regimens and the effect of changes in covariate/parameter values that may occur at discrete times within or between dosing intervals. For all tested pharmacokinetic models, the time-course of drug amounts using the ADVAN-style analytical solutions were identical to NONMEM outputs to at least four significant figures, confirming the validity of the presented equations. To our knowledge, this paper presents the ADVAN-style equations for common pharmacokinetic models in the literature for the first time. The presented ADVAN-style equations overcome obstacles to implementing the classical analytical solutions in software, and have speed advantages over solutions using differential equation solvers. The equations presented in this paper fill a gap in the pharmacokinetic literature, and it is expected that these equations will facilitate the investigation of useful open-source software for modelling pharmacokinetic data. Copyright © 2015 Elsevier Inc. All rights reserved.
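
    For orientation, the simplest member of this family (a one-compartment model with intravenous bolus dosing and first-order elimination) reduces to a stepwise exponential update of the compartment amount; the sketch below illustrates only that stepping logic, with an assumed rate constant and dosing schedule rather than values taken from the paper.

        # Minimal sketch: stepwise analytical update for a one-compartment
        # IV-bolus model with first-order elimination. The rate constant and
        # dosing events are illustrative assumptions, not values from the paper.
        import math

        def advance(amount, k, dt, dose=0.0):
            """Propagate the amount over an interval dt, then add any bolus dose."""
            return amount * math.exp(-k * dt) + dose

        k = 0.1                                              # elimination rate constant (1/h), assumed
        events = [(0.0, 100.0), (12.0, 100.0), (24.0, 0.0)]  # (time in h, bolus dose)

        amount, t_prev = 0.0, 0.0
        for t, dose in events:
            amount = advance(amount, k, t - t_prev, dose)
            t_prev = t
            print(f"t = {t:5.1f} h   amount = {amount:7.2f}")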

  15. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a…

  16. Using Learning Analytics to Understand Scientific Modeling in the Classroom

    Directory of Open Access Journals (Sweden)

    David Quigley

    2017-11-01

    Full Text Available Scientific models represent ideas, processes, and phenomena by describing important components, characteristics, and interactions. Models are constructed across various scientific disciplines, such as the food web in biology, the water cycle in Earth science, or the structure of the solar system in astronomy. Models are central for scientists to understand phenomena, construct explanations, and communicate theories. Constructing and using models to explain scientific phenomena is also an essential practice in contemporary science classrooms. Our research explores new techniques for understanding scientific modeling and engagement with modeling practices. We work with students in secondary biology classrooms as they use a web-based software tool—EcoSurvey—to characterize organisms and their interrelationships found in their local ecosystem. We use learning analytics and machine learning techniques to answer the following questions: (1) How can we automatically measure the extent to which students’ scientific models support complete explanations of phenomena? (2) How does the design of student modeling tools influence the complexity and completeness of students’ models? (3) How do clickstreams reflect and differentiate student engagement with modeling practices? We analyzed EcoSurvey usage data collected from two different deployments with over 1,000 secondary students across a large urban school district. We observe large variations in the completeness and complexity of student models, and large variations in their iterative refinement processes. These differences reveal that certain key model features are highly predictive of other aspects of the model. We also observe large differences in student modeling practices across different classrooms and teachers. We can predict a student’s teacher based on the observed modeling practices with a high degree of accuracy without significant tuning of the predictive model. These results highlight…
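
    The last finding (predicting a student's teacher from observed modeling practices) is, at heart, a supervised classification task over clickstream-derived features. The sketch below shows how such a check could be set up in general; the feature set, classifier, and synthetic data are placeholders and do not reproduce the EcoSurvey pipeline.

        # Generic sketch of a "predict the classroom/teacher from modeling
        # practices" check. Features and labels are synthetic placeholders; the
        # study's actual feature engineering and model are not reproduced here.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_students = 200
        # Hypothetical per-student counts: organisms added, links drawn, edits, sessions.
        X = rng.poisson(lam=[8, 15, 30, 4], size=(n_students, 4)).astype(float)
        y = rng.integers(0, 5, size=n_students)          # synthetic teacher labels

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())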

  17. Determination of perfluorinated compounds in human plasma and serum standard reference materials using independent analytical methods

    Energy Technology Data Exchange (ETDEWEB)

    Reiner, Jessica L. [National Institute of Standards and Technology, Analytical Chemistry Division, Gaithersburg, MD (United States); National Institute of Standards and Technology, Analytical Chemistry Division, Hollings Marine Laboratory, Charleston, SC (United States); Phinney, Karen W. [National Institute of Standards and Technology, Analytical Chemistry Division, Gaithersburg, MD (United States); Keller, Jennifer M. [National Institute of Standards and Technology, Analytical Chemistry Division, Hollings Marine Laboratory, Charleston, SC (United States)

    2011-11-15

    Perfluorinated compounds (PFCs) were measured in three National Institute of Standards and Technology (NIST) Standard Reference Materials (SRMs) (SRMs 1950 Metabolites in Human Plasma, SRM 1957 Organic Contaminants in Non-fortified Human Serum, and SRM 1958 Organic Contaminants in Fortified Human Serum) using two analytical approaches. The methods offer some independence, with two extraction types and two liquid chromatographic separation methods. The first extraction method investigated the acidification of the sample followed by solid-phase extraction (SPE) using a weak anion exchange cartridge. The second method used an acetonitrile extraction followed by SPE using a graphitized non-porous carbon cartridge. The extracts were separated using a reversed-phase C8 stationary phase and a pentafluorophenyl (PFP) stationary phase. Measured values from both methods for the two human serum SRMs, 1957 and 1958, agreed with reference values on the Certificates of Analysis. Perfluorooctane sulfonate (PFOS) values were obtained for the first time in human plasma SRM 1950 with good reproducibility among the methods (below 5% relative standard deviation). The nominal mass interference from taurodeoxycholic acid, which has caused overestimation of the amount of PFOS in biological samples, was separated from PFOS using the PFP stationary phase. Other PFCs were also detected in SRM 1950 and are reported. SRM 1950 can be used as a control material for human biomonitoring studies and as an aid to develop new measurement methods. (orig.)

  18. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling; O'Brien, James

    2016-11-01

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of the wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories’ original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 will be briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data and…
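
    As background for the adiabatic-expansion step mentioned above, an isentropic nozzle expansion from stagnation conditions can be sketched with perfect-gas relations; real RCIC steam is not a perfect gas, so the specific-heat ratio, gas constant and inlet state below are illustrative assumptions only, not the conditions used in the RELAP-7 models.

        # Sketch of an isentropic (adiabatic, reversible) nozzle expansion from
        # stagnation conditions to a given exit pressure using perfect-gas
        # relations. Steam property tables would be needed for real accuracy;
        # gamma, R and the states below are assumed for illustration.
        import math

        gamma, R = 1.3, 461.5        # assumed specific-heat ratio and gas constant (J/kg.K)
        T0, p0 = 550.0, 7.0e6        # stagnation temperature (K) and pressure (Pa), assumed
        p_exit = 1.0e6               # nozzle exit pressure (Pa), assumed

        T_exit = T0 * (p_exit / p0) ** ((gamma - 1.0) / gamma)
        v_exit = math.sqrt(2.0 * gamma / (gamma - 1.0) * R * (T0 - T_exit))
        mach = v_exit / math.sqrt(gamma * R * T_exit)
        print(f"exit velocity ~ {v_exit:.0f} m/s, exit Mach ~ {mach:.2f}")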

  19. An Analytical Tire Model with Flexible Carcass for Combined Slips

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2014-01-01

    Full Text Available The tire mechanical characteristics under combined cornering and braking/driving situations have significant effects on vehicle directional controls. The objective of this paper is to present an analytical tire model with flexible carcass for combined slip situations, which can describe tire behavior well and can also be used for studying vehicle dynamics. The tire forces and moments come mainly from the shear stress and sliding friction at the tread-road interface. In order to describe complicated tire characteristics and tire-road friction, some key factors are considered in this model: arbitrary pressure distribution; translational, bending, and twisting compliance of the carcass; dynamic friction coefficient; anisotropic stiffness properties. The analytical tire model can describe tire forces and moments accurately under combined slip conditions. Some important properties induced by flexible carcass can also be reflected. The structural parameters of a tire can be identified from tire measurements and the computational results using the analytical model show good agreement with test data.
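
    For readers unfamiliar with combined-slip tire models, the classic rigid-carcass brush model gives a feel for how longitudinal and lateral slip share a single friction budget; the sketch below implements only that textbook baseline, with assumed stiffness, friction and load values, and none of the flexible-carcass, pressure-distribution or dynamic-friction features of the model in the paper.

        # Textbook isotropic brush model for combined slip with a RIGID carcass,
        # shown only as background. The paper's model adds carcass compliance,
        # arbitrary pressure distribution and dynamic friction, none of which
        # appear here. All parameter values are assumed.
        import math

        def brush_force(slip_x, slip_y, c_slip=80000.0, mu=1.0, fz=4000.0):
            """Return (Fx, Fy) in N for given theoretical slip components."""
            sigma = math.hypot(slip_x, slip_y)            # combined slip magnitude
            if sigma == 0.0:
                return 0.0, 0.0
            theta = c_slip / (3.0 * mu * fz)
            if theta * sigma < 1.0:                       # adhesion plus partial sliding
                f_mag = mu * fz * (1.0 - (1.0 - theta * sigma) ** 3)
            else:                                         # full sliding
                f_mag = mu * fz
            return -f_mag * slip_x / sigma, -f_mag * slip_y / sigma

        print(brush_force(0.05, 0.0))     # pure braking slip
        print(brush_force(0.05, 0.05))    # combined braking and cornering slip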

  20. Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades

    Science.gov (United States)

    Kenyon, Scott J.; Bromley, Benjamin C.

    2017-04-01

    We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, r_max, the size of the largest object, changes with time as r_max ∝ t^(−γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(−(γ+1)) and L_d ∝ t^(−(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
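
    The quoted power laws translate directly into relative decline factors; the snippet below only evaluates M_d(t)/M_d(t0) and L_d(t)/L_d(t0) for a γ chosen inside the stated range, adding nothing beyond the scalings in the abstract.

        # Evaluate the relative decline implied by the quoted scalings:
        #   M_d ∝ t^-(gamma+1),  L_d ∝ t^-(gamma/2+1),  r_max ∝ t^-gamma.
        # gamma and the time ratios are illustrative choices within the quoted range.
        gamma = 0.15
        for t_ratio in (10.0, 100.0, 1000.0):          # t / t0
            m_fall = t_ratio ** (gamma + 1.0)
            l_fall = t_ratio ** (gamma / 2.0 + 1.0)
            print(f"t/t0 = {t_ratio:6.0f}:  M_d falls by {m_fall:9.1f}x, "
                  f"L_d falls by {l_fall:9.1f}x")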

  1. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non...... rotationally symmetric, and the rotor inflow fields are consistently assumed uniform. Expansion of stationary wake fields is believed to be significantly affected by meandering of wake deficits as e.g. described by the Dynamic Wake Meandering model. In the present context, this effect is approximately...... conditions are imposed), the present formulation of wake expansion is believed to underestimate wake expansion, because the analytical wake formulation dictates the wake expansion to behave as x1/3 with downstream distance, whereas wake expansion as primary controlled by wake meandering develops...

  2. Control system architecture: The standard and non-standard models

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), B. Kuiper asserted that the system architecture issue was resolved and presented a "standard model". The "standard model" consists of a local area network (Ethernet or FDDI) providing communication between front end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions including reflected memory and hierarchical architectures driven by requirements for widely dispersed, large channel count or tightly coupled systems. This paper describes the performance characteristics and features of the "standard model" to determine if the requirements of "non-standard" architectures can be met. Several possible extensions to the "standard model" are suggested, including software as well as the hardware architectural features.

  3. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; another prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  4. Electroweak baryogenesis and the standard model

    International Nuclear Information System (INIS)

    Huet, P.

    1994-01-01

    Electroweak baryogenesis is addressed within the context of the standard model of particle physics. Although the minimal standard model has the means of fulfilling the three Sakharov conditions, it falls short of explaining the making of the baryon asymmetry of the universe. In particular, it is demonstrated that the phase of the CKM mixing matrix is an insufficient source of CP violation. The shortcomings of the standard model could be bypassed by enlarging the symmetry breaking sector and adding a new source of CP violation.

  5. Two dimensional analytical model for a reconfigurable field effect transistor

    Science.gov (United States)

    Ranjith, R.; Jayachandran, Remya; Suja, K. J.; Komaragiri, Rama S.

    2018-02-01

    This paper presents two-dimensional potential and current models for a reconfigurable field effect transistor (RFET). Two potential models which describe subthreshold and above-threshold channel potentials are developed by solving two-dimensional (2D) Poisson's equation. In the first potential model, 2D Poisson's equation is solved by considering constant/zero charge density in the channel region of the device to get the subthreshold potential characteristics. In the second model, accumulation charge density is considered to get above-threshold potential characteristics of the device. The proposed models are applicable for the device having lightly doped or intrinsic channel. While obtaining the mathematical model, whole body area is divided into two regions: gated region and un-gated region. The analytical models are compared with technology computer-aided design (TCAD) simulation results and are in complete agreement for different lengths of the gated regions as well as at various supply voltage levels.

  6. Analytic solution of a five-direction radiation transport model

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1988-01-01

    In order to test certain spatial and angular dependent Monte Carlo biasing techniques, a one-dimensional, one energy, two-media, five-direction radiation transport model has been devised for which an analytic solution exists. Although this solution is too long to be conveniently expressed in an explicit form, it can be easily evaluated on the smallest of computers. This solution is discussed in this paper. 1 ref

  7. New analytically solvable models of relativistic point interactions

    International Nuclear Information System (INIS)

    Gesztesy, F.; Seba, P.

    1987-01-01

    Two new analytically solvable models of relativistic point interactions in one dimension (being natural extensions of the nonrelativistic δ- resp. δ'-interaction) are considered. Their spectral properties in the case of finitely many point interactions as well as in the periodic case are fully analyzed. Moreover, the spectrum is explicitly determined in the case of independent, identically distributed random coupling constants, and the analog of the Saxon and Hutner conjecture concerning gaps in the energy spectrum of such systems is derived.

  8. An analytical thermohydraulic model for discretely fractured geothermal reservoirs

    Science.gov (United States)

    Fox, Don B.; Koch, Donald L.; Tester, Jefferson W.

    2016-09-01

    In discretely fractured reservoirs such as those found in Enhanced/Engineered Geothermal Systems (EGS), knowledge of the fracture network is important in understanding the thermal hydraulics, i.e., how the fluid flows and the resulting temporal evolution of the subsurface temperature. The purpose of this study was to develop an analytical model of the fluid flow and heat transport in a discretely fractured network that can be used for a wide range of modeling applications and serve as an alternative analysis tool to more computationally intensive numerical codes. Given the connectivity and structure of a fracture network, the flow in the system was solved using a linear system of algebraic equations for the pressure at the nodes of the network. With the flow determined, the temperature in the fracture was solved by coupling convective heat transport in the fracture with one-dimensional heat conduction perpendicular to the fracture, employing the Green's function derived solution for a single discrete fracture. The predicted temperatures along the fracture surfaces from the analytical solution were compared to numerical simulations using the TOUGH2 reservoir code. Through two case studies, we showed the capabilities of the analytical model and explored the effect of uncertainty in the fracture apertures and network structure on thermal performance. While both sources of uncertainty independently produce large variations in production temperature, uncertainty in the network structure, whenever present, had a predominant influence on thermal performance.
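
    The "linear system of algebraic equations for the pressure at the nodes" is structurally the same solve one would write for a resistor network; the sketch below assembles and solves such a system for a small assumed fracture network with fixed injection and production pressures. The conductances, geometry and boundary pressures are placeholders, and the coupled heat-transport step (the Green's-function part of the model) is not reproduced.

        # Minimal sketch of the network-flow step: interior node pressures from
        # mass balance, with fixed-pressure injection/production nodes.
        # The network, conductances and boundary pressures are assumed.
        import numpy as np

        edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.5), (2, 3, 0.5)]   # (i, j, conductance)
        fixed = {0: 10.0e6, 3: 1.0e6}      # boundary pressures (Pa), assumed
        n = 4

        A, b = np.zeros((n, n)), np.zeros(n)
        for i, j, c in edges:              # assemble nodal mass balances
            A[i, i] += c; A[j, j] += c
            A[i, j] -= c; A[j, i] -= c
        for node, p_fix in fixed.items():  # impose fixed-pressure (Dirichlet) nodes
            A[node, :] = 0.0
            A[node, node] = 1.0
            b[node] = p_fix

        p = np.linalg.solve(A, b)
        print("nodal pressures (Pa):", p)
        print("edge flows:", {(i, j): c * (p[i] - p[j]) for i, j, c in edges})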

  9. Comparison of analytical eddy current models using principal components analysis

    Science.gov (United States)

    Contant, S.; Luloff, M.; Morelli, J.; Krause, T. W.

    2017-02-01

    Monitoring the gap between the pressure tube (PT) and the calandria tube (CT) in CANDU® fuel channels is essential, as contact between the two tubes can lead to delayed hydride cracking of the pressure tube. Multifrequency transmit-receive eddy current non-destructive evaluation is used to determine this gap, as this method has different depths of penetration and variable sensitivity to noise, unlike single frequency eddy current non-destructive evaluation. An analytical model based on the Dodd and Deeds solutions, and a second model that accounts for normal and lossy self-inductances, and a non-coaxial pickup coil, are examined for representing the response of an eddy current transmit-receive probe when considering factors that affect the gap response, such as pressure tube wall thickness and pressure tube resistivity. The multifrequency model data was analyzed using principal components analysis (PCA), a statistical method used to reduce the data set into a data set of fewer variables. The results of the PCA of the analytical models were then compared to PCA performed on a previously obtained experimental data set. The models gave similar results under variable PT wall thickness conditions, but the non-coaxial coil model, which accounts for self-inductive losses, performed significantly better than the Dodd and Deeds model under variable resistivity conditions.
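
    As a generic illustration of the comparison step, each model's simulated multifrequency responses can be projected onto a common set of principal components and the leading scores compared; the arrays below are synthetic stand-ins, not outputs of the Dodd and Deeds or lossy-coil models.

        # Generic sketch of comparing two models' multifrequency responses via PCA.
        # Rows = simulated parameter cases, columns = excitation frequencies.
        # The response matrices are synthetic placeholders only.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        cases, freqs = 50, 8
        base = np.linspace(1.0, 2.0, freqs)
        model_a = base + 0.1 * rng.normal(size=(cases, freqs))   # stand-in for model 1
        model_b = base + 0.1 * rng.normal(size=(cases, freqs))   # stand-in for model 2

        pca = PCA(n_components=2)
        scores_a = pca.fit_transform(model_a)     # fit the component basis on model A
        scores_b = pca.transform(model_b)         # project model B into the same basis

        print("explained variance ratio:", pca.explained_variance_ratio_)
        print("mean |score difference| :", np.abs(scores_a - scores_b).mean(axis=0))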

  10. The making of the standard model

    NARCIS (Netherlands)

    Hooft, G. 't

    2007-01-01

    The standard model of particle physics is more than a model. It is a detailed theory that encompasses nearly all that is known about the subatomic particles and forces in a concise set of principles and equations. The extensive research that culminated in this model includes numerous small and

  11. Discrete symmetry breaking beyond the standard model

    NARCIS (Netherlands)

    Dekens, Wouter Gerard

    2015-01-01

    The current knowledge of elementary particles and their interactions is summarized in the Standard Model of particle physics. Practically all the predictions of this model, that have been tested, were confirmed experimentally. Nonetheless, there are phenomena which the model cannot explain. For

  12. Beyond the Standard Model for Montaneros

    CERN Document Server

    Bustamante, M; Ellis, John

    2010-01-01

    These notes cover (i) electroweak symmetry breaking in the Standard Model (SM) and the Higgs boson, (ii) alternatives to the SM Higgs boson, including an introduction to composite Higgs models and Higgsless models that invoke extra dimensions, (iii) the theory and phenomenology of supersymmetry, and (iv) various further beyond topics, including Grand Unification, proton decay and neutrino masses, supergravity, superstrings and extra dimensions.

  13. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between the mathematical models of crown growth and light propagation through the canopy. The computer approach gives the possibility to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  14. Is the Standard Model about to crater?

    CERN Multimedia

    Lane, Kenneth

    2015-01-01

    The Standard Model is coming under more and more pressure from experiments. New results from the analysis of LHC's Run 1 data show effects that, if confirmed, would be the signature of new interactions at the TeV scale.

  15. The standard model in a nutshell

    CERN Document Server

    Goldberg, Dave

    2017-01-01

    For a theory as genuinely elegant as the Standard Model--the current framework describing elementary particles and their forces--it can sometimes appear to students to be little more than a complicated collection of particles and ranked list of interactions. The Standard Model in a Nutshell provides a comprehensive and uncommonly accessible introduction to one of the most important subjects in modern physics, revealing why, despite initial appearances, the entire framework really is as elegant as physicists say. Dave Goldberg uses a "just-in-time" approach to instruction that enables students to gradually develop a deep understanding of the Standard Model even if this is their first exposure to it. He covers everything from relativity, group theory, and relativistic quantum mechanics to the Higgs boson, unification schemes, and physics beyond the Standard Model. The book also looks at new avenues of research that could answer still-unresolved questions and features numerous worked examples, helpful illustrat...

  16. Beyond the Standard Model (1/5)

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    After a critical discussion of the questions left unanswered by the Standard Model, I will review the main attempts to construct new theories. In particular, I will discuss grand unification, supersymmetry, technicolour, and theories with extra dimensions.

  17. Beyond the Standard Model (5/5)

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    After a critical discussion of the questions left unanswered by the Standard Model, I will review the main attempts to construct new theories. In particular, I will discuss grand unification, supersymmetry, technicolour, and theories with extra dimensions.

  18. Beyond the Standard Model (3/5)

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    After a critical discussion of the questions left unanswered by the Standard Model, I will review the main attempts to construct new theories. In particular, I will discuss grand unification, supersymmetry, technicolour, and theories with extra dimensions.

  19. Beyond the Standard Model (2/5)

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    After a critical discussion of the questions left unanswered by the Standard Model, I will review the main attempts to construct new theories. In particular, I will discuss grand unification, supersymmetry, technicolour, and theories with extra dimensions.

  20. Beyond the Standard Model (4/5)

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    After a critical discussion of the questions left unanswered by the Standard Model, I will review the main attempts to construct new theories. In particular, I will discuss grand unification, supersymmetry, technicolour, and theories with extra dimensions.

  1. From the standard model to dark matter

    International Nuclear Information System (INIS)

    Wilczek, F.

    1995-01-01

    The standard model of particle physics is marvelously successful. However, it is obviously not a complete or final theory. I shall argue here that the structure of the standard model gives some quite concrete, compelling hints regarding what lies beyond. Taking these hints seriously, one is led to predict the existence of new types of very weakly interacting matter, stable on cosmological time scales and produced with cosmologically interesting densities--that is, "dark matter". © 1995 American Institute of Physics

  2. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Full Text Available Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  3. A Hybrid Computational and Analytical Model of Irrigation Drip emitters

    Science.gov (United States)

    Narain, Jaya; Winter, Amos, V

    2017-11-01

    This paper details a hybrid computational and analytical model to predict the performance of inline pressure-compensating drip irrigation emitters, devices used to accurately meter water to crops. Flow rate is controlled in the emitter by directing the water through a tortuous path, and then through a variable resistor composed of a flexible membrane that deflects under changes in pressure, restricting the flow path. An experimentally validated computational fluid dynamics model was used to derive a resistance factor that characterizes flow behavior through a tortuous path. Expressions describing the bending mechanics of the membrane were combined with analytical fluid flow models to iteratively predict flow behavior through the variable resistor. The hybrid model reduces the computational time as compared to purely computational methods, lowering the time required to iterate and select optimal designs. The model was validated using three commercially available drip emitters, rated at 1.1, 2, and 3.8 L/hr. For each, the model accurately predicted flow rate versus pressure behavior within a 95% confidence interval of experimental data and accurately replicated the performance stated by the manufacturer. Jain Irrigation, NSF GRFP.
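
    The iterative coupling described above (membrane deflection depends on the pressure it sees, which depends on the flow through the tortuous path) can be pictured as a small fixed-point loop; in the sketch below every resistance law and constant is a hypothetical placeholder, standing in for the CFD-derived path resistance factor and the membrane bending expressions of the actual model.

        # Toy fixed-point iteration for a pressure-compensating emitter: total
        # drop = tortuous-path drop + membrane-restriction drop, where the
        # membrane gap closes as the pressure on it rises. All laws and
        # constants are hypothetical placeholders, not the authors' model.
        def flow_rate(p_inlet, r_path=10.0, k_mem=20.0, iters=100):
            """Flow (toy units) at a given inlet pressure (kPa) by damped iteration."""
            q = 1.0
            for _ in range(iters):
                p_membrane = p_inlet - q * r_path               # pressure reaching the membrane
                gap = max(1.0 - 0.0045 * p_membrane, 0.4)       # assumed deflection law (clamped)
                r_membrane = k_mem / gap ** 3                   # assumed gap-cubed restriction
                q = 0.5 * q + 0.5 * p_inlet / (r_path + r_membrane)   # damped update
            return q

        for p in (50.0, 100.0, 150.0):                          # inlet pressures (kPa), assumed
            print(f"inlet pressure {p:5.1f} kPa  ->  flow {flow_rate(p):.2f} (toy units)")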

  4. Model and Analytic Processes for Export License Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.; Wood, Thomas W.; Daly, Don S.; Brothers, Alan J.; Sanfilippo, Antonio P.; Cook, Diane; Holder, Larry

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent; a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision-framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessment. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An

  5. Collisionless magnetic reconnection: analytical model and PIC simulation comparison

    Directory of Open Access Journals (Sweden)

    V. Semenov

    2009-03-01

    Full Text Available Magnetic reconnection is believed to be responsible for various explosive processes in space plasma, including magnetospheric substorms. The Hall effect is proved to play a key role in the reconnection process. An analytical model of steady-state magnetic reconnection in a collisionless incompressible plasma is developed using the electron Hall MHD approximation. It is shown that the initial complicated system of equations may split into a system of independent equations, and the solution of the problem is based on the Grad-Shafranov equation for the magnetic potential. The results of the analytical study are further compared with a two-dimensional particle-in-cell simulation of reconnection. It is shown that both methods demonstrate a close agreement in the electron current and the magnetic and electric field structures obtained. The spatial scales of the acceleration region in the simulation and the analytical study are of the same order. Features such as particle trajectories and the in-plane electric field structure appear essentially similar in both models.

  6. HTS axial flux induction motor with analytic and FEA modeling

    International Nuclear Information System (INIS)

    Li, S.; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-01-01

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic and finite element methods are adopted to model the motor and to calculate the force. To make sure the HTS can carry a sufficiently large current and work well, the magnetic field distribution in the HTS coil is calculated. An effective method to improve the critical current of the HTS coil is presented. Then, AC losses in the HTS windings in the motor are estimated and tested

  7. Analytical local electron-electron interaction model potentials for atoms

    International Nuclear Information System (INIS)

    Neugebauer, Johannes; Reiher, Markus; Hinze, Juergen

    2002-01-01

    Analytical local potentials for modeling the electron-electron interaction in an atom reduce significantly the computational effort in electronic structure calculations. The development of such potentials has a long history, but some promising ideas have not yet been taken into account for further improvements. We determine a local electron-electron interaction potential akin to those suggested by Green et al. [Phys. Rev. 184, 1 (1969)], which are widely used in atom-ion scattering calculations, electron-capture processes, and electronic structure calculations. Generalized Yukawa-type model potentials are introduced. This leads, however, to shell-dependent local potentials, because the origin behavior of such potentials is different for different shells as has been explicated analytically [J. Neugebauer, M. Reiher, and J. Hinze, Phys. Rev. A 65, 032518 (2002)]. It is found that the parameters that characterize these local potentials can be interpolated and extrapolated reliably for different nuclear charges and different numbers of electrons. The analytical behavior of the corresponding localized Hartree-Fock potentials at the origin and at long distances is utilized in order to reduce the number of fit parameters. It turns out that the shell-dependent form of Green's potential, which we also derive, yields results of comparable accuracy using only one shell-dependent parameter

  8. Examination of fast reactor fuels, FBR analytical quality assurance standards and methods, and analytical methods development: irradiation tests. Progress report, April 1--June 30, 1976, and FY 1976

    International Nuclear Information System (INIS)

    Baker, R.D.

    1976-08-01

    Characterization of unirradiated and irradiated LMFBR fuels by analytical chemistry methods will continue, and additional methods will be modified and mechanized for hot cell application. Macro- and microexaminations will be made on fuel and cladding using the shielded electron microprobe, emission spectrograph, radiochemistry, gamma scanner, mass spectrometers, and other analytical facilities. New capabilities will be developed in gamma scanning, analyses to assess spatial distributions of fuel and fission products, mass spectrometric measurements of burnup and fission gas constituents and other chemical analyses. Microstructural analyses of unirradiated and irradiated materials will continue using optical and electron microscopy and autoradiographic and x-ray techniques. Analytical quality assurance standards tasks are designed to assure the quality of the chemical characterizations necessary to evaluate reactor components relative to specifications. Tasks include: (1) the preparation and distribution of calibration materials and quality control samples for use in quality assurance surveillance programs, (2) the development of and the guidance in the use of quality assurance programs for sampling and analysis, (3) the development of improved methods of analysis, and (4) the preparation of continuously updated analytical method manuals. Reliable analytical methods development for the measurement of burnup, oxygen-to-metal (O/M) ratio, and various gases in irradiated fuels is described

  9. Analytical models of optical response in one-dimensional semiconductors

    International Nuclear Information System (INIS)

    Pedersen, Thomas Garm

    2015-01-01

    The quantum mechanical description of the optical properties of crystalline materials typically requires extensive numerical computation. Including excitonic and non-perturbative field effects adds to the complexity. In one dimension, however, the analysis simplifies and optical spectra can be computed exactly. In this paper, we apply the Wannier exciton formalism to derive analytical expressions for the optical response in four cases of increasing complexity. Thus, we start from free carriers and, in turn, switch on electrostatic fields and electron–hole attraction and, finally, analyze the combined influence of these effects. In addition, the optical response of impurity-localized excitons is discussed. - Highlights: • Optical response of one-dimensional semiconductors including excitons. • Analytical model of excitonic Franz–Keldysh effect. • Computation of optical response of impurity-localized excitons

  10. Analytical modelling of hydrogen transport in reactor containments

    International Nuclear Information System (INIS)

    Manno, V.P.

    1983-09-01

    A versatile computational model of hydrogen transport in nuclear plant containment buildings is developed. The background and significance of hydrogen-related nuclear safety issues are discussed. A computer program is constructed that embodies the analytical models. The thermofluid dynamic formulation spans a wide applicability range from rapid two-phase blowdown transients to slow incompressible hydrogen injection. Detailed ancillary models of molecular and turbulent diffusion, mixture transport properties, multi-phase multicomponent thermodynamics and heat sink modelling are addressed. The numerical solution of the continuum equations emphasizes both accuracy and efficiency in the employment of relatively coarse discretization and long time steps. Reducing undesirable numerical diffusion is addressed. Problem geometry options include lumped parameter zones, one dimensional meshs, two dimensional Cartesian or axisymmetric coordinate systems and three dimensional Cartesian or cylindrical regions. An efficient lumped nodal model is included for simulation of events in which spatial resolution is not significant. Several validation calculations are reported

  11. Analytical expressions for transition edge sensor excess noise models

    International Nuclear Information System (INIS)

    Brandt, Daniel; Fraser, George W.

    2010-01-01

    Transition edge sensors (TESs) are high-sensitivity thermometers used in cryogenic microcalorimeters which exploit the steep gradient in resistivity with temperature during the superconducting phase transition. Practical TES devices tend to exhibit a white noise of uncertain origin, arising inside the device. We discuss two candidate models for this excess noise, phase slip shot noise (PSSN) and percolation noise. We extend the existing PSSN model to include a magnetic field dependence and derive a basic analytical model for percolation noise. We compare the predicted functional forms of the noise current vs. resistivity curves of both models with experimental data and provide a set of equations for both models to facilitate future experimental efforts to clearly identify the source of excess noise.

  12. Development of an analytical model to assess fuel property effects on combustor performance

    Science.gov (United States)

    Sutton, R. D.; Troth, D. L.; Miles, G. A.; Riddlebaugh, S. M.

    1987-01-01

    A generalized first-order computer model has been developed in order to analytically evaluate the potential effects of alternative fuels on gas turbine combustors. The model assesses the size, configuration, combustion reliability, and durability of the combustors required to meet performance and emission standards while operating on a broad range of fuels. Predictions predicated on combustor flow-field determinations by the model indicate that fuel chemistry, as defined by hydrogen content, exerts a significant influence on flame retardation, liner wall temperature, and smoke emission.

  13. Analytical model of reactive transport processes with spatially variable coefficients.

    Science.gov (United States)

    Simpson, Matthew J; Morrow, Liam C

    2015-05-01

    Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems.
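
    For reference, the constant-coefficient, advection-dominant limit of such models has a familiar closed form (dispersion neglected): at steady state, v dC/dx = -kC gives C(x) = C0 exp(-k x / v). The snippet below evaluates only this baseline, which the paper's perturbation construction generalizes to spatially variable v(x) and k(x); the parameter values are illustrative.

        # Steady-state advection-decay baseline with constant coefficients:
        #   v dC/dx = -k C,  C(0) = C0   =>   C(x) = C0 * exp(-k x / v).
        # Dispersion is neglected; the numbers below are illustrative only.
        import math

        C0 = 1.0      # inlet concentration (normalized)
        v = 0.5       # flow velocity (m/day), assumed
        k = 0.05      # first-order decay rate (1/day), assumed

        for x in (0.0, 5.0, 10.0, 20.0):
            print(f"x = {x:5.1f} m   C/C0 = {C0 * math.exp(-k * x / v):.3f}")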

  14. Analytical modeling of glucose biosensors based on carbon nanotubes.

    Science.gov (United States)

    Pourasl, Ali H; Ahmadi, Mohammad Taghi; Rahmani, Meisam; Chin, Huei Chaeng; Lim, Cheng Siong; Ismail, Razali; Tan, Michael Loong Peng

    2014-01-15

    In recent years, carbon nanotubes have received widespread attention as promising carbon-based nanoelectronic devices. Due to their exceptional physical, chemical, and electrical properties, namely a high surface-to-volume ratio, their enhanced electron transfer properties, and their high thermal conductivity, carbon nanotubes can be used effectively as electrochemical sensors. The integration of carbon nanotubes with a functional group provides a good and solid support for the immobilization of enzymes. The determination of glucose levels using biosensors, particularly in the medical diagnostics and food industries, is gaining mass appeal. Glucose biosensors detect the glucose molecule by catalyzing glucose to gluconic acid and hydrogen peroxide in the presence of oxygen. This action provides high accuracy and a quick detection rate. In this paper, a single-wall carbon nanotube field-effect transistor biosensor for glucose detection is analytically modeled. In the proposed model, the glucose concentration is presented as a function of gate voltage. Subsequently, the proposed model is compared with existing experimental data. A good consensus between the model and the experimental data is reported. The simulated data demonstrate that the analytical model can be employed with an electrochemical glucose sensor to predict the behavior of the sensing mechanism in biosensors.

  15. An analytical model for enantioseparation process in capillary electrophoresis

    Science.gov (United States)

    Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.

    2017-12-01

    An analytical model to explain the mobilities of an enantiomer binary mixture in a capillary electrophoresis experiment is proposed. The model consists of a set of kinetic equations describing the evolution of the populations of the molecules involved in the enantioseparation process in capillary electrophoresis (CE). These equations take into account the asymmetric driven migration of the enantiomer molecules, the chiral selector, and the transient diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its respective distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of the chiral selector concentration, [C], can be measured. The behaviour of Δμ versus [C] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with those obtained by the model. All these results are also corroborated by kinetic Monte Carlo simulations.
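
    For context, the phenomenological curve of Wren and Rowe against which the kinetic model is compared is commonly written as Δμ = [C](μ1 - μ2)(K2 - K1) / ((1 + K1[C])(1 + K2[C])), where μ1 and μ2 are the mobilities of the free and fully complexed analyte and K1, K2 are the enantiomer binding constants. The snippet below simply evaluates that expression for assumed parameters; it is a hedged illustration of the expected rise and fall of Δμ with [C], not the kinetic model of the paper or fitted chlorpheniramine/β-CD values.

        # Hedged illustration of a Wren-Rowe-type mobility-difference curve:
        #   d_mu = [C](mu1 - mu2)(K2 - K1) / ((1 + K1[C])(1 + K2[C]))
        # All parameter values are assumed; they are not fitted to the
        # (+/-)-chlorpheniramine / beta-cyclodextrin system of the paper.
        import numpy as np

        mu1, mu2 = 2.0e-8, 0.5e-8      # free / complexed mobilities (m^2/Vs), assumed
        K1, K2 = 300.0, 400.0          # binding constants (1/M), assumed

        C = np.linspace(0.0, 0.05, 11) # chiral selector concentration (M)
        d_mu = C * (mu1 - mu2) * (K2 - K1) / ((1 + K1 * C) * (1 + K2 * C))
        for c, d in zip(C, d_mu):
            print(f"[C] = {c:6.3f} M   d_mu = {d:.3e} m^2/Vs")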

  16. Exploring SMBH assembly with semi-analytic modelling

    Science.gov (United States)

    Ricarte, Angelo; Natarajan, Priyamvada

    2018-02-01

    We develop a semi-analytic model to explore different prescriptions of supermassive black hole (SMBH) fuelling. This model utilizes a merger-triggered burst mode in concert with two possible implementations of a long-lived steady mode for assembling the mass of the black hole in a galactic nucleus. We improve modelling of the galaxy-halo connection in order to more realistically determine the evolution of a halo's velocity dispersion. We use four model variants to explore a suite of observables: the M•-σ relation, mass functions of both the overall and broad-line quasar population, and luminosity functions as a function of redshift. We find that `downsizing' is a natural consequence of our improved velocity dispersion mappings, and that high-mass SMBHs assemble earlier than low-mass SMBHs. The burst mode of fuelling is sufficient to explain the assembly of SMBHs to z = 2, but an additional steady mode is required to both assemble low-mass SMBHs and reproduce the low-redshift luminosity function. We discuss in detail the trade-offs in matching various observables and the interconnected modelling components that govern them. As a result, we demonstrate the utility as well as the limitations of these semi-analytic techniques.

  17. Working group report: Beyond the standard model

    Indian Academy of Sciences (India)

    The working group on Beyond the Standard Model concentrated on identifying interesting physics issues in models ... In view of the range of current interest in the high energy physics community, this working group was organised ... the computational tools currently relevant for particle phenomenology. Thus in this group, ...

  18. Standard Model Particles from Split Octonions

    Directory of Open Access Journals (Sweden)

    Gogberashvili M.

    2016-01-01

    Full Text Available We model physical signals using elements of the algebra of split octonions over the field of real numbers. Elementary particles correspond to special elements of the algebra that nullify octonionic norms (zero divisors). It is shown that the standard model particle spectrum naturally follows from the classification of the independent primitive zero divisors of split octonions.

  19. Exploring the Standard Model of Particles

    Science.gov (United States)

    Johansson, K. E.; Watkins, P. M.

    2013-01-01

    With the recent discovery of a new particle at the CERN Large Hadron Collider (LHC), the Higgs boson could be about to be discovered. This paper provides a brief summary of the standard model of particle physics and the importance of the Higgs boson and field in that model for non-specialists. The role of Feynman diagrams in making predictions for…

  20. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and its utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important as, if not more important than, iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  1. Noncommutative geometry and the standard model vacuum

    International Nuclear Information System (INIS)

    Barrett, John W.; Dawe Martins, Rachel A.

    2006-01-01

    The space of Dirac operators for the Connes-Chamseddine spectral action for the standard model of particle physics coupled to gravity is studied. The model is extended by including right-handed neutrino states, and the S0-reality axiom is not assumed. The possibility of allowing more general fluctuations than the inner fluctuations of the vacuum is proposed. The maximal case of all possible fluctuations is studied by considering the equations of motion for the vacuum. While there are interesting nontrivial vacua with Majorana-type mass terms for the leptons, the conclusion is that the equations are too restrictive to allow solutions with the standard model mass matrix.

  2. An efficient analytical model for baffled, multi-celled membrane-type acoustic metamaterial panels

    Science.gov (United States)

    Langfeldt, F.; Gleine, W.; von Estorff, O.

    2018-03-01

    A new analytical model for the oblique incidence sound transmission loss prediction of baffled panels with multiple subwavelength sized membrane-type acoustic metamaterial (MAM) unit cells is proposed. The model employs a novel approach via the concept of the effective surface mass density and approximates the unit cell vibrations in the form of piston-like displacements. This yields a coupled system of linear equations that can be solved efficiently using well-known solution procedures. A comparison with results from finite element model simulations for both normal and diffuse field incidence shows that the analytical model delivers accurate results as long as the edge length of the MAM unit cells is smaller than half the acoustic wavelength. The computation times for the analytical calculations are 100 times smaller than for the numerical simulations. In addition to that, the effect of flexible MAM unit cell edges compared to the fixed edges assumed in the analytical model is studied numerically. It is shown that the compliance of the edges has only a small impact on the transmission loss of the panel, except at very low frequencies in the stiffness-controlled regime. The proposed analytical model is applied to investigate the effect of variations of the membrane prestress, added mass, and mass eccentricity on the diffuse transmission loss of a MAM panel with 120 unit cells. Unlike most previous investigations of MAMs, these results provide a better understanding of the acoustic performance of MAMs under more realistic conditions. For example, it is shown that by varying these parameters deliberately in a checkerboard pattern, a new anti-resonance with large transmission loss values can be introduced. A random variation of these parameters, on the other hand, is shown to have only little influence on the diffuse transmission loss, as long as the standard deviation is not too large. For very large random variations, it is shown that the peak transmission loss
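
    The paper's coupled piston-displacement model is not reproduced in the abstract, but the role played by the effective surface mass density can be illustrated with the limiting case it generalizes: the normal-incidence mass law for a limp panel. The short sketch below uses illustrative parameter values only and shows how a given effective surface mass density maps to a transmission loss.

        import numpy as np

        def mass_law_tl(frequency_hz, m_eff, rho0=1.21, c0=343.0):
            """Normal-incidence transmission loss (dB) of a limp panel with effective
            surface mass density m_eff (kg/m^2) -- the classical mass law."""
            omega = 2.0 * np.pi * frequency_hz
            return 10.0 * np.log10(1.0 + (omega * m_eff / (2.0 * rho0 * c0)) ** 2)

        # Illustrative evaluation; a MAM unit cell effectively behaves like a
        # frequency-dependent m_eff, which is what the full model resolves.
        for f in (100.0, 250.0, 500.0, 1000.0):
            print(f"{f:6.0f} Hz  TL = {mass_law_tl(f, m_eff=2.0):5.1f} dB")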

  3. A hybrid finite-difference and analytic element groundwater model

    Science.gov (United States)

    Haitjema, Henk M.; Feinstein, Daniel T.; Hunt, Randall J.; Gusyev, Maksym

    2010-01-01

    Regional finite-difference models tend to have large cell sizes, often on the order of 1–2 km on a side. Although the regional flow patterns in deeper formations may be adequately represented by such a model, the intricate surface water and groundwater interactions in the shallower layers are not. Several stream reaches and nearby wells may occur in a single cell, precluding any meaningful modeling of the surface water and groundwater interactions between the individual features. We propose to replace the upper MODFLOW layer or layers, in which the surface water and groundwater interactions occur, by an analytic element model (GFLOW) that does not employ a model grid; instead, it represents wells and surface waters directly by the use of point-sinks and line-sinks. For many practical cases it suffices to provide GFLOW with the vertical leakage rates calculated in the original coarse MODFLOW model in order to obtain a good representation of surface water and groundwater interactions. However, when the combined transmissivities in the deeper (MODFLOW) layers dominate, the accuracy of the GFLOW solution diminishes. For those cases, an iterative coupling procedure, whereby the leakages between the GFLOW and MODFLOW model are updated, appreciably improves the overall solution, albeit at considerable computational cost. The coupled GFLOW–MODFLOW model is applicable to relatively large areas, in many cases to the entire model domain, thus forming an attractive alternative to local grid refinement or inset models.

  4. Mathematical Model of Suspension Filtration and Its Analytical Solution

    Directory of Open Access Journals (Sweden)

    Normahmad Ravshanov

    2013-01-01

    Full Text Available The work develops advanced mathematical model and computing algorithm to analyze, predict and identify the basic parameters of filter units and their variation ranges. Numerical analytic solution of liquid ionized mixtures filtration was got on their basis. Computing experiments results are presented in graphics form. Calculation results analysis enables to determine the optimum performance of filter units, used for liquid ionized mixtures filtration, food preparation, drug production and water purification. Selection of the most suitable parameters contributes to the improvement of economic and technological efficiency of production and filter units working efficiency.

  5. Shear mechanical properties of the spleen: experiment and analytical modelling.

    Science.gov (United States)

    Nicolle, S; Noguer, L; Palierne, J-F

    2012-05-01

    This paper aims at providing the first shear mechanical properties of spleen tissue. Rheometric tests on porcine splenic tissue were performed in the linear and nonlinear regimes, revealing a weak frequency dependence of the dynamic moduli in the linear regime and a distinct strain-hardening effect in the nonlinear regime. These behaviours are typical of soft tissues such as kidney and liver, although the strain-hardening is less pronounced for the spleen. An analytical model based on power laws is then proposed to describe the general shear viscoelastic behaviour of the spleen. Copyright © 2012 Elsevier Ltd. All rights reserved.
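
    The specific power-law model fitted in the paper is not given in the abstract; a generic fractional ("springpot") element, G*(ω) = G0(iωτ)^α, is one common way such weak frequency dependence is expressed, and the sketch below evaluates it with purely hypothetical constants.

        import numpy as np

        # Generic power-law (springpot) viscoelastic moduli, G*(w) = G0*(1j*w*tau)**a,
        # illustrating weak frequency dependence. Constants are hypothetical, not the
        # spleen fits reported in the paper.
        G0, tau, a = 1.0e3, 1.0, 0.15      # modulus scale (Pa), time scale (s), exponent

        def moduli(omega):
            gstar = G0 * (1j * omega * tau) ** a
            return gstar.real, gstar.imag   # storage modulus G', loss modulus G''

        for f in (0.1, 1.0, 10.0):
            gp, gpp = moduli(2 * np.pi * f)
            print(f"{f:5.1f} Hz   G' = {gp:7.1f} Pa   G'' = {gpp:6.1f} Pa")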

  6. "Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Nibbs, Faith G.

    2007-08-24

    While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on knowledge of the relevant subcultures, anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood that their constituents will engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.

  7. Analytical model of corn cob Pyroprobe-FTIR data

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jie; YuHong, Qin [Key Laboratory of Coal Science and Technology, Taiyuan University of Technology, Taiyuan, Shanxi 030024 (China); Green, Alex E.S. [Clean Combustion Technology Laboratory, University of Florida, Gainesville, FL 32611-6550 (United States)

    2006-05-15

    Pyrolysis of various forms of biomass can convert this primary energy source into valuable liquid or gaseous fuels or chemicals. In this study a CDS 2000 Pyroprobe with a Bio-Rad FTS165 FTIR detector is used to measure the yields of 3 products and 7 families of products from corn cob pyrolysis at temperatures up to 900 °C using a wide range of heating rates. An analytical semi-empirical model is then used to approximately represent these results with a relatively small number of parameters. The compact representation can be used in applications to conveniently extrapolate and interpolate these results to other temperatures and heating rates. (author)

  8. Multicriteria evaluation of power plants impact on the living standard using the analytic hierarchy process

    International Nuclear Information System (INIS)

    Chatzimouratidis, Athanasios I.; Pilavachi, Petros A.

    2008-01-01

    The purpose of this study is to evaluate 10 types of power plants available at present, including fossil fuel, nuclear and renewable-energy-based power plants, with regard to their overall impact on the living standard of local communities. Both positive and negative impacts of power plant operation are considered using the analytic hierarchy process (AHP). The current study covers the set of criteria weights considered typical for many local communities in many developed countries. The results presented here are illustrative only, and user-defined weighting is required to make this study valuable for a specific group of users. A sensitivity analysis examines the most important weight variations, thus giving an overall view of the problem evaluation to every decision maker. Regardless of criteria weight variations, the five types of renewable energy power plant rank in the first five positions. Nuclear plants are in the sixth position when priority is given to quality of life, and last when socioeconomic aspects are considered more important. Natural gas, oil and coal/lignite power plants rank between the sixth and tenth positions, with a slightly better ranking when priority is given to socioeconomic aspects.
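
    The core AHP computation behind such a ranking, deriving priority weights from a reciprocal pairwise comparison matrix and checking its consistency, can be sketched in a few lines. The judgement values below are hypothetical and simply stand in for whatever criterion comparisons a given community would elicit.

        import numpy as np

        def ahp_weights(pairwise):
            """Priority weights and consistency ratio from an AHP pairwise matrix."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)                     # principal eigenvalue index
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()                                    # normalized priority vector
            ci = (eigvals[k].real - n) / (n - 1)            # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]    # Saaty's random index
            return w, ci / ri                               # weights, consistency ratio

        # Hypothetical 3x3 comparison of criteria (e.g. quality of life vs.
        # socioeconomic vs. environmental aspects), on Saaty's 1-9 scale.
        A = [[1.0, 3.0, 5.0],
             [1/3., 1.0, 2.0],
             [1/5., 1/2., 1.0]]
        w, cr = ahp_weights(A)
        print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))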

  9. The Standard Model and Higgs physics

    Science.gov (United States)

    Torassa, Ezio

    2018-05-01

    The Standard Model is a consistent and computable theory that successfully describes the elementary particle interactions. The strong, electromagnetic and weak interactions have been included in the theory exploiting the relation between group symmetries and group generators, in order to smartly introduce the force carriers. The group properties lead to constraints between boson masses and couplings. All the measurements performed at the LEP, Tevatron, LHC and other accelerators proved the consistency of the Standard Model. A key element of the theory is the Higgs field, which, together with the spontaneous symmetry breaking, gives mass to the vector bosons and to the fermions. Unlike the case of vector bosons, the theory does not provide a prediction for the Higgs boson mass. The LEP experiments, while providing very precise measurements of the Standard Model theory, searched for evidence of the Higgs boson until the year 2000. The discovery of the top quark in 1995 by the Tevatron experiments and of the Higgs boson in 2012 by the LHC experiments were considered the completion of the list of fundamental particles of the Standard Model theory. Nevertheless, neutrino oscillations, dark matter and the baryon asymmetry of the Universe are evidence that we need a new, extended model. In the Standard Model there are also some unattractive theoretical aspects, like the divergent loop corrections to the Higgs boson mass and the very small Yukawa couplings needed to describe the neutrino masses. For all these reasons, the hunt for discrepancies between the Standard Model and data is still going on, with the aim of finally describing the new, extended theory.

  10. The Cosmological Standard Model and Its Implications for Beyond the Standard Model of Particle Physics

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    While the cosmological standard model has many notable successes, it assumes 95% of the mass-energy density of the universe is dark and of unknown nature, and there was an early stage of inflationary expansion driven by physics far beyond the range of the particle physics standard model. In the colloquium I will discuss potential particle-physics implications of the standard cosmological model.

  11. Analytic models of CMOS logic in various regimes

    Directory of Open Access Journals (Sweden)

    Dokić Branko

    2014-01-01

    Full Text Available In this paper, comparative analytic models of the static and dynamic characteristics of CMOS digital circuits in the strong, weak and mixed inversion regimes are described. The term mixed inversion is defined for the first time. The paper shows that there is an analogy in the behavior and functional dependencies of parameters in all three CMOS regimes. Comparative characteristics of power consumption and speed in static regimes are given. The dependence of the threshold voltage and logic delay time on temperature is analyzed. A dynamic model with constant current is proposed. It is shown that digital circuits with a dynamic threshold voltage of the MOS transistor (DT-CMOS) have better logic delay characteristics. The analysis is based on simplified current-voltage MOS transistor models in the strong and weak inversion regimes, as well as PSPICE simulations using 180 nm technology parameters.
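
    The simplified current-voltage models mentioned at the end of the abstract are, in textbook form, a square law in strong inversion and an exponential in weak inversion. The sketch below evaluates both with hypothetical parameter values (not the 180 nm fits used in the paper) to show the regimes being compared.

        import numpy as np

        # Textbook long-channel current models; parameter values are illustrative only.
        VT, n, UT = 0.45, 1.3, 0.0259      # threshold voltage (V), slope factor, kT/q at 300 K (V)
        K = 2e-4                           # transconductance parameter (A/V^2)
        I0 = 1e-7                          # weak-inversion pre-factor (A)

        def id_strong(vgs):
            """Square-law drain current, valid well above threshold (strong inversion)."""
            return 0.5 * K * np.maximum(vgs - VT, 0.0) ** 2

        def id_weak(vgs):
            """Exponential subthreshold current (weak inversion)."""
            return I0 * np.exp((vgs - VT) / (n * UT))

        for vgs in (0.30, 0.45, 0.60, 0.90):
            print(f"VGS = {vgs:.2f} V   strong: {id_strong(vgs):.3e} A   weak: {id_weak(vgs):.3e} A")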

  12. Analytic model of Applied-B ion diode impedance behavior

    International Nuclear Information System (INIS)

    Miller, P.A.; Mendel, C.W. Jr.

    1987-01-01

    An empirical analysis of impedance data from Applied-B ion diodes used in seven inertial confinement fusion research experiments was published recently. The diodes all operated with impedance values well below the Child's-law value. The analysis uncovered an unusual unifying relationship among data from the different experiments. The analysis suggested that closure of the anode-cathode gap by electrode plasma was not a dominant factor in the experiments, but was not able to elaborate the underlying physics. Here we present a new analytic model of Applied-B ion diodes coupled to accelerators. A critical feature of the diode model is based on magnetic insulation theory. The model successfully describes impedance behavior of these diodes and supports stimulating new viewpoints of the physics of Applied-B ion diode operation

  13. Analytical Model of Symmetric Halo Doped DG-Tunnel FET

    Directory of Open Access Journals (Sweden)

    S. Nagarajan

    2015-11-01

    Full Text Available A two-dimensional analytical model of a symmetric halo doped double gate tunnel field effect transistor is presented in this work. The model is developed based on the 2-D Poisson's equation. Important parameters such as the surface potential, the vertical and lateral electric field, the electric field intensity and the band energy have been modelled. The doping concentration and the length of the halo regions are varied, and the dependency of the various parameters on them is studied. The halo doping is introduced to improve the ON current and to reduce the intrinsic ambipolarity of the device, so that an improved ION/IOFF ratio can be achieved. The scaling properties of the halo doped structure are analyzed for various dielectric constants.

  14. Simplified analytical model for radionuclide transport simulation in the geosphere

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1996-01-01

    In order to evaluate postclosure off-site doses from low-level radioactive waste disposal facilities, an integrated safety assessment methodology has been developed at Instituto de Pesquisas Energeticas e Nucleares. The source-term modelling approach adopted in this system is described, and the results obtained in the IAEA NSARS 'The Safety Assessment of Near-Surface Radioactive Waste Disposal Facilities' programme for model intercomparison studies are presented. The radionuclides released from the waste are calculated using a simple first-order kinetics model, and the transport through the porous media below the waste is determined by using an analytical solution of the mass transport equation. The methodology and the results obtained in this work are compared with those reported by other participants of the NSARS programme. (author). 4 refs., 4 figs
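
    A first-order-kinetics source term of the kind described here has a closed-form release rate once radioactive decay is included; the sketch below evaluates it with hypothetical constants and leaves out the subsequent transport step through the porous medium.

        import numpy as np

        # First-order leaching of an inventory A(t) that also decays radioactively:
        # dA/dt = -(k_leach + lambda) A, release rate R(t) = k_leach * A(t).
        A0 = 1.0e12                    # initial inventory, Bq (hypothetical)
        k_leach = 1.0e-3               # fractional leach rate, 1/yr (hypothetical)
        lam = np.log(2) / 30.0         # decay constant for a 30-yr half-life, 1/yr

        def release_rate(t_years):
            """Activity released per year: k_leach * A0 * exp(-(k_leach + lambda) * t)."""
            return k_leach * A0 * np.exp(-(k_leach + lam) * t_years)

        for t in (0, 10, 50, 100, 300):
            print(f"t = {t:3d} yr   release rate = {release_rate(t):.3e} Bq/yr")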

  15. LHC Higgs physics beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Spannowsky, M.

    2007-09-22

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is just motivated by the experimental agreement of results from flavor physics with Standard Model predictions, but not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  16. LHC Higgs physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Spannowsky, M.

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is just motivated by the experimental agreement of results from flavor physics with Standard Model predictions, but not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  17. CP Violation Beyond the Standard Model

    CERN Document Server

    Fleischer, Robert

    1997-01-01

    Recent developments concerning CP violation beyond the Standard Model are reviewed. The central target of this presentation is the $B$ system, as it plays an outstanding role in the extraction of CKM phases. Besides a general discussion of the appearance of new physics in the corresponding CP-violating asymmetries through $B^0_q$--$\\bar{B^0_q}$ mixing $(q\\in\\{d,s\\})$, it is emphasized that CP violation in non-leptonic penguin modes, e.g. in $B_d\\to\\phi K_{S}$, offers a powerful tool to probe physics beyond the Standard Model. In this respect $B\\to\\pi K$ modes, which have been observed recently by the CLEO collaboration, may also turn out to be very useful. Their combined branching ratios allow us to constrain the CKM angle $\\gamma$ and may indicate the presence of physics beyond the Standard Model.

  18. Analytical Model based on Green Criteria for Optical Backbone Network Interconnection

    DEFF Research Database (Denmark)

    Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir; Pedersen, Jens Myrup

    2011-01-01

    to the evaluation of the environmental impact of networks from a physical interconnection point of view. Network deployment, usage, and disposal are analyzed as contributing elements to ICT's (Information and Communications Technology) CO2 emissions. This paper presents an analytical model for evaluating ... for backbone interconnection, since minimization of CO2 emissions is becoming an important factor. In addition, two case studies are presented to illustrate the use and application of this model, and the need for de facto and international standards to reduce CO2 emissions through good network planning.

  19. Industrial diffusion models and technological standardization

    International Nuclear Information System (INIS)

    Carrillo-Hermosilla, J.

    2007-01-01

    Conventional models of technology diffusion have typically focused on the rate at which one new technology is fully adopted. The model described here provides a broader approach, from the perspective of the extension of diffusion to multiple technologies, and the related phenomenon of standardization. Moreover, most conventional research has characterized the diffusion process in terms of technology attributes or adopting firms' attributes. Alternatively, we propose here a wide-ranging and consistent taxonomy of the relationships between the circumstances of an industry and the attributes of the technology standardization processes taking place within it. (Author) 100 refs.

  20. Validated Analytical Model of a Pressure Compensation Drip Irrigation Emitter

    Science.gov (United States)

    Shamshery, Pulkit; Wang, Ruo-Qian; Taylor, Katherine; Tran, Davis; Winter, Amos

    2015-11-01

    This work is focused on analytically characterizing the behavior of pressure-compensating drip emitters in order to design low-cost, low-power irrigation solutions appropriate for off-grid communities in developing countries. There are 2.5 billion small-acreage farmers worldwide who rely solely on their land for sustenance. Drip irrigation, compared with flood irrigation, leads to up to a 70% reduction in water consumption while increasing yields by 90%, which is important in countries like India that are quickly running out of water. To design a low-power drip system, there is a need to decrease the pumping pressure requirement at the emitters, as pumping power is the product of pressure and flow rate. To design such an emitter efficiently, the fluid-structure interactions that occur in the emitter need to be understood. In this study, a 2D analytical model that captures the behavior of a common drip emitter was developed and validated through experiments. The effects of independently changing the channel depth, channel width, channel length and land height on the performance were studied. The model and the key parametric insights presented have the potential to be optimized in order to guide the design of low-pressure, clog-resistant, pressure-compensating emitters.
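
    The pressure-times-flow-rate argument above can be made concrete with a back-of-envelope sketch; the flow rate, emitter count and pressure heads used here are illustrative assumptions only.

        # Ideal hydraulic power needed to drive a drip system: P = rho * g * head * Q.
        rho, g = 1000.0, 9.81                 # water density (kg/m^3), gravity (m/s^2)
        flow_per_emitter = 1.0e-3 / 3600.0    # 1 litre per hour, expressed in m^3/s
        n_emitters = 500                      # hypothetical small plot

        def pumping_power(head_m):
            """Hydraulic power (W) to supply all emitters at the given pressure head."""
            return rho * g * head_m * flow_per_emitter * n_emitters

        print("10 m head:", round(pumping_power(10.0), 2), "W")
        print(" 1 m head:", round(pumping_power(1.0), 2), "W")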

  1. Analytic Models of Brown Dwarfs and the Substellar Mass Limit

    Directory of Open Access Journals (Sweden)

    Sayantan Auddy

    2016-01-01

    Full Text Available We present the analytic theory of brown dwarf evolution and the lower mass limit of the hydrogen burning main-sequence stars and introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal nonrelativistic Fermi gas at a finite temperature, therefore allowing for nonzero values of the degeneracy parameter. We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modification, we find the maximum mass for a brown dwarf to be in the range 0.064M⊙–0.087M⊙. An analytic formula for the luminosity evolution allows us to estimate the time period of the nonsteady state (i.e., non-main-sequence nuclear burning) for substellar objects. We also calculate the evolution of very low mass stars. We estimate that ≃11% of stars take longer than 10^7 yr to reach the main sequence, and ≃5% of stars take longer than 10^8 yr.

  2. Characterization of uniform scanning proton beams with analytical models

    Science.gov (United States)

    Demez, Nebi

    Tissue equivalent phantoms have an important place in radiation therapy planning and delivery. They have been manufactured for use in conventional radiotherapy. Their tissue equivalency for proton beams is currently under active investigation. The Bragg-Kleeman rule was used to calculate the water equivalent thickness (WET) of available tissue equivalent phantoms from CIRS (Norfolk, VA, USA). The WETs of those phantoms were also measured using proton beams at the Hampton University Proton Therapy Institute (HUPTI). WET measurements and calculations are in good agreement, within ~1% accuracy, except for high-Z phantoms. Proton beams were also characterized with an analytical proton dose calculation model, the Proton Loss Model (PLM) [26], to investigate proton interactions in water and in those phantoms. Depth-dose and lateral dose profiles of protons in water and in those phantoms were calculated, measured, and compared. Water Equivalent Spreadness (WES) was also investigated for those phantoms using the formula for the scattering power ratio. Because WES is independent of the incident energy of the protons, it is possible to estimate the spreadness of protons in different media simply by knowing WES. Measurements are usually taken for configuration of the treatment planning system (TPS). This study attempted to produce commissioning data for uniform scanning proton planning with analytical methods (PLM), which have been verified against published measurements and Monte Carlo calculations. Depth doses and lateral profiles calculated by PLM were compared with measurements via the gamma analysis method. While gamma analysis shows that depth doses are in >90% agreement with measured depth doses, the agreement falls to <80% for some lateral profiles. PLM data were imported into the TPS (PLM-TPS). The PLM-TPS was tested with different patient cases. The PLM-TPS treatment plans for 5 prostate cases show acceptable agreement. The Planning Treatment Volume (PTV) coverage was 100% with the PLM-TPS except for one case in
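
    The Bragg-Kleeman step referred to above can be illustrated as follows: the range-energy rule R = αE^p gives the proton's exit energy after a slab, and the WET is the corresponding loss of range in water. The water fit constants below are a commonly quoted approximation, and the "material" constants are purely hypothetical placeholders for a phantom slab, not CIRS data.

        import numpy as np

        alpha_w, p_w = 0.0022, 1.77       # water: range in cm, energy in MeV (approximate fit)
        alpha_m, p_m = 0.0014, 1.77       # hypothetical phantom material fit
        t_m = 2.0                         # slab thickness, cm
        E0 = 150.0                        # incident proton energy, MeV

        def rng(E, alpha, p):
            """Bragg-Kleeman range-energy rule, R = alpha * E**p."""
            return alpha * E ** p

        def energy_from_range(R, alpha, p):
            return (R / alpha) ** (1.0 / p)

        # Energy after traversing the slab, from the residual range in the material
        E1 = energy_from_range(rng(E0, alpha_m, p_m) - t_m, alpha_m, p_m)

        # WET = loss of water-equivalent range across the slab
        wet = rng(E0, alpha_w, p_w) - rng(E1, alpha_w, p_w)
        print(f"exit energy = {E1:.1f} MeV, WET of {t_m} cm slab = {wet:.2f} cm of water")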

  3. Analytical and numerical performance models of a Heisenberg Vortex Tube

    Science.gov (United States)

    Bunge, C. D.; Cavender, K. A.; Matveev, K. I.; Leachman, J. W.

    2017-12-01

    Analytical and numerical investigations of a Heisenberg Vortex Tube (HVT) are performed to estimate the cooling potential with cryogenic hydrogen. The Ranque-Hilsch Vortex Tube (RHVT) is a device that tangentially injects a compressed fluid stream into a cylindrical geometry to promote enthalpy streaming and temperature separation between inner and outer flows. The HVT is the result of lining the inside of a RHVT with a hydrogen catalyst. This is the first concept to utilize the endothermic heat of para-orthohydrogen conversion to aid primary cooling. A review of first-order vortex tube models available in the literature is presented and adapted to accommodate cryogenic hydrogen properties. These first-order model predictions are compared with 2-D axisymmetric Computational Fluid Dynamics (CFD) simulations.

  4. Quantum quench dynamics in analytically solvable one-dimensional models

    Science.gov (United States)

    Iucci, Anibal; Cazalilla, Miguel A.; Giamarchi, Thierry

    2008-03-01

    In connection with experiments in cold atomic systems, we consider the non-equilibrium dynamics of some analytically solvable one-dimensional systems which undergo a quantum quench. In this quench one or several of the parameters of the Hamiltonian of an interacting quantum system are changed over a very short time scale. In particular, we concentrate on the Luttinger model and the sine-Gordon model at the Luther-Emery point. For the latter, we show that the order parameter and the two-point correlation function relax in the long time limit to the values determined by a generalized Gibbs ensemble, first discussed by E. T. Jaynes [Phys. Rev. 106, 620 (1957); 108, 171 (1957)] and recently conjectured by M. Rigol et al. [Phys. Rev. Lett. 98, 050405 (2007)] to apply to the non-equilibrium dynamics of integrable systems.

  5. Analytical modeling of mid-infrared silicon Raman lasers

    Science.gov (United States)

    Ma, J.; Fathpour, S.

    2012-01-01

    Silicon photonics has significantly matured in the near-infrared (telecommunication) wavelength range, with several commercial products already in the market. More recently, the technology has been extended into the mid-infrared (mid-IR) regime with potential applications in biochemical sensing, tissue photoablation, environmental monitoring and free-space communications. The key advantage of silicon in the mid-IR, as compared with the near-IR, is the absence of two-photon absorption (TPA) and free-carrier absorption (FCA). The absence of these nonlinear losses would potentially lead to high-performance nonlinear devices based on Raman and Kerr effects. Also, with the absence of TPA and FCA, the coupled-wave equations that are usually solved numerically to model these nonlinear devices lend themselves to analytical solutions in the mid-IR. In this paper, an analytical model for mid-IR silicon Raman lasers is developed. The validity of the model is confirmed by comparing it with numerical solutions of the coupled-wave equations. The developed model can be used as a versatile and efficient tool for the analysis, design and optimization of mid-IR silicon Raman lasers, or to find good initial guesses for numerical methods. The effects of cavity parameters, such as cavity length and facet reflectivities, on the lasing threshold and input-output characteristics of the Raman laser are studied. For instance, for a propagation loss of 0.5 dB/cm, conversion efficiencies as high as 56% are predicted. The predicted optimum cavity (waveguide) length at 2.0 dB/cm propagation loss is approximately 3.4 mm. The results of this study predict strong prospects for mid-IR silicon Raman lasers in the mentioned applications.

  6. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  7. Standard Model mass spectrum in inflationary universe

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2017-04-11

    We work out the Standard Model (SM) mass spectrum during inflation with quantum corrections, and explore its observable consequences in the squeezed limit of non-Gaussianity. Both non-Higgs and Higgs inflation models are studied in detail. We also illustrate how some inflationary loop diagrams can be computed neatly by Wick-rotating the inflation background to Euclidean signature and by dimensional regularization.

  8. Next to new minimal standard model

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue, Shimane 690-8504 (Japan); Department of Physics, Faculty of Science, Hokkaido University, Sapporo, Hokkaido 060-0810 (Japan); Kaneta, Kunio [Department of Physics, Faculty of Science, Hokkaido University, Sapporo, Hokkaido 060-0810 (Japan); Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Department of Physics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043 (Japan); Takahashi, Ryo [Department of Physics, Faculty of Science, Hokkaido University, Sapporo, Hokkaido 060-0810 (Japan)

    2014-06-27

    We suggest a minimal extension of the standard model which can explain the current experimental data on dark matter, small neutrino masses, the baryon asymmetry of the universe, inflation, and dark energy, and achieve gauge coupling unification. The gauge coupling unification can explain the charge quantization, and is realized by introducing six new fields. We investigate the vacuum stability, coupling perturbativity, and correct dark matter abundance in this model by use of current experimental data.

  9. Standard Model Effective Potential from Trace Anomalies

    Directory of Open Access Journals (Sweden)

    Renata Jora

    2018-01-01

    Full Text Available By analogy with the low energy QCD effective linear sigma model, we construct a standard model effective potential based entirely on the requirement that the tree level and quantum level trace anomalies must be satisfied. We discuss a particular realization of this potential in connection with the Higgs boson mass and Higgs boson effective couplings to two photons and two gluons. We find that this kind of potential may describe well the known phenomenology of the Higgs boson.

  10. Prospects of experimentally reachable beyond Standard Model ...

    Indian Academy of Sciences (India)

    2016-01-06

    Jan 6, 2016 ... Pramana – Journal of Physics, Volume 86, Issue 2. Prospects of experimentally reachable beyond Standard Model physics in inverse see-saw motivated SO(10) GUT. Ram Lal Awasthi. Special: Supersymmetric Unified Theories and Higgs Physics, Volume 86, Issue 2, February 2016, pp. 223- ...

  11. Why supersymmetry? Physics beyond the standard model

    Indian Academy of Sciences (India)

    The Naturalness Principle as a requirement that the heavy mass scales decouple from the physics of light mass scales is reviewed. In quantum field theories containing elementary scalar fields, such as the Standard Model of electroweak interactions containing the Higgs particle, mass of the scalar field is not a natural ...

  12. Beyond the Standard Model: Working group report

    Indian Academy of Sciences (India)

    Vol. 55, Nos 1 & 2, journal of physics, July & August 2000, pp. 307–313. Beyond the Standard Model: Working group report. GAUTAM BHATTACHARYYA. ... Consider the possibility that these neutrinos are of Majorana nature ... Then the initial condition of degeneracy stated above.

  13. Asymptotically Safe Standard Model via Vectorlike Fermions

    DEFF Research Database (Denmark)

    Mann, R. B.; Meffe, J. R.; Sannino, F.

    2017-01-01

    We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.

  14. The race to break the standard model

    CERN Multimedia

    Brumfiel, Geoff

    2008-01-01

    The Large Hadron Collider is the latest attempt to move fundamental physics past the frustratingly successful "standard model". But it is not the only way to do it... The author surveys the contenders attempting to capture the prize before the collider gets up to speed.(4 pages)

  15. Why supersymmetry? Physics beyond the standard model

    Indian Academy of Sciences (India)

    2016-08-23

    Aug 23, 2016 ... Abstract. The Naturalness Principle as a requirement that the heavy mass scales decouple from the physics of light mass scales is reviewed. In quantum field theories containing elementary scalar fields, such as the Standard Model of electroweak interactions containing the Higgs particle, mass of the ...

  16. A hidden analytic structure of the Rabi model

    Energy Technology Data Exchange (ETDEWEB)

    Moroz, Alexander, E-mail: wavescattering@yahoo.com

    2014-01-15

    The Rabi model describes the simplest interaction between a cavity mode with a frequency ω_c and a two-level system with a resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ϵ, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials ϕ_k(ϵ) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of ϕ_N(ϵ) of at least the degree N = n + n_t. The value of n_t > 0, which is slowly increasing with n, depends on the required precision. For instance, n_t ≃ 26 for n = 1000 and dimensionless interaction constant κ = 0.2, if double precision is required. Given that the sequence of the lth zeros x_nl of ϕ_n(ϵ) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1. -- Highlights: • A significantly simplified analytic solution of the Rabi model

  17. Exploring Higher Education Governance: Analytical Models and Heuristic Frameworks

    Directory of Open Access Journals (Sweden)

    Burhan FINDIKLI

    2017-08-01

    Full Text Available Governance in higher education, both at institutional and systemic levels, has experienced substantial changes within recent decades because of a range of world-historical processes such as massification, growth, globalization, marketization, public sector reforms, and the emergence of the knowledge economy and society. These developments have made governance arrangements and decision-making processes in higher education more complex and multidimensional than ever, and have forced scholars to build new analytical and heuristic tools and strategies to grasp the intricacy and diversity of higher education governance dynamics. This article provides a systematic discussion of how, and through which tools, prominent scholars of higher education have analyzed governance in this sector, by examining certain heuristic frameworks and analytical models. Additionally, the article shows how social scientific analysis of governance in higher education has proceeded in a cumulative way, with certain revisions and syntheses rather than radical conceptual and theoretical ruptures, from Burton R. Clark's seminal work to the present, revealing conceptual and empirical junctures between them.

  18. Analytical model of diffuse reflectance spectrum of skin tissue

    Energy Technology Data Exchange (ETDEWEB)

    Lisenko, S A; Kugeiko, M M; Firago, V A [Belarusian State University, Minsk (Belarus); Sobchuk, A N [B.I. Stepanov Institute of Physics, National Academy of Sciences of Belarus, Minsk (Belarus)

    2014-01-31

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions. (biophotonics)

  19. Finite element and analytical models for twisted and coiled actuator

    Science.gov (United States)

    Tang, Xintian; Liu, Yingxiang; Li, Kai; Chen, Weishan; Zhao, Jianguo

    2018-01-01

    Twisted and coiled actuator (TCA) is a class of recently discovered artificial muscle, which is usually made by twisting and coiling polymer fibers into spring-like structures. It has been widely studied since its discovery due to its impressive output characteristics and bright prospects. However, mathematical models describing its actuation in response to temperature are still not fully developed. It is known that the large tensile stroke results from the untwisting of the twisted fiber when heated. Thus, the recovered torque during untwisting is a key parameter in the mathematical model. This paper presents a simplified model for the recovered torque of a TCA. The finite element method is used to evaluate the thermal stress of the twisted fiber. Based on the results of the finite element analyses, the constitutive equations of twisted fibers are simplified to develop an analytical model of the recovered torque. Finally, the model of the recovered torque is used to predict the deformation of the TCA under varying temperatures and is validated against experimental results. This work will enhance our understanding of the deformation mechanism of TCAs, which will pave the way for closed-loop position control.

  20. Model choice considerations and information integration using analytical hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Booker, Jane M [BOOKER SCIENTIFIC; Ross, Timothy J. [UNM

    2010-10-15

    Using the theory of information-gap for decision-making under severe uncertainty, it has been shown that comparing model output with experimental data involves irrevocable trade-offs between fidelity-to-data, robustness-to-uncertainty and confidence-in-prediction. We illustrate a strategy for information integration by gathering and aggregating all available data, knowledge, theory, experience, and similar applications. Such integration of information becomes important when the physics is difficult to model, when observational data are sparse or difficult to measure, or both. To aggregate the available information, we take an inference perspective. Models are not rejected, nor wasted, but can be integrated into a final result. We show an example of information integration using Saaty's Analytic Hierarchy Process (AHP), integrating theory, simulation output and experimental data. We used expert elicitation to determine weights for two models and two experimental data sets, by forming pair-wise comparisons between model output and experimental data. In this way we transform epistemic and/or statistical strength from one field of study into another branch of physical application. The price to pay for utilizing all available knowledge is that the inferences drawn from the integrated information must be accounted for, and the costs can be considerable. Focusing on inferences and inference uncertainty (IU) is one way to understand complex information.
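
    As a minimal illustration of the aggregation step described above, once AHP weights have been elicited for each information source (here, two models and two experimental data sets), a quantity of interest can be combined as a weighted estimate. All numbers below are hypothetical placeholders, not values from the Los Alamos study.

        import numpy as np

        sources   = ["model A", "model B", "experiment 1", "experiment 2"]
        estimates = np.array([10.2, 11.0, 9.6, 10.5])      # each source's estimate (hypothetical)
        weights   = np.array([0.18, 0.12, 0.45, 0.25])     # elicited AHP priority weights (sum to 1)

        integrated = np.dot(weights, estimates)                            # weighted estimate
        spread = np.sqrt(np.dot(weights, (estimates - integrated) ** 2))   # weighted spread
        print(f"integrated estimate = {integrated:.2f} +/- {spread:.2f}")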

  1. Analytical model of stemwood growth in relation to nitrogen supply

    Energy Technology Data Exchange (ETDEWEB)

    Dewar, R. C.; McMurtrie, R. E. [New South Wales Univ., Sydney, NSW (Australia)

    1996-01-01

    A process-based model of tree stand growth that simulates the effect of nitrogen supply on forest productivity has recently been combined with a soil-carbon-nitrogen model. The combined model, called G'DAY, has been used to examine the long-term response of unmanaged forest ecosystems to increasing CO2 concentration. In this study an attempt was made to derive a simplified, analytically tractable version of the plant production part of G'DAY and use it to gain insight into the general relationship between stemwood growth and nitrogen supply in managed forests. The particular focus of the study was on using the model to predict how the maximum annual stemwood growth and the optimal rotation length can be expected to vary in response to changes in nitrogen supply from net mineralization, fertilizer addition, fixation and atmospheric deposition. Overall, the model was considered to be a useful tool for examining the effects of changes in climate and nutrient supply on sustainable forest productivity. 20 refs., 2 tabs., 5 figs.

  2. Primordial nucleosynthesis: Beyond the standard model

    International Nuclear Information System (INIS)

    Malaney, R.A.

    1991-01-01

    Non-standard primordial nucleosynthesis merits continued study for several reasons. First and foremost are the important implications determined from primordial nucleosynthesis regarding the composition of the matter in the universe. Second, the production and the subsequent observation of the primordial isotopes is the most direct experimental link with the early (t ≲ 1 s) universe. Third, studies of primordial nucleosynthesis allow for important, and otherwise unattainable, constraints on many aspects of particle physics. Finally, there is tentative evidence which suggests that the Standard Big Bang (SBB) model is incorrect in that it cannot reproduce the inferred primordial abundances for a single value of the baryon-to-photon ratio. Reviewed here are some aspects of non-standard primordial nucleosynthesis which mostly overlap with the author's own personal interests. He begins with a short discussion of the SBB nucleosynthesis theory, highlighting some recent related developments. Next he discusses how recent observations of helium and lithium abundances may indicate looming problems for the SBB model. He then discusses how the QCD phase transition, neutrinos, and cosmic strings can influence primordial nucleosynthesis. He concludes with a short discussion of the multitude of other non-standard nucleosynthesis models found in the literature, and makes some comments on possible progress in the future. 58 refs., 7 figs., 2 tabs

  3. Analytical Expressions of the Efficiency of Standard and High Contact Ratio Involute Spur Gears

    Directory of Open Access Journals (Sweden)

    Miguel Pleguezuelos

    2013-01-01

    Full Text Available Simple, traditional methods for the computation of the efficiency of spur gears are based on the hypotheses of a constant friction coefficient and uniform load sharing along the path of contact. However, none of them is accurate. The friction coefficient is variable along the path of contact, though average values can often be considered for preliminary calculations. Nevertheless, the nonuniform load sharing produced by the changing rigidity of the pair of teeth has a significant influence on the friction losses, due to the different relative sliding at each contact point. In previous works, the authors obtained a nonuniform model of load distribution based on the minimum elastic potential criterion, which was applied to compute the efficiency of standard gears. In this work, this model of load sharing is applied to study the efficiency of both standard and high contact ratio involute spur gears (with contact ratios between 1 and 2, and greater than 2, respectively). Approximate expressions for the friction power losses and for the efficiency are presented, assuming the friction coefficient to be constant along the path of contact. A study of the influence of some transmission parameters (such as the gear ratio, pressure angle, etc.) on the efficiency is also presented.

  4. Study on Standard Fatigue Vehicle Load Model

    Science.gov (United States)

    Huang, H. Y.; Zhang, J. P.; Li, Y. H.

    2018-02-01

    Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model applicable to industrial areas in the middle and late stages of development was obtained, adopting the equivalent-damage principle, Miner's linear accumulation law, the water discharge method and damage ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability. It is of certain reference value for the fatigue design of bridges in China.

  5. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the Use of Epidemiological Models for Infectious Disease Surveillance.

    Directory of Open Access Journals (Sweden)

    Kristen J Margevicius

    Full Text Available Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  6. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the Use of Epidemiological Models for Infectious Disease Surveillance.

    Science.gov (United States)

    Margevicius, Kristen J; Generous, Nicholas; Abeyta, Esteban; Althouse, Ben; Burkom, Howard; Castro, Lauren; Daughton, Ashlynn; Del Valle, Sara Y; Fairchild, Geoffrey; Hyman, James M; Kiang, Richard; Morse, Andrew P; Pancerella, Carmen M; Pullum, Laura; Ramanathan, Arvind; Schlegelmilch, Jeffrey; Scott, Aaron; Taylor-McCabe, Kirsten J; Vespignani, Alessandro; Deshpande, Alina

    2016-01-01

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  7. Design of homogeneous trench-assisted multi-core fibers based on analytical model

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2016-01-01

    We present a design method of homogeneous trench-assisted multicore fibers (TA-MCFs) based on an analytical model utilizing an analytical expression for the mode coupling coefficient between two adjacent cores. The analytical model can also be used for crosstalk (XT) properties analysis, such as ...

  8. Analytical modeling of wet compression of gas turbine systems

    International Nuclear Information System (INIS)

    Kim, Kyoung Hoon; Ko, Hyung-Jong; Perez-Blanco, Horacio

    2011-01-01

    Evaporative gas turbine cycles (EvGT) are of importance to the power generation industry because of the potential of enhanced cycle efficiencies with moderate incremental cost. Humidification of the working fluid to result in evaporative cooling during compression is a key operation in these cycles. Previous simulations of this operation were carried out via numerical integration. The present work is aimed at modeling the wet-compression process with approximate analytical solutions instead. A thermodynamic analysis of the simultaneous heat and mass transfer processes that occur during evaporation is presented. The transient behavior of important variables in wet compression such as droplet diameter, droplet mass, gas and droplet temperature, and evaporation rate is investigated. The effects of system parameters on variables such as droplet evaporation time, compressor outlet temperature and input work are also considered. Results from this work exhibit good agreement with those of previous numerical work.

  9. Analytical Modelling Of Milling For Tool Design And Selection

    Science.gov (United States)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-05-01

    This paper presents an efficient analytical model which allows the simulation of a large panel of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach of oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  10. Analytical Modelling Of Milling For Tool Design And Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which allows the simulation of a large panel of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach of oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools

  11. An Analytical Model of Joule Heating in Piezoresistive Microcantilevers

    Directory of Open Access Journals (Sweden)

    Chongdu Cho

    2010-11-01

    Full Text Available The present study investigates Joule heating in piezoresistive microcantilever sensors. Joule heating and thermal deflections are a major source of noise in such sensors. This work uses analytical and numerical techniques to characterise the Joule heating in 4-layer piezoresistive microcantilevers made of silicon and silicon dioxide substrates but with the same U-shaped silicon piezoresistor. A theoretical model for predicting the temperature generated due to Joule heating is developed. The commercial finite element software ANSYS Multiphysics was used to study the effect of electrical potential on temperature and deflection produced in the cantilevers. The effect of piezoresistor width on Joule heating is also studied. Results show that Joule heating strongly depends on the applied potential and width of piezoresistor and that a silicon substrate cantilever has better thermal characteristics than a silicon dioxide cantilever.

  12. A hidden analytic structure of the Rabi model

    Science.gov (United States)

    Moroz, Alexander

    2014-01-01

    The Rabi model describes the simplest interaction between a cavity mode with a frequency ωc and a two-level system with a resonance frequency ω0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ=ω0/(2ωc)=0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ɛ, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials ϕk(ɛ) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of ϕN(ɛ) of at least degree N=n+nt. The value of nt>0, which increases slowly with n, depends on the required precision. For instance, nt≃26 for n=1000 and dimensionless interaction constant κ=0.2, if double precision is required. Given that the sequence of the lth zeros of the ϕn(ɛ) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ<1, numerics and the exactly solvable example suggest that the main conclusions remain valid also for κ≥1.
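
    A quick numerical cross-check of such spectra can be obtained by diagonalizing the Rabi Hamiltonian in a truncated Fock basis rather than via the orthogonal-polynomial method described above. The sketch below assumes the common convention H = a†a + Δσ_z + κσ_x(a + a†) with energies in units of ωc (the paper's exact conventions may differ) and is illustrative only.

      import numpy as np

      def rabi_levels(kappa=0.2, delta=0.25, n_fock=400, n_levels=10):
          # Lowest eigenvalues of the Rabi Hamiltonian in units of omega_c,
          # assuming H = a^dag a + delta*sigma_z + kappa*sigma_x*(a + a^dag).
          # Truncated-Fock-basis diagonalization; this is a standard cross-check,
          # not the stepping algorithm described in the record above.
          n = np.arange(n_fock)
          a = np.diag(np.sqrt(n[1:]), k=1)        # annihilation operator
          num = np.diag(n.astype(float))          # a^dag a
          sz = np.diag([1.0, -1.0])
          sx = np.array([[0.0, 1.0], [1.0, 0.0]])
          H = (np.kron(np.eye(2), num)
               + delta * np.kron(sz, np.eye(n_fock))
               + kappa * np.kron(sx, a + a.T))
          return np.linalg.eigvalsh(H)[:n_levels]

      print(rabi_levels())    # increase n_fock to check convergence of the low levels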

  13. Galactic conformity measured in semi-analytic models

    Science.gov (United States)

    Lacerna, I.; Contreras, S.; González, R. E.; Padilla, N.; Gonzalez-Perez, V.

    2018-03-01

    We study the correlation between the specific star formation rate of central galaxies and neighbour galaxies, also known as `galactic conformity', out to 20 h^{-1} {Mpc} using three semi-analytic models (SAMs, one from L-GALAXIES and the other two from GALFORM). The aim is to establish whether SAMs are able to show galactic conformity using different models and selection criteria. In all the models, when the selection of primary galaxies is based on an isolation criterion in real space, the mean fraction of quenched (Q) galaxies around Q primary galaxies is higher than that around star-forming primary galaxies of the same stellar mass. The overall signal of conformity decreases when we remove satellites selected as primary galaxies, but the effect is much stronger in the GALFORM models compared with the L-GALAXIES model. We find this difference is partially explained by the fact that in GALFORM, once a galaxy becomes a satellite it remains as such, whereas satellites can become centrals at a later time in L-GALAXIES. The signal of conformity decreases down to 60 per cent in the L-GALAXIES model after removing central galaxies that were ejected from their host halo in the past. Galactic conformity is also influenced by primary galaxies at fixed stellar mass that reside in dark matter haloes of different masses. Finally, we explore a proxy of conformity between distinct haloes. In this case, the conformity is weak beyond ˜3 h^{-1} {Mpc} (conformity is directly related with a long-range effect).

  14. Analytical modelling for ultrasonic surface mechanical attrition treatment

    Directory of Open Access Journals (Sweden)

    Guan-Rong Huang

    2015-07-01

    Full Text Available The grain refinement, gradient structure, fatigue limit, hardness, and tensile strength of metallic materials can be effectively enhanced by ultrasonic surface mechanical attrition treatment (SMAT); however, SMAT has never before been treated with rigorous analytical modelling, such as the connection between the input energy and power and the resultant temperature of metallic materials subjected to SMAT. Therefore, a systematic SMAT model is needed. In this article, we calculate the average speed, cycle duration, kinetic energy and kinetic energy loss of the flying balls in SMAT for structural metallic materials. The connections among quantities such as the frequency and amplitude of the ultrasonic vibration motor, the diameter, mass and density of the balls, the sample mass, and the height of the chamber are considered and modelled in detail. We also introduce a one-dimensional heat equation with a heat source distributed within a uniform depth to estimate the temperature distribution and heat energy of the sample. In this approach, there exists a condition for the frequency of the flying balls reaching a steady speed. With these known quantities, we can estimate the strain rate, hardness, and grain size of the sample.
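
    As a rough illustration of the last step (a one-dimensional heat equation with a heat source distributed uniformly over a finite depth), an explicit finite-difference sketch is given below; every numerical value is an assumed placeholder, not a parameter from the article.

      import numpy as np

      # dT/dt = alpha * d2T/dz2 + q(z)/(rho*c), with q uniform over 0 <= z <= d_src.
      # All material, source and boundary values are illustrative assumptions.
      alpha, rho_c, q0 = 4e-6, 3.5e6, 5e7       # m^2/s, J/(m^3 K), W/m^3
      L, d_src = 5e-3, 0.5e-3                   # sample depth and source depth, m
      nz, dt, t_end = 200, 5e-5, 2.0

      z = np.linspace(0.0, L, nz)
      dz = z[1] - z[0]
      assert alpha * dt / dz**2 < 0.5           # explicit-scheme stability condition
      T = np.full(nz, 300.0)                    # initial temperature, K
      q = np.where(z <= d_src, q0, 0.0)         # source confined to the treated layer

      for _ in range(int(t_end / dt)):
          lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
          T[1:-1] += dt * (alpha * lap + q[1:-1] / rho_c)
          T[0] = T[1]                           # insulated treated surface (assumption)
          T[-1] = 300.0                         # far side held at ambient (assumption)

      print("peak temperature rise [K]:", T.max() - 300.0)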

  15. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

    Full Text Available In this paper, we propose an empirical study of QFD implementation in which fuzzy numbers are used to handle the uncertainty associated with different components of the proposed model. We implement a fuzzy analytic network process to find the relative importance of various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model uses a fuzzy matrix and the house of quality to study product development in QFD and also the second phase, i.e. part deployment. In most studies, the primary focus when implementing quality function deployment is only on customer requirements (CRs), while other criteria such as production and manufacturing costs are disregarded. The results of applying the fuzzy analytic network process based QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are waterproofing, pill package resistance, and production cost. In addition, the PVDC coating is the most important index from the company experts' point of view.

  16. Green Transport Balanced Scorecard Model with Analytic Network Process Support

    Directory of Open Access Journals (Sweden)

    David Staš

    2015-11-01

    Full Text Available In recent decades, economic and non-economic activities have increasingly been required to be environmentally friendly. Transport is one of the areas with considerable potential in this respect. The main prerequisite for achieving ambitious green goals is an effective green transport evaluation system. However, such systems have been researched from the industrial company and supply chain perspective only sporadically. The aim of the paper is to design a conceptual framework for creating Green Transport (GT) Balanced Scorecard (BSC) models from the viewpoint of industrial companies and supply chains using an appropriate multi-criteria decision making method. The models should allow green transport performance evaluation and support the effective implementation of green transport strategies. Since performance measures used in Balanced Scorecard models are interdependent, the Analytic Network Process (ANP) was used as the appropriate multi-criteria decision making method. The verification of the designed conceptual framework was performed on a real supply chain in the European automotive industry.

  17. Superconnections: an interpretation of the standard model

    Directory of Open Access Journals (Sweden)

    Gert Roepstorff

    2000-07-01

    Full Text Available The mathematical framework of superbundles as pioneered by D. Quillen suggests that one consider the Higgs field as a natural constituent of a superconnection. I propose to take as superbundle the exterior algebra obtained from a Hermitian vector bundle of rank n where n=2 for the electroweak theory and n=5 for the full Standard Model. The present setup is similar to but avoids the use of non-commutative geometry.

  18. Status of the electroweak standard model

    International Nuclear Information System (INIS)

    Haidt, D.

    1990-01-01

    It is the aim of this report to confront the results extracted from the experiments in each sector with the electroweak standard model in its minimal form (QFD), to search for internal inconsistencies and, if not found, to obtain best values for the electroweak couplings together with constraints on the as yet unobserved top quark. The e{sup +}e{sup -} data of the three TRISTAN experiments, even though partly preliminary, are now systematically included in the fits. (orig./HSI)

  19. IndoorGML - a Standard for Indoor Spatial Modeling

    Science.gov (United States)

    Li, Ki-Joune

    2016-06-01

    With recent progress in mobile devices and indoor positioning technologies, it has become possible to provide location-based services in indoor space as well as outdoor space, either seamlessly across indoor and outdoor spaces or independently for indoor space only. However, we cannot simply apply spatial models developed for outdoor space to indoor space, due to their differences. For example, coordinate reference systems are employed to indicate a specific position in outdoor space, while a location in indoor space is instead specified by a cell identifier, such as a room number. Unlike outdoor space, the distance between two points in indoor space is determined not by the length of the straight line between them but by the constraints imposed by indoor components such as walls, stairs, and doors. For this reason, we need to establish a new framework for indoor space, from its fundamental theoretical basis to indoor spatial data models and information systems to store, manage, and analyse indoor spatial data. In order to provide this framework, an international standard, called IndoorGML, has been developed and published by OGC (Open Geospatial Consortium). This standard is based on a cellular notion of space, which considers an indoor space as a set of non-overlapping cells. It consists of two types of modules: a core module and extension modules. While the core module consists of four basic conceptual and implementation modelling components (a geometric model for cells, topology between cells, a semantic model of cells, and a multi-layered space model), extension modules may be defined on top of the core module to support a particular application area. As the first version of the standard, we provide an extension for indoor navigation.
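
    The cellular notion of space described above can be illustrated with a minimal navigation sketch: indoor space as a set of non-overlapping cells plus a connectivity graph between them. The cell identifiers and the tiny API below are hypothetical illustrations, not the OGC IndoorGML schema.

      from collections import deque

      # Minimal sketch of the cellular-space idea: cells with semantics plus a
      # topology (door/opening connectivity) graph used for indoor navigation.
      cells = {"R101": "office", "R102": "office", "C1": "corridor", "S1": "stairs"}
      connects = {
          "R101": ["C1"],
          "R102": ["C1"],
          "C1":   ["R101", "R102", "S1"],
          "S1":   ["C1"],
      }

      def route(start, goal):
          """Breadth-first route through the cell connectivity graph."""
          prev, queue = {start: None}, deque([start])
          while queue:
              cell = queue.popleft()
              if cell == goal:
                  path = []
                  while cell is not None:
                      path.append(cell)
                      cell = prev[cell]
                  return path[::-1]
              for nxt in connects.get(cell, []):
                  if nxt not in prev:
                      prev[nxt] = cell
                      queue.append(nxt)
          return None

      print(route("R101", "S1"))   # ['R101', 'C1', 'S1']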

  20. Beyond the standard model in many directions

    Energy Technology Data Exchange (ETDEWEB)

    Chris Quigg

    2004-04-28

    These four lectures constitute a gentle introduction to what may lie beyond the standard model of quarks and leptons interacting through SU(3){sub c} {direct_product} SU(2){sub L} {direct_product} U(1){sub Y} gauge bosons, prepared for an audience of graduate students in experimental particle physics. In the first lecture, I introduce a novel graphical representation of the particles and interactions, the double simplex, to elicit questions that motivate our interest in physics beyond the standard model, without recourse to equations and formalism. Lecture 2 is devoted to a short review of the current status of the standard model, especially the electroweak theory, which serves as the point of departure for our explorations. The third lecture is concerned with unified theories of the strong, weak, and electromagnetic interactions. In the fourth lecture, I survey some attempts to extend and complete the electroweak theory, emphasizing some of the promise and challenges of supersymmetry. A short concluding section looks forward.

  1. Standard Model backgrounds to supersymmetry searches

    CERN Document Server

    Mangano, Michelangelo L

    2009-01-01

    This work presents a review of the Standard Model sources of backgrounds to the search of supersymmetry signals. Depending on the specific model, typical signals may include jets, leptons, and missing transverse energy due to the escaping lightest supersymmetric particle. We focus on the simplest case of multijets and missing energy, since this allows us to expose most of the issues common to other more complex cases. The review is not exhaustive, and is aimed at collecting a series of general comments and observations, to serve as guideline for the process that will lead to a complete experimental determination of size and features of such SM processes.

  2. Analytical model for local scour prediction around hydrokinetic turbine foundations

    Science.gov (United States)

    Musa, M.; Heisel, M.; Hill, C.; Guala, M.

    2017-12-01

    Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents, mostly from tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi

  3. Simple analytical model of evapotranspiration in the presence of roots.

    Science.gov (United States)

    Cejas, Cesare M; Hough, L A; Castaing, Jean-Christophe; Frétigny, Christian; Dreyfus, Rémi

    2014-10-01

    Evaporation of water out of a soil involves complicated and well-debated mechanisms. When plant roots are added into the soil, water transfer between the soil and the outside environment is even more complicated. Indeed, plants provide an additional process of water transfer. Water is pumped by the roots, channeled to the leaf surface, and released into the surrounding air by a process called transpiration. Prediction of the evapotranspiration of water over time in the presence of roots helps keep track of the amount of water that remains in the soil. Using a controlled visual setup of a two-dimensional model soil consisting of monodisperse glass beads, we perform experiments on actual roots grown under different relative humidity conditions. We record the total water mass loss in the medium and the position of the evaporating front that forms within the medium. We then develop a simple analytical model that predicts the position of the evaporating front as a function of time as well as the total amount of water that is lost from the medium due to the combined effects of evaporation and transpiration. The model is based on fundamental principles of evaporation fluxes and includes empirical assumptions on the quantity of open stomata in the leaves, where water transpiration occurs. Comparison between the model and experimental results shows excellent prediction of the position of the evaporating front as well as the total mass loss from evapotranspiration in the presence of roots. The model also provides a way to predict the lifetime of a plant.

  4. Analytical Evaluation of Preliminary Drop Tests Performed to Develop a Robust Design for the Standardized DOE Spent Nuclear Fuel Canister

    International Nuclear Information System (INIS)

    Ware, A.G.; Morton, D.K.; Smith, N.L.; Snow, S.D.; Rahl, T.E.

    1999-01-01

    The Department of Energy (DOE) has developed a design concept for a set of standard canisters for the handling, interim storage, transportation, and disposal in the national repository of DOE spent nuclear fuel (SNF). The standardized DOE SNF canister has to be capable of handling virtually all of the DOE SNF in a variety of potential storage and transportation systems. It must also be acceptable to the repository, based on current and anticipated future requirements. This expected usage mandates a robust design. The canister design has four unique geometries, with lengths of approximately 10 feet or 15 feet, and an outside nominal diameter of 18 inches or 24 inches. The canister has been developed to withstand a drop from 30 feet onto a rigid (flat) surface, sustaining only minor damage - but no rupture - to the pressure (containment) boundary. The majority of the end drop-induced damage is confined to the skirt and lifting/stiffening ring components, which can be removed if desired after an accidental drop. A canister, with its skirt and stiffening ring removed after an accidental drop, can continue to be used in service with appropriate operational steps being taken. Features of the design concept have been proven through drop testing and finite element analyses of smaller test specimens. Finite element analyses also validated the canister design for drops onto a rigid (flat) surface for a variety of canister orientations at impact, from vertical to 45 degrees off vertical. Actual 30-foot drop testing has also been performed to verify the final design, though limited to just two full-scale test canister drops. In each case, the analytical models accurately predicted the canister response

  5. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since that relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, {Omega}{sub b}, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that {Omega}{sub b} {approximately} 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming {Omega}{sub total} = 1) and the need for dark baryonic matter, since {Omega}{sub visible} < {Omega}{sub b}. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M{sub x} {approx gt} 20 GeV and an interaction weaker than the Z{sup 0} coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for {nu}-masses may imply that the {nu}{sub {tau}} is a good hot dark matter candidate. 73 refs., 5 figs.

  6. Analytical model of neutral gas shielding for hydrogen pellet ablation

    Energy Technology Data Exchange (ETDEWEB)

    Kuteev, Boris V.; Tsendin, Lev D. [State Technical Univ., St. Petersburg (Russian Federation)]

    2001-11-01

    A kinetic gasdynamic scaling for hydrogen pellet ablation is obtained in terms of a neutral gas shielding model using both numerical and analytical approaches. The scaling on plasma and pellet parameters proposed in the monoenergy approximation by Milora and Foster, dR{sub p}/dt {approx} S{sub n}{sup 2/3}R{sub p}{sup -2/3}q{sub eo}{sup 1/3}m{sub i}{sup -1/3}, is confirmed. Here R{sub p} is the pellet radius, S{sub n} is the optical thickness of a cloud, q{sub eo} is the electron energy flux density and m{sub i} is the molecular mass. Only the numerical factor is approximately two times smaller than that for the monoenergy approach. Due to this effect, the pellet ablation rates, which were obtained by Kuteev on the basis of the Milora scaling, should be reduced by a factor of 1.7. Such a modification provides a reasonable agreement (even at high plasma parameters) between the two-dimensional kinetic model and the one-dimensional monoenergy approximation validated in contemporary tokamak experiments. As the cloud (in the kinetic approximation) is significantly thicker than in the monoenergy case and the gas flow velocities are much slower, the relative effect of plasma and magnetic shielding on the ablation rate is strongly reduced. (author)

  7. INCAS: an analytical model to describe displacement cascades

    International Nuclear Information System (INIS)

    Jumel, Stephanie; Claude Van-Duysen, Jean

    2004-01-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used for diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables determination of the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory

  8. Analytical model for an electrostatically actuated miniature diaphragm compressor

    International Nuclear Information System (INIS)

    Sathe, Abhijit A; Groll, Eckhard A; Garimella, Suresh V

    2008-01-01

    This paper presents a new analytical approach for quasi-static modeling of an electrostatically actuated diaphragm compressor that could be employed in a miniature scale refrigeration system. The compressor consists of a flexible circular diaphragm clamped at its circumference. A conformal chamber encloses the diaphragm completely. The membrane and the chamber surfaces are coated with metallic electrodes. A potential difference applied between the diaphragm and the chamber pulls the diaphragm toward the chamber surface progressively from the outer circumference toward the center. This zipping actuation reduces the volume available to the refrigerant gas, thereby increasing its pressure. A segmentation technique is proposed for analysis of the compressor by which the domain is divided into multiple segments for each of which the forces acting on the diaphragm are estimated. The pull-down voltage to completely zip each individual segment is thus obtained. The required voltage for obtaining a specific pressure rise in the chamber can thus be determined. Predictions from the model compare well with other simulation results from the literature, as well as to experimental measurements of the diaphragm displacement and chamber pressure rise in a custom-built setup

  9. New analytical solution for Pyle-Popovich's peritoneal dialysis model

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Hiroyuki; Sakiyama, Ryoichi; Okamoto, Masahiro; Tojo, Kakuji [Kyushi Institute of Technology, Fukuoka (Japan); Yamashita, Akihiro [Shonan Institute of Technology, Kanagwa (Japan)

    1999-08-01

    Continuous Ambulatory Peritoneal Dialysis (CAPD) is one of the standard treatments for kidney disease patients. In CAPD, a washing solution, called dialysate, is put into the peritoneal cavity to remove waste products and excess amounts of water. The dialysate is exchanged four to five times a day by the patient. However, it is not easy to prescribe CAPD therapy, which may have hindered the popularization of CAPD therapy. Popovich et al. constructed a mathematical model (P-P model) that applies to the prescription of the treatment schedule. It requires, however, a number of iterative calculations to obtain an exact numerical solution because the model is a set of nonlinear simultaneous ordinary differential equations. In this paper, the authors derived a new approximate analytical solution for the P-P model by employing a time-discrete technique, assuming all the parameters to be constant within each piecewise period of time. We have also described an algorithm for numerical calculation with the new solution for clinical use, together with another analytical solution (Vonesh's solution). The new analytical solution consists of a forward solution (FW solution), that is, the solution for the plasma and dialysate concentrations from t{sub i} to t{sub i+1} (t{sub i} < t{sub i+1}). Results obtained with the new analytical solution show an excellent agreement with the exact numerical solution for the entire dwelling time. Moreover, optimized parameters with the new analytical solution show much smaller discrepancy than those with Vonesh's solution. Although the proposed method requires a slightly longer calculation time than Vonesh's, it can simulate concentrations in

  10. Orthogonal analytical methods for botanical standardization: Determination of green tea catechins by qNMR and LC-MS/MS

    OpenAIRE

    Napolitano, José G.; Gödecke, Tanja; Lankin, David C.; Jaki, Birgit U.; McAlpine, James B.; Chen, Shao-Nong; Pauli, Guido F.

    2013-01-01

    The development of analytical methods for parallel characterization of multiple phytoconstituents is essential to advance the quality control of herbal products. While chemical standardization is commonly carried out by targeted analysis using gas or liquid chromatography-based methods, more universal approaches based on quantitative 1H NMR (qHNMR) measurements are being used increasingly in the multi-targeted assessment of these complex mixtures. The present study describes the development o...

  11. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    Science.gov (United States)

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
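
    Parameter uncertainty of the kind discussed in the guide is commonly propagated by Monte Carlo simulation (probabilistic sensitivity analysis). The sketch below uses a hypothetical two-strategy model with assumed distributions purely for illustration; it is not taken from the guide itself.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 10_000   # Monte Carlo draws over the parameter distributions (illustrative)

      # Hypothetical two-strategy decision model; all distributions are assumptions.
      p_event_std  = rng.beta(20, 80, n)               # event risk under standard care
      rel_risk_new = rng.lognormal(np.log(0.8), 0.1, n)
      cost_event   = rng.gamma(4.0, 2500.0, n)         # cost per event
      cost_new_tx  = 1200.0                            # added cost of new treatment
      qaly_loss    = rng.beta(2, 8, n)                 # QALYs lost per event

      p_event_new = p_event_std * rel_risk_new
      d_cost = cost_new_tx + (p_event_new - p_event_std) * cost_event
      d_qaly = (p_event_std - p_event_new) * qaly_loss

      wtp = 50_000.0                                   # willingness to pay per QALY (assumed)
      net_benefit = wtp * d_qaly - d_cost
      print("P(new strategy cost-effective):", (net_benefit > 0).mean())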

  12. Study on bird's & insect's wing aerodynamics and comparison of its analytical value with standard airfoil

    Science.gov (United States)

    Ali, Md. Nesar; Alam, Mahbubul; Hossain, Md. Abed; Ahmed, Md. Imteaz

    2017-06-01

    by several species of birds. Hovering, which generates only lift through flapping alone rather than as a product of thrust, demands a lot of energy. On the other hand, for practical knowledge, we also fabricate various bird, insect and fighter jet wings using random parameter values and test those airfoils in a wind tunnel. Finally, for comparison and analytical insight, we also test those airfoil models in various simulation software packages.

  13. Skewness of the standard model possible implications

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1989-09-01

    In this paper we consider combinations of a gauge algebra and a set of rules for quantization of gauge charges. We show that the combination of the algebra of the standard model and the rule satisfied by the electric charges of the quarks and leptons has an exceptionally high degree of a kind of asymmetry which we call skewness. Assuming that skewness has physical significance and adding two other rather plausible assumptions, we may conclude that space-time must have a non-simply-connected topology at very small distances. Such a topology would allow a kind of symmetry breakdown leading to a more skew combination of gauge algebra and set of quantization rules. (orig.)

  14. Non standard analysis, polymer models, quantum fields

    International Nuclear Information System (INIS)

    Albeverio, S.

    1984-01-01

    We give an elementary introduction to non-standard analysis and its applications to the theory of stochastic processes. This is based on a joint book with J.E. Fenstad, R. Hoeegh-Krohn and T. Lindstroeem. In particular we give a discussion of a hyperfinite theory of Dirichlet forms with applications to the study of the Hamiltonian for a quantum mechanical particle in the potential created by a polymer. We also discuss new results on the existence of attractive polymer measures in dimension d as well as on the ({phi}{sup 2}){sup 2}{sub d}-model of interacting quantum fields. (orig.)

  15. Search for the standard model Higgs boson

    Science.gov (United States)

    Buskulic, D.; de Bonis, I.; Decamp, D.; Ghez, P.; Goy, C.; Lees, J.-P.; Minard, M.-N.; Pietrzyk, B.; Ariztizabal, F.; Comas, P.; Crespo, J. M.; Delfino, M.; Efthymiopoulos, I.; Fernandez, E.; Fernandez-Bosman, M.; Gaitan, V.; Garrido, Ll.; Mattison, T.; Pacheco, A.; Padilla, C.; Pascual, A.; Creanza, D.; de Palma, M.; Farilla, A.; Iaselli, G.; Maggi, G.; Natali, S.; Nuzzo, S.; Quattromini, M.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Chai, Y.; Hu, H.; Huang, D.; Huang, X.; Lin, J.; Wang, T.; Xie, Y.; Xu, D.; Xu, R.; Zhang, J.; Zhang, L.; Zhao, W.; Blucher, E.; Bonvicini, G.; Boudreau, J.; Casper, D.; Drevermann, H.; Forty, R. W.; Ganis, G.; Gay, C.; Hagelberg, R.; Harvey, J.; Hilgart, J.; Jacobsen, R.; Jost, B.; Knobloch, J.; Lehraus, I.; Lohse, T.; Maggi, M.; Markou, C.; Martinez, M.; Mato, P.; Meinhard, H.; Minten, A.; Miotto, A.; Miguel, R.; Moser, H.-G.; Palazzi, P.; Pater, J. R.; Perlas, J. A.; Pusztaszeri, J.-F.; Ranjard, F.; Redlinger, G.; Rolandi, L.; Rothberg, J.; Ruan, T.; Saich, M.; Schlatter, D.; Schmelling, M.; Sefkow, F.; Tejessy, W.; Tomalin, I. R.; Veenhof, R.; Wachsmuth, H.; Wasserbaech, S.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Badaud, F.; Bardadin-Otwinowska, M.; El Fellous, R.; Falvard, A.; Gay, P.; Guicheney, C.; Henrard, P.; Jousset, J.; Michel, B.; Montret, J.-C.; Pallin, D.; Perret, P.; Podlyski, F.; Proriol, J.; Prulhière, F.; Saadi, F.; Fearnley, T.; Hansen, J. B.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Møllerud, R.; Nilsson, B. S.; Kyriakis, A.; Simopoulou, E.; Siotis, I.; Vayaki, A.; Zachariadou, K.; Badier, J.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Fouque, G.; Orteu, S.; Rougé, A.; Rumpf, M.; Tanaka, R.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. I.; Veitch, E.; Focardi, E.; Moneta, L.; Parrini, G.; Corden, M.; Georgiopoulos, C.; Ikeda, M.; Levinthal, D.; Antonelli, A.; Baldini, R.; Bencivenni, G.; Bologna, G.; Bossi, F.; Campana, P.; Capon, G.; Cerutti, F.; Chiarella, V.; D'Ettorre-Piazzoli, B.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Picchi, P.; Colrain, P.; Ten Have, I.; Lynch, J. G.; Maitland, W.; Morton, W. T.; Raine, C.; Reeves, P.; Scarr, J. M.; Smith, K.; Thompson, A. S.; Turnbull, R. M.; Brandl, B.; Braun, O.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Maumary, Y.; Putzer, A.; Rensch, B.; Stahl, A.; Tittel, K.; Wunsch, M.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Cattaneo, M.; Colling, D. J.; Dornan, P. J.; Greene, A. M.; Hassard, J. F.; Lieske, N. M.; Moutoussi, A.; Nash, J.; Patton, S.; Payne, D. G.; Phillips, M. J.; San Martin, G.; Sedgbeer, J. K.; Wright, A. G.; Girtler, P.; Kuhn, D.; Rudolph, G.; Vogl, R.; Bowdery, C. K.; Brodbeck, T. J.; Finch, A. J.; Foster, F.; Hughes, G.; Jackson, D.; Keemer, N. R.; Nuttall, M.; Patel, A.; Sloan, T.; Snow, S. W.; Whelan, E. P.; Kleinknecht, K.; Raab, J.; Renk, B.; Sander, H.-G.; Schmidt, H.; Steeg, F.; Walther, S. M.; Wanke, R.; Wolf, B.; Bencheikh, A. M.; Benchouk, C.; Bonissent, A.; Carr, J.; Coyle, P.; Drinkard, J.; Etienne, F.; Nicod, D.; Papalexiou, S.; Payre, P.; Roos, L.; Rousseau, D.; Schwemling, P.; Talby, M.; Adlung, S.; Assmann, R.; Bauer, C.; Blum, W.; Brown, D.; Cattaneo, P.; Dehning, B.; Dietl, H.; Dydak, F.; Frank, M.; Halley, A. W.; Jakobs, K.; Lauber, J.; Lütjens, G.; Lutz, G.; Männer, W.; Richter, R.; Schröder, J.; Schwarz, A. S.; Settles, R.; Seywerd, H.; Stierlin, U.; Stiegler, U.; Dennis, R. 
St.; Wolf, G.; Alemany, R.; Boucrot, J.; Callot, O.; Cordier, A.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jaffe, D. E.; Janot, P.; Kim, D. W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Schune, M.-H.; Veillet, J.-J.; Videau, I.; Zhang, Z.; Abbaneo, D.; Bagliesi, G.; Batignani, G.; Bottigli, U.; Bozzi, C.; Calderini, G.; Carpinelli, M.; Ciocci, M. A.; Dell'Orso, R.; Ferrante, I.; Fidecaro, F.; Foà, L.; Forti, F.; Giassi, A.; Giorgi, M. A.; Gregorio, A.; Ligabue, F.; Lusiani, A.; Manneli, E. B.; Marrocchesi, P. S.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Spagnolo, P.; Steinberger, J.; Techini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Venturi, A.; Verdini, P. G.; Walsh, J.; Betteridge, A. P.; Gao, Y.; Green, M. G.; March, P. V.; Mir, Ll. M.; Medcalf, T.; Quazi, I. S.; Strong, J. A.; West, L. R.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Haywood, S.; Norton, P. R.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Emery, S.; Kozanecki, W.; Lançon, E.; Lemaire, M. C.; Locci, E.; Marx, B.; Perez, P.; Rander, J.; Renardy, J.-F.; Rosowsky, A.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Vallage, B.; Johnson, R. P.; Litke, A. M.; Taylor, G.; Wear, J.; Ashman, J. G.; Babbage, W.; Booth, C. N.; Buttar, C.; Cartwright, S.; Combley, F.; Dawson, I.; Thompson, L. F.; Barberio, E.; Böhrer, A.; Brandt, S.; Cowan, G.; Grupen, C.; Lutters, G.; Rivera, F.; Schäfer, U.; Smolik, L.; Bosisio, L.; Della Marina, R.; Giannini, G.; Gobbo, B.; Ragusa, F.; Bellantoni, L.; Chen, W.; Conway, J. S.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; Grahl, J.; Harton, J. L.; Hayes, O. J.; Nachtman, J. M.; Pan, Y. B.; Saadi, Y.; Schmitt, M.; Scott, I.; Sharma, V.; Shi, Z. H.; Turk, J. D.; Walsh, A. M.; Weber, F. V.; Sau Lan Wu; Wu, X.; Zheng, M.; Zobernig, G.; Aleph Collaboration

    1993-08-01

    Using a data sample corresponding to about 1 233 000 hadronic Z decays collected by the ALEPH experiment at LEP, the reaction e+e- → HZ∗ has been used to search for the standard model Higgs boson, in association with missing energy when Z∗ → νν̄, or with a pair of energetic leptons when Z∗ → e+e- or μ+μ-. No signal was found and, at the 95% confidence level, mH exceeds 58.4 GeV/c2.

  16. Small-scale engagement model with arrivals: analytical solutions

    International Nuclear Information System (INIS)

    Engi, D.

    1977-04-01

    This report presents an analytical model of small-scale battles. The specific impetus for this effort was provided by a need to characterize hypothetical battles between guards at a nuclear facility and their potential adversaries. The solution procedure can be used to find measures of a number of critical parameters; for example, the win probabilities and the expected duration of the battle. Numerical solutions are obtainable if the total number of individual combatants on the opposing sides is less than 10. For smaller force size battles, with one or two combatants on each side, symbolic solutions can be found. The symbolic solutions express the output parameters abstractly in terms of symbolic representations of the input parameters while the numerical solutions are expressed as numerical values. The input parameters are derived from the probability distributions of the attrition and arrival processes. The solution procedure reduces to solving sets of linear equations that have been constructed from the input parameters. The approach presented in this report does not address the problems associated with measuring the inputs. Rather, this report attempts to establish a relatively simple structure within which small-scale battles can be studied
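
    For the simplest cases, the first-step (linear) equations mentioned above can be written down directly. The sketch below treats a guards-versus-adversaries battle as a continuous-time Markov chain with aggregated exponential attrition and no arrivals; the rates and force sizes are illustrative assumptions, not values from the report.

      from functools import lru_cache

      # Stochastic-duel sketch: g guards vs d adversaries, each guard scoring
      # kills at rate a and each adversary at rate b (exponential times,
      # aggregated fire). Win probability and expected duration follow from
      # first-step analysis of the Markov chain.
      def battle(a, b):
          @lru_cache(maxsize=None)
          def win(g, d):                 # P(guards win | g guards, d adversaries)
              if d == 0: return 1.0
              if g == 0: return 0.0
              lam = g * a + d * b
              return (g * a * win(g, d - 1) + d * b * win(g - 1, d)) / lam

          @lru_cache(maxsize=None)
          def duration(g, d):            # expected duration of the battle
              if g == 0 or d == 0: return 0.0
              lam = g * a + d * b
              return 1.0 / lam + (g * a * duration(g, d - 1)
                                  + d * b * duration(g - 1, d)) / lam

          return win, duration

      win, duration = battle(a=0.9, b=0.6)   # kills per minute (assumed rates)
      print(win(3, 3), duration(3, 3))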

  17. Analytical and numerical models of transport in porous cementitious materials

    International Nuclear Information System (INIS)

    Garboczi, E.J.; Bentz, D.P.

    1990-01-01

    Most chemical and physical processes that degrade cementitious materials are dependent on an external source of either water or ions or both. Understanding the rates of these processes at the microstructural level is necessary in order to develop a sound scientific basis for the prediction and control of the service life of cement-based materials, especially for radioactive-waste containment materials that are required to have service lives on the order of hundreds of years. An important step in developing this knowledge is to understand how transport coefficients, such as diffusivity and permeability, depend on the pore structure. Fluid flow under applied pressure gradients and ionic diffusion under applied concentration gradients are important transport mechanisms that take place in the pore space of cementitious materials. This paper describes: (1) a new analytical percolation-theory-based equation for calculating the permeability of porous materials, (2) new computational methods for computing effective diffusivities of microstructural models or digitized images of actual porous materials, and (3) a new digitized-image mercury intrusion simulation technique

  18. Using visual analytics model for pattern matching in surveillance data

    Science.gov (United States)

    Habibi, Mohammad S.

    2013-03-01

    In a persistent surveillance system, a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper, a method to summarize video data by identifying events based on this tagged information is explained, leading to a concise description of behavior within a section of extended recordings. An efficient retrieval of various events thus becomes the foundation for determining patterns in surveillance system observations, both in their extended and fragmented versions. The patterns, consisting of spatiotemporal semantic content, are extracted and classified by applying video data mining to the generated ontology, and can be matched based on analysts' interest and rules set forth for decision making. The proposed extraction and classification method uses query by example for retrieving similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this visual analytics model employs a KD-Tree approach to group patterns across space and time, making it convenient to identify and match any abnormal burst of patterns detected in a surveillance video. Several experimental videos were presented to viewers to analyze independently, and their analyses were compared with the results obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
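
    The KD-Tree grouping idea lends itself to a very small query-by-example sketch: each tagged event is reduced to a feature vector and its nearest neighbours in that feature space are returned as candidate matches. The feature choice and time scaling below are illustrative assumptions, not the paper's representation.

      import numpy as np
      from scipy.spatial import cKDTree

      # Query-by-example over tagged surveillance events, each reduced to a
      # simple (x, y, scaled-time) feature vector. Features and scaling are
      # illustrative choices only.
      rng = np.random.default_rng(0)
      events = rng.uniform([0, 0, 0], [100, 100, 3600], size=(500, 3))  # x, y, t [s]

      time_scale = 0.05                        # space/time trade-off (assumed)
      scaling = np.array([1.0, 1.0, time_scale])
      tree = cKDTree(events * scaling)

      query_event = np.array([42.0, 17.0, 1200.0])
      dist, idx = tree.query(query_event * scaling, k=5)
      print("5 most similar events:\n", events[idx])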

  19. An analytical study of various telecommunication networks using Markov models

    Science.gov (United States)

    Ramakrishnan, M.; Jayamani, E.; Ezhumalai, P.

    2015-04-01

    The main aim of this paper is to examine issues relating to the performance of various telecommunication networks, applying queueing theory for better design and improved efficiency. First, an analytical study of queues quantifies the phenomenon of waiting lines using representative performance measures, such as the average queue length (the average number of customers in the queue), the average waiting time in the queue, and the average facility utilization (the proportion of time the service facility is in use). Second, using a Matlab simulator, the paper summarizes the findings of the investigations and describes the methodology used to (a) compare the waiting time and average number of messages in the queue for M/M/1 and M/M/2 queues, and (b) compare the performance of M/M/1 and M/D/1 queues and study the effect of increasing the number of servers on the blocking probability in the M/M/k/k queue model.
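
    The performance measures named above have standard closed forms, which makes such comparisons easy to reproduce. The sketch below gives textbook formulas for M/M/1, M/M/c and the Erlang-B blocking probability of the M/M/k/k system; it is not the authors' Matlab code.

      import math

      # Textbook closed forms; lam = arrival rate, mu = service rate per server.
      def mm1(lam, mu):
          rho = lam / mu                       # utilisation, must be < 1
          L = rho / (1 - rho)                  # mean number in system
          W = 1 / (mu - lam)                   # mean time in system
          return L, W

      def mmc(lam, mu, c):
          a, rho = lam / mu, lam / (c * mu)
          p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                      + a**c / (math.factorial(c) * (1 - rho)))
          lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho)**2)  # mean queue length
          wq = lq / lam                                              # mean wait in queue
          return lq, wq

      def erlang_b(a, k):
          """Blocking probability of M/M/k/k with offered load a = lam/mu (Erlangs)."""
          b = 1.0
          for n in range(1, k + 1):
              b = a * b / (n + a * b)          # numerically stable recursion
          return b

      print(mm1(0.8, 1.0))        # single server, rate 1.0 ...
      print(mmc(0.8, 0.5, 2))     # ... vs two servers of rate 0.5 each
      print(erlang_b(5.0, 8))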

  20. Analytical models for total dose ionization effects in MOS devices.

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Phillip Montgomery; Bogdan, Carolyn W.

    2008-08-01

    MOS devices are susceptible to damage by ionizing radiation due to charge buildup in gate, field and SOI buried oxides. Under positive bias, holes created in the gate oxide will transport to the Si/SiO{sub 2} interface, creating oxide-trapped charge. As a result of hole transport and trapping, hydrogen is liberated in the oxide, which can create interface-trapped charge. The trapped charge will affect the threshold voltage and degrade the channel mobility. Neutralization of oxide-trapped charge by electron tunneling from the silicon and by thermal emission can take place over long periods of time. Neutralization of interface-trapped charge is not observed at room temperature. Analytical models are developed that account for the principal effects of total dose in MOS devices under different gate bias. The intent is to obtain closed-form solutions that can be used in circuit simulation. Expressions are derived for the aging effects of very low dose rate radiation over long time periods.

  1. Model independent determination of colloidal silica size distributions via analytical ultracentrifugation

    NARCIS (Netherlands)

    Planken, K.L.|info:eu-repo/dai/nl/304841099; Kuipers, B.W.M.|info:eu-repo/dai/nl/304841110; Philipse, A.P.|info:eu-repo/dai/nl/073532894

    2008-01-01

    We report a method to determine the particle size distribution of small colloidal silica spheres via analytical ultracentrifugation and show that the average particle size, variance, standard deviation, and relative polydispersity can be obtained from a single sedimentation velocity (SV) analytical

  2. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    International Nuclear Information System (INIS)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables, applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs
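
    The PSO step itself is compact enough to sketch generically. The code below minimizes a toy negative log-likelihood rather than running the SAG model, and uses standard inertia/acceleration coefficients; all of it is illustrative, not the calibration pipeline of the record.

      import numpy as np

      rng = np.random.default_rng(42)

      def neg_log_like(theta):                     # toy objective (assumed)
          return np.sum((theta - np.array([1.0, -2.0, 0.5]))**2, axis=-1)

      n_particles, n_dim, n_iter = 30, 3, 200
      w, c1, c2 = 0.72, 1.49, 1.49                 # common PSO coefficients

      pos = rng.uniform(-5, 5, (n_particles, n_dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), neg_log_like(pos)
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, n_dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos += vel
          val = neg_log_like(pos)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], val[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best-fitting parameters:", gbest)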

  3. Standard guide for establishing a quality assurance program for analytical chemistry laboratories within the nuclear industry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 This guide covers the establishment of a quality assurance (QA) program for analytical chemistry laboratories within the nuclear industry. Reference to key elements of ANSI/ISO/ASQC Q9001, Quality Systems, provides guidance to the functional aspects of analytical laboratory operation. When implemented as recommended, the practices presented in this guide will provide a comprehensive QA program for the laboratory. The practices are grouped by functions, which constitute the basic elements of a laboratory QA program. 1.2 The essential, basic elements of a laboratory QA program appear in the following order: Organization (Section 5); Quality Assurance Program (Section 6); Training and Qualification (Section 7); Procedures (Section 8); Laboratory Records (Section 9); Control of Records (Section 10); Control of Procurement (Section 11); Control of Measuring Equipment and Materials (Section 12); Control of Measurements (Section 13); Deficiencies and Corrective Actions (Section 14).

  4. B physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Hewett, J.A.L.

    1997-12-01

    The ability of present and future experiments to test the Standard Model in the B meson sector is described. The authors examine the loop effects of new interactions in flavor changing neutral current B decays and in Z → b anti b, concentrating on supersymmetry and the left-right symmetric model as specific examples of new physics scenarios. The procedure for performing a global fit to the Wilson coefficients which describe b → s transitions is outlined, and the results of such a fit from Monte Carlo generated data is compared to the predictions of the two sample new physics scenarios. A fit to the Zb anti b couplings from present data is also given

  5. Complex singlet extension of the standard model

    International Nuclear Information System (INIS)

    Barger, V.; Langacker, P.; McCaskey, M.; Ramsey-Musolf, M.; Shaughnessy, G.

    2009-01-01

    We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider

  6. IBM SPSS modeler essentials effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  7. Ibm spss modeler essentials effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  8. Determining passive cooling limits in CPV using an analytical thermal model

    Science.gov (United States)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.

  9. Analytical model of stemwood growth in relation to nitrogen supply.

    Science.gov (United States)

    Dewar, Roderick C.; McMurtrie, Ross E.

    1996-01-01

    We derived a simplified version of a previously published process-based model of forest productivity and used it to gain information about the dependence of stemwood growth on nitrogen supply. The simplifications we made led to the following general expression for stemwood carbon (c_w) as a function of stand age (t), which shows explicitly the main factors involved: c_w(t) = (η_w G*/μ_w)[1 − (λ e^(−μ_w t) − μ_w e^(−λ t))/(λ − μ_w)], where η_w is the fraction of total carbon production (G) allocated to stemwood, G* is the equilibrium value of G at canopy closure, λ describes the rate at which G approaches G*, and μ_w is the combined specific rate of stemwood maintenance respiration and senescence. According to this equation, which describes a sigmoidal growth curve, c_w is zero initially and asymptotically approaches η_w G*/μ_w, with the rate of approach dependent on λ and μ_w. We used this result to derive corresponding expressions for the maximum mean annual stemwood volume increment (Y) and optimal rotation length (T). By calculating the quantities G* and λ (which characterize the variation of carbon production with stand age) as functions of the supply rate of plant-available nitrogen (U_o), we estimated the responses of Y and T to changes in U_o. For a plausible set of parameter values, as U_o increased from 50 to 150 kg N ha⁻¹ year⁻¹, Y increased approximately linearly from 8 to 25 m³ ha⁻¹ year⁻¹ (mainly as a result of increasing G*), whereas T decreased from 21 to 18 years (due to increasing λ). The sensitivity of Y and T to other model parameters was also investigated. The analytical model provides a useful basis for examining the effects of changes in climate and nutrient supply on sustainable forest productivity, and may also help in interpreting the behavior of more complex process-based models of forest growth.
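    As a quick numerical illustration of the sigmoidal expression above, the short script below evaluates c_w(t) and locates the rotation length that maximizes the mean annual increment c_w(t)/t. The parameter values are placeholders chosen only to produce a curve of plausible shape; they are not the calibrated values used in the study.

```python
import numpy as np

def stemwood_carbon(t, eta_w, G_star, lam, mu_w):
    """c_w(t) = (eta_w*G*/mu_w) * [1 - (lam*e^(-mu_w t) - mu_w*e^(-lam t)) / (lam - mu_w)]."""
    return (eta_w * G_star / mu_w) * (
        1.0 - (lam * np.exp(-mu_w * t) - mu_w * np.exp(-lam * t)) / (lam - mu_w)
    )

# Placeholder parameters (illustrative only): allocation fraction, equilibrium
# production, approach rate to G*, and stemwood respiration/senescence rate.
eta_w, G_star, lam, mu_w = 0.4, 20.0, 0.3, 0.05

t = np.linspace(0.1, 60.0, 600)
cw = stemwood_carbon(t, eta_w, G_star, lam, mu_w)
mai = cw / t                      # mean annual increment of stemwood carbon
T_opt = t[np.argmax(mai)]         # rotation length that maximizes the mean increment
print(f"asymptote = {eta_w * G_star / mu_w:.1f}, optimal rotation ≈ {T_opt:.1f} yr")
```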

  10. A Meta-Analytic Assessment of Empirical Differences in Standard Setting Procedures.

    Science.gov (United States)

    Bontempo, Brian D.; Marks, Casimer M.; Karabatsos, George

    Using meta-analysis, this research takes a look at studies included in a meta-analysis by R. Jaeger (1989) that compared the cut score set by one standard setting method with that set by another. This meta-analysis looks beyond Jaeger's studies to select 10 from the research literature. Each compared at least two types of standard setting method.…

  11. Background and derivation of ANS-5.4 standard fission product release model. Technical report

    International Nuclear Information System (INIS)

    1982-01-01

    ANS Working Group 5.4 was established in 1974 to examine fission product releases from UO2 fuel. The scope of ANS-5.4 was narrowly defined to include the following: (1) Review available experimental data on release of volatile fission products from UO2 and mixed-oxide fuel; (2) Survey existing analytical models currently being applied to light-water reactors; and (3) Develop a standard analytical model for volatile fission product release to the fuel rod void space, placing emphasis on obtaining a model for radioactive fission product releases to be used in assessing radiological consequences of postulated accidents.

  12. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    Science.gov (United States)

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used, or to the migration of phthalates from packaging when plastic (polyvinyl chloride, PVC) is used. Eight phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization (EI) mode. The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution > 1.5), a common quantification limit of 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after a dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices. Copyright © 2012

  13. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 h in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
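    The scheduling idea described in this record (build a task dependency graph, order it topologically, run independent tasks in parallel) can be sketched in a few lines. The snippet below uses Python threads and an invented four-task pipeline rather than Map-Reduce on EHR data, so it only illustrates the scheduling pattern, not the PARAMO implementation itself.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical pipeline: cohort -> features -> cross-validation split -> two models.
dependencies = {
    "features": {"cohort"},
    "cv_split": {"features"},
    "model_A": {"cv_split"},
    "model_B": {"cv_split"},
}

def run_task(name):
    print(f"running {name}")            # stand-in for real work on the data
    return name

ts = TopologicalSorter(dependencies)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())     # tasks whose dependencies are all finished
        list(pool.map(run_task, ready))  # run this "wave" of tasks in parallel
        for task in ready:
            ts.done(task)                # mark finished, unlocking successor tasks
```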

  14. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which are often inefficient and consume high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
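    A minimal genetic-algorithm scheduler in the spirit of this record might encode a schedule as a permutation of jobs, assign jobs greedily to machines, and evolve a population by selection, crossover and mutation. The sketch below uses a made-up runtime table and a simple makespan fitness instead of the paper's Hadoop performance-estimation module, so it only illustrates the encoding and the evolutionary loop.

```python
import random

runtimes = [5, 3, 8, 2, 7, 4, 6]          # hypothetical per-job runtimes
n_machines = 2

def makespan(order):
    """Greedy list scheduling: each job in `order` goes to the least-loaded machine."""
    loads = [0] * n_machines
    for job in order:
        loads[loads.index(min(loads))] += runtimes[job]
    return max(loads)

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [gene for gene in b if gene not in a[i:j]]
    for idx in range(len(a)):
        if child[idx] is None:
            child[idx] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    if random.random() < rate:             # occasionally swap two jobs
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

random.seed(1)
pop = [random.sample(range(len(runtimes)), len(runtimes)) for _ in range(20)]
for _ in range(50):                         # generations
    pop.sort(key=makespan)                  # lower makespan = fitter schedule
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children
print(pop[0], makespan(pop[0]))
```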

  15. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on the subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are equivalent to fan-shaped saturation regions. To obtain standard boundary conditions, a lumped parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression of each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is calculated by an analytical method for the first time. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression of each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stages.

  16. THE ELUDATION OF A CRITIC ON THE CORE-PERIPHERY MODEL - THE ANALYTIC SOLUTION –

    Directory of Open Access Journals (Sweden)

    Viorica PUSCACIU

    2005-01-01

    Full Text Available One of the major critiques of Paul Krugman's standard core-periphery model (1991), which forms the basis of the so-called 'New Economic Geography', is the impossibility of its analytic solution, which is why difficult numerical simulations are required. Obtaining a function and an analytic solution enables a better description of the agglomeration process and, at the same time, allows the authors to present, as a novelty, graphics on the logarithmic scale, which complement the traditional ones. The adaptation and presentation of the model with the help of software such as the Maple computer program also permits a complete understanding of the solvable analytic model.

  17. Advanced analytical modeling of double-gate Tunnel-FETs - A performance evaluation

    Science.gov (United States)

    Graef, Michael; Hosenfeld, Fabian; Horst, Fabian; Farokhnejad, Atieh; Hain, Franziska; Iñíguez, Benjamín; Kloes, Alexander

    2018-03-01

    The Tunnel-FET is one of the most promising devices to be the successor of the standard MOSFET due to its alternative current transport mechanism, which allows a smaller subthreshold slope than the physically limited 60 mV/dec of the MOSFET. Recently fabricated devices show smaller slopes already but mostly not over multiple decades of the current transfer characteristics. In this paper the performance limiting effects, occurring during the fabrication process of the device, such as doping profiles and midgap traps are analyzed by physics-based analytical models and their performance limiting abilities are determined. Additionally, performance enhancing possibilities, such as hetero-structures and ambipolarity improvements are introduced and discussed. An extensive double-gate n-Tunnel-FET model is presented, which meets the versatile device requirements and shows a good fit with TCAD simulations and measurement data.

  18. [Preparation of sub-standard samples and XRF analytical method of powder non-metallic minerals].

    Science.gov (United States)

    Kong, Qin; Chen, Lei; Wang, Ling

    2012-05-01

    In order to solve the problem that standard samples of non-metallic minerals are not satisfactory in practical work by X-ray fluorescence spectrometer (XRF) analysis with pressed powder pellet, a method was studied how to make sub-standard samples according to standard samples of non-metallic minerals and to determine how they can adapt to analysis of mineral powder samples, taking the K-feldspar ore in Ebian-Wudu, Sichuan as an example. Based on the characteristic analysis of K-feldspar ore and the standard samples by X-ray diffraction (XRD) and chemical methods, combined with the principle of the same or similar between the sub-standard samples and unknown samples, the experiment developed the method of preparation of sub-standard samples: both of the two samples above mentioned should have the same kind of minerals and the similar chemical components, adapt mineral processing, and benefit making working curve. Under the optimum experimental conditions, a method for determination of SiO2, Al2O3, Fe2O3, TiO2, CaO, MgO, K2O and Na2O of K-feldspar ore by XRF was established. The determination results are in good agreement with classical chemical methods, which indicates that this method was accurate.

  19. In-core LOCA-s: analytical solution for the delayed mixing model for moderator poison concentration

    International Nuclear Information System (INIS)

    Firla, A.P.

    1995-01-01

    Solutions to a dynamic moderator poison concentration model with delayed mixing under a single pressure tube/calandria tube rupture scenario are discussed. Such a model is described by a delay differential equation, and for such equations the standard ways of solution are not directly applicable. In the paper an exact, direct time-domain analytical solution to the delayed mixing model is presented and discussed. The obtained solution has a 'marching' form and is easy to calculate numerically. Results of the numerical calculations based on the analytical solution indicate that for the expected range of mixing times the existing uniform mixing model is a good representation of the moderator poison mixing process for single PT/CT breaks. However, for postulated multi-pipe breaks (which are very unlikely to occur) the uniform mixing model is not adequate any more; at the same time an 'approximate' solution based on the Laplace transform significantly overpredicts the rate of poison concentration decrease, resulting in an excessive increase in the moderator dilution factor. In this situation the true, analytical solution must be used. The analytical solution presented in the paper may also serve as a benchmark test for the accuracy of the existing poison mixing models. Moreover, because of the existing oscillatory tendency of the solution, special care must be taken in using delay differential models in other applications. (author). 3 refs., 3 tabs., 8 figs
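    The record does not reproduce the delay differential equation itself, but the "marching" character of solutions to such equations is easy to illustrate: a generic delayed-mixing equation such as dc/dt = -k·c(t - τ) can be integrated interval by interval using the history already computed on [t - τ, t] (the method of steps). The constants and the equation below are hypothetical illustrations, not the CANDU moderator poison model.

```python
import numpy as np

k, tau = 0.5, 1.0           # hypothetical decay rate and mixing delay
dt, t_end = 0.001, 10.0
n = int(t_end / dt)
lag = int(tau / dt)

c = np.empty(n + 1)
c[0] = 1.0                  # normalized initial poison concentration

for i in range(n):
    # Before t = tau the delayed argument falls in the (constant) initial
    # history; afterwards it is a value computed on an earlier interval.
    c_delayed = c[0] if i < lag else c[i - lag]
    c[i + 1] = c[i] - dt * k * c_delayed      # explicit Euler marching step

print(c[::2000])            # concentration sampled every 2 time units
```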

  20. [Standardization and modeling of surgical processes].

    Science.gov (United States)

    Strauss, G; Schmitz, P

    2016-12-01

    Due to the technological developments around the operating room, surgery in the twenty-first century is undergoing a paradigm shift. Which technologies have already been integrated into the surgical routine? How can a favorable cost-benefit balance be achieved by the implementation of new software-based assistance systems? This article presents the state of the art technology as exemplified by a semi-automated operation system for otorhinolaryngology surgery. The main focus is on systems for implementation of digital handbooks and navigational functions in situ. On the basis of continuous development in digital imaging, decisions may by facilitated by individual patient models thus allowing procedures to be optimized. The ongoing digitization and linking of all relevant information enable a high level of standardization in terms of operating procedures. This may be used by assistance systems as a basis for complete documentation and high process reliability. Automation of processes in the operating room results in an increase in quality, precision and standardization so that the effectiveness and efficiency of treatment can be improved; however, care must be taken that detrimental consequences, such as loss of skills and placing too much faith in technology must be avoided by adapted training concepts.

  1. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e⁺e⁻ colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼10⁸ produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.

  2. Symmetry breaking: The standard model and superstrings

    International Nuclear Information System (INIS)

    Gaillard, M.K.

    1988-01-01

    The outstanding unresolved issue of the highly successful standard model is the origin of electroweak symmetry breaking and of the mechanism that determines its scale, namely the vacuum expectation value (vev) v that is fixed by experiment at the value v² = 4m_W²/g² = (√2 G_F)⁻¹, i.e. v ≅ 1/4 TeV. In this talk I will discuss aspects of two approaches to this problem. One approach is straightforward and down to earth: the search for experimental signatures, as discussed previously by Pierre Darriulat. This approach covers the energy scales accessible to future and present laboratory experiments: roughly (10⁻⁹ − 10³) GeV. The second approach involves theoretical speculations, such as technicolor and supersymmetry, that attempt to explain the TeV scale. 23 refs., 5 figs

  3. Symmetry breaking: The standard model and superstrings

    Energy Technology Data Exchange (ETDEWEB)

    Gaillard, M.K.

    1988-08-31

    The outstanding unresolved issue of the highly successful standard model is the origin of electroweak symmetry breaking and of the mechanism that determines its scale, namely the vacuum expectation value (vev) v that is fixed by experiment at the value v² = 4m_W²/g² = (√2 G_F)⁻¹, i.e. v ≅ 1/4 TeV. In this talk I will discuss aspects of two approaches to this problem. One approach is straightforward and down to earth: the search for experimental signatures, as discussed previously by Pierre Darriulat. This approach covers the energy scales accessible to future and present laboratory experiments: roughly (10⁻⁹ − 10³) GeV. The second approach involves theoretical speculations, such as technicolor and supersymmetry, that attempt to explain the TeV scale. 23 refs., 5 figs.

  4. Outstanding questions: physics beyond the Standard Model

    CERN Document Server

    Ellis, John

    2012-01-01

    The Standard Model of particle physics agrees very well with experiment, but many important questions remain unanswered, among them are the following. What is the origin of particle masses and are they due to a Higgs boson? How does one understand the number of species of matter particles and how do they mix? What is the origin of the difference between matter and antimatter, and is it related to the origin of the matter in the Universe? What is the nature of the astrophysical dark matter? How does one unify the fundamental interactions? How does one quantize gravity? In this article, I introduce these questions and discuss how they may be addressed by experiments at the Large Hadron Collider, with particular attention to the search for the Higgs boson and supersymmetry.

  5. Standard model fermions and N=8 supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Nicolai, Hermann [Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, Potsdam-Golm (Germany)

    2016-07-01

    In a scheme originally proposed by Gell-Mann, and subsequently shown to be realized at the SU(3) x U(1) stationary point of maximal gauged SO(8) supergravity, the 48 spin-1/2 fermions of the theory remaining after the removal of eight Goldstinos can be identified with the 48 quarks and leptons (including right-chiral neutrinos) of the Standard Model, provided one identifies the residual SU(3) with the diagonal subgroup of the color group SU(3)_c and a family symmetry SU(3)_f. However, there remained a systematic mismatch in the electric charges by a spurion charge of ± 1/6. We here identify the "missing" U(1) that rectifies this mismatch, and that takes a surprisingly simple, though unexpected, form, and show how it is related to the conjectured R symmetry K(E10) of M Theory.

  6. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb⁻¹ at √s = 7 TeV and 19.6 fb⁻¹ at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  7. The standard model 30 years of glory

    International Nuclear Information System (INIS)

    Lefrancois, J.

    2001-03-01

    In these 3 lectures the author reviews the achievements of the past 30 years, which saw the birth and the detailed confirmation of the standard model. The first lecture is dedicated to quantum chromodynamics (QCD): deep inelastic scattering, neutrino scattering results, R(e⁺e⁻), scaling violation, Drell-Yan reactions and the observation of jets. The second lecture deals with weak interactions and the quark and lepton families; the discoveries of the W and Z bosons, of charm, of the tau lepton and of B quarks are detailed. The third lecture focuses on the stunning progress that has been made in accuracy concerning detectors: the typical level of accuracy of previous e⁺e⁻ experiments was about 5-10%, while the accuracy obtained at LEP/SLC is of order 0.1% to 0.5%. (A.C.)

  8. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    Science.gov (United States)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES [1] (engineering equation solver). According to the simulation, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as functions of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of inlet flue gas.

  9. Ship Impact Study: Analytical Approaches and Finite Element Modeling

    Directory of Open Access Journals (Sweden)

    Pawel Woelke

    2012-01-01

    Full Text Available The current paper presents the results of a ship impact study conducted using various analytical approaches available in the literature, compared with the results obtained from detailed finite element analysis. Considering a typical container vessel impacting a rigid wall with an initial speed of 10 knots, the study investigates the forces imparted on the struck obstacle, the energy dissipated through inelastic deformation, penetration, local deformation patterns, and local failure of the ship elements. The main objective of the paper is to study the accuracy and generality of the predictions of the vessel collision forces obtained by means of analytical closed-form solutions, in reference to detailed finite element analyses. The results show that significant discrepancies between simplified analytical approaches and detailed finite element analyses can occur, depending on the specific impact scenarios under consideration.

  10. A new gas cooling model for semi-analytic galaxy formation models

    Science.gov (United States)

    Hou, Jun; Lacey, Cedric G.; Frenk, Carlos S.

    2018-03-01

    Semi-analytic galaxy formation models are widely used to gain insight into the astrophysics of galaxy formation and in model testing, parameter space searching and mock catalogue building. In this work, we present a new model for gas cooling in haloes in semi-analytic models, which improves over previous cooling models in several ways. Our new treatment explicitly includes the evolution of the density profile of the hot gas driven by the growth of the dark matter halo and by the dynamical adjustment of the gaseous corona as gas cools down. The effect of the past cooling history on the current mass cooling rate is calculated more accurately, by doing an integral over the past history. The evolution of the hot gas angular momentum profile is explicitly followed, leading to a self-consistent and more detailed calculation of the angular momentum of the cooled down gas. This model predicts higher cooled down masses than the cooling models previously used in GALFORM, closer to the predictions of the cooling models in L-GALAXIES and MORGANA, even though those models are formulated differently. It also predicts cooled down angular momenta that are higher than in previous GALFORM cooling models, but generally lower than the predictions of L-GALAXIES and MORGANA. When used in a full galaxy formation model, this cooling model improves the predictions for early-type galaxy sizes in GALFORM.

  11. Are Randomized Controlled Trials the (G)old Standard? From Clinical Intelligence to Prescriptive Analytics

    Science.gov (United States)

    2016-01-01

    Despite the accelerating pace of scientific discovery, the current clinical research enterprise does not sufficiently address pressing clinical questions. Given the constraints on clinical trials, for a majority of clinical questions, the only relevant data available to aid in decision making are based on observation and experience. Our purpose here is 3-fold. First, we describe the classic context of medical research guided by Popper's scientific epistemology of "falsificationism." Second, we discuss challenges and shortcomings of randomized controlled trials and present the potential of observational studies based on big data. Third, we cover several obstacles related to the use of observational (retrospective) data in clinical studies. We conclude that randomized controlled trials are not at risk for extinction, but innovations in statistics, machine learning, and big data analytics may generate a completely new ecosystem for exploration and validation. PMID:27383622

  12. Analytical model of transient thermal effect on convectional cooled ...

    Indian Academy of Sciences (India)

    Abstract. The transient analytical solutions of temperature distribution, stress, strain and optical path difference in convectional cooled end-pumped laser rod are derived. The results are compared with other works and good agreements are found. The effects of increasing the edge cooling and face cooling are studied.

  13. Modeling and analytical simulation of high-temperature gas filtration ...

    African Journals Online (AJOL)

    High temperature filtration in combustion and gasification processes is a highly interdisciplinary field. Thus, particle technology in general has to be supported by elements of physics, chemistry, thermodynamics and heat and mass transfer processes. Presented in this paper is the analytical method for describing ...

  14. Ethics, Big Data, and Analytics: A Model for Application.

    OpenAIRE

    Willis, James E, III

    2013-01-01

    The use of big data and analytics to predict student success presents unique ethical questions for higher education administrators relating to the nature of knowledge; in education, "to know" entails an obligation to act on behalf of the student. The Potter Box framework can help administrators address these questions and provide a framework for action.

  15. Modelling a flows in supply chain with analytical models: Case of a chemical industry

    Science.gov (United States)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

    This study is concerned with the modelling of the logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, considering the various constraints in this supply chain, in order to resolve the planning problems for better decision-making. The objective of this model is to determine and define the optimal quantities of the different products to route to and from the various entities in the supply chain studied.
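    A toy version of such an integrated linear program (minimize transport cost from production sites to a logistics platform, subject to capacity and demand constraints) can be written with scipy.optimize.linprog. The costs, capacities and demand below are invented for illustration and do not come from the phosphate case study.

```python
from scipy.optimize import linprog

# Two production sites shipping one product to a single logistics platform.
# Decision variables: x = [q_site1, q_site2] (tonnes shipped).
cost = [12.0, 9.0]              # hypothetical unit transport costs
capacity = [400.0, 350.0]       # per-site production capacities
demand = 600.0                  # platform demand that must be met

# Minimize cost·x  subject to  x1 + x2 >= demand  and  0 <= x_i <= capacity_i.
res = linprog(
    c=cost,
    A_ub=[[-1.0, -1.0]], b_ub=[-demand],      # -(x1 + x2) <= -demand
    bounds=list(zip([0.0, 0.0], capacity)),
    method="highs",
)
print(res.x, res.fun)           # optimal shipments and total transport cost
```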

  16. Chinese Culture, Homosexuality Stigma, Social Support and Condom Use: A Path Analytic Model

    Science.gov (United States)

    Liu, Hongjie; Feng, Tiejian; Ha, Toan; Liu, Hui; Cai, Yumao; Liu, Xiaoli; Li, Jian

    2011-01-01

    Purpose: The objective of this study was to examine the interrelationships among individualism, collectivism, homosexuality-related stigma, social support, and condom use among Chinese homosexual men. Methods: A cross-sectional study using the respondent-driven sampling approach was conducted among 351 participants in Shenzhen, China. Path analytic modeling was used to analyze the interrelationships. Results: The results of path analytic modeling document the following statistically significant associations with regard to homosexuality: (1) higher levels of vertical collectivism were associated with higher levels of public stigma [β (standardized coefficient) = 0.12] and self stigma (β = 0.12); (2) higher levels of vertical individualism were associated with higher levels of self stigma (β = 0.18); (3) higher levels of horizontal individualism were associated with higher levels of public stigma (β = 0.12); (4) higher levels of self stigma were associated with higher levels of social support from sexual partners (β = 0.12); and (5) lower levels of public stigma were associated with consistent condom use (β = −0.19). Conclusions: The findings enhance our understanding of how individualist and collectivist cultures influence the development of homosexuality-related stigma, which in turn may affect individuals' decisions to engage in HIV-protective practices and seek social support. Accordingly, the development of HIV interventions for homosexual men in China should take the characteristics of Chinese culture into consideration. PMID:21731850
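    Standardized path coefficients of the kind reported above are, in the simplest recursive case, coefficients from a series of regressions on z-scored variables. The sketch below illustrates that idea on simulated data for one hypothetical two-step path; it is not the authors' model and the simulated effect sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 351                                   # same sample size as the study

# Simulated variables for one hypothetical path:
# collectivism -> self stigma -> social support.
collectivism = rng.standard_normal(n)
self_stigma = 0.12 * collectivism + rng.standard_normal(n)
support = 0.12 * self_stigma + rng.standard_normal(n)

def std_beta(y, *xs):
    """Standardized regression coefficients of y on the predictors xs."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(x) for x in xs])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

print("collectivism -> self stigma:", std_beta(self_stigma, collectivism))
print("self stigma  -> support    :", std_beta(support, self_stigma))
```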

  17. Analytical method of CIM to PIM transformation in Model Driven Architecture (MDA

    Directory of Open Access Journals (Sweden)

    Martin Kardos

    2010-06-01

    Full Text Available Information system models at a higher level of abstraction have become a daily routine in many software companies. The concept of Model Driven Architecture (MDA), published by the standardization body OMG since 2001, has become a concept for the creation of software applications and information systems. MDA specifies four levels of abstraction: the top three levels are created as graphical models and the last one as an implementation code model. Much MDA research focuses on the lower levels and the transformations between them. The top level of abstraction, called the Computation Independent Model (CIM), and its transformation to the lower level, called the Platform Independent Model (PIM), is a less extensively researched topic. Considering the great importance and usability of this level in the practice of IS development, our research activity is now focused on this highest level of abstraction, the CIM, and its possible transformation to the lower PIM level. In this article we present a possible solution for CIM modeling and an analytic method for its transformation to PIM. Keywords: transformation, MDA, CIM, PIM, UML, DFD.

  18. Primordial lithium and the standard model(s)

    International Nuclear Information System (INIS)

    Deliyannis, C.P.; Demarque, P.; Kawaler, S.D.; Krauss, L.M.; Romanelli, P.

    1989-01-01

    We present the results of new theoretical work on surface ⁷Li and ⁶Li evolution in the oldest halo stars along with a new and refined analysis of the predicted primordial lithium abundance resulting from big-bang nucleosynthesis. This allows us to determine the constraints which can be imposed upon cosmology by a consideration of primordial lithium using both standard big-bang and standard stellar-evolution models. Such considerations lead to a constraint on the baryon density today of 0.0044 < Ω_B h² < 0.025 (where the Hubble constant is 100h km s⁻¹ Mpc⁻¹), and impose limitations on alternative nucleosynthesis scenarios.

  19. Ficus deltoidea Standardization: Analytical Methods for Bioactive Markers in Deltozide Tablet 200 MG

    International Nuclear Information System (INIS)

    Hazlina Ahmad Hassali; Zainah Adam; Rosniza Razali

    2016-01-01

    Standardization of herbal materials based on their chemical and biological profile is an important prerequisite for the development of herbal products. The phytopharmaceutical product that has been developed by the Medical Technology Division, Malaysian Nuclear Agency is DELTOZIDE TABLET 200 MG, containing 200 mg of spray-dried aqueous extract of Ficus deltoidea var. kunstleri leaf as the active ingredient. Ficus deltoidea Jack, locally known as Mas Cotek, is a South East Asian native plant traditionally used to treat several diseases. Pharmacological data showed that this plant exhibited good antioxidant, anti-diabetic and anti-inflammatory properties. It is important to establish the chemical profiles and determine the phytochemicals content of this plant as it is popularly used in traditional medicines. Thus, the present study reports on the comprehensive phytochemical evaluation of bioactive markers from this extract for the development of DELTOZIDE TABLET 200 MG. Characterization of the extract using an LC-MS/MS Triple TOF system showed the presence of major constituents representing vitexin, isovitexin, gallic acid, catechin, apigenin, epicatechin and caffeoylquinic acid along with other minor constituents. The extract was standardized by ultra-high performance liquid chromatography (UHPLC) using two pharmacologically active markers, vitexin and isovitexin. Furthermore, qualitative determination of phytochemicals showed the presence of important phyto-constituents, namely anthraquinones, terpenoids, flavonoids, tannins, phlobatannins, alkaloids, saponins, cardiac glycosides, steroids and phenols, in the aqueous extract of Ficus deltoidea. Quantitative determination of phytochemicals revealed that the total phenolic content (TPC; gallic acid as standard) and total flavonoid content (TFC; quercetin as standard) were 126.67±3.98 mg GAE/g extract and 9.08±0.36 mg QE/g extract respectively. The generated data provides some explanation for its wide usage in

  20. Analytical Model of Water Flow in Coal with Active Matrix

    Science.gov (United States)

    Siemek, Jakub; Stopa, Jerzy

    2014-12-01

    This paper presents new analytical model of gas-water flow in coal seams in one dimension with emphasis on interactions between water flowing in cleats and coal matrix. Coal as a flowing system, can be viewed as a solid organic material consisting of two flow subsystems: a microporous matrix and a system of interconnected macropores and fractures. Most of gas is accumulated in the microporous matrix, where the primary flow mechanism is diffusion. Fractures and cleats existing in coal play an important role as a transportation system for macro scale flow of water and gas governed by Darcy's law. The coal matrix can imbibe water under capillary forces leading to exchange of mass between fractures and coal matrix. In this paper new partial differential equation for water saturation in fractures has been formulated, respecting mass exchange between coal matrix and fractures. Exact analytical solution has been obtained using the method of characteristics. The final solution has very simple form that may be useful for practical engineering calculations. It was observed that the rate of exchange of mass between the fractures and the coal matrix is governed by an expression which is analogous to the Newton cooling law known from theory of heat exchange, but in present case the mass transfer coefficient depends not only on coal and fluid properties but also on time and position. The constant term of mass transfer coefficient depends on relation between micro porosity and macro porosity of coal, capillary forces, and microporous structure of coal matrix. This term can be expressed theoretically or obtained experimentally.

  1. Early universe cosmology. In supersymmetric extensions of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Baumann, Jochen Peter

    2012-03-19

    In this thesis we investigate possible connections between cosmological inflation and leptogenesis on the one side and particle physics on the other side. We work in supersymmetric extensions of the Standard Model. A key role is played by the right-handed sneutrino, the superpartner of the right-handed neutrino involved in the type I seesaw mechanism. We study a combined model of inflation and non-thermal leptogenesis that is a simple extension of the Minimal Supersymmetric Standard Model (MSSM) with conserved R-parity, where we add three right-handed neutrino superfields. The inflaton direction is given by the imaginary components of the corresponding scalar component fields, which are protected from the supergravity (SUGRA) η-problem by a shift symmetry in the Kaehler potential. We discuss the model first in a globally supersymmetric (SUSY) and then in a supergravity context and compute the inflationary predictions of the model. We also study reheating and non-thermal leptogenesis in this model. A numerical simulation shows that shortly after the waterfall phase transition that ends inflation, the universe is dominated by right-handed sneutrinos and their out-of-equilibrium decay can produce the desired matter-antimatter asymmetry. Using a simplified time-averaged description, we derive analytical expressions for the model predictions. Combining the results from inflation and leptogenesis allows us to constrain the allowed parameter space from two different directions, with implications for low energy neutrino physics. As a second thread of investigation, we discuss a generalisation of the inflationary model discussed above to include gauge non-singlet fields as inflatons. This is motivated by the fact that in left-right symmetric, supersymmetric Grand Unified Theories (SUSY GUTs), like SUSY Pati-Salam unification or SUSY SO(10) GUTs, the right-handed (s)neutrino is an indispensable ingredient and does not have to be put in by hand as in the MSSM. We discuss

  2. Analytical and Numerical Tooth Contact Analysis (TCA) of Standard and Modified Involute Profile Spur Gear

    Directory of Open Access Journals (Sweden)

    Nassear Rasheid Hmoad

    2016-03-01

    Full Text Available Among all the common mechanical transmission elements, gears still play the most dominant role, especially in heavy-duty work, offering extraordinary performance under extreme conditions; this is the reason behind the extensive research concentrating on enhancing their durability so they do their job as well as possible. The contact stress distribution within the tooth domain is considered one of the most effective parameters characterizing gear life, performance, efficiency, and application, so it has been well studied for standard gear profiles and has received much attention for modified tooth shapes. The aim of this work is to investigate the effect of pressure angle, speed ratio, and correction factor on the maximum contact and bending stress values and the principal stress distribution for symmetric and asymmetric spur gears. The analytical investigation adopted the Hertz equations to find the contact stress value, its distribution, and the contact zone width, while the numerical part relies on Ansys software version 15 as an FE solver with Lagrange and penalty contact algorithms. The most notable findings are that increasing the pressure angle and speed ratio tends to reduce all the induced stresses for classical gears, and that modified teeth with a larger pressure angle on the loaded side than on the unloaded side behave better than symmetric teeth in terms of stress reduction.
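    For spur gears the analytical contact stress at the pitch point is commonly estimated from the Hertz line-contact formula, treating the two tooth flanks as cylinders whose radii follow from the involute geometry. The snippet below implements that textbook relation with illustrative input values; it shows the general Hertzian approach named in the abstract, not the authors' exact calculation.

```python
import math

def hertz_pitch_contact_stress(F_n, face_width, d1, d2, alpha_deg,
                               E=210e9, nu=0.3):
    """Maximum Hertzian pressure [Pa] for two steel spur gears at the pitch point.

    F_n: normal tooth force [N]; face_width, d1, d2 in metres; alpha_deg: pressure
    angle. Equivalent cylinder radii at the pitch point: R_i = (d_i/2)*sin(alpha).
    """
    alpha = math.radians(alpha_deg)
    R1, R2 = 0.5 * d1 * math.sin(alpha), 0.5 * d2 * math.sin(alpha)
    R_eq = 1.0 / (1.0 / R1 + 1.0 / R2)          # effective radius of curvature
    E_eq = E / (2.0 * (1.0 - nu**2))            # same material on both gears
    return math.sqrt(F_n * E_eq / (math.pi * face_width * R_eq))

# Illustrative numbers only: 2 kN normal load, 20 mm face width,
# 60/120 mm pitch diameters, 20 degree pressure angle.
print(f"{hertz_pitch_contact_stress(2000, 0.02, 0.06, 0.12, 20) / 1e6:.0f} MPa")
```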

  3. Searches for Beyond Standard Model Physics with ATLAS and CMS

    CERN Document Server

    Rompotis, Nikolaos; The ATLAS collaboration

    2017-01-01

    The exploration of the high energy frontier with ATLAS and CMS experiments provides one of the best opportunities to look for physics beyond the Standard Model. In this talk, I review the motivation, the strategy and some recent results related to beyond Standard Model physics from these experiments. The review will cover beyond Standard Model Higgs boson searches, supersymmetry and searches for exotic particles.

  4. Analytical modeling of electrical characteristics of coaxial nanowire FETs

    Science.gov (United States)

    Kargar, Alireza; Ghayour, Rahim

    2011-03-01

    In this paper, an analytical approach based on ballistic current transport is presented to investigate the electrical characteristics of the coaxial nanowire field effect transistor (CNWFET). The potential distribution along the nanowire is derived analytically by applying Laplace equation. In addition to application of WKB approximation and ballistic transport, tunneling process and quantum state of energy are implemented to determine the amount of electron transport along the nanowire from the source to the drain terminals. To consider the tunneling phenomena, WKB approximation is used and the transmission coefficients on both sides of the channel are obtained separately. In ballistic regime, an expression for channel current in terms of the bias voltages and Schottky barrier height (SBH) is derived. The results confirm a close correlation between the current equation of this work and the results presented via other approaches.

  5. The use of analytical models in human-computer interface design

    Science.gov (United States)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  6. Connected formulas for amplitudes in standard model

    Energy Technology Data Exchange (ETDEWEB)

    He, Song [CAS Key Laboratory of Theoretical Physics,Institute of Theoretical Physics, Chinese Academy of Sciences,Beijing 100190 (China); School of Physical Sciences, University of Chinese Academy of Sciences,No. 19A Yuquan Road, Beijing 100049 (China); Zhang, Yong [Department of Physics, Beijing Normal University,Beijing 100875 (China); CAS Key Laboratory of Theoretical Physics,Institute of Theoretical Physics, Chinese Academy of Sciences,Beijing 100190 (China)

    2017-03-17

    Witten’s twistor string theory has led to new representations of S-matrix in massless QFT as a single object, including Cachazo-He-Yuan formulas in general and connected formulas in four dimensions. As a first step towards more realistic processes of the standard model, we extend the construction to QCD tree amplitudes with massless quarks and those with a Higgs boson. For both cases, we find connected formulas in four dimensions for all multiplicities which are very similar to the one for Yang-Mills amplitudes. The formula for quark-gluon color-ordered amplitudes differs from the pure-gluon case only by a Jacobian factor that depends on flavors and orderings of the quarks. In the formula for Higgs plus multi-parton amplitudes, the massive Higgs boson is effectively described by two additional massless legs which do not appear in the Parke-Taylor factor. The latter also represents the first twistor-string/connected formula for form factors.

  7. Experimental tests of the standard model

    International Nuclear Information System (INIS)

    Nodulman, L.

    1998-01-01

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabbibo mixing, now generalized so that all charged current decays of any flavor are covered

  8. Experimental tests of the standard model.

    Energy Technology Data Exchange (ETDEWEB)

    Nodulman, L.

    1998-11-11

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabbibo mixing, now generalized so that all charged current decays of any flavor are covered.

  9. Analytical Model of Large Data Transactions in CoAP Networks

    Directory of Open Access Journals (Sweden)

    Alessandro Ludovici

    2014-08-01

    Full Text Available We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge this is the first study that evaluates and compares analytically the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability.

  10. An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality

    Science.gov (United States)

    Yin, Qingli

    2013-01-01

    Taking into account the characteristics of college experimental teaching, through investigation and analysis, evaluation indices and an Analytical Hierarchy Process (AHP) model of experimental teaching quality have been established following the analytical hierarchy process method, and the evaluation indices have been given reasonable weights. An…

  11. An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application

    Science.gov (United States)

    Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth

    2016-01-01

    Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
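    The core AHP computation referred to in the two records above (pairwise comparison matrix, priority weights from the principal eigenvector, consistency check) is compact enough to show directly. The 3x3 comparison matrix below is a made-up example; the random-index values are the usual Saaty constants.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from the principal eigenvector of a pairwise comparison
    matrix A, plus Saaty's consistency ratio CR (CR < 0.1 is usually acceptable)."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal (Perron) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    return w, (ci / ri if ri else 0.0)

# Hypothetical criteria: teaching quality vs. resources vs. student outcomes.
A = [[1,     3,     5],
     [1 / 3, 1,     2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))
```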

  12. Echinacea standardization: analytical methods for phenolic compounds and typical levels in medicinal species.

    Science.gov (United States)

    Perry, N B; Burgess, E J; Glennie, V L

    2001-04-01

    A proposed standard extraction and HPLC analysis method has been used to measure typical levels of various phenolic compounds in the medicinally used Echinacea species. Chicoric acid was the main phenolic in E. purpurea roots (mean 2.27% summer, 1.68% autumn) and tops (2.02% summer, 0.52% autumn), and echinacoside was the main phenolic in E. angustifolia (1.04%) and E. pallida roots (0.34%). Caftaric acid was the other main phenolic compound in E. purpurea roots (0.40% summer, 0.35% autumn) and tops (0.82% summer, 0.18% autumn), and cynarin was a characteristic component of E. angustifolia roots (0.12%). Enzymatic browning during extraction could reduce the measured levels of phenolic compounds by >50%. Colorimetric analyses for total phenolics correlated well with the HPLC results for E. purpurea and E. angustifolia, but the colorimetric method gave higher values.

  13. The network formation assay: a spatially standardized neurite outgrowth analytical display for neurotoxicity screening.

    Science.gov (United States)

    Frimat, Jean-Philippe; Sisnaiske, Julia; Subbiah, Subanatarajan; Menne, Heike; Godoy, Patricio; Lampen, Peter; Leist, Marcel; Franzke, Joachim; Hengstler, Jan G; van Thriel, Christoph; West, Jonathan

    2010-03-21

    We present a rapid, reproducible and sensitive neurotoxicity testing platform that combines the benefits of neurite outgrowth analysis with cell patterning. This approach involves patterning neuronal cells within a hexagonal array to standardize the distance between neighbouring cellular nodes, and thereby standardize the length of the neurite interconnections. This feature coupled with defined assay coordinates provides a streamlined display for rapid and sensitive analysis. We have termed this the network formation assay (NFA). To demonstrate the assay we have used a novel cell patterning technique involving thin film poly(dimethylsiloxane) (PDMS) microcontact printing. Differentiated human SH-SY5Y neuroblastoma cells colonized the array with high efficiency, reliably producing pattern occupancies above 70%. The neuronal array surface supported neurite outgrowth, resulting in the formation of an interconnected neuronal network. Exposure to acrylamide, a neurotoxic reference compound, inhibited network formation. A dose-response curve from the NFA was used to determine a 20% network inhibition (NI(20)) value of 260 microM. This concentration was approximately 10-fold lower than the value produced by a routine cell viability assay, and demonstrates that the NFA can distinguish network formation inhibitory effects from gross cytotoxic effects. Inhibition of the mitogen-activated protein kinase (MAPK) ERK1/2 and phosphoinositide-3-kinase (PI-3K) signaling pathways also produced a dose-dependent reduction in network formation at non-cytotoxic concentrations. To further refine the assay a simulation was developed to manage the impact of pattern occupancy variations on network formation probability. Together these developments and demonstrations highlight the potential of the NFA to meet the demands of high-throughput applications in neurotoxicology and neurodevelopmental biology.
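
    The NI(20) value quoted above is read off a fitted dose-response curve. A minimal Python sketch of that kind of calculation is shown below, assuming a four-parameter logistic fit on a log-concentration axis; the data points are invented placeholders, not the study's measurements.

        # Hedged sketch: fit a 4-parameter logistic dose-response curve and read off
        # the concentration giving 20% inhibition (NI20). All data values are hypothetical.
        import numpy as np
        from scipy.optimize import curve_fit

        conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], float)        # assumed, in uM
        resp = np.array([0.99, 0.98, 0.95, 0.90, 0.78, 0.55, 0.30, 0.12])   # network formation fraction

        def loglogistic(logx, bottom, top, log_ec50, hill):
            """Four-parameter logistic curve on a log10 concentration axis."""
            return bottom + (top - bottom) / (1.0 + 10.0 ** (hill * (logx - log_ec50)))

        popt, _ = curve_fit(loglogistic, np.log10(conc), resp, p0=[0.05, 1.0, 2.5, 1.0])
        bottom, top, log_ec50, hill = popt

        # Concentration at which network formation drops to 80% of the fitted top plateau.
        target = 0.80 * top
        log_ni20 = log_ec50 + np.log10((top - bottom) / (target - bottom) - 1.0) / hill
        print(f"NI20 ~ {10 ** log_ni20:.0f} uM")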

  14. Environmental vulnerability assessment using Grey Analytic Hierarchy Process based model

    International Nuclear Information System (INIS)

    Sahoo, Satiprasad; Dhar, Anirban; Kar, Amlanjyoti

    2016-01-01

    Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in the Hirakud command area of Odisha, India is envisaged based on the Grey Analytic Hierarchy Process method (Grey–AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey–AHP combines the advantages of the classical analytic hierarchy process (AHP) and the grey clustering method for accurate estimation of weight coefficients. It is a new method for environmental vulnerability assessment. The environmental vulnerability index (EVI) uses natural, environmental and human-impact-related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, wind speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. The EVI map has been classified into four environmental vulnerability zones (EVZs), namely ‘low’, ‘moderate’, ‘high’, and ‘extreme’, encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. The EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view. The EVI map shows close correlation with elevation. Effectiveness of the zone classification is evaluated using the grey clustering method. General effectiveness lies between the “better” and “common” classes. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on Grey Analytic Hierarchy Process (AHP) • The effectiveness evaluation by means of a grey clustering method with support from AHP • Use of grey approach eliminates the excessive dependency on the experience of experts.
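
    Independent of the grey-clustering details, an EVI of this kind is essentially a weighted overlay of normalized factor layers followed by a zone classification. A minimal Python sketch of that overlay step is given below; the factor grids, weights and quartile-based zone thresholds are invented assumptions, not the study's values.

        # Minimal weighted-overlay sketch for an environmental vulnerability index.
        # Weights and factor grids are invented placeholders, not the study's data.
        import numpy as np

        rng = np.random.default_rng(0)
        ny, nx = 50, 50
        # Three hypothetical normalized factor layers in [0, 1] (e.g. slope, rainfall, road density).
        factors = {name: rng.random((ny, nx)) for name in ("slope", "rainfall", "road_density")}
        weights = {"slope": 0.5, "rainfall": 0.3, "road_density": 0.2}   # assumed AHP-style weights

        evi = sum(w * factors[name] for name, w in weights.items())

        # Classify into four vulnerability zones using quartile thresholds (one simple choice).
        edges = np.quantile(evi, [0.25, 0.5, 0.75])
        zones = np.digitize(evi, edges)   # 0=low, 1=moderate, 2=high, 3=extreme
        for z, label in enumerate(["low", "moderate", "high", "extreme"]):
            print(label, f"{(zones == z).mean():.1%}")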

  15. Analytical models of optical refraction in the troposphere.

    Science.gov (United States)

    Nener, Brett D; Fowkes, Neville; Borredon, Laurent

    2003-05-01

    An extremely accurate but simple asymptotic description (with known error) is obtained for the path of a ray propagating over a curved Earth with radial variations in refractive index. The result is sufficiently simple that analytic solutions for the path can be obtained for linear and quadratic index profiles. As well as rendering the inverse problem trivial for these profiles, this formulation shows that images are uniformly magnified in the vertical direction when viewed through a quadratic refractive-index profile. Nonuniform vertical distortions occur for higher-order refractive-index profiles.
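
    For the linear refractive-index profile mentioned above, a classical shorthand (distinct from the asymptotic formulation of this paper) is the effective-Earth-radius factor k = 1/(1 + a dn/dh), which absorbs the gradient into a fictitious Earth radius so that rays can be drawn as straight lines. A small Python sketch is given below; the gradient value is the usual standard-atmosphere assumption.

        # Effective-Earth-radius sketch for a linear refractive-index profile
        # (a classical shorthand, not the asymptotic model of the paper).
        import math

        a = 6.371e6            # mean Earth radius [m]
        dn_dh = -39e-9         # assumed standard-atmosphere index gradient [1/m]

        k = 1.0 / (1.0 + a * dn_dh)    # effective-Earth-radius factor (about 4/3)
        a_eff = k * a

        def horizon_distance(h_obs):
            """Distance to the refracted horizon for an observer at height h_obs [m]."""
            return math.sqrt(2.0 * a_eff * h_obs)

        print(f"k = {k:.3f}, horizon from 10 m: {horizon_distance(10.0) / 1000:.1f} km")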

  16. Selective experimental review of the Standard Model

    International Nuclear Information System (INIS)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3){sub c} x SU(2){sub L} x U(1) with 18 parameters. The parameters are α{sub s}, α{sub qed}, theta{sub W}, M{sub W} (M{sub Z} = M{sub W}/cos theta{sub W}, and thus is not an independent parameter), M{sub Higgs}; the lepton masses, M{sub e}, M{sub μ}, M{sub τ}; the quark masses, M{sub d}, M{sub s}, M{sub b}, and M{sub u}, M{sub c}, M{sub t}; and finally, the quark mixing angles, theta{sub 1}, theta{sub 2}, theta{sub 3}, and the CP-violating phase delta. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R{sub hadron} is fundamental as a test of the running coupling constant α{sub s} in QCD. The author will discuss a selection of recent precision measurements of R{sub hadron}, as well as some other techniques for measuring α{sub s}. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q anti-q states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures

  17. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  18. On the nuclear energy generation rate in a simple analytic stellar model

    International Nuclear Information System (INIS)

    Haubold, H.J.

    1985-01-01

    For a spherically symmetric star in quasi-static equilibrium, a simple analytic stellar model is presented. The common technique from the integration theory of special functions for treating a special solution of the equations of stellar structure is described. As an example the sun can be considered as a fluid in hydrostatic equilibrium. The total net rate of nuclear energy generation, which is equal to the luminosity of the star, is evaluated analytically for a linear density distribution assumed for a simple stellar model. For several analytic representations of the nuclear energy generation rate the luminosity function is evaluated for the presented stellar model in closed form

  19. Analytical probabilistic modeling of RBE-weighted dose for ion therapy

    Science.gov (United States)

    Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.

    2017-12-01

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm,2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other

  20. Rapid Quantification of Melamine in Different Brands/Types of Milk Powders Using Standard Addition Net Analyte Signal and Near-Infrared Spectroscopy

    Directory of Open Access Journals (Sweden)

    Bang-Cheng Tang

    2016-01-01

    Full Text Available Multivariate calibration (MVC) and near-infrared (NIR) spectroscopy have demonstrated potential for rapid analysis of melamine in various dairy products. However, the practical application of ordinary MVC can be largely restricted because the prediction of a new sample from an uncalibrated batch would be subject to a significant bias due to matrix effect. In this study, the feasibility of using NIR spectroscopy and the standard addition (SA) net analyte signal (NAS) method (SANAS) for rapid quantification of melamine in different brands/types of milk powders was investigated. In SANAS, the NAS vector of melamine in an unknown sample as well as in a series of samples spiked with melamine standards was calculated, and then the Euclidean norms of the series of standards were used to build a straightforward univariate regression model. The analysis results of 10 different brands/types of milk powders with melamine levels 0~0.12% (w/w) indicate that SANAS obtained accurate results, with the root mean squared error of prediction (RMSEP) values ranging from 0.0012 to 0.0029. An additional advantage of NAS is to visualize and control the possible unwanted variations during standard addition. The proposed method will provide a practically useful tool for rapid and nondestructive quantification of melamine in different brands/types of milk powders.
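
    SANAS reduces each NIR spectrum to a single net-analyte-signal norm and then applies an ordinary standard-addition regression. The Python sketch below illustrates only that final standard-addition step (fit signal against added concentration and extrapolate to the x-intercept); the signal values are invented, and the NAS projection itself is not reproduced.

        # Classical standard-addition calibration sketch (illustrative numbers only;
        # the NAS projection used in SANAS is not reproduced here).
        import numpy as np

        added = np.array([0.0, 0.02, 0.04, 0.06, 0.08])      # added melamine, % w/w (assumed)
        signal = np.array([0.35, 0.52, 0.70, 0.86, 1.04])    # e.g. NAS norms (assumed)

        slope, intercept = np.polyfit(added, signal, 1)
        c_sample = intercept / slope     # magnitude of the extrapolated x-intercept
        print(f"estimated melamine in sample: {c_sample:.3f} % (w/w)")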

  1. Medical ethical standards in dermatology: an analytical study of knowledge, attitudes and practices.

    Science.gov (United States)

    Mostafa, W Z; Abdel Hay, R M; El Lawindi, M I

    2015-01-01

    Dermatology practice has not been ethically justified at all times. The objective of the study was to find out dermatologists' knowledge about medical ethics, their attitudes towards regulatory measures and their practices, and to study the different factors influencing the knowledge, the attitude and the practices of dermatologists. This is a cross-sectional comparative study conducted among 214 dermatologists, from five Academic Universities and from participants in two conferences. A 54-item structured anonymous questionnaire was designed to describe the demographical characteristics of the study group as well as their knowledge, attitude and practices regarding the medical ethics standards in clinical and research settings. Five scoring indices were estimated regarding knowledge, attitude and practice. Inferential statistics were used to test differences between groups as indicated. The Student's t-test and analysis of variance were carried out for quantitative variables. The chi-squared test was conducted for qualitative variables. The results were considered statistically significant at P < 0.05. Analysis of the possible factors having impact on the overall scores revealed that the highest knowledge scores were among dermatologists who practice in an academic setting plus an additional place; however, this difference was statistically non-significant (P = 0.060). Female dermatologists showed a higher attitude score compared to males (P = 0.028). The highest significant attitude score (P = 0.019) regarding clinical practice was recorded among those practicing cosmetic dermatology. The different studied groups of dermatologists revealed a significant impact on the attitude score (P = 0.049), and on the evidence-practice score regarding dermatology research. © 2014 European Academy of Dermatology and Venereology.

  2. Standard operating procedures for pre-analytical handling of blood and urine for metabolomic studies and biobanks

    International Nuclear Information System (INIS)

    Bernini, Patrizia; Bertini, Ivano; Luchinat, Claudio; Nincheri, Paola; Staderini, Samuele; Turano, Paola

    2011-01-01

    {sup 1}H NMR metabolic profiling of urine, serum and plasma has been used to monitor the impact of the pre-analytical steps on the sample quality and stability in order to propose standard operating procedures (SOPs) for deposition in biobanks. We analyzed the quality of serum and plasma samples as a function of the elapsed time (t = 0-4 h) between blood collection and processing and of the time from processing to freezing (up to 24 h). The stability of the urine metabolic profile over time (up to 24 h) at various storage temperatures was monitored as a function of the different pre-analytical treatments like pre-storage centrifugation, filtration, and addition of the bacteriostatic preservative sodium azide. Appreciable changes in the profiles, reflecting changes in the concentration of a number of metabolites, were detected and discussed in terms of chemical and enzymatic reactions for both blood and urine samples. Appropriate procedures for blood derivatives collection and urine preservation/storage that allow maintaining as much as possible the original metabolic profile of the fresh samples emerge, and are proposed as SOPs for biobanking.

  3. Standard operating procedures for pre-analytical handling of blood and urine for metabolomic studies and biobanks

    Energy Technology Data Exchange (ETDEWEB)

    Bernini, Patrizia; Bertini, Ivano, E-mail: bertini@cerm.unifi.it; Luchinat, Claudio [University of Florence, Magnetic Resonance Center (CERM) (Italy); Nincheri, Paola; Staderini, Samuele [FiorGen Foundation (Italy); Turano, Paola [University of Florence, Magnetic Resonance Center (CERM) (Italy)

    2011-04-15

    {sup 1}H NMR metabolic profiling of urine, serum and plasma has been used to monitor the impact of the pre-analytical steps on the sample quality and stability in order to propose standard operating procedures (SOPs) for deposition in biobanks. We analyzed the quality of serum and plasma samples as a function of the elapsed time (t = 0-4 h) between blood collection and processing and of the time from processing to freezing (up to 24 h). The stability of the urine metabolic profile over time (up to 24 h) at various storage temperatures was monitored as a function of the different pre-analytical treatments like pre-storage centrifugation, filtration, and addition of the bacteriostatic preservative sodium azide. Appreciable changes in the profiles, reflecting changes in the concentration of a number of metabolites, were detected and discussed in terms of chemical and enzymatic reactions for both blood and urine samples. Appropriate procedures for blood derivatives collection and urine preservation/storage that allow maintaining as much as possible the original metabolic profile of the fresh samples emerge, and are proposed as SOPs for biobanking.

  4. Yarn supplier selection using analytical hierarchy process (AHP) and standardized unitless rating (SUR) method on textile industry

    Science.gov (United States)

    Erfaisalsyah, M. H.; Mansur, A.; Khasanah, A. U.

    2017-11-01

    For a company engaged in the textile field, selecting the suppliers of raw materials for production is an important part of supply chain management that can affect the company's business processes. This study aims to identify the best suppliers of yarn raw material for PC. PKBI based on several criteria. In this study, the integration between the Analytical Hierarchy Process (AHP) and the Standardized Unitless Rating (SUR) is used to assess the performance of the suppliers. AHP yields the relative weight of each criterion, while SUR gives the suppliers' performance ranking. The supplier ranking results can be used to identify the strengths and weaknesses of each supplier with respect to the performance criteria, and to indicate which suppliers should improve their performance in order to build long-term cooperation with the company.

  5. An Analytical Model for Prediction of Magnetic Flux Leakage from Surface Defects in Ferromagnetic Tubes

    Directory of Open Access Journals (Sweden)

    Suresh V.

    2016-02-01

    Full Text Available In this paper, an analytical model is proposed to predict magnetic flux leakage (MFL) signals from the surface defects in ferromagnetic tubes. The analytical expression consists of elliptic integrals of the first kind based on the magnetic dipole model. The radial (Bz) component of leakage fields is computed from the cylindrical holes in ferromagnetic tubes. The effectiveness of the model has been studied by analyzing MFL signals as a function of the defect parameters and lift-off. The model-predicted results are verified with experimental results and a good agreement is observed between the analytical and the experimental results. This analytical expression could be used for quick prediction of MFL signals and also to provide input data for defect reconstruction in the inverse MFL problem.

  6. Comparison of cosmological models using standard rulers and candles

    OpenAIRE

    Li, Xiaolei; Cao, Shuo; Zheng, Xiaogang; Li, Song; Biesiada, Marek

    2015-01-01

    In this paper, we used standard rulers and standard candles (separately and jointly) to explore five popular dark energy models under assumption of spatial flatness of the Universe. As standard rulers, we used a data set comprising 118 galactic-scale strong lensing systems (individual standard rulers if properly calibrated for the mass density profile) combined with BAO diagnostics (statistical standard ruler). Supernovae Ia served as standard candles. Unlike in most of the previous statistica...

  7. Neutrinos: in and out of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Parke, Stephen; /Fermilab

    2006-07-01

    The particle physics Standard Model has been tremendously successful in predicting the outcome of a large number of experiments. In this model Neutrinos are massless. Yet recent evidence points to the fact that neutrinos are massive particles with tiny masses compared to the other particles in the Standard Model. These tiny masses allow the neutrinos to change flavor and oscillate. In this series of Lectures, I will review the properties of Neutrinos In the Standard Model and then discuss the physics of Neutrinos Beyond the Standard Model. Topics to be covered include Neutrino Flavor Transformations and Oscillations, Majorana versus Dirac Neutrino Masses, the Seesaw Mechanism and Leptogenesis.

  8. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative, model-based) methods and their implementation is presented. The focus is on the analytical model-based fault-detection and fault diagnosis methods, often viewed as the classical or deterministic ones. Emphasis is placed on the algorithms suitable for ship automation, unmanned underwater vehicles, and other systems of automatic control....

  9. 3D Analytical Model for a Tubular Linear Induction Generator in a Stirling Cogeneration System

    OpenAIRE

    Francois , Pierre; Garcia Burel , Isabelle; BEN AHMED , Hamid; Prevond , Laurent; Multon , Bernard

    2007-01-01

    International audience; This article sets forth a 3D analytical model of a tubular linear induction generator. In the intended application, the slot and edge effects as well as induced current penetration phenomena within the solid mover cannot be overlooked. Moreover, generator optimization within the present context of cogeneration has necessitated a systemic strategy. Reliance upon an analytical modeling approach that incorporates the array of typically-neglected phenomena has proven essen...

  10. Assessing the service quality of Iran military hospitals: Joint Commission International standards and Analytic Hierarchy Process (AHP) technique.

    Science.gov (United States)

    Bahadori, Mohammadkarim; Ravangard, Ramin; Yaghoubi, Maryam; Alimohammadzadeh, Khalil

    2014-01-01

    Military hospitals are responsible for preserving, restoring and improving the health of not only armed forces, but also other people. According to the military organizations' strategy, which is to be a leader and pioneer in all areas, providing quality health services is one of the main goals of the military health care organizations. This study aimed to evaluate the service quality of selected military hospitals in Iran based on the Joint Commission International (JCI) standards, to compare these hospitals with each other, and to rank them using the analytic hierarchy process (AHP) technique in 2013. This was a cross-sectional and descriptive study conducted on five military hospitals, selected using the purposive sampling method, in 2013. Required data were collected using checklists of accreditation standards and the nominal group technique. The AHP technique was used for prioritizing. Furthermore, Expert Choice 11.0 was used to analyze the collected data. Among the JCI standards, the standards of access to care and continuity of care (weight = 0.122), quality improvement and patient safety (weight = 0.121) and leadership and management (weight = 0.117) had the greatest importance. Furthermore, in the overall ranking, BGT (weight = 0.369), IHM (weight = 0.238), SAU (weight = 0.202), IHK (weight = 0.125) and SAB (weight = 0.066) ranked first to fifth, respectively. AHP is an appropriate technique for measuring the overall performance of hospitals and their quality of services. It is a holistic approach that takes all hospital processes into consideration. The results of the present study can be used to improve hospital performance by identifying areas in need of focus for quality improvement and by selecting strategies to improve service quality.

  11. Prospects of experimentally reachable beyond Standard Model ...

    Indian Academy of Sciences (India)

    2016-01-06

    Jan 6, 2016 ... behaviour of the newly discovered particles and their strange interactions, during the first half of the 20th century, was culminated with the introduction of Standard ... various limitations. For a good summary on its excellencies and compulsions see [1], and for extensive details on SM and beyond, see [2].

  12. Regional technical cooperation model project, IAEA - RER/2/2004 ''quality control and quality assurance for nuclear analytical techniques''

    International Nuclear Information System (INIS)

    Arikan, P.

    2002-01-01

    An analytical laboratory should produce high quality analytical data through the use of analytical measurements that are accurate, reliable and adequate for the intended purpose. This objective can be accomplished in a cost-effective manner under a planned and documented quality system of activities. It is well-known that serious deficiencies can occur in laboratory operations when insufficient attention is given to the quality of the work. It requires not only a thorough knowledge of the laboratory's purpose and operation, but also the dedication of the management and operating staff to standards of excellence. Laboratories employing nuclear and nuclear-related analytical techniques are sometimes confronted with performance problems which prevent them from becoming accepted and respected by clients, such as industry, government and regulatory bodies, and from being eligible for contracts. The International Standard ISO 17025 has been produced as the result of extensive experience in the implementation of ISO/IEC Guide 25:1990 and EN 45001:1989, both of which it now replaces. It contains all of the requirements that testing and calibration laboratories must meet if they wish to demonstrate that they operate a quality system that is technically competent, and are able to generate technically valid results. The use of ISO 17025 should facilitate cooperation between laboratories and other bodies to assist in the exchange of information and experience, and in the harmonization of standards and procedures. IAEA model project RER/2/004 entitled 'Quality Assurance/Quality Control in Nuclear Analytical Techniques' was initiated in 1999 as a Regional TC project in East European countries to assist Member State laboratories in the region to install a complete quality system according to the ISO/IEC 17025 standard. 12 laboratories from 11 countries plus the Agency's Laboratories in Seibersdorf have been selected as participants to undergo exercises and training with the

  13. Why supersymmetry? Physics beyond the standard model

    Indian Academy of Sciences (India)

    2016-08-23

    Aug 23, 2016 ... behaviour of the newly discovered particles and their strange interactions, during the first half of the 20th century, was culminated with the introduction of Standard ... various limitations. For a good summary on its excellencies and compulsions see [1], and for extensive details on SM and beyond, see [2].

  14. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.

  15. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697

  16. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Directory of Open Access Journals (Sweden)

    René Felix Reinhart

    2017-02-01

    Full Text Available Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.

  17. HPC in global geodynamics: Advances in normal-mode analytical modelling

    International Nuclear Information System (INIS)

    Melini, D.

    2009-01-01

    Analytical models based on normal-mode theory have been successfully employed for decades in modeling the global response of the Earth to seismic dislocations, post-glacial rebound and wave propagation. Despite their limited capabilities with respect to fully numerical approaches, they remain a valuable modeling tool, for instance in benchmarking applications or when automated procedures have to be implemented, as in massive inversion problems where a large number of forward models have to be solved. The availability of high-performance computer systems has opened new applications for analytical modeling, making it possible to remove limiting approximations and to carry out extensive simulations on large global datasets.

  18. Force 2025 and Beyond Strategic Force Design Analytic Model

    Science.gov (United States)

    2017-01-12

    the methodology used to construct force design models. The Summary section provides a summary of our findings. Background: By 2025, a leaner ... designs. We describe a data development methodology that characterizes the data required to construct a force design model using our approach. We ... from a model constructed using this methodology in a case study. Subject terms: force design, mixed integer programming, optimization, value

  19. Orthogonal analytical methods for botanical standardization: determination of green tea catechins by qNMR and LC-MS/MS.

    Science.gov (United States)

    Napolitano, José G; Gödecke, Tanja; Lankin, David C; Jaki, Birgit U; McAlpine, James B; Chen, Shao-Nong; Pauli, Guido F

    2014-05-01

    The development of analytical methods for parallel characterization of multiple phytoconstituents is essential to advance the quality control of herbal products. While chemical standardization is commonly carried out by targeted analysis using gas or liquid chromatography-based methods, more universal approaches based on quantitative (1)H NMR (qHNMR) measurements are being used increasingly in the multi-targeted assessment of these complex mixtures. The present study describes the development of a 1D qHNMR-based method for simultaneous identification and quantification of green tea constituents. This approach utilizes computer-assisted (1)H iterative Full Spin Analysis (HiFSA) and enables rapid profiling of seven catechins in commercial green tea extracts. The qHNMR results were cross-validated against quantitative profiles obtained with an orthogonal LC-MS/MS method. The relative strengths and weaknesses of both approaches are discussed, with special emphasis on the role of identical reference standards in qualitative and quantitative analyses. Copyright © 2013 Elsevier B.V. All rights reserved.
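
    In qHNMR, an analyte concentration follows from the ratio of per-proton signal integrals against an internal calibrant of known concentration. A minimal Python sketch of that relation is given below; the integral values, proton counts and calibrant concentration are hypothetical placeholders, not data from this study.

        # Basic qHNMR quantification sketch: integrals normalized per proton against an
        # internal calibrant of known concentration. All numbers are placeholders.
        def qhnmr_conc(i_analyte, n_h_analyte, i_cal, n_h_cal, conc_cal_mM):
            """Analyte concentration from the per-proton integral ratio vs. the calibrant."""
            return (i_analyte / n_h_analyte) / (i_cal / n_h_cal) * conc_cal_mM

        # Hypothetical example: a catechin signal (1H) against a calibrant signal (2H, 5 mM).
        print(qhnmr_conc(i_analyte=0.84, n_h_analyte=1, i_cal=1.00, n_h_cal=2, conc_cal_mM=5.0))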

  20. Energy demand analytics using coupled technological and economic models

    Science.gov (United States)

    Impacts of a range of policy scenarios on end-use energy demand are examined using a coupling of MARKAL, an energy system model with extensive supply and end-use technological detail, with Inforum LIFT, a large-scale model of the U.S. economy with inter-industry, government, and c...

  1. Analytical modeling of pipeline failure in multiphase flow due to ...

    African Journals Online (AJOL)

    This research focuses on the development of a model that can predict pipeline failure due to corrosion in multiphase flows. The role that velocity, density, water cut and other parameters play in predicting corrosion is critically analyzed with Norsok Model. The result shows that velocity plays a key role in corrosion prediction.

  2. Short-Term Forecasting Models for Photovoltaic Plants: Analytical versus Soft-Computing Techniques

    OpenAIRE

    Monteiro, Claudio; Fernandez-Jimenez, L. Alfredo; Ramirez-Rosado, Ignacio J.; Muñoz-Jimenez, Andres; Lara-Santillan, Pedro M.

    2013-01-01

    We present and compare two short-term statistical forecasting models for hourly average electric power production forecasts of photovoltaic (PV) plants: the analytical PV power forecasting model (APVF) and the multilayer perceptron PV forecasting model (MPVF). Both models use forecasts from numerical weather prediction (NWP) tools at the location of the PV plant as well as the past recorded values of PV hourly electric power production. The APVF model consists of an original modeling for adj...

  3. Heterogeneous information network model for equipment-standard system

    Science.gov (United States)

    Yin, Liang; Shi, Li-Chen; Zhao, Jun-Yan; Du, Song-Yang; Xie, Wen-Bo; Yuan, Fei; Chen, Duan-Bing

    2018-01-01

    Entity information networks are used to describe structural relationships between entities. Taking advantage of their extensibility and heterogeneity, entity information networks are more and more widely applied to relationship modeling. In recent years, many studies on entity information network modeling have been proposed, but few of them concentrate on the equipment-standard system, which has multi-layer, multi-dimension and multi-scale properties. In order to efficiently deal with some complex issues in the equipment-standard system, such as standard revision, standard control, and production design, a heterogeneous information network model for the equipment-standard system is proposed in this paper. Three types of entities and six types of relationships are considered in the proposed model. Correspondingly, several different similarity-measuring methods are used in the modeling process. The experiments show that the heterogeneous information network model established in this paper can reflect relationships between entities accurately. Meanwhile, the modeling process performs well in terms of time consumption.

  4. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Science.gov (United States)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, due to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  5. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    Science.gov (United States)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by other-than-zero skewness and kurtosis. It is shown that the model is applicable in the interval of horizontal dimensions of inhomogeneities from hundreds to thousands of kilometers.

  6. Comparative study of different analytical approaches for modelling the transmission of sound waves through turbomachinery stators

    NARCIS (Netherlands)

    Behn, Maximilian; Tapken, Ulf; Puttkammer, Peter; Hagmeijer, Rob; Thouault, Nicolas

    2016-01-01

    The present study deals with the analytical modelling of sound transmission through turbomachinery stators. Two-dimensional cascade models are applied in combination with a newly proposed impedance model to account for the effect of flow deflection on the propagation of acoustic modes in

  7. Analytical Modeling of Unsteady Aluminum Depletion in Thermal Barrier Coatings

    OpenAIRE

    YEŞİLATA, Bülent

    2014-01-01

    The oxidation behavior of thermal barrier coatings (TBCs) in aircraft turbines is studied. A simple, unsteady and one-dimensional, diffusion model based on aluminum depletion from a bond-coat to form an oxide layer of Al2O3 is introduced. The model is employed for a case study with currently available experimental data. The diffusion coefficient of the depleted aluminum in the alloy, the concentration profiles at different oxidation times, and the thickness of Al-depleted region are...

  8. Working group report: Beyond the standard model

    Indian Academy of Sciences (India)

    Superstring-inspired phenomenology: this included models of low-scale quantum gravity with one or more extra dimensions, noncommutative geometry and gauge theories, and string-inspired grand unification. Models of supersymmetry-breaking: this included supersymmetry-breaking in minimal supergravity ...

  9. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by subdomain technique. • The magnetic scalar potential on rotor surface is modeled as trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff’s law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell’s equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  10. Towards a quality model for semantic IS standards

    NARCIS (Netherlands)

    Folmer, Erwin Johan Albert; van Soest, J.

    2012-01-01

    This research focuses on developing a quality model for semantic information system (IS) standards. A lot of semantic IS standards are available in different industries. Often these standards are developed by a dedicated organisation. While these organisations have the goal of increasing

  11. Towards a quality model for semantic IS standards

    NARCIS (Netherlands)

    Folmer, Erwin Johan Albert; van Soest, Joris

    2011-01-01

    This research focuses on developing a quality model for semantic Information System (IS) standards. A lot of semantic IS standards are available in different industries. Often these standards are developed by a dedicated organization. While these organizations have the goal of increasing

  12. The role of analytical models: Issues and frontiers

    International Nuclear Information System (INIS)

    Blair, P.

    1991-03-01

    A number of modeling attempts to analyze the implications of increasing competition in the electric power industry appeared in the early 1970s and occasionally throughout the early 1980s. Most of these analyses, however, considered only modest mechanisms to facilitate increased bulk power transactions between utility systems. More fundamental changes in market structure, such as the existence of independent power producers or wheeling transactions between customers and utility producers, were not considered. More recently in the course of the policy debate over increasing competition, a number of models have been used to analyze alternative scenarios of industry structure and regulation. In this Energy Modeling Forum (EMF) exercise, we attempted to challenge existing modeling frameworks beyond their original design capabilities. We tried to interpret alternative scenarios or other means of increasing competition in the electric power industry in terms of existing modeling frameworks, to gain perspective using such models on how the different market players would interact, and to predict how electricity prices and other indicators of industry behavior might evolve under the alternative scenarios

  13. Human performance modeling for system of systems analytics :soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations) and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  14. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
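
    The validation idea described above (measure the approximation error against the stochastic variability of the original model) can be illustrated with a simple z-score-style comparison: run an ensemble of the fine-grained stochastic model and check where the approximation's output falls within the ensemble spread. The Python sketch below uses a deliberately generic pair of stand-in models, not the vector-borne epidemic models of the paper.

        # Generic illustration of the validation idea: compare an approximation's error
        # to the stochastic variability of the reference model. Both models are stand-ins.
        import numpy as np

        rng = np.random.default_rng(1)

        def stochastic_model(n_runs=500, n_steps=100, p=0.03, pop=1000):
            """Toy stochastic reference process: binomial increments per step."""
            totals = np.zeros(n_runs)
            for i in range(n_runs):
                infected = 0
                for _ in range(n_steps):
                    infected += rng.binomial(pop - infected, p)
                totals[i] = infected
            return totals

        def approximate_model(n_steps=100, p=0.03, pop=1000):
            """Deterministic mean-field approximation of the same process."""
            infected = 0.0
            for _ in range(n_steps):
                infected += (pop - infected) * p
            return infected

        ensemble = stochastic_model()
        approx = approximate_model()
        z = (approx - ensemble.mean()) / ensemble.std(ddof=1)
        print(f"approximation bias in units of stochastic std: z = {z:.2f}")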

  15. Standardized training in nurse model travel clinics.

    Science.gov (United States)

    Sofarelli, Theresa A; Ricks, Jane H; Anand, Rahul; Hale, Devon C

    2011-01-01

    International travel plays a significant role in the emergence and redistribution of major human diseases. The importance of travel medicine clinics for preventing morbidity and mortality has been increasingly appreciated, although few studies have thus far examined the management and staff training strategies that result in successful travel-clinic operations. Here, we describe an example of travel-clinic operation and management coordinated through the University of Utah School of Medicine, Division of Infectious Diseases. This program, which involves eight separate clinics distributed statewide, functions both to provide patient consultation and care services and to provide medical provider training and continuing medical education (CME). Initial training, the use of standardized forms and protocols, routine chart reviews and monthly continuing education meetings are the distinguishing attributes of this program. An Infectious Disease team consisting of one medical doctor (MD) and a physician assistant (PA) act as consultants to travel nurses who comprise the majority of clinic staff. Eight clinics distributed throughout the state of Utah serve approximately 6,000 travelers a year. Pre-travel medical services are provided by 11 nurses, including 10 registered nurses (RNs) and 1 licensed practical nurse (LPN). This trained nursing staff receives continuing travel medical education and participates in the training of new providers. All nurses have completed a full training program and 7 of the 11 (64%) of clinic nursing staff serve more than 10 patients a week. Quality assurance measures show that approximately 0.5% of charts reviewed contain a vaccine or prescription error that requires patient notification for correction. Using an initial training program, standardized patient intake forms, vaccine and prescription protocols, preprinted prescriptions, and regular CME, highly trained nurses at travel clinics are able to provide standardized pre-travel care to

  16. A comparison between analytic and numerical methods for modelling automotive dissipative silencers with mean flow

    Science.gov (United States)

    Kirby, R.

    2009-08-01

    Identifying an appropriate method for modelling automotive dissipative silencers normally requires one to choose between analytic and numerical methods. It is common in the literature to justify the choice of an analytic method based on the assumption that equivalent numerical techniques are more computationally expensive. The validity of this assumption is investigated here, and the relative speed and accuracy of two analytic methods are compared to two numerical methods for a uniform dissipative silencer that contains a bulk reacting porous material separated from a mean gas flow by a perforated pipe. The numerical methods are developed here with a view to speeding up transmission loss computation, and are based on a mode matching scheme and a hybrid finite element method. The results presented demonstrate excellent agreement between the analytic and numerical models provided a sufficient number of propagating acoustic modes are retained. However, the numerical mode matching method is shown to be the fastest method, significantly outperforming an equivalent analytic technique. Moreover, the hybrid finite element method is demonstrated to be as fast as the analytic technique. Accordingly, both numerical techniques deliver fast and accurate predictions and are capable of outperforming equivalent analytic methods for automotive dissipative silencers.

  17. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    Science.gov (United States)

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has always been a cause of concern for a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation were the first, and remain a convenient, way of modeling air pollutant dispersion, as it is easy to handle the dispersion parameters and the related physics in them. A mathematical model describing the crosswind-integrated concentration is presented. The analytical solution to the resulting advection-diffusion equation is limited to constant and simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, a method of eigen-function expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system, in general, can be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix becomes non-commutative (Martin et al., 1967). An approach based on Taylor's series expansion is introduced to find the numerical solution of the first order system. The method is applied to various profiles of wind speed and eddy diffusivities. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated with the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three dimensional concentrations by considering the Gaussian distribution in crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.

  18. An Analytic Approach to Developing Transport Threshold Models of Neoclassical Tearing Modes in Tokamaks

    International Nuclear Information System (INIS)

    Mikhailovskii, A.B.; Shirokov, M.S.; Konovalov, S.V.; Tsypin, V.S.

    2005-01-01

    Transport threshold models of neoclassical tearing modes in tokamaks are investigated analytically. An analysis is made of the competition between strong transverse heat transport, on the one hand, and longitudinal heat transport, longitudinal heat convection, longitudinal inertial transport, and rotational transport, on the other hand, which leads to the establishment of the perturbed temperature profile in magnetic islands. It is shown that, in all these cases, the temperature profile can be found analytically by using rigorous solutions to the heat conduction equation in the near and far regions of a chain of magnetic islands and then by matching these solutions. Analytic expressions for the temperature profile are used to calculate the contribution of the bootstrap current to the generalized Rutherford equation for the island width evolution with the aim of constructing particular transport threshold models of neoclassical tearing modes. Four transport threshold models, differing in the underlying competing mechanisms, are analyzed: collisional, convective, inertial, and rotational models. The collisional model constructed analytically is shown to coincide exactly with that calculated numerically; the reason is that the analytical temperature profile turns out to be the same as the numerical profile. The results obtained can be useful in developing the next generation of general threshold models. The first steps toward such models have already been made

  19. Analytical Modeling for the Bending Resonant Frequency of Multilayered Microresonators with Variable Cross-Section

    Directory of Open Access Journals (Sweden)

    Jian Lu

    2011-08-01

    Full Text Available Multilayered microresonators commonly use sensitive coatings or piezoelectric layers for the detection of mass and gas. Most of these microresonators have a variable cross-section, which complicates the prediction of their fundamental resonant frequency (generally of the bending mode) with conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature. Both microresonators are composed of layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator’s layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application.
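
    For orientation, the first bending frequency of a clamped-free composite beam follows from Euler-Bernoulli theory once an effective bending stiffness and mass per unit length are known. The sketch below covers only the simpler case of a uniform rectangular cross-section (the paper additionally treats variable cross-sections via the Rayleigh and Macaulay methods), and the layer properties are illustrative placeholders.

        import numpy as np

        def first_bending_frequency(layers, length):
            """Estimate the first bending resonant frequency [Hz] of a clamped-free
            multilayer beam of uniform rectangular cross-section (Euler-Bernoulli).

            layers : list of (E [Pa], rho [kg/m^3], width [m], thickness [m]),
                     stacked from bottom to top.
            length : beam length [m]
            """
            # neutral-axis position weighted by extensional stiffness E*A
            z, EA, EAz = 0.0, 0.0, 0.0
            centroids, areas = [], []
            for E, rho, b, t in layers:
                zc, A = z + 0.5 * t, b * t
                EA += E * A
                EAz += E * A * zc
                centroids.append(zc)
                areas.append(A)
                z += t
            z_na = EAz / EA
            # composite bending stiffness and mass per unit length
            EI, mu = 0.0, 0.0
            for (E, rho, b, t), zc, A in zip(layers, centroids, areas):
                EI += E * (b * t**3 / 12.0 + A * (zc - z_na) ** 2)
                mu += rho * A
            lam1 = 1.8751  # first root of the clamped-free characteristic equation
            return (lam1 ** 2 / (2.0 * np.pi * length ** 2)) * np.sqrt(EI / mu)

        # hypothetical two-layer silicon + piezoelectric stack, 200 um long
        layers = [(169e9, 2330.0, 30e-6, 2.0e-6),   # Si (illustrative values)
                  (104e9, 7500.0, 30e-6, 0.5e-6)]   # PZT-like film (illustrative values)
        print(first_bending_frequency(layers, 200e-6))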

  20. An analytical model for hydraulic fracturing in shallow bedrock formations.

    Science.gov (United States)

    dos Santos, José Sérgio; Ballestero, Thomas Paul; Pitombeira, Ernesto da Silva

    2011-01-01

    A theoretical method is proposed to estimate post-fracturing fracture size and transmissivity, and as a test of the methodology, data collected from two wells were used for verification. This method can be employed before hydrofracturing in order to obtain estimates of the potential hydraulic benefits of hydraulic fracturing. Five different pumping test analysis methods were used to evaluate the well hydraulic data. The most effective methods were the Papadopulos-Cooper model (1967), which includes wellbore storage effects, and the Gringarten-Ramey model (1974), known as the single horizontal fracture model. The hydraulic parameters resulting from fitting these models to the field data revealed that as a result of hydraulic fracturing, the transmissivity increased more than 46 times in one well and increased 285 times in the other well. The model developed by dos Santos (2008), which considers horizontal radial fracture propagation from the hydraulically fractured well, was used to estimate potential fracture geometry after hydrofracturing. For the two studied wells, their fractures could have propagated to distances of almost 175 m or more and developed maximum apertures of about 2.20 mm and hydraulic apertures close to 0.30 mm. Fracturing at this site appears to have expanded and propagated existing fractures and not created new fractures. Hydraulic apertures calculated from pumping test analyses closely matched the results obtained from the hydraulic fracturing model. As a result of this model, post-fracturing geometry and resulting post-fracturing well yield can be estimated before the actual hydrofracturing. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
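
    The abstract relates fracture transmissivities obtained from pumping tests to equivalent hydraulic apertures. A common way to connect the two quantities, assumed here for illustration and not necessarily the exact relation used by dos Santos (2008), is the parallel-plate "cubic law" with water properties at roughly 20 °C:

        # Hydraulic aperture from single-fracture transmissivity via the cubic law:
        #   T_f = rho * g * b**3 / (12 * mu)   =>   b = (12 * mu * T_f / (rho * g))**(1/3)
        RHO = 998.0      # water density [kg/m^3]
        G = 9.81         # gravitational acceleration [m/s^2]
        MU = 1.0e-3      # dynamic viscosity of water [Pa.s]

        def hydraulic_aperture(T_f):
            """T_f: fracture transmissivity [m^2/s]; returns aperture b [m]."""
            return (12.0 * MU * T_f / (RHO * G)) ** (1.0 / 3.0)

        # example: a transmissivity of 2e-5 m^2/s corresponds to an aperture of ~0.3 mm
        print(hydraulic_aperture(2e-5) * 1e3, "mm")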

  1. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized to solve a relativistic average atom model for high-temperature plasmas. A semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function of the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  2. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    Fatigue lives obtained from test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and of crack closure on the fatigue crack growth is considered. A description of the crack closure model for the analytical determination of the fatigue life is included, together with the results obtained in studies of the various parameters that have an influence on the fatigue life. A very good agreement between experimental and analytical results is obtained when the crack closure model is used in the determination of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied.

  3. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    ... did not reveal any adverse behaviour; in fact, the observed traffic seemed very close to what would be expected from Poisson traffic. The Changeover/Changeback procedure in SS7, which is used to redirect traffic in case of link failure, has been analyzed. The transient behaviour during a Changeover scenario was modelled using Markovian models, and the ordinary differential equations arising from these models were solved numerically. The results obtained are very similar to those obtained using a different method in previous work by Akinpelu & Skoog (1985). Recent measurement studies of packet traffic ... is found by noting the close relationship with the expressions for the corresponding infinite queue. For the special case of a batch Poisson arrival process, this observation makes it possible to express the queue length at an arbitrary point in time in terms of the corresponding queue lengths for the infinite case.

  4. The thermal evolution of universe: standard model

    International Nuclear Information System (INIS)

    Nascimento, L.C.S. do.

    1975-08-01

    A description of the dynamical evolution of the Universe following a model based on the theory of General Relativity is made. The model admits the Cosmological Principle, the Principle of Equivalence and the Robertson-Walker metric (of which an original derivation is presented). In this model, the universe is considered as a perfect fluid, ideal and symmetric with respect to the number of particles and antiparticles. The thermodynamic relations following from these hypotheses are derived, and from them the several eras of the thermal evolution of the universe are established. Finally, the problems arising from certain specific predictions of the model are studied, and the predicted abundances of the elements according to nucleosynthesis and the actual behavior of the universe are analysed in detail. (author) [pt

  5. Toward a Standard Model of Core Collapse Supernovae

    OpenAIRE

    Mezzacappa, A.

    2000-01-01

    In this paper, we discuss the current status of core collapse supernova models and the future developments needed to achieve significant advances in understanding the supernova mechanism and supernova phenomenology, i.e., in developing a supernova standard model.

  6. Analytical model and behavioral simulation approach for a ΣΔ fractional-N synthesizer employing a sample-hold element

    DEFF Research Database (Denmark)

    Cassia, Marco; Shah, Peter Jivan; Bruun, Erik

    2003-01-01

    A previously unknown intrinsic nonlinearity of standard ΣΔ fractional-N synthesizers is identified. A general analytical model for ΣΔ fractional-N phase-locked loops (PLLs) that includes the effect of the nonlinearity is derived, and an improvement to the synthesizer topology is d...

  7. An Analytic Model Approach to the Frequency of Exoplanets

    Science.gov (United States)

    Traub, Wesley A.

    2016-10-01

    The underlying population of exoplanets around stars in the Kepler sample can be inferred by a simulation that includes binning the Kepler planets in radius and period, invoking an empirical noise model, assuming a model exoplanet distribution function, randomly assigning planets to each of the Kepler target stars, asking whether each planet's transit signal could be detected by Kepler, binning the resulting simulated detections, comparing the simulations with the observed data sample, and iterating on the model parameters until a satisfactory fit is obtained. The process is designed to simulate the Kepler observing procedure. The key assumption is that the distribution function is the product of separable functions of period and radius. Any additional suspected biases in the sample can be handled by adjusting the noise model or selective editing of the range of input planets. An advantage of this overall procedure is that it is a forward calculation designed to simulate the observed data, subject to a presumed underlying population distribution, minimizing the effect of bin-to-bin fluctuations. Another advantage is that the resulting distribution function can be extended to values of period and radius that go beyond the sample space, including, for example, application to estimating eta-sub-Earth, and also estimating the expected science yields of future direct-imaging exoplanet missions such as WFIRST-AFTA.
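
    The forward-modelling loop described above can be sketched in a few lines. Everything below is a toy stand-in: the occurrence model, transit-probability scaling, noise model and detection threshold are hypothetical placeholders, not the values or procedure actually used with the Kepler sample.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical separable occurrence model: f(P, R) = f0 * P**alpha * R**beta
        def simulate_detections(f0, alpha, beta, n_stars=10000):
            periods = rng.uniform(1.0, 400.0, n_stars)   # days, one trial planet per star
            radii = rng.uniform(0.5, 10.0, n_stars)      # Earth radii
            occurrence = np.clip(f0 * periods**alpha * radii**beta, 0.0, 1.0)
            has_planet = rng.random(n_stars) < occurrence
            # toy geometric transit probability, decreasing with orbital period
            p_transit = 0.005 * (periods / 365.25) ** (-2.0 / 3.0)
            transits = has_planet & (rng.random(n_stars) < p_transit)
            # toy noise model: detected if the transit signal-to-noise exceeds a threshold
            snr = radii**2 * np.sqrt(90.0 / periods) / 0.8
            detected = transits & (snr > 7.1)
            return periods[detected], radii[detected]

        def binned_counts(periods, radii):
            return np.histogram2d(periods, radii,
                                  bins=[np.logspace(0, np.log10(400.0), 6),
                                        np.logspace(np.log10(0.5), 1, 6)])[0]

        # iterate on (f0, alpha, beta) until the simulated binned counts match the
        # observed binned counts (a chi-square or likelihood comparison goes here)
        sim = binned_counts(*simulate_detections(f0=0.3, alpha=-0.1, beta=-0.5))
        print(sim)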

  8. SPOTROD: Semi-analytic model for transits of spotted stars

    Science.gov (United States)

    Béky, Bence

    2014-11-01

    SPOTROD is a model for planetary transits of stars with an arbitrary limb darkening law and a number of homogeneous, circular spots on their surface. It facilitates analysis of anomalies due to starspot eclipses, and is a free, open source implementation written in C with a Python API.

  9. a new analytical modeling method for photovoltaic solar cells based

    African Journals Online (AJOL)

    Zieba Falama R, Dadjé A, Djongyang N and Doka S.Y

    2016-05-01

    ... network. However, its exploitation requires the design and implementation of a production system, which in turn requires proper sizing in order to avoid losses or a lack of energy. The evaluation of the maximum power produced by a PV generator is very important for sizing a PV system. Several models have been ...

  10. Analytical results for the Sznajd model of opinion formation

    Czech Academy of Sciences Publication Activity Database

    Slanina, František; Lavička, H.

    2003-01-01

    Roč. 35, - (2003), s. 279-288 ISSN 1434-6028 R&D Projects: GA ČR GA202/01/1091 Institutional research plan: CEZ:AV0Z1010914 Keywords : agent models * sociophysics Subject RIV: BE - Theoretical Physics Impact factor: 1.457, year: 2003

  11. Analytic properties of the Ruelle ζ-function for mean field models of phase transition

    International Nuclear Information System (INIS)

    Hallerberg, Sarah; Just, Wolfram; Radons, Guenter

    2005-01-01

    We evaluate by analytical means the Ruelle ζ-function for a spin model with global coupling. The implications of the ferromagnetic phase transitions for the analytical properties of the ζ-function are discussed in detail. In the paramagnetic phase, the ζ-function develops a single branch point. In the low-temperature regime, two branch points appear, which correspond to the ferromagnetic state and the metastable state. The results are typical of any Ginzburg-Landau-type phase transition

  12. Characterisation and analytical modeling of GaN HEMT-based varactor diodes

    OpenAIRE

    Hamdoun , Abdelaziz; Roy , L.; Himdi , Mohamed; Lafond , Olivier

    2015-01-01

    Varactor diodes fabricated in 0.5 and 0.15 μm GaN HEMT (high-electron-mobility transistor) processes are modelled. The devices were characterised via DC and RF small-signal measurements up to 20 GHz, and fitted to a simple physical equivalent circuit. Approximate analytical expressions containing empirical coefficients are introduced for the voltage dependency of capacitance and series resistance. The analytical solutions agree remarkably well with the experimentally e...

  13. Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.

    Science.gov (United States)

    Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U

    2015-05-01

    The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
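
    Several of the reviewed studies use Markov state-transition (cohort) models. The sketch below shows the bare mechanics of such a model for a generic three-state disease process; the transition probabilities, utilities, cycle length and absence of discounting are illustrative assumptions, not figures from the multiple myeloma studies summarized here.

        import numpy as np

        # Minimal three-state Markov cohort model (progression-free, progressed, dead)
        # with yearly cycles; all numbers are placeholders for illustration.
        P = np.array([[0.75, 0.15, 0.10],   # from progression-free
                      [0.00, 0.80, 0.20],   # from progressed
                      [0.00, 0.00, 1.00]])  # dead is absorbing
        utilities = np.array([0.80, 0.60, 0.0])   # QALY weight per state-year

        def run_cohort(P, utilities, cycles=20):
            state = np.array([1.0, 0.0, 0.0])     # whole cohort starts progression-free
            life_years, qalys = 0.0, 0.0
            for _ in range(cycles):
                state = state @ P
                life_years += state[0] + state[1]
                qalys += state @ utilities
            return life_years, qalys

        print(run_cohort(P, utilities))   # expected life-years and QALYs per patient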

  14. Standard Model-like corrections to Dilatonic Dynamics

    DEFF Research Database (Denmark)

    Antipin, Oleg; Krog, Jens; Mølgaard, Esben

    2013-01-01

    We examine the effects of standard model-like interactions on the near-conformal dynamics of a theory featuring a dilatonic state identified with the standard model-like Higgs. As template for near-conformal dynamics we use a gauge theory with fermionic matter and elementary mesons possessing...... conformal dynamics could accommodate the observed Higgs-like properties....

  15. Can An Amended Standard Model Account For Cold Dark Matter?

    International Nuclear Information System (INIS)

    Goldhaber, Maurice

    2004-01-01

    It is generally believed that one has to invoke theories beyond the Standard Model to account for cold dark matter particles. However, there may be undiscovered universal interactions that, if added to the Standard Model, would lead to new members of the three generations of elementary fermions that might be candidates for cold dark matter particles

  16. The Standard Model from LHC to future colliders

    Energy Technology Data Exchange (ETDEWEB)

    Forte, S., E-mail: forte@mi.infn.it [Dipartimento di Fisica, Università di Milano, Via Celoria 16, 20133, Milan (Italy); INFN, Sezione di Milano, Via Celoria 16, 20133, Milan (Italy); Nisati, A. [INFN, Sezione di Roma, Piazzale Aldo Moro 2, 00185, Rome (Italy); Passarino, G. [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, 10125, Turin (Italy); INFN, Sezione di Torino, Via P. Giuria 1, 10125, Turin (Italy); Tenchini, R. [INFN, Sezione di Pisa, Largo B. Pontecorvo 3, 56127, Pisa (Italy); Calame, C. M. Carloni [Dipartimento di Fisica, Università di Pavia, via Bassi 6, 27100, Pavia (Italy); Chiesa, M. [INFN, Sezione di Pavia, via Bassi 6, 27100, Pavia (Italy); Cobal, M. [Dipartimento di Chimica, Fisica e Ambiente, Università di Udine, Via delle Scienze, 206, 33100, Udine (Italy); INFN, Gruppo Collegato di Udine, Via delle Scienze, 206, 33100, Udine (Italy); Corcella, G. [INFN, Laboratori Nazionali di Frascati, Via E. Fermi 40, 00044, Frascati (Italy); Degrassi, G. [Dipartimento di Matematica e Fisica, Università’ Roma Tre, Via della Vasca Navale 84, 00146, Rome (Italy); INFN, Sezione di Roma Tre, Via della Vasca Navale 84, 00146, Rome (Italy); Ferrera, G. [Dipartimento di Fisica, Università di Milano, Via Celoria 16, 20133, Milan (Italy); INFN, Sezione di Milano, Via Celoria 16, 20133, Milan (Italy); Magnea, L. [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, 10125, Turin (Italy); INFN, Sezione di Torino, Via P. Giuria 1, 10125, Turin (Italy); Maltoni, F. [Centre for Cosmology, Particle Physics and Phenomenology (CP3), Université Catholique de Louvain, 1348, Louvain-la-Neuve (Belgium); Montagna, G. [Dipartimento di Fisica, Università di Pavia, via Bassi 6, 27100, Pavia (Italy); INFN, Sezione di Pavia, via Bassi 6, 27100, Pavia (Italy); Nason, P. [INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126, Milan (Italy); Nicrosini, O. [INFN, Sezione di Pavia, via Bassi 6, 27100, Pavia (Italy); Oleari, C. [Dipartimento di Fisica, Università di Milano-Bicocca, Piazza della Scienza 3, 20126, Milan (Italy); INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126, Milan (Italy); Piccinini, F. [INFN, Sezione di Pavia, via Bassi 6, 27100, Pavia (Italy); Riva, F. [Institut de Théorie des Phénoménes Physiques, École Polytechnique Fédérale de Lausanne, 1015, Lausanne (Switzerland); Vicini, A. [Dipartimento di Fisica, Università di Milano, Via Celoria 16, 20133, Milan (Italy); INFN, Sezione di Milano, Via Celoria 16, 20133, Milan (Italy)

    2015-11-25

    This review summarizes the results of the activities which took place in 2014 within the Standard Model Working Group of the “What Next” Workshop organized by INFN, Italy. We present a framework, general questions, and some indications of possible answers to the main issues for Standard Model physics in the LHC era and in view of possible future accelerators.

  17. Neutrinos and Physics Beyond Electroweak and Cosmological Standard Models

    CERN Document Server

    Kirilova, Daniela

    2014-01-01

    This is a short review of neutrino characteristics that are established and of those proposed by physics beyond the Standard Electroweak Model and beyond the Standard Cosmological Model. In particular, the cosmological effects of, and cosmological constraints on, extra neutrino families, neutrino mass differences and mixing, lepton asymmetry in the neutrino sector, neutrino masses, and a light sterile neutrino are discussed.

  18. Fuel assembly bow: analytical modeling and resulting design improvements

    International Nuclear Information System (INIS)

    Stabel, J.; Huebsch, H.P.

    1995-01-01

    The bowing of fuel assemblies may result in contact between neighbouring fuel assemblies and, in combination with vibration, in wear or even perforation at the corners of the spacer grids of neighbouring assemblies. In Germany, such events have allowed reinsertion of a few fuel assemblies only after spacer repair. In order to identify the parameters most responsible for the observed bowing of fuel assemblies, a new computer model was developed which takes into account the highly nonlinear behaviour of the interaction between fuel rods and spacers. As a result of the studies performed with this model, design improvements, such as a more rigid connection between guide thimbles and spacer grids, could be defined. First experience with this improved design shows significantly better fuel behaviour. (author). 5 figs., 1 tabs

  19. An Analytic Model Of Thermal Drift In Piezoresistive Microcantilever Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Loui, A; Elhadj, S; Sirbuly, D J; McCall, S K; Hart, B R; Ratto, T V

    2009-08-26

    A closed form semi-empirical model has been developed to understand the physical origins of thermal drift in piezoresistive microcantilever sensors. The two-component model describes both the effects of temperature-related bending and heat dissipation on the piezoresistance. The temperature-related bending component is based on the Euler-Bernoulli theory of elastic deformation applied to a multilayer cantilever. The heat dissipation component is based on energy conservation per unit time for a piezoresistive cantilever in a Wheatstone bridge circuit, representing a balance between electrical power input and heat dissipation into the environment. Conduction and convection are found to be the primary mechanisms of heat transfer, and the dependence of these effects on the thermal conductivity, temperature, and flow rate of the gaseous environment is described. The thermal boundary layer value which defines the length scale of the heat dissipation phenomenon is treated as an empirical fitting parameter. Using the model, it is found that the cantilever heat dissipation is unaffected by the presence of a thin polymer coating, therefore the residual thermal drift in the differential response of a coated and uncoated cantilever is the result of non-identical temperature-related bending. Differential response data shows that residual drift is eliminated under isothermal laboratory conditions but not the unregulated and variable conditions that exist in the outdoor environment (i.e., the field). The two-component model is then validated by simulating the thermal drifts of an uncoated and a coated piezoresistive cantilever under field conditions over a 24 hour period using only meteorological data as input.
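
    The heat-dissipation component described above amounts to a steady-state power balance between the electrical power dissipated in the piezoresistor and the heat lost to the surrounding gas. The sketch below shows that balance in its simplest lumped form; the bridge voltage, resistance, heat-transfer coefficient, area and temperature coefficient are illustrative placeholders, and the temperature-related bending component of the model is not reproduced here.

        # Steady-state self-heating of a piezoresistive cantilever leg in a
        # Wheatstone bridge: electrical power in = heat dissipated to the gas.
        V_BRIDGE = 1.0        # bridge excitation voltage [V]
        R_LEG = 2.0e3         # piezoresistor resistance [ohm]
        H_EFF = 250.0         # effective heat-transfer coefficient [W/(m^2 K)]
        AREA = 1.0e-7         # cantilever surface area [m^2]
        ALPHA_R = 2.0e-3      # temperature coefficient of resistance [1/K]

        def self_heating_drift(t_ambient):
            """Return (cantilever temperature [K], fractional resistance drift)."""
            power = (0.5 * V_BRIDGE) ** 2 / R_LEG   # each bridge leg sees about V/2
            delta_t = power / (H_EFF * AREA)        # temperature rise above the gas
            return t_ambient + delta_t, ALPHA_R * delta_t

        print(self_heating_drift(t_ambient=298.0))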

  20. An Analytic Model for DoD Divestments

    Science.gov (United States)

    2015-04-30

    double counting, especially if more than one or a complex intervention is being assessed. Based on models of social investment, social entrepreneurship ... Organizations invest and divest resources to prepare for the future and respond to events or conditions in the relevant social, political ... (hobby, speculation) have various tendencies to “stick to the status quo” and not divest. ... Leveraging Other Fields: Social Return on Investment

  1. Analytical model describes ion conduction in fuel cell membranes

    Science.gov (United States)

    Herbst, Daniel; Tse, Steve; Witten, Thomas

    2014-03-01

    Many fuel cell designs employ polyelectrolyte membranes, but little is known about how to tune the parameters (water level, morphology, etc.) to maximize ion conductivity. We came up with a simple model based on a random, discrete water distribution and ion confinement due to neighboring polymer. The results quantitatively agree with molecular dynamics (MD) simulations and explain experimental observations. We find that when the ratio of water volume to polymer volume, Vw/Vp, is small, the predicted ion self-diffusion coefficient scales roughly as Dw(T)·sqrt(Vw/Vp)·exp(−const·Vp/Vw), where Dw(T) is the limiting value in pure water at temperature T. At high water levels the model also agrees with MD simulation, plateauing to Dw(T). The model predicts a maximum conductivity at a water level higher than is typically used, and that it would be beneficial to increase water retention even at the expense of lower ion concentration. Also, membranes would conduct better if they phase-separated into water-rich and polymer-rich regions. US ARMY MURI #W911NF-10-1-0520.
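
    The low-hydration scaling quoted above is easy to evaluate; the sketch below does so with a hypothetical order-unity prefactor c in the exponent (the abstract does not give its numerical value) and a pure-water diffusion coefficient typical of room temperature.

        import numpy as np

        def ion_diffusion(d_w, vw_over_vp, c=1.0):
            """Low-hydration scaling of the ion self-diffusion coefficient:
            D ~ D_w(T) * sqrt(Vw/Vp) * exp(-c * Vp/Vw).  The prefactor c is a
            hypothetical placeholder; d_w is the pure-water limit [m^2/s]."""
            return d_w * np.sqrt(vw_over_vp) * np.exp(-c / vw_over_vp)

        # illustrative values; the scaling applies at low water content, and at
        # high hydration the abstract reports a plateau at the pure-water value
        for ratio in (0.2, 0.5, 1.0, 2.0):
            print(ratio, ion_diffusion(d_w=2.3e-9, vw_over_vp=ratio))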

  2. Prospects of experimentally reachable beyond Standard Model ...

    Indian Academy of Sciences (India)

    2016-01-06

    ... Dirac mass M_H = ±M + μ_S/2. As μ_S does not play much of a role in any other prediction, we assume that it fits the neutrino oscillation data, and one can determine it by inverting the inverse see-saw formula and using the experimental results for neutrino masses and mixings. The model achieves precision gauge ...

  3. Standardization of A Physiologic Hypoparathyroidism Animal Model.

    Science.gov (United States)

    Jung, Soo Yeon; Kim, Ha Yeong; Park, Hae Sang; Yin, Xiang Yun; Chung, Sung Min; Kim, Han Su

    2016-01-01

    Ideal hypoparathyroidism animal models are a prerequisite to developing new treatment modalities for this disorder. The purpose of this study was to evaluate the feasibility of a model whereby rats were parathyroidectomized (PTX) using a fluorescent-identification method and the ideal calcium content of the diet was determined. Thirty male rats were divided into surgical sham (SHAM, n = 5) and PTX plus 0, 0.5, and 2% calcium diet groups (PTX-FC (n = 5), PTX-NC (n = 10), and PTX-HC (n = 10), respectively). Serum parathyroid hormone levels decreased to non-detectable levels in all PTX groups. All animals in the PTX-FC group died within 4 days after the operation. All animals survived when supplied calcium in the diet. However, serum calcium levels were higher in the PTX-HC than the SHAM group. The PTX-NC group demonstrated the most representative modeling of primary hypoparathyroidism. Serum calcium levels decreased and phosphorus levels increased, and bone volume was increased. All animals survived without further treatment and did not show nephrotoxicity including calcium deposits. These findings demonstrate that PTX animal models produced by using the fluorescent-identification method, and fed a 0.5% calcium diet, are appropriate for hypoparathyroidism treatment studies.

  4. Standardization of A Physiologic Hypoparathyroidism Animal Model.

    Directory of Open Access Journals (Sweden)

    Soo Yeon Jung

    Full Text Available Ideal hypoparathyroidism animal models are a prerequisite to developing new treatment modalities for this disorder. The purpose of this study was to evaluate the feasibility of a model whereby rats were parathyroidectomized (PTX) using a fluorescent-identification method and the ideal calcium content of the diet was determined. Thirty male rats were divided into surgical sham (SHAM, n = 5) and PTX plus 0, 0.5, and 2% calcium diet groups (PTX-FC (n = 5), PTX-NC (n = 10), and PTX-HC (n = 10), respectively). Serum parathyroid hormone levels decreased to non-detectable levels in all PTX groups. All animals in the PTX-FC group died within 4 days after the operation. All animals survived when supplied calcium in the diet. However, serum calcium levels were higher in the PTX-HC than the SHAM group. The PTX-NC group demonstrated the most representative modeling of primary hypoparathyroidism. Serum calcium levels decreased and phosphorus levels increased, and bone volume was increased. All animals survived without further treatment and did not show nephrotoxicity including calcium deposits. These findings demonstrate that PTX animal models produced by using the fluorescent-identification method, and fed a 0.5% calcium diet, are appropriate for hypoparathyroidism treatment studies.

  5. Electroweak symmetry breaking beyond the Standard Model

    Indian Academy of Sciences (India)

    In this paper, two key issues related to electroweak symmetry breaking are addressed. First, how fine-tuned are the different models that trigger this phenomenon? Second, even if a light Higgs boson exists, does it necessarily have to be elementary? After a brief introduction, the fine-tuning aspects of the MSSM, NMSSM, ...

  6. A semi analytical model for short range dispersion from ground sources

    Science.gov (United States)

    Gavze, Ehud; Reichman, Rivka; Fattal, Eyal

    2014-05-01

    A semi-analytical model for the dispersion of passive scalars from ground sources up to distances of a few hundred meters is presented. Analytical or semi-analytical models are useful as they are simple to use and require only a short computation time compared, for example, to Lagrangian stochastic models. As such, they are valuable in cases where repeated computation of the concentration field is required, as for example in risk assessments and in the inverse problem of source determination. Among the analytical models, the most widely used are the Gaussian models, which assume both a uniform wind field and homogeneous turbulence. These assumptions are not valid when a ground source is involved, since both the wind and the turbulence depend on height. The model proposed here is free of these two assumptions. The formulation of the vertical dispersion is based on approximating the vertical profiles of the wind and of the vertical diffusion coefficient as power laws. One advantage of this approach is that it allows for non-Gaussian vertical profiles of the concentration, which better fit the experimental data. For the lateral dispersion, the model still assumes a Gaussian form. A system of equations was developed to compute the cloud width, taking into account the non-homogeneity of the wind and the turbulence in the vertical direction. The model was tested against two field experiments. Comparison with a Gaussian model showed that it performed much better in predicting both the integrated crosswind ground concentration and the cloud width.

  7. Big bang nucleosynthesis - The standard model and alternatives

    Science.gov (United States)

    Schramm, David N.

    1991-01-01

    The standard homogeneous-isotropic calculation of the big bang cosmological model is reviewed, and alternate models are discussed. The standard model is shown to agree with the light element abundances for He-4, H-2, He-3, and Li-7 that are available. Improved observational data from recent LEP collider and SLC results are discussed. The data agree with the standard model in terms of the number of neutrinos, and provide improved information regarding neutron lifetimes. Alternate models are reviewed which describe different scenarios for decaying matter or quark-hadron induced inhomogeneities. The baryonic density relative to the critical density in the alternate models is similar to that of the standard model when they are made to fit the abundances. This reinforces the conclusion that the baryonic density relative to critical density is about 0.06, and also reinforces the need for both nonbaryonic dark matter and dark baryonic matter.

  8. Analytical solution for two-phase flow in a wellbore using the drift-flux model

    Energy Technology Data Exchange (ETDEWEB)

    Pan, L.; Webb, S.W.; Oldenburg, C.M.

    2011-11-01

    This paper presents analytical solutions for steady-state, compressible two-phase flow through a wellbore under isothermal conditions using the drift flux conceptual model. Although only applicable to highly idealized systems, the analytical solutions are useful for verifying numerical simulation capabilities that can handle much more complicated systems, and can be used in their own right for gaining insight about two-phase flow processes in wells. The analytical solutions are obtained by solving the mixture momentum equation of steady-state, two-phase flow with an assumption that the two phases are immiscible. These analytical solutions describe the steady-state behavior of two-phase flow in the wellbore, including profiles of phase saturation, phase velocities, and pressure gradients, as affected by the total mass flow rate, phase mass fraction, and drift velocity (i.e., the slip between two phases). Close matching between the analytical solutions and numerical solutions for a hypothetical CO{sub 2} leakage problem as well as to field data from a CO{sub 2} production well indicates that the analytical solution is capable of capturing the major features of steady-state two-phase flow through an open wellbore, and that the related assumptions and simplifications are justified for many actual systems. In addition, we demonstrate the utility of the analytical solution to evaluate how the bottomhole pressure in a well in which CO{sub 2} is leaking upward responds to the mass flow rate of CO{sub 2}-water mixture.
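
    The drift-flux closure underlying such solutions relates the gas velocity to the total volumetric flux of the mixture through a distribution parameter and a drift velocity. The sketch below shows only that closure, with illustrative parameter values; it is not the full analytical wellbore solution presented in the paper.

        # Drift-flux closure: gas velocity u_g = C0 * j + u_d, where j is the total
        # volumetric flux, C0 the distribution (profile) parameter and u_d the
        # drift velocity.  Parameter values are illustrative placeholders.
        def void_fraction(j_g, j_l, c0=1.15, u_d=0.25):
            """Gas void fraction from superficial gas/liquid velocities [m/s]."""
            j = j_g + j_l                      # total volumetric flux of the mixture
            return j_g / (c0 * j + u_d)

        # example: 0.5 m/s superficial gas velocity, 1.0 m/s superficial liquid velocity
        print(void_fraction(0.5, 1.0))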

  9. Fitting three-level meta-analytic models in R: A step-by-step tutorial

    Directory of Open Access Journals (Sweden)

    Assink, Mark

    2016-10-01

    Full Text Available Applying a multilevel approach to meta-analysis is a strong method for dealing with dependency of effect sizes. However, this method is relatively unknown among researchers and, to date, has not been widely used in meta-analytic research. Therefore, the purpose of this tutorial was to show how a three-level random effects model can be applied to meta-analytic models in R using the rma.mv function of the metafor package. This application is illustrated by taking the reader through a step-by-step guide to the multilevel analyses comprising the steps of (1) organizing a data file; (2) setting up the R environment; (3) calculating an overall effect; (4) examining heterogeneity of within-study variance and between-study variance; (5) performing categorical and continuous moderator analyses; and (6) examining a multiple moderator model. By example, the authors demonstrate how the multilevel approach can be applied to meta-analytically examining the association between mental health disorders of juveniles and juvenile offender recidivism. In our opinion, the rma.mv function of the metafor package provides an easy and flexible way of applying a multi-level structure to meta-analytic models in R. Further, the multilevel meta-analytic models can be easily extended so that the potential moderating influence of variables can be examined.
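
    In the usual notation (assumed here, not quoted from the tutorial), the three-level random-effects model decomposes each effect size into an overall effect plus three error components:

        d_{ij} = \mu + u_{(3)j} + u_{(2)ij} + e_{ij},
        \qquad u_{(3)j} \sim N(0, \sigma^2_{(3)}), \quad
        u_{(2)ij} \sim N(0, \sigma^2_{(2)}), \quad
        e_{ij} \sim N(0, v_{ij}),

    where d_{ij} is the i-th effect size from study j, v_{ij} its known sampling variance (level 1), \sigma^2_{(2)} the within-study variance (level 2), and \sigma^2_{(3)} the between-study variance (level 3).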

  10. Analytical modelling for predicting the sound field of planar acoustic metasurface

    Science.gov (United States)

    Zhou, Jie; Zhang, Xin; Fang, Yi

    2018-01-01

    An analytical model is built to predict the acoustic fields of acoustic metasurfaces. The acoustic fields are investigated for a Gaussian sound beam incident on the acoustic metasurfaces. The Gaussian sound beam is decomposed into a set of discrete elementary plane waves. The diffraction caused by the acoustic metasurfaces can be obtained using this analytical model, which is validated with the numerical simulations for the different incident angles of the Gaussian sound beam. This model overcomes the limitation of the method based on the generalised Snell's law which can only predict the direction of a specific diffracted order. Actually, this analytical model can be also used to predict the sound fields of acoustic metasurfaces under any incident sound if its Fourier transforms exist. This conclusion is demonstrated by studying the sound field for a point sound source incident on the acoustic metasurface. The acoustic admittances of acoustic metasurfaces are required in the calculation of the analytical model. Therefore, a numerical method for obtaining the effective acoustic admittances is proposed for the structurally complex metasurfaces without the analytical expressions of material properties, such as equivalent density and sound speed.

  11. Semi-analytical modelling of positive corona discharge in air

    Science.gov (United States)

    Pontiga, Francisco; Yanallah, Khelifa; Chen, Junhong

    2013-09-01

    Semi-analytical approximate solutions for the spatial distribution of the electric field and the electron and ion densities have been obtained by solving Poisson's equation and the continuity equations for the charged species along the Laplacian field lines. The need to iterate for the correct value of space charge on the corona electrode has been eliminated by using the corona current distribution over the grounded plane derived by Deutsch, which predicts a cos^m(θ) law similar to Warburg's law. Based on the results of the approximated model, a parametric study of the influence of gas pressure, the corona wire radius, and the inter-electrode wire-plate separation has been carried out. Also, the approximate solution for the electron number density has been combined with a simplified plasma chemistry model in order to compute the ozone density generated by the corona discharge in the presence of a gas flow. This work was supported by the Consejeria de Innovacion, Ciencia y Empresa (Junta de Andalucia) and by the Ministerio de Ciencia e Innovacion, Spain, within the European Regional Development Fund contracts FQM-4983 and FIS2011-25161.

  12. Analytical modeling of structure-soil systems for lunar bases

    Science.gov (United States)

    Macari-Pasqualino, Jose Emir

    1989-01-01

    The study of the behavior of granular materials in a reduced gravity environment and under low effective stresses became a subject of great interest in the mid 1960's when NASA's Surveyor missions to the Moon began the first extraterrestrial investigation and it was found that Lunar soils exhibited properties quite unlike those on Earth. This subject gained interest during the years of the Apollo missions and more recently due to NASA's plans for future exploration and colonization of Moon and Mars. It has since been clear that a good understanding of the mechanical properties of granular materials under reduced gravity and at low effective stress levels is of paramount importance for the design and construction of surface and buried structures on these bodies. In order to achieve such an understanding it is desirable to develop a set of constitutive equations that describes the response of such materials as they are subjected to tractions and displacements. This presentation examines issues associated with conducting experiments on highly nonlinear granular materials under high and low effective stresses. The friction and dilatancy properties which affect the behavior of granular soils with low cohesion values are assessed. In order to simulate the highly nonlinear strength and stress-strain behavior of soils at low as well as high effective stresses, a versatile isotropic, pressure sensitive, third stress invariant dependent, cone-cap elasto-plastic constitutive model was proposed. The integration of the constitutive relations is performed via a fully implicit Backward Euler technique known as the Closest Point Projection Method. The model was implemented into a finite element code in order to study nonlinear boundary value problems associated with homogeneous as well as nonhomogeneous deformations at low as well as high effective stresses. The effect of gravity (self-weight) on the stress-strain-strength response of these materials is evaluated. The calibration

  13. Role of sediment-trace element chemistry in water-quality monitoring and the need for standard analytical methods

    Science.gov (United States)

    Horowitz, Arthur J.

    1991-01-01

    Multiple linear regression models calculated from readily obtainable chemical and physical parameters can explain a high percentage (70% or greater) of observed sediment trace-element variance for Cu, Zn, Pb, Cr, Ni, Co, As, Sb, Se, and Hg. Almost all the factors used in the various models fall into the category of operational definitions (e.g., grain size, surface area, and geochemical substrates such as amorphous iron and manganese oxides). Thus, the concentrations and distributions used in the various models are operationally defined, and are subject to substantial change depending on the method used to determine them. Without standardized procedures, data from different sources are not comparable, and the utility and applicability of the various models would be questionable.
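
    The kind of multiple linear regression model described here is straightforward to set up once the operationally defined predictors have been measured. The sketch below fits such a model by ordinary least squares on synthetic placeholder data; the predictors, coefficients and noise level are invented for illustration and bear no relation to the datasets behind the reported explained variance.

        import numpy as np

        # Ordinary least-squares fit of sediment trace-element concentration against
        # operationally defined predictors (grain size, surface area, Fe/Mn oxides).
        rng = np.random.default_rng(1)
        n = 50
        grain_size   = rng.uniform(1.0, 100.0, n)    # e.g. % fraction < 63 um
        surface_area = rng.uniform(0.5, 20.0, n)     # m^2/g
        fe_oxides    = rng.uniform(0.1, 5.0, n)      # %
        mn_oxides    = rng.uniform(0.01, 0.5, n)     # %
        cu = (5.0 + 0.2 * grain_size + 1.5 * surface_area
              + 4.0 * fe_oxides + 20.0 * mn_oxides + rng.normal(0.0, 2.0, n))

        X = np.column_stack([np.ones(n), grain_size, surface_area, fe_oxides, mn_oxides])
        coef, *_ = np.linalg.lstsq(X, cu, rcond=None)
        pred = X @ coef
        r2 = 1.0 - np.sum((cu - pred) ** 2) / np.sum((cu - cu.mean()) ** 2)
        print("coefficients:", coef)
        print("explained variance R^2:", r2)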

  14. An Assessment Model of National Grants of University Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Xia Yang

    2016-01-01

    Full Text Available How to assess the various kinds of grants scientifically, effectively and regularly is an important topic for funding administrators to study. Based on the basic conditions of the national grants, an assessment model is established on the basis of the fuzzy analytic hierarchy process. Finally, an example is given to illustrate the scientific soundness and operability of this model.
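
    At the core of any (fuzzy) AHP assessment lies the derivation of priority weights from a pairwise-comparison matrix. The sketch below shows that crisp-AHP core, the principal-eigenvector weights together with Saaty's consistency check; the criteria and the comparison values are made-up examples, and the fuzzification step used in the paper's model is not reproduced.

        import numpy as np

        # Hypothetical 3x3 pairwise-comparison matrix for three assessment criteria
        # (e.g. academic record, financial need, conduct) on Saaty's 1-9 scale.
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                 # normalised priority vector

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
        print("weights:", w, "consistency ratio:", ci / ri)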

  15. A Two-Dimensional Analytic Thermal Model for a High-Speed PMSM Magnet

    CSIR Research Space (South Africa)

    Grobler, AJ

    2015-11-01

    Full Text Available The temperature-dependent properties of permanent magnets necessitate high-detail thermal models. This paper presents a 2-D analytical model for a HS PMSM magnet. The diffusion equation is solved where three of the PM boundaries experience convection heat flow...

  16. Improved Analytical Model of a Permanent-Magnet Brushless DC Motor

    NARCIS (Netherlands)

    Kumar, P.; Bauer, P.

    2008-01-01

    In this paper, we develop a comprehensive model of a permanent-magnet brushless DC (BLDC) motor. An analytical model for determining instantaneous air-gap field density is developed. This instantaneous field distribution can be further used to determine the cogging torque, induced back electromotive

  17. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    Science.gov (United States)

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  18. An analytical model for the performance of geographical multi-hop broadcast

    NARCIS (Netherlands)

    Klein Wolterink, W.; Heijenk, G.; Berg, J.L. van den

    2012-01-01

    In this paper we present an analytical model accurately describing the behaviour of a multi-hop broadcast protocol. Our model covers the scenario in which a message is forwarded over a straight road and inter-node distances are distributed exponentially. Intermediate forwarders draw a small random

  19. Analytical model for double split ring resonators with arbitrary ring width

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Jensen, Thomas; Krozer, Viktor

    2008-01-01

    For the first time, the analytical model for a double split ring resonator with unequal width rings is developed. The proposed models for the resonators with equal and unequal widths are based on an impedance matrix representation and provide the prediction of performance in a wide frequency range...

  20. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    Science.gov (United States)

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  1. An improved analytical model for carrier multiplication near breakdown in diodes

    NARCIS (Netherlands)

    Hueting, Raymond Josephus Engelbart; Heringa, Anco; Boksteen, B.K.; Dutta, Satadal; Ferrara, A.; Agarwal, Vishal Vishal; Annema, Anne J.

    2017-01-01

    The charge carrier contributions to impact ionization and avalanche multiplication are analyzed in detail. A closed-form analytical model is derived for the ionization current before the onset of breakdown induced by both injection current components. This model shows that the ratio of both

  2. A structurally based analytic model of growth and biomass dynamics in single species stands of conifers

    Science.gov (United States)

    Robin J. Tausch

    2015-01-01

    A theoretically based analytic model of plant growth in single-species conifer communities, based on the species fully occupying a site and fully using the site resources, is introduced. The model derivations result in a single equation that simultaneously describes changes both over different site conditions (or resources available) and over time for each variable for each...

  3. Analytic model utilizing the complex ABCD method for range dependency of a monostatic coherent lidar

    DEFF Research Database (Denmark)

    Olesen, Anders Sig; Pedersen, Anders Tegtmeier; Hanson, Steen Grüner

    2014-01-01

    In this work, we present an analytic model for analyzing the range and frequency dependency of a monostatic coherent lidar measuring velocities of a diffuse target. The model of the signal power spectrum includes both the contribution from the optical system as well as the contribution from the t...

  4. Mapping the Complexities of Online Dialogue: An Analytical Modeling Technique

    Directory of Open Access Journals (Sweden)

    Robert Newell

    2014-03-01

    Full Text Available The e-Dialogue platform was developed in 2001 to explore the potential of using the Internet for engaging diverse groups of people and multiple perspectives in substantive dialogue on sustainability. The system is online, text-based, and serves as a transdisciplinary space for bringing together researchers, practitioners, policy-makers and community leaders. The Newell-Dale Conversation Modeling Technique (NDCMT was designed for in-depth analysis of e-Dialogue conversations and uses empirical methodology to minimize observer bias during analysis of a conversation transcript. NDCMT elucidates emergent ideas, identifies connections between ideas and themes, and provides a coherent synthesis and deeper understanding of the underlying patterns of online conversations. Continual application and improvement of NDCMT can lead to powerful methodologies for empirically analyzing digital discourse and better capture of innovations produced through such discourse. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs140221

  5. Analytical modelling and experimental studies of SIS tunnel solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Cheknane, Ali [Laboratoire de Valorisation des Energies Renouvelables et Environnements Agressifs, Universite Amar Telidji de Laghouat, BP 37G route de Ghardaia, Laghouat (03000) Algerie (Algeria)], E-mail: cheknanali@yahoo.com

    2009-06-07

    This paper presents an experimental and computational study of semiconductor-insulator-semiconductor (SIS) tunnel solar cells. A transparent and conductive film of thallium trioxide Tl{sub 2}O{sub 3} has been deposited by anodic oxidation onto an n-Si(1 0 0) face to realize the SIS tunnel solar cells based on Si/SiO{sub x}/Tl{sub 2}O{sub 3}. An efficiency of 8.77% has been obtained under an incident power density of 33 mW cm{sup -2} illumination condition. A PSPICE model is implemented. The calculated results show that the theoretical values are in good agreement with experimental data. Moreover, the simulation clearly demonstrates that the performance of the tested device can be significantly improved.
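
    The abstract mentions a PSPICE implementation of the cell model. As a rough, generic stand-in (not the authors' actual netlist or parameter values), the standard single-diode equivalent circuit of a solar cell can be evaluated as follows; all parameters below are illustrative.

        import numpy as np
        from scipy.optimize import brentq

        # Standard single-diode equivalent circuit of a solar cell:
        #   I = I_ph - I_0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
        I_PH, I_0 = 0.012, 1e-9          # photocurrent [A], saturation current [A]
        N_IDE, VT = 1.8, 0.02585         # ideality factor, thermal voltage [V]
        RS, RSH = 2.0, 5e3               # series / shunt resistance [ohm]

        def cell_current(v):
            """Solve the implicit single-diode equation for the current at voltage v."""
            f = lambda i: (I_PH - I_0 * (np.exp((v + i * RS) / (N_IDE * VT)) - 1.0)
                           - (v + i * RS) / RSH - i)
            return brentq(f, -0.1, I_PH + 0.1)

        volts = np.linspace(0.0, 0.70, 71)
        currents = np.array([cell_current(v) for v in volts])
        power = volts * currents
        print("max power point:", volts[power.argmax()], "V,", power.max(), "W")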

  6. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    Science.gov (United States)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at

  7. Analytical model for three-dimensional Mercedes-Benz water molecules

    OpenAIRE

    Urbic, T.

    2012-01-01

    We developed a statistical model which describes the thermal and volumetric properties of water-like molecules. A molecule is presented as a three-dimensional sphere with four hydrogen-bonding arms. Each water molecule interacts with its neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of a model developed before for a two-dimensional Mercedes-Benz model of water. We explored...

  8. Standard State Space Models of Unawareness (Extended Abstract

    Directory of Open Access Journals (Sweden)

    Peter Fritz

    2016-06-01

    Full Text Available The impossibility theorem of Dekel, Lipman and Rustichini has been thought to demonstrate that standard state-space models cannot be used to represent unawareness. We first show that Dekel, Lipman and Rustichini do not establish this claim. We then distinguish three notions of awareness, and argue that although one of them may not be adequately modeled using standard state spaces, there is no reason to think that standard state spaces cannot provide models of the other two notions. In fact, standard space models of these forms of awareness are attractively simple. They allow us to prove completeness and decidability results with ease, to carry over standard techniques from decision theory, and to add propositional quantifiers straightforwardly.

  9. Analytical modeling of equilibrium of strongly anisotropic plasma in tokamaks and stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Lepikhin, N. D.; Pustovitov, V. D., E-mail: pustovit@nfi.kiae.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2013-08-15

    Theoretical analysis of equilibrium of anisotropic plasma in tokamaks and stellarators is presented. The anisotropy is assumed strong, which includes the cases with essentially nonuniform distributions of plasma pressure on magnetic surfaces. Such distributions can arise at neutral beam injection or at ion cyclotron resonance heating. Then the known generalizations of the standard theory of plasma equilibrium that treat p{sub ‖} and p{sub ⊥} (parallel and perpendicular plasma pressures) as almost constant on magnetic surfaces are not applicable anymore. Explicit analytical prescriptions of the profiles of p{sub ‖} and p{sub ⊥} are proposed that allow modeling of the anisotropic plasma equilibrium even with large ratios of p{sub ‖}/p{sub ⊥} or p{sub ⊥}/p{sub ‖}. A method for deriving the equation for the Shafranov shift is proposed that does not require introduction of the flux coordinates and calculation of the metric tensor. It is shown that for p{sub ⊥} with nonuniformity described by a single poloidal harmonic, the equation for the Shafranov shift coincides with a known one derived earlier for almost constant p{sub ⊥} on a magnetic surface. This does not happen in the other more complex case.

  10. Measurement strategy and analytic model to determine firing pin force

    Science.gov (United States)

    Lesenciuc, Ioan; Suciu, Cornel

    2016-12-01

    As illustrated in the literature, ballistics is a branch of theoretical mechanics which studies the construction and working principles of firearms and ammunition, their effects, as well as the motions of projectiles and bullets [1]. Criminalistics identification, as part of judiciary identification, represents an activity aimed at finding common traits of different objects, objectives, phenomena and beings, but, more importantly, traits that differentiate each of them from similar ones [2-4]. In judicial ballistics, in the case of rifled firearms it is relatively simple for experts to identify the weapon used from traces left on the projectile, as the rifling of the barrel leaves imprints on the bullet which remain approximately identical even after the respective weapon is fired 100 times with the same barrel. In the case of smoothbore firearms, however, their identification becomes much more complicated. As the firing cap suffers alterations from being hit by the firing pin, determination of the force generated during impact creates the premises for determining the type of firearm used to fire the respective cartridge. The present paper proposes a simple impact model that can be used to evaluate the force generated by the firing pin during its impact with the firing cap. The present research clearly showed that each rifle, through the combination of the three investigated parameters (maximum impact force, its variation diagram, and impact time), leaves a unique trace. Application of such a method in ballistics creates the prospect of formulating clear conclusions that eliminate possible judicial errors in this field.

  11. Analytical Modeling Tool for Design of Hydrocarbon Sensitive Optical Fibers

    Directory of Open Access Journals (Sweden)

    Khalil Al Handawi

    2017-09-01

    Full Text Available Pipelines are the main transportation means for oil and gas products across large distances. Due to the severe conditions they operate in, they are regularly inspected using conventional Pipeline Inspection Gages (PIGs for corrosion damage. The motivation for researching a real-time distributed monitoring solution arose to mitigate costs and provide a proactive indication of potential failures. Fiber optic sensors with polymer claddings provide a means of detecting contact with hydrocarbons. By coating the fibers with a layer of metal similar in composition to that of the parent pipeline, corrosion of this coating may be detected when the polymer cladding underneath is exposed to the surrounding hydrocarbons contained within the pipeline. A Refractive Index (RI change occurs in the polymer cladding causing a loss in intensity of a traveling light pulse due to a reduction in the fiber’s modal capacity. Intensity losses may be detected using Optical Time Domain Reflectometry (OTDR while pinpointing the spatial location of the contact via time delay calculations of the back-scattered pulses. This work presents a theoretical model for the above sensing solution to provide a design tool for the fiber optic cable in the context of hydrocarbon sensing following corrosion of an external metal coating. Results are verified against the experimental data published in the literature.

  12. A simple analytical model for electronic conductance in a one dimensional atomic chain across a defect

    International Nuclear Information System (INIS)

    Khater, Antoine; Szczesniak, Dominik

    2011-01-01

    An analytical model is presented for the electronic conductance in a one dimensional atomic chain across an isolated defect. The model system consists of two semi infinite lead atomic chains with the defect atom making the junction between the two leads. The calculation is based on a linear combination of atomic orbitals in the tight-binding approximation, with a single atomic one s-like orbital chosen in the present case. The matching method is used to derive analytical expressions for the scattering cross sections for the reflection and transmission processes across the defect, in the Landauer-Buttiker representation. These analytical results verify the known limits for an infinite atomic chain with no defects. The model can be applied numerically for one dimensional atomic systems supported by appropriate templates. It is also of interest since it would help establish efficient procedures for ensemble averages over a field of impurity configurations in real physical systems.
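
    As a point of reference, the textbook closed-form transmission across a single on-site defect in an infinite 1D tight-binding chain (same hopping to the defect, one s-like orbital per site) is easy to evaluate. The sketch below uses that standard single-impurity result; it only illustrates the kind of expression the matching method yields and may differ in detail from the junction model analyzed in the paper.

        import numpy as np

        def transmission(E, t=1.0, delta=0.5):
            """Transmission across a single on-site defect of strength delta in a
            1D tight-binding chain with hopping t, for energies inside the band
            |E| < 2|t| (textbook single-impurity result)."""
            k = np.arccos(np.clip(E / (2.0 * t), -1.0, 1.0))   # dispersion E = 2t cos k
            v = 2.0 * t * np.sin(k)                            # group-velocity factor
            return v**2 / (v**2 + delta**2)

        energies = np.linspace(-1.95, 1.95, 9)
        print([round(transmission(E), 3) for E in energies])
        # delta = 0 recovers perfect transmission T = 1 across the whole band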

  13. An analytically resolved model of a potato's thermal processing using Heun functions

    Science.gov (United States)

    Vargas Toro, Agustín.

    2014-05-01

    A potato's thermal processing model is solved analytically. The model is formulated using the equation of heat diffusion in the case of a spherical potato processed in a furnace, and assuming that the potato's thermal conductivity is radially modulated. The model is solved using the method of the Laplace transform, applying Bromwich Integral and Residue Theorem. The temperatures' profile in the potato is presented as an infinite series of Heun functions. All computations are performed with computer algebra software, specifically Maple. Using the numerical values of the thermal parameters of the potato and geometric and thermal parameters of the processing furnace, the time evolution of the temperatures in different regions inside the potato are presented analytically and graphically. The duration of thermal processing in order to achieve a specified effect on the potato is computed. It is expected that the obtained analytical results will be important in food engineering and cooking engineering.
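
    The abstract does not state the governing equations explicitly. A plausible form of the problem it describes, radially symmetric heat diffusion in a sphere with a radially modulated conductivity and a convective furnace boundary, is sketched below; the specific form of k(r) and the boundary condition are assumptions, not taken from the paper.

```latex
% Hedged sketch of the governing problem the abstract describes: radially symmetric heat
% diffusion in a spherical potato of radius R with a radially modulated conductivity k(r),
% heated convectively by the furnace. The form of k(r) and the boundary condition are
% assumptions, not taken from the paper.
\begin{align}
  \rho c_p \frac{\partial T}{\partial t}
    &= \frac{1}{r^{2}}\frac{\partial}{\partial r}
       \left( r^{2}\, k(r)\, \frac{\partial T}{\partial r} \right),
    && 0 < r < R,\; t > 0, \\
  -\,k(R)\left.\frac{\partial T}{\partial r}\right|_{r=R}
    &= h\,\bigl(T(R,t) - T_{\mathrm{furnace}}\bigr),
    && T(r,0) = T_{0}.
\end{align}
```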

  14. A simple analytical model for electronic conductance in a one dimensional atomic chain across a defect

    Energy Technology Data Exchange (ETDEWEB)

    Khater, Antoine; Szczesniak, Dominik [Laboratoire de Physique de l' Etat Condense UMR 6087, Universite du Maine, 72085 Le Mans (France)

    2011-04-01

    An analytical model is presented for the electronic conductance in a one dimensional atomic chain across an isolated defect. The model system consists of two semi infinite lead atomic chains with the defect atom making the junction between the two leads. The calculation is based on a linear combination of atomic orbitals in the tight-binding approximation, with a single atomic one s-like orbital chosen in the present case. The matching method is used to derive analytical expressions for the scattering cross sections for the reflection and transmission processes across the defect, in the Landauer-Buttiker representation. These analytical results verify the known limits for an infinite atomic chain with no defects. The model can be applied numerically for one dimensional atomic systems supported by appropriate templates. It is also of interest since it would help establish efficient procedures for ensemble averages over a field of impurity configurations in real physical systems.

  15. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity

    Science.gov (United States)

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M.

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than the ones estimated by the numerical model up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  16. VISTopic: A visual analytics system for making sense of large document collections using hierarchical topic modeling

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2017-03-01

    Full Text Available Effective analysis of large text collections remains a challenging problem given the growing volume of available text data. Recently, text mining techniques have been rapidly developed for automatically extracting key information from massive text data. Topic modeling, as one of the novel techniques that extracts a thematic structure from documents, is widely used to generate text summarization and foster an overall understanding of the corpus content. Although powerful, this technique may not be directly applicable for general analytics scenarios since the topics and topic–document relationship are often presented probabilistically in models. Moreover, information that plays an important role in knowledge discovery, for example, times and authors, is hardly reflected in topic modeling for comprehensive analysis. In this paper, we address this issue by presenting a visual analytics system, VISTopic, to help users make sense of large document collections based on topic modeling. VISTopic first extracts a set of hierarchical topics using a novel hierarchical latent tree model (HLTM) (Liu et al., 2014). Specifically, a topic view accounting for the model features is designed for overall understanding and interactive exploration of the topic organization. To leverage multi-perspective information for visual analytics, VISTopic further provides an evolution view to reveal the trend of topics and a document view to show details of topical documents. Three case studies based on the dataset of the IEEE VIS conference demonstrate the effectiveness of our system in gaining insights from large document collections. Keywords: Topic-modeling, Text visualization, Visual analytics

  17. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    Science.gov (United States)

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than the ones estimated by the numerical model up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  18. Comparison of a semi-analytic and a CFD model of uranium combustion to experimental data

    International Nuclear Information System (INIS)

    Clarksean, R.

    1998-01-01

    Two numerical models were developed and compared for the analysis of uranium combustion and ignition in a furnace. Both a semi-analytical solution and a computational fluid dynamics (CFD) numerical solution were obtained. Prediction of uranium oxidation rates is important for fuel storage applications, fuel processing, and the development of spent fuel metal waste forms. The semi-analytical model was based on heat transfer correlations, a semi-analytical model of flow over a flat surface, and simple radiative heat transfer from the material surface. The CFD model numerically determined the flowfield over the object of interest, calculated the heat and mass transfer to the material of interest, and calculated the radiative heat exchange of the material with the furnace. The semi-analytical model is much less detailed than the CFD model, but yields reasonable results and assists in understanding the physical process. Short computation times allowed the analyst to study numerous scenarios. The CFD model had significantly longer run times, was found to have some physical limitations that were not easily modified, but was better able to yield details of the heat and mass transfer and flow field once code limitations were overcome

  19. Determination of detonation products equation of state from cylinder test: Analytical model and numerical analysis

    Directory of Open Access Journals (Sweden)

    Elek Predrag M.

    2015-01-01

    Full Text Available Contemporary research in the field of explosive applications implies utilization of hydrocode simulations. Validity of these simulations strongly depends on parameters used in the equation of state for high explosives considered. A new analytical model for determination of Jones-Wilkins-Lee (JWL) equation of state parameters based on the cylinder test is proposed. The model relies on analysis of the metal cylinder expansion by detonation products. Available cylinder test data for five high explosives are used for the calculation of JWL parameters. Good agreement between results of the model and the literature data is observed, justifying the suggested analytical approach. Numerical finite element model of the cylinder test is created in Abaqus in order to validate the proposed model. Using the analytical model results as the input, it was shown that numerical simulation of the cylinder test accurately reproduces experimental results for all considered high explosives. Therefore, both the analytical method for calculation of JWL equation of state parameters and numerical Abaqus model of the cylinder test are validated. [Projekat Ministartsva nauke Republike Srbije, br. III-47029]
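
    For readers unfamiliar with the JWL form whose parameters the cylinder test determines, the snippet below evaluates the standard JWL pressure-volume-energy relation. The coefficients used in the example call are of the order of published TNT values and are assumed here for illustration; they are not the parameters fitted in the paper.

```python
import math

def jwl_pressure(V, E, A, B, R1, R2, omega):
    """Standard JWL equation of state for detonation products:
        p(V, E) = A*(1 - omega/(R1*V))*exp(-R1*V)
                + B*(1 - omega/(R2*V))*exp(-R2*V)
                + omega*E/V
    where V is the relative volume v/v0 of the products and E is the internal energy per
    unit initial volume. Units must be consistent (here A, B and E are in GPa)."""
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

# Illustrative evaluation with coefficients of the order of published TNT values
# (assumed here; these are not the parameters fitted in the paper).
p = jwl_pressure(V=1.0, E=7.0, A=371.2, B=3.231, R1=4.15, R2=0.95, omega=0.30)
print(f"p(V=1, E=E0) ~ {p:.1f} GPa")
```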

  20. Physics Beyond the Standard Model: Supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Nojiri, M.M.; /KEK, Tsukuba /Tsukuba, Graduate U. Adv. Studies /Tokyo U.; Plehn, T.; /Edinburgh U.; Polesello, G.; /INFN, Pavia; Alexander, John M.; /Edinburgh U.; Allanach, B.C.; /Cambridge U.; Barr, Alan J.; /Oxford U.; Benakli, K.; /Paris U., VI-VII; Boudjema, F.; /Annecy, LAPTH; Freitas, A.; /Zurich U.; Gwenlan, C.; /University Coll. London; Jager, S.; /CERN /LPSC, Grenoble

    2008-02-01

    This collection of studies on new physics at the LHC constitutes the report of the supersymmetry working group at the Workshop 'Physics at TeV Colliders', Les Houches, France, 2007. They cover the wide spectrum of phenomenology in the LHC era, from alternative models and signatures to the extraction of relevant observables, the study of the MSSM parameter space and finally to the interplay of LHC observations with additional data expected on a similar time scale. The special feature of this collection is that while not each of the studies is explicitly performed together by theoretical and experimental LHC physicists, all of them were inspired by and discussed in this particular environment.

  1. A Semi-Analytical Model for Short Range Dispersion From Ground Sources

    Science.gov (United States)

    Gavze, E.; Fattal, E.; Reichman, R.

    2014-12-01

    A semi-analytical model for the dispersion of passive scalars from ground sources up to distances of a few hundred meters is presented. The most widely used analytical models are Gaussian models, which assume both a uniform wind field and homogeneous turbulence. These assumptions are not valid when ground sources are involved, since both the wind and the turbulence depend on height. The model presented here is free of these two assumptions. The formulation of the vertical dispersion is based on approximating the vertical profiles of the wind and of the vertical diffusion coefficient, derived from Monin-Obukhov similarity theory, as power laws. One advantage of this approach is that it allows for non-Gaussian vertical profiles of the concentration, which better fit the experimental data. For the lateral dispersion the model still assumes a Gaussian form. A system of equations was developed to compute the cloud width; it is based on an analytical solution of a Langevin equation which takes into account the non-homogeneity of the wind and the turbulence in the vertical direction. The model was tested against two field experiments. Comparison with a Gaussian model showed that it performed much better in predicting both the crosswind-integrated ground concentration and the cloud width. Analytical or semi-analytical models are useful as they are simple to use and require only a short computation time, compared, for example, to Lagrangian stochastic models. The presented model is very efficient from the computational point of view. As such it is suitable for cases in which repeated computations of the concentration field are required, for example in risk assessments and in the inverse problem of source determination.
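
    The authors' closed-form solution is not reproduced in the abstract. As a hedged illustration of the two ingredients it names, power-law vertical profiles and a Gaussian crosswind spread, the sketch below combines a classical Roberts-type crosswind-integrated solution for a ground-level source with a Gaussian lateral factor; this is a generic textbook construction, not necessarily the authors' formulation, and all numerical values are assumed.

```python
import numpy as np
from scipy.special import gamma

# Hedged illustration of the ingredients named in the abstract: power-law vertical profiles
# for wind speed and eddy diffusivity plus a Gaussian crosswind spread. The vertical part is
# the classical Roberts-type solution for a continuous ground-level source; it is a generic
# textbook construction, not necessarily the authors' formulation. All values are assumed.
Q = 1.0              # source strength [g/s] (assumed)
a, p = 0.4, 0.2      # wind profile u(z) = a * z**p (assumed)
b, n = 0.1, 1.0      # diffusivity profile K(z) = b * z**n (assumed)
sigma_y0 = 0.3       # lateral spread sigma_y(x) = sigma_y0 * x**0.85 (assumed)

alpha = 2.0 + p - n          # shape parameter of the Roberts solution
s = (p + 1.0) / alpha

def concentration(x, y, z):
    """Ground-level point-source concentration [g/m^3]: Roberts-type crosswind-integrated
    vertical solution multiplied by a Gaussian crosswind factor."""
    c_y = (Q * alpha / (a * gamma(s))) * (a / (alpha**2 * b * x))**s \
          * np.exp(-a * z**alpha / (alpha**2 * b * x))
    sigma_y = sigma_y0 * x**0.85
    return c_y * np.exp(-0.5 * (y / sigma_y)**2) / (np.sqrt(2.0 * np.pi) * sigma_y)

print(concentration(x=100.0, y=0.0, z=1.5))   # concentration 100 m downwind on the centreline
```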

  2. The role of decision analytic modeling in the health economic assessment of spinal intervention.

    Science.gov (United States)

    Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J

    2014-10-15

    Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on data available to date. Given the rising demands for proven value in spine care, the use of decision analytic modeling is rapidly increasing by clinicians and policy makers. This narrative review discusses the general components of decision analytic models, how decision analytic models are populated and the trade-offs entailed, makes recommendations for how users of spine intervention decision models might go about appraising the models, and presents an overview of published spine economic models. A proper, integrated, clinical, and economic critical appraisal is necessary in the evaluation of the strength of evidence provided by a modeling evaluation. As is the case with clinical research, all options for collecting health economic or value data are not without their limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized with respect to study design, models used, reporting, and general quality. There is sparse evidence for populating spine intervention models. Results mostly showed that interventions were cost-effective based on $100,000/quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical to interpret the strengths and weaknesses of health economic models. Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or

  3. ATLAS Searches for Beyond the Standard Model Higgs Bosons

    CERN Document Server

    Potter, C T

    2013-01-01

    The present status of ATLAS searches for Higgs bosons in extensions of the Standard Model (SM) is presented. This includes searches for the Higgs bosons of the Two-Higgs-Doublet Model (2HDM), the Minimal Supersymmetric Model (MSSM), the Next-to-Minimal Supersymmetric Model (NMSSM) and models with an invisibly decaying Higgs boson. A review of the phenomenology of the Higgs sectors of these models is given together with the search strategy and the resulting experimental constraints.

  4. SPICE compatible analytical electron mobility model for biaxial strained-Si-MOSFETs

    International Nuclear Information System (INIS)

    Chaudhry, Amit; Sangwan, S.; Roy, J. N.

    2011-01-01

    This paper describes an analytical model for bulk electron mobility in strained-Si layers as a function of strain. Phonon scattering, Coulombic scattering and surface-roughness scattering are included to analyze the full mobility model. Explicit analytical calculations of all the parameters needed to accurately estimate the electron mobility have been made. The results predict an increase in the electron mobility with the application of biaxial strain, as also expected from the basic theory of strain physics of metal oxide semiconductor (MOS) devices. The results have also been compared with numerically reported results and show good agreement. (semiconductor devices)
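
    The paper's explicit strain-dependent expressions are not given in the abstract. The snippet below only illustrates how the three scattering mechanisms named there are commonly combined via Matthiessen's rule; the component mobility values are assumed placeholders.

```python
# Hedged illustration of how the three scattering mechanisms named in the abstract are
# commonly combined via Matthiessen's rule; the component mobilities are simple assumed
# placeholders, not the paper's explicit strain-dependent expressions.
def total_mobility(mu_phonon, mu_coulomb, mu_surface_roughness):
    """Matthiessen's rule: scattering rates (inverse mobilities) add."""
    return 1.0 / (1.0 / mu_phonon + 1.0 / mu_coulomb + 1.0 / mu_surface_roughness)

# Assumed component mobilities in cm^2/(V*s): unstrained vs. a case where biaxial strain
# weakens phonon scattering and raises the phonon-limited mobility.
print(total_mobility(500.0, 1500.0, 2000.0))   # ~316 cm^2/(V*s)
print(total_mobility(650.0, 1500.0, 2000.0))   # ~370 cm^2/(V*s), higher under strain
```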

  5. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  6. Blended Learning Analytics Model for Evaluation (BLAME). Et case-studie af universitetsunderviseres brug af Blackboard

    DEFF Research Database (Denmark)

    Musaeus, Peter; Bennedsen, Andreas Brændstrup; Hansen, Janne Saltoft

    2015-01-01

    In this article we present a strategy for incorporating learning analytics in the evaluation of university teachers' use of a new LMS at Aarhus University: Blackboard. We discuss a model (BLAME: Blended Learning Analytics Model of Evaluation) for how ... the categorisation of courses and the learning-analytics data collected in Blackboard can be integrated. Furthermore, we examine what implications such learning analytics can have for blended learning by analysing two different educational cases/illustrations. We then discuss pedagogical development in connection...

  7. The Standard Model is Natural as Magnetic Gauge Theory

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2011-01-01

    We suggest that the Standard Model can be viewed as the magnetic dual of a gauge theory featuring only fermionic matter content. We show this by first introducing a Pati-Salam-like extension of the Standard Model and then relating it to a possible dual electric theory featuring only fermionic matter. The absence of scalars in the electric theory indicates that the associated magnetic theory is free from quadratic divergences. Our novel solution to the Standard Model hierarchy problem also leads to a new insight on the mystery of the observed number of fundamental fermion generations.

  8. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brennan T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Witt, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeNeale, Scott T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pries, Jason L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kao, Shih-Chieh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mobley, Miles H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Kyutae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curd, Shelaine L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tsakiris, Achilleas [Univ. of Tennessee, Knoxville, TN (United States); Mooneyham, Christian [Univ. of Tennessee, Knoxville, TN (United States); Papanicolaou, Thanos [Univ. of Tennessee, Knoxville, TN (United States); Ekici, Kivanc [Univ. of Tennessee, Knoxville, TN (United States); Whisenant, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Welch, Tim [US Department of Energy, Washington, DC (United States); Rabon, Daniel [US Department of Energy, Washington, DC (United States)

    2017-08-01

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  9. Transient vibration analytical modeling and suppressing for vibration absorber system under impulse excitation

    Science.gov (United States)

    Wang, Xi; Yang, Bintang; Yu, Hu; Gao, Yulong

    2017-04-01

    The impulse excitation of a mechanism causes transient vibration. In order to achieve adaptive transient vibration control, a method which can exactly model the response needs to be proposed. This paper presents an analytical model to obtain the response of the primary system attached with a dynamic vibration absorber (DVA) under impulse excitation. The impulse excitation, which can be divided into single-impulse and multi-impulse excitation, is simplified as a sinusoidal wave to establish the analytical model. To decouple the differential governing equations, a transform matrix is applied to convert the response from physical coordinates to modal coordinates. Therefore, the analytical response in physical coordinates can be obtained by inverse transformation. The numerical Runge-Kutta method and experimental tests have demonstrated the effectiveness of the analytical model proposed. The wavelet analysis of the response indicates that the transient vibration consists of components with multiple frequencies, and it shows that the modeling results coincide with the experiments. The optimizing simulations based on a genetic algorithm and experimental tests demonstrate that the transient vibration of the primary system can be decreased by changing the stiffness of the DVA. The results presented in this paper are the foundation for developing an adaptive transient vibration absorber in the future.
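
    The analytical modal-decoupling solution described in the abstract is not reproduced here. As a hedged numerical sketch of the same physical setting, the snippet below integrates a two-degree-of-freedom primary-plus-absorber system under a half-sine impulse with a Runge-Kutta solver; all parameter values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged two-degree-of-freedom sketch: a primary system (m1, k1, c1) with an attached
# dynamic vibration absorber (m2, k2, c2), excited by a half-sine impulse on the primary
# mass. Parameter values are assumed, and the integration is a numerical Runge-Kutta
# check of the kind mentioned in the abstract; the analytical modal solution is not
# reproduced here.
m1, k1, c1 = 10.0, 4.0e4, 20.0    # primary system (assumed)
m2, k2, c2 = 1.0, 3.6e3, 5.0      # absorber (assumed)
F0, t_pulse = 200.0, 0.01         # half-sine impulse amplitude [N] and duration [s]

def force(t):
    return F0 * np.sin(np.pi * t / t_pulse) if t < t_pulse else 0.0

def rhs(t, y):
    x1, v1, x2, v2 = y
    f_dva = k2 * (x2 - x1) + c2 * (v2 - v1)          # force the absorber exerts on m1
    a1 = (force(t) - k1 * x1 - c1 * v1 + f_dva) / m1
    a2 = -f_dva / m2
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print(f"peak primary-mass displacement: {np.abs(sol.y[0]).max() * 1e3:.2f} mm")
```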

  10. On Improving Analytical Models of Cosmic Reionization for Matching Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kaurov, Alexander A. [Univ. of Chicago, IL (United States)

    2016-01-01

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large scale statistical properties. These mock catalogs are particularly useful for CMB polarization and 21cm experiments, where large volumes are required to simulate the observed signal.

  11. [Studies on the identification of psychotropic substances. VIII. Preparation and various analytical data of reference standard of some stimulants, amfepramone, cathinone, N-ethylamphetamine, fenethylline, fenproporex and mefenorex].

    Science.gov (United States)

    Shimamine, M; Takahashi, K; Nakahara, Y

    1992-01-01

    The Reference Standards for amfepramone, cathinone, N-ethylamphetamine, fenethylline, fenproporex and mefenorex were prepared. Their purities determined by HPLC were more than 99.5%. For the identification and determination of these six drugs, their analytical data were measured and discussed by TLC, UV, IR, HPLC, GC/MS and NMR.

  12. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    In LLC resonant converters, variable duty-cycle control is usually combined with variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the effort to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation-based methods and numerical models. The results obtained from the proposed model...

  13. Analytic expressions for the construction of a fire event PSA model

    International Nuclear Information System (INIS)

    Kang, Dae Il; Kim, Kil Yoo; Kim, Dong San; Hwang, Mee Jeong; Yang, Joon Eon

    2016-01-01

    In this study, the process of converting an internal event PSA model into a fire event PSA model is analytically presented and discussed. Many fire PSA models contain fire-induced initiating-event fault trees that do not appear in an internal event PSA model. Fire-induced initiating-event fault tree models are developed to address multiple-initiating-event issues: a single fire event within a fire compartment or fire scenario can cause multiple initiating events. As an example, a fire in a turbine building area can cause both a loss of main feed-water and a loss of off-site power initiating event. Up to now, there has been no analytic study on the construction of a fire event PSA model from an internal event PSA model with fault trees of initiating events. The results of this study show that additional cutsets can be obtained if the fault trees of initiating events for a fire event PSA model are not exactly developed.

  14. Analytical recovery of protozoan enumeration methods: have drinking water QMRA models corrected or created bias?

    Science.gov (United States)

    Schmidt, P J; Emelko, M B; Thompson, M E

    2013-05-01

    Quantitative microbial risk assessment (QMRA) is a tool to evaluate the potential implications of pathogens in a water supply or other media and is of increasing interest to regulators. In the case of potentially pathogenic protozoa (e.g. Cryptosporidium oocysts and Giardia cysts), it is well known that the methods used to enumerate (oo)cysts in samples of water and other media can have low and highly variable analytical recovery. In these applications, QMRA has evolved from ignoring analytical recovery to addressing it in point-estimates of risk, and then to addressing variation of analytical recovery in Monte Carlo risk assessments. Often, variation of analytical recovery is addressed in exposure assessment by dividing concentration values that were obtained without consideration of analytical recovery by random beta-distributed recovery values. A simple mathematical proof is provided to demonstrate that this conventional approach to address non-constant analytical recovery in drinking water QMRA will lead to overestimation of mean pathogen concentrations. The bias, which can exceed an order of magnitude, is greatest when low analytical recovery values are common. A simulated dataset is analyzed using a diverse set of approaches to obtain distributions representing temporal variation in the oocyst concentration, and mean annual risk is then computed from each concentration distribution using a simple risk model. This illustrative example demonstrates that the bias associated with mishandling non-constant analytical recovery and non-detect samples can cause drinking water systems to be erroneously classified as surpassing risk thresholds. Copyright © 2013 Elsevier Ltd. All rights reserved.
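
    A small Monte Carlo sketch can make the bias described in the abstract concrete: dividing concentration estimates by independently drawn beta-distributed recovery values inflates the mean because E[1/R] > 1/E[R]. The distributions and parameters below are assumed for illustration and are not the paper's data.

```python
import numpy as np

# Hedged Monte Carlo sketch of the bias described in the abstract: dividing concentration
# values (obtained without accounting for recovery) by independently drawn beta-distributed
# recovery values overestimates the mean concentration, because E[1/R] > 1/E[R].
# All distributions and parameters are assumed for illustration; they are not the paper's data.
rng = np.random.default_rng(1)
n = 200_000

true_conc = rng.lognormal(mean=1.0, sigma=0.8, size=n)   # "true" (oo)cyst concentrations
recovery = rng.beta(2.0, 5.0, size=n)                    # analytical recovery, mean ~0.29
observed = true_conc * recovery                          # what enumeration actually reports

# The conventional adjustment criticised in the paper: divide observed values by new,
# independently drawn recovery values.
adjusted = observed / rng.beta(2.0, 5.0, size=n)

print(f"true mean:     {true_conc.mean():.2f}")
print(f"adjusted mean: {adjusted.mean():.2f}   (inflated by roughly E[R]*E[1/R] = 12/7)")
```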

  15. Analytical Model of the Nonlinear Dynamics of Cantilever Tip-Sample Surface Interactions for Various Acoustic-Atomic Force Microscopies

    Science.gov (United States)

    Cantrell, John H., Jr.; Cantrell, Sean A.

    2008-01-01

    A comprehensive analytical model of the interaction of the cantilever tip of the atomic force microscope (AFM) with the sample surface is developed that accounts for the nonlinearity of the tip-surface interaction force. The interaction is modeled as a nonlinear spring coupled at opposite ends to linear springs representing cantilever and sample surface oscillators. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a standard iteration procedure. Solutions are obtained for the phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) techniques including force modulation microscopy, atomic force acoustic microscopy, ultrasonic force microscopy, heterodyne force microscopy, resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM), and the commonly used intermittent contact mode (TappingMode) generally available on AFMs. The solutions are used to obtain a quantitative measure of image contrast resulting from variations in the Young modulus of the sample for the amplitude and phase images generated by the A-AFM techniques. Application of the model to RDF-AFUM and intermittent soft contact phase images of LaRC-cp2 polyimide polymer is discussed. The model predicts variations in the Young modulus of the material of 24 percent from the RDF-AFUM image and 18 percent from the intermittent soft contact image. Both predictions are in good agreement with the literature value of 21 percent obtained from independent, macroscopic measurements of sheet polymer material.

  16. Analytical validation of a standardized scoring protocol for Ki67: phase 3 of an international multicenter collaboration

    Science.gov (United States)

    Leung, Samuel C Y; Nielsen, Torsten O; Zabaglo, Lila; Arun, Indu; Badve, Sunil S; Bane, Anita L; Bartlett, John M S; Borgquist, Signe; Chang, Martin C; Dodson, Andrew; Enos, Rebecca A; Fineberg, Susan; Focke, Cornelia M; Gao, Dongxia; Gown, Allen M; Grabau, Dorthe; Gutierrez, Carolina; Hugh, Judith C; Kos, Zuzana; Lænkholm, Anne-Vibeke; Lin, Ming-Gang; Mastropasqua, Mauro G; Moriya, Takuya; Nofech-Mozes, Sharon; Osborne, C Kent; Penault-Llorca, Frédérique M; Piper, Tammy; Sakatani, Takashi; Salgado, Roberto; Starczynski, Jane; Viale, Giuseppe; Hayes, Daniel F; McShane, Lisa M; Dowsett, Mitch

    2016-01-01

    Pathological analysis of the nuclear proliferation biomarker Ki67 has multiple potential roles in breast and other cancers. However, the clinical utility of the Ki67 immunohistochemical (IHC) assay has been hampered by unacceptable between-laboratory analytical variability. The International Ki67 Working Group has conducted a series of studies aiming to decrease this variability and improve the evaluation of Ki67. This study aims to assess whether acceptable performance can be achieved on prestained core-cut biopsies using a standardized scoring method. Sections from 30 primary ER+ breast cancer core biopsies were centrally stained for Ki67 and circulated among 22 laboratories in 11 countries. Each laboratory scored Ki67 using three methods: (1) global (4 fields of 100 cells each); (2) weighted global (same as global but weighted by estimated percentages of total area); and (3) hot-spot (single field of 500 cells). The intraclass correlation coefficient (ICC), a measure of interlaboratory agreement, for the unweighted global method (0.87; 95% credible interval (CI): 0.81–0.93) met the prespecified success criterion for scoring reproducibility, whereas that for the weighted global (0.87; 95% CI: 0.7999–0.93) and hot-spot methods (0.84; 95% CI: 0.77–0.92) marginally failed to do so. The unweighted global assessment of Ki67 IHC analysis on core biopsies met the prespecified criterion of success for scoring reproducibility. A few cases still showed large scoring discrepancies. Establishment of external quality assessment schemes is likely to improve the agreement between laboratories further. Additional evaluations are needed to assess staining variability and clinical validity in appropriate cohorts of samples. PMID:28721378

  17. Analytical modelling of stable isotope fractionation of volatile organic compounds in the unsaturated zone

    OpenAIRE

    Bouchard, D.; Cornaton, F.; Höhener, P.; Hunkeler, D.

    2011-01-01

    Analytical models were developed that simulate stable isotope ratios of volatile organic compounds (VOCs) near a point source contamination in the unsaturated zone. The models describe diffusive transport of VOCs, biodegradation and source ageing. The mass transport is governed by Fick's law for diffusion. The equation for reactive transport of VOCs in the soil gas phase was solved for different source geometries and for different boundary conditions. Model results were compared to experiment...

  18. The Beyond the Standard Model Working Group: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, Thomas G.

    2002-08-08

    Various theoretical aspects of physics beyond the Standard Model at hadron colliders are discussed. Our focus will be on those issues that most immediately impact the projects pursued as part of the BSM group at this meeting.

  19. Workshop on What Comes Beyond the Standard Model?

    CERN Document Server

    Borstnik, N M; Nielsen, Holger Bech; Froggatt, Colin D

    1999-01-01

    The Proceedings collect the results of ten days of discussions on the open questions of the Standard electroweak model, as well as a review of the introductory talks connected with the discussions.

  20. Modern elementary particle physics explaining and extending the standard model

    CERN Document Server

    Kane, Gordon

    2017-01-01

    This book is written for students and scientists wanting to learn about the Standard Model of particle physics. Only an introductory course knowledge about quantum theory is needed. The text provides a pedagogical description of the theory, and incorporates the recent Higgs boson and top quark discoveries. With its clear and engaging style, this new edition retains its essential simplicity. Long and detailed calculations are replaced by simple approximate ones. It includes introductions to accelerators, colliders, and detectors, and several main experimental tests of the Standard Model are explained. Descriptions of some well-motivated extensions of the Standard Model prepare the reader for new developments. It emphasizes the concepts of gauge theories and Higgs physics, electroweak unification and symmetry breaking, and how force strengths vary with energy, providing a solid foundation for those working in the field, and for those who simply want to learn about the Standard Model.

  1. Tests of the standard electroweak model in beta decay

    Energy Technology Data Exchange (ETDEWEB)

    Severijns, N.; Beck, M. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium); Naviliat-Cuncic, O. [Caen Univ., CNRS-ENSI, 14 (France). Lab. de Physique Corpusculaire

    2006-05-15

    We review the current status of precision measurements in allowed nuclear beta decay, including neutron decay, with emphasis on their potential to look for new physics beyond the standard electroweak model. The experimental results are interpreted in the framework of phenomenological model-independent descriptions of nuclear beta decay as well as in some specific extensions of the standard model. The values of the standard couplings and the constraints on the exotic couplings of the general beta decay Hamiltonian are updated. For the ratio between the axial and the vector couplings we obtain C{sub A}/C{sub V} = -1.26992(69) under the standard model assumptions. Particular attention is devoted to the discussion of the sensitivity and complementarity of different precision experiments in direct beta decay. The prospects and the impact of recent developments of precision tools and of high intensity low energy beams are also addressed. (author)

  2. Standard model status (in search of ''new physics'')

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1993-03-01

    A perspective on successes and shortcomings of the standard model is given. The complementarity between direct high energy probes of new physics and lower energy searches via precision measurements and rare reactions is described. Several illustrative examples are discussed

  3. CP violation and electroweak baryogenesis in the Standard Model

    Directory of Open Access Journals (Sweden)

    Brauner Tomáš

    2014-04-01

    Full Text Available One of the major unresolved problems in current physics is understanding the origin of the observed asymmetry between matter and antimatter in the Universe. It has become common lore to claim that the Standard Model of particle physics cannot produce sufficient asymmetry to explain the observation. Our results suggest that this conclusion can be alleviated in the so-called cold electroweak baryogenesis scenario. On the Standard Model side, we continue the program initiated by Smit eight years ago; one derives the effective CP-violating action for the Standard Model bosons and uses the resulting effective theory in numerical simulations. We address a disagreement between two previous computations performed effectively at zero temperature, and demonstrate that it is very important to include temperature effects properly. Our conclusion is that the cold electroweak baryogenesis scenario within the Standard Model is tightly constrained, yet producing enough baryon asymmetry using just known physics still seems possible.

  4. Overview of the Higgs and Standard Model physics at ATLAS

    CERN Document Server

    Vazquez Schroeder, Tamara; The ATLAS collaboration

    2018-01-01

    This talk presents selected aspects of recent physics results from the ATLAS collaboration in the Standard Model and Higgs sectors, with a focus on the recent evidence for the associated production of the Higgs boson and a top quark pair.

  5. Enhancements to ASHRAE Standard 90.1 Prototype Building Models

    Energy Technology Data Exchange (ETDEWEB)

    Goel, Supriya; Athalye, Rahul A.; Wang, Weimin; Zhang, Jian; Rosenberg, Michael I.; Xie, YuLong; Hart, Philip R.; Mendon, Vrushali V.

    2014-04-16

    This report focuses on enhancements to prototype building models used to determine the energy impact of various versions of ANSI/ASHRAE/IES Standard 90.1. Since the last publication of the prototype building models, PNNL has made numerous enhancements to the original prototype models compliant with the 2004, 2007, and 2010 editions of Standard 90.1. Those enhancements are described here and were made for several reasons: (1) to change or improve prototype design assumptions; (2) to improve the simulation accuracy; (3) to improve the simulation infrastructure; and (4) to add additional detail to the models needed to capture certain energy impacts from Standard 90.1 improvements. These enhancements impact simulated prototype energy use, and consequently impact the savings estimated from edition to edition of Standard 90.1.

  6. Almost-commutative geometries beyond the standard model

    International Nuclear Information System (INIS)

    Stephan, Christoph A

    2006-01-01

    In Iochum et al (2004 J. Math. Phys. 45 5003), Jureit and Stephan (2005 J. Math. Phys. 46 043512), Schuecker T (2005 Preprint hep-th/0501181) and Jureit et al (2005 J. Math. Phys. 46 072303), a conjecture is presented that almost-commutative geometries, with respect to sensible physical constraints, allow only the standard model of particle physics and electro-strong models as Yang-Mills-Higgs theories. In this paper, a counter-example will be given. The corresponding almost-commutative geometry leads to a Yang-Mills-Higgs model which consists of the standard model of particle physics and two new fermions of opposite electro-magnetic charge. This is the second Yang-Mills-Higgs model within noncommutative geometry, after the standard model, which could be compatible with experiments. Combined to a hydrogen-like composite particle, these new particles provide a novel dark matter candidate

  7. Standard Model Higgs boson searches with the ATLAS detector at ...

    Indian Academy of Sciences (India)

    Experimental results on the search for the Standard Model Higgs boson with 1 to 2 fb⁻¹ of proton– ... expectations from Standard Model processes, and the production of a Higgs boson is excluded at 95% Confidence Level for the mass ... lνlν and H → ZZ(∗) → 4l, llνν, as they play important roles in setting the overall result.

  8. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    Science.gov (United States)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  9. Neutrinos from the Early Universe and physics beyond standard models

    Directory of Open Access Journals (Sweden)

    Kirilova Daniela

    2015-01-01

    Full Text Available Neutrino oscillations present the only robust example of experimentally detected physics beyond the standard model. This review discusses established and several hypothetical beyond-standard-model neutrino characteristics and their cosmological effects and constraints. In particular, the contemporary cosmological constraints on the number of neutrino families, neutrino mass differences and mixing, lepton asymmetry in the neutrino sector, neutrino masses, and light sterile neutrinos are briefly reviewed.

  10. The Standard Model from LHC to future colliders

    Energy Technology Data Exchange (ETDEWEB)

    Forte, S.; Ferrera, G.; Vicini, A. [Universita di Milano, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano, Milan (Italy); Nisati, A. [INFN, Sezione di Roma, Rome (Italy); Passarino, G.; Magnea, L. [Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino, Turin (Italy); Tenchini, R. [INFN, Sezione di Pisa, Pisa (Italy); Calame, C.M.C. [Universita di Pavia, Dipartimento di Fisica, Pavia (Italy); Chiesa, M.; Nicrosini, O.; Piccinini, F. [INFN, Sezione di Pavia, Pavia (Italy); Cobal, M. [Universita di Udine, Dipartimento di Chimica, Fisica e Ambiente, Udine (Italy); INFN, Gruppo Collegato di Udine, Udine (Italy); Corcella, G. [INFN, Laboratori Nazionali di Frascati, Frascati (Italy); Degrassi, G. [Universita' Roma Tre, Dipartimento di Matematica e Fisica, Rome (Italy); INFN, Sezione di Roma Tre, Rome (Italy); Maltoni, F. [Universite Catholique de Louvain, Centre for Cosmology, Particle Physics and Phenomenology (CP3), Louvain-la-Neuve (Belgium); Montagna, G. [Universita di Pavia, Dipartimento di Fisica, Pavia (Italy); INFN, Sezione di Pavia, Pavia (Italy); Nason, P. [INFN, Sezione di Milano-Bicocca, Milan (Italy); Oleari, C. [Universita di Milano-Bicocca, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano-Bicocca, Milan (Italy); Riva, F. [Ecole Polytechnique Federale de Lausanne, Institut de Theorie des Phenomenes Physiques, Lausanne (Switzerland)

    2015-11-15

    This review summarizes the results of the activities which have taken place in 2014 within the Standard Model Working Group of the ''What Next'' Workshop organized by INFN, Italy. We present a framework, general questions, and some indications of possible answers on the main issue for Standard Model physics in the LHC era and in view of possible future accelerators. (orig.)

  11. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
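
    As a hedged illustration of the resampling idea the abstract compares against, the sketch below bootstraps an incremental life-expectancy estimate from two arms of simulated survival times; the simple mean-difference estimator and the data are assumed placeholders, not the paper's RPSFT estimator or trial data.

```python
import numpy as np

# Hedged sketch of the resampling (bootstrap) idea discussed in the abstract: survival times
# on each arm are resampled with replacement, the incremental life expectancy is recomputed
# on each resample, and the spread of the resamples quantifies the additional uncertainty.
# The simulated data and the simple mean-difference estimator are assumed placeholders,
# not the paper's RPSFT estimator or trial data.
rng = np.random.default_rng(0)
control = rng.exponential(scale=2.0, size=150)   # assumed survival times [years]
treated = rng.exponential(scale=2.8, size=150)

def incremental_life_expectancy(t_ctrl, t_trt):
    return t_trt.mean() - t_ctrl.mean()

boot = np.array([
    incremental_life_expectancy(
        rng.choice(control, size=control.size, replace=True),
        rng.choice(treated, size=treated.size, replace=True))
    for _ in range(2000)
])

print(f"point estimate:          {incremental_life_expectancy(control, treated):.2f} years")
print(f"95% bootstrap interval:  {np.percentile(boot, [2.5, 97.5]).round(2)}")
```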

  12. Mathematical model of complex technical asymmetric system based on numerical-analytical boundary elements method

    Directory of Open Access Journals (Sweden)

    Dina V. Lazareva

    2015-06-01

    Full Text Available A new mathematical model of an asymmetric frame-type support structure is built on the basis of the numerical-analytical boundary elements method (BEM). Graph theory is used to describe the design scheme. In building the model, the effect of restrained torsion of the frame members is taken into account; its presence is due to the fact that these elements are thin-walled. The model represents a real object, a two-axle semi-trailer platform. To implement the BEM algorithm, analytical expressions for the fundamental functions and the load vector components are obtained. Calculations are carried out for two different models of the semi-trailer, using the finite element and boundary element methods. The analysis showed that the error between the results obtained with the two numerical methods and the experimental data is about 4%, which indicates the adequacy of the proposed mathematical model.

  13. An analytical model of a clamped sandwich beam under low-impulse mass impact

    Directory of Open Access Journals (Sweden)

    Wen-zheng Jiang

    Full Text Available An analytical model is developed to examine a low-impulse projectile impact on a fully clamped sandwich beam by considering the coupled responses of the core and the face sheets. Firstly, based on the dynamic properties of foam cores, the sandwich beam is modeled as two rigid perfectly-plastic beams connected by rigid perfectly-plastic springs. Differently from previous sandwich beam models, the transverse compression and bending effects of the foam core are considered throughout the whole deformation process. Based on this model, different coupling mechanisms of sandwich beams are constructed, and an analytical solution for small deformation is derived. The coupled dynamic responses of sandwich beams with different core strengths are investigated. The results indicate that this model improves the prediction accuracy of the responses of the sandwich beams and is applicable when the sandwich beam undergoes moderate global deformation.

  14. Simulation of reactive geochemical transport in groundwater using a semi-analytical screening model

    Science.gov (United States)

    McNab, Walt W.

    1997-10-01

    A reactive geochemical transport model, based on a semi-analytical solution to the advective-dispersive transport equation in two dimensions, is developed as a screening tool for evaluating the impact of reactive contaminants on aquifer hydrogeochemistry. Because the model utilizes an analytical solution to the transport equation, it is less computationally intensive than models based on numerical transport schemes, is faster, and it is not subject to numerical dispersion effects. Although the assumptions used to construct the model preclude consideration of reactions between the aqueous and solid phases, thermodynamic mineral saturation indices are calculated to provide qualitative insight into such reactions. Test problems involving acid mine drainage and hydrocarbon biodegradation signatures illustrate the utility of the model in simulating essential hydrogeochemical phenomena.
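
    The screening model's exact solution is not given in the abstract. As an illustration of a semi-analytical two-dimensional transport solution of the same general kind, the snippet below evaluates the classical steady-state Bessel-K0 solution for a continuous point source in a uniform flow field with longitudinal and transverse dispersion; this is a generic textbook form, not necessarily the paper's formulation, and all parameter values are assumed.

```python
import numpy as np
from scipy.special import k0

# Hedged illustration of a semi-analytical two-dimensional transport solution of the same
# general kind as in the abstract: the classical steady-state Bessel-K0 solution for a
# continuous point source in a uniform flow field with longitudinal and transverse
# dispersion. This is a generic textbook form, not necessarily the paper's formulation;
# all parameter values are assumed and reactions are ignored.
v = 0.1              # seepage velocity [m/d] (assumed)
DL, DT = 1.0, 0.1    # longitudinal / transverse dispersion coefficients [m^2/d] (assumed)
Q = 5.0              # mass injection rate per unit aquifer thickness [g/(d*m)] (assumed)
theta = 0.3          # effective porosity (assumed)

def steady_concentration(x, y):
    """Steady-state concentration [g/m^3] at (x, y), with the source at the origin and
    the flow aligned with +x."""
    r = np.sqrt(x**2 + (DL / DT) * y**2)
    return (Q / (2.0 * np.pi * theta * np.sqrt(DL * DT))) \
        * np.exp(v * x / (2.0 * DL)) * k0(v * r / (2.0 * DL))

print(steady_concentration(50.0, 0.0))    # concentration 50 m directly downgradient
```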

  15. Experimental validation of analytical models for a rapid determination of cycle parameters in thermoplastic injection molding

    Science.gov (United States)

    Pignon, Baptiste; Sobotka, Vincent; Boyard, Nicolas; Delaunay, Didier

    2017-10-01

    Two different analytical models were presented to determine the cycle parameters of the thermoplastic injection process. The aim of these models was to quickly provide a first set of data for mold temperature and cooling time. The first model is specific to amorphous polymers and the second one is dedicated to semi-crystalline polymers, taking the crystallization into account. In both cases, the nature of the contact between the polymer and the mold could be considered as either perfect or imperfect (a thermal contact resistance was included). Results from the models are compared with experimental data obtained with an instrumented mold for an acrylonitrile butadiene styrene (ABS) and a polypropylene (PP). Good agreement was obtained for the mold temperature variation and for the heat flux. In the case of the PP, the analytical crystallization times were compared with those given by a coupled model of heat transfer and crystallization kinetics.

  16. Analytical and numerical models of uranium ignition assisted by hydride formation

    International Nuclear Information System (INIS)

    Totemeier, T.C.; Hayes, S.L.

    1996-01-01

    Analytical and numerical models of uranium ignition assisted by the oxidation of uranium hydride are described. The models were developed to demonstrate that ignition of large uranium ingots could not occur as a result of possible hydride formation during storage. The thermodynamics-based analytical model predicted an overall 17 C temperature rise of the ingot due to hydride oxidation upon opening of the storage can in air. The numerical model predicted locally higher temperature increases at the surface; the transient temperature increase quickly dissipated. The numerical model was further used to determine conditions for which hydride oxidation does lead to ignition of uranium metal. Room temperature ignition only occurs for high hydride fractions in the nominally oxide reaction product and high specific surface areas of the uranium metal

  17. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    Science.gov (United States)

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Decision Making in Reference to Model of Marketing Predictive Analytics – Theory and Practice

    Directory of Open Access Journals (Sweden)

    Piotr Tarka

    2014-03-01

    Full Text Available Purpose: The objective of this paper is to describe the concepts and assumptions of predictive marketing analytics in reference to decision making. In particular, we highlight issues pertaining to the importance of data and the modern approach to data analysis and processing, with the purpose of solving real marketing problems that companies encounter in business. Methodology: The authors provide two case studies showing how, and to what extent, predictive marketing analytics can be useful in practice, e.g., in the investigation of the marketing environment. The two cases are based on organizations operating mainly in the Web-site domain. The first part of the article begins the discussion with an explanation of the general idea of predictive marketing analytics. The second part runs through the opportunities it creates for companies in the process of building a strong competitive advantage in the market. The article ends with a brief comparison of predictive analytics versus traditional marketing-mix analysis. Findings: Analytics play an extremely important role in the current process of business management based on planning, organizing, implementing and controlling marketing activities. Predictive analytics provides an actual and current picture of the external environment and explains what problems the company faces in its business activities. Analytics tailor marketing solutions to the right time and place at minimum cost; in effect they control efficiency and simultaneously increase the effectiveness of the firm. Practical implications: Based on the case studies comparing two enterprises carrying out business activities in different areas, one can say that predictive analytics has been embraced far more extensively than classical marketing-mix analysis. The predictive approach yields greater speed of data collection and analysis, stronger predictive accuracy, better competitor data, and more transparent models where one can

  19. The Use of Decision-Analytic Models in Atopic Eczema: A Systematic Review and Critical Appraisal.

    Science.gov (United States)

    McManus, Emma; Sach, Tracey; Levell, Nick

    2018-01-01

    The objective of this systematic review was to identify and assess the quality of published economic decision-analytic models within atopic eczema against best practice guidelines, with the intention of informing future decision-analytic models within this condition. A systematic search of the following online databases was performed: MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects, Cochrane Database of Systematic Reviews, NHS Economic Evaluation Database, EconLit, Scopus, Health Technology Assessment, Cost-Effectiveness Analysis Registry and Web of Science. Papers were eligible for inclusion if they described a decision-analytic model evaluating both the costs and benefits associated with an intervention or prevention for atopic eczema. Data were extracted using a standardised form by two independent reviewers, whilst quality was assessed using the model-specific Philips criteria. Twenty-four models were identified, evaluating either preventions (n = 12) or interventions (n = 12): 14 reported using a Markov modelling approach, four utilised decision trees and one a discrete event simulation, whilst five did not specify the approach. The majority, 22 studies, reported that the intervention was dominant or cost effective, given the assumptions and analytical perspective taken. Notably, the models tended to be short-term (16 used a time horizon of ≤1 year), often providing little justification for the limited time horizon chosen. The methodological and reporting quality of the studies was generally weak, with only seven studies fulfilling more than 50% of their applicable Philips criteria. This is the first systematic review of decision models in eczema. Whilst the majority of models reported favourable outcomes in terms of the cost effectiveness of the new intervention, the usefulness of these findings for decision-making is

  20. New integrable models and analytical solutions in f (R ) cosmology with an ideal gas

    Science.gov (United States)

    Papagiannopoulos, G.; Basilakos, Spyros; Barrow, John D.; Paliathanasis, Andronikos

    2018-01-01

    In the context of f (R ) gravity with a spatially flat FLRW metric containing an ideal fluid, we use the method of invariant transformations to specify families of models which are integrable. We find three families of f (R ) theories for which new analytical solutions are given and closed-form solutions are provided.