WorldWideScience

Sample records for physical modeling techniques

  1. Testing Model with "Check Technique" for Physics Education

    Science.gov (United States)

    Demir, Cihat

    2016-01-01

Because the number, timing, and format of written tests are structured and teacher-oriented, they are thought to create fear and anxiety among students. It was therefore considered necessary and important to develop a testing model that keeps students free of test anxiety and allows them to focus only on the lesson. For this study,…

  2. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  3. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
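The mode-extraction idea can be sketched numerically. The following is a minimal illustration (not the authors' code) of Principal Component Analysis applied to a synthetic beam-history matrix: rows are turns, columns are beam position monitors, and two hidden oscillation modes are mixed across the BPMs. All numbers (tunes, noise level, BPM count) are hypothetical.

```python
import numpy as np

# Synthetic beam-history matrix: rows = turns, columns = BPMs.
# Two underlying physical modes (betatron-like oscillations with
# assumed tunes 0.31 and 0.18) are mixed across 10 BPMs, plus noise.
rng = np.random.default_rng(0)
turns = np.arange(2000)
mode1 = np.sin(2 * np.pi * 0.31 * turns)
mode2 = np.sin(2 * np.pi * 0.18 * turns)
mixing = rng.normal(size=(2, 10))          # how each mode appears at each BPM
B = np.outer(mode1, mixing[0]) + np.outer(mode2, mixing[1])
B += 0.05 * rng.normal(size=B.shape)       # BPM measurement noise

# Principal Component Analysis via SVD of the centered data matrix.
Bc = B - B.mean(axis=0)
U, s, Vt = np.linalg.svd(Bc, full_matrices=False)

# The singular-value spectrum separates physical modes from noise:
# two dominant values, the rest near the noise floor.
print(s[:4])
modes = U[:, :2] * s[:2]                   # temporal patterns of the two modes
```

Independent Component Analysis goes one step further and unmixes modes that PCA leaves rotated together, but the SVD step above is the common starting point.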

  4. Combined rock-physical modelling and seismic inversion techniques for characterisation of stacked sandstone reservoir

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.

    2015-01-01

    The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The

  5. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

Centrifuge techniques offer a means of physically simulating the long-term mechanical response of deep ocean sediment to the emplacement of waste canisters and to the temperature gradients they generate. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the square of the scaling factor. This result implies that the technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Existing centrifuges are presently being modified to permit scale-model testing, which will start next year.
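The quoted time scaling (model time compressed by the square of the scale factor) can be illustrated with a back-of-the-envelope calculation; the scale factor and study duration below are hypothetical, not taken from the record.

```python
# Centrifuge scaling sketch: at N times gravity, a 1:N-scale model
# reproduces diffusion-type processes with time compressed by N**2.
N = 100                      # hypothetical model scale 1:100
prototype_years = 300        # hypothetical long-term response period
SECONDS_PER_YEAR = 3.156e7
model_seconds = prototype_years * SECONDS_PER_YEAR / N**2
model_hours = model_seconds / 3600
print(f"{prototype_years} prototype years -> {model_hours:.1f} hours in the centrifuge")
# -> 263.0 hours, i.e. centuries of sediment response in about 11 days
```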

  6. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    Energy Technology Data Exchange (ETDEWEB)

Cetiner, Mustafa Sacit; Flanagan, George F. [ORNL]; Poore III, Willis P. [ORNL]; Muhlheim, Michael David [ORNL]

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  7. Techniques in polarization physics

    International Nuclear Information System (INIS)

    Clausnitzer, G.

    1974-01-01

A review of the current status of the technical tools necessary to perform different kinds of polarization experiments is presented, and the absolute and relative accuracy with which data can be obtained is discussed. A description of polarized targets and sources of polarized fast neutrons is included. Applications of polarization techniques to other fields are mentioned briefly. (14 figures, 3 tables, 110 references) (U.S.)

  8. Food physics and radiation techniques

    International Nuclear Information System (INIS)

    Szabo, A. S.

    1999-01-01

In the lecture, information is given about food physics, which is a rather new, interdisciplinary field of science connecting food science and applied physics. The topics of radioactivity of foodstuffs and radiation techniques in the food industry are important parts of food physics. Detailed information will be given about the main fields (e.g. radiostimulation, food preservation) of radiation techniques in the agro-food sector. Finally, some special questions of radioactive contamination of foodstuffs in Hungary and the applicability of radioanalytical techniques (e.g. INAA) for food investigation will be analyzed and discussed.

  9. Physics aids new medical techniques

    CERN Document Server

    CERN. Geneva

    2001-01-01

Since the discovery of X-rays, fundamental physics has been a source of ideas for radiography and medical imaging. A new imaging method firmly rooted in particle physics was chosen by Time magazine as one of its "Inventions of the Year 2000". The award-winning invention in the medical science category was a scanner that combined the advantages of computer tomography with positron emission tomography. The use of these techniques, which depend on detecting and analysing electromagnetic radiation (X-rays or gamma rays respectively), shows that detection techniques from particle physics have made, and continue to make, essential contributions to medical science. (0 refs).

  10. Linearized Flux Evolution (LiFE): A technique for rapidly adapting fluxes from full-physics radiative transfer models

    Science.gov (United States)

    Robinson, Tyler D.; Crisp, David

    2018-05-01

Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state: the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach and then apply it to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
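The core idea (adapt expensive full-physics fluxes to a new state via derivatives) is a first-order Taylor update. The sketch below uses a toy stand-in for the radiative transfer solve and finite-difference derivatives; the paper itself works with analytic derivatives of layer radiative properties, so everything here is illustrative only.

```python
import numpy as np

def full_physics_flux(x):
    # Stand-in for an expensive radiative transfer solve: a flux
    # profile over 20 layers as a smooth nonlinear function of the
    # state vector x (purely hypothetical physics).
    layers = np.linspace(0.0, 1.0, 20)
    return np.outer(np.exp(-layers), x**1.5).sum(axis=1)

x0 = np.array([1.0, 2.0, 0.5])           # reference atmospheric state
F0 = full_physics_flux(x0)               # reference flux profile

# Derivatives dF/dx by finite differences (the LiFE paper computes
# them from layer reflection/transmission properties).
eps = 1e-6
J = np.column_stack([
    (full_physics_flux(x0 + eps * e) - F0) / eps
    for e in np.eye(len(x0))
])

x_new = x0 + np.array([0.02, -0.05, 0.01])   # small state perturbation
F_linear = F0 + J @ (x_new - x0)             # cheap linearized flux
F_exact = full_physics_flux(x_new)           # expensive reference
print(np.max(np.abs(F_linear - F_exact)))    # small for small perturbations
```

The payoff is that each new state costs one matrix-vector product instead of a full radiative transfer call, which is what makes iterating to radiative-convective equilibrium cheap.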

  11. Tsunami Simulators in Physical Modelling Laboratories - From Concept to Proven Technique

    Science.gov (United States)

    Allsop, W.; Chandler, I.; Rossetto, T.; McGovern, D.; Petrone, C.; Robinson, D.

    2016-12-01

Before 2004, there was little public awareness around Indian Ocean coasts of the potential size and effects of tsunamis. Even in 2011, the scale and extent of devastation by the Japan East Coast Tsunami was unexpected. There were very few engineering tools to assess onshore impacts of tsunamis, and no agreement on robust methods to predict forces on coastal defences, buildings or related infrastructure. Modelling generally used substantial simplifications of either solitary waves (far too short durations) or dam break (unrealistic and/or uncontrolled wave forms). This presentation will describe research from the EPI-centre, HYDRALAB IV, URBANWAVES and CRUST projects over the last 10 years that has developed and refined pneumatic Tsunami Simulators for the hydraulic laboratory. These unique devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example defences. They have reproduced full-duration tsunamis, including the Mercator trace from 2004 at 1:50 scale. Engineering scale models subjected to those tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, and pressures / forces on buildings. This presentation will describe how these pneumatic Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facility within which they operate, and highlight research results from the three generations of Tsunami Simulator. Of direct relevance to engineers and modellers will be measurements of wave run-up levels and comparison with theoretical predictions. Recent measurements of forces on individual buildings have been generalized by separate experiments on buildings (up to 4 rows), which show that the greatest forces can act on the landward (not seaward) buildings. Continuing research in the 70 m long, 4 m wide Fast Flow Facility on tsunami defence structures has also measured forces on buildings in the lee of a failed defence wall.

  12. Compensation Techniques in Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Sayed, Hisham Kamal [Old Dominion Univ., Norfolk, VA (United States)

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than the understanding of basic electromagnetic interactions to be able to predict the beam dynamics, and to be able to develop new techniques to produce, maintain, and deliver high quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators will be presented. This basic theory, along with applying state of the art techniques in beam dynamics will be used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.

  13. Predictors for physical activity in adolescent girls using statistical shrinkage techniques for hierarchical longitudinal mixed effects models.

    Directory of Open Access Journals (Sweden)

    Edward M Grant

Full Text Available We examined associations among longitudinal, multilevel variables and girls' physical activity to determine the important predictors of physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th grade (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate-to-vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates of physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed-effect and 19 random-effect variables, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school's physical activity environment, affect adolescent girls' moderate-to-vigorous physical activity longitudinally.
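The shrinkage-based variable selection at the heart of this abstract can be illustrated with a plain lasso on synthetic fixed effects. This is a deliberate simplification: the paper uses a *doubly regularized mixed* model that also penalizes random effects, which is not reproduced here, and all data below are simulated.

```python
import numpy as np

def lasso(X, y, lam, iters=500):
    """Coordinate-descent lasso: min ||y - Xb||^2 / (2n) + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
true_b = np.zeros(p)
true_b[[0, 3, 7]] = [1.5, -2.0, 1.0]              # only 3 real predictors
y = X @ true_b + 0.5 * rng.normal(size=n)

b_hat = lasso(X, y, lam=0.2)
selected = np.flatnonzero(np.abs(b_hat) > 1e-6)
print(selected)    # shrinkage drives irrelevant coefficients to zero
```

The L1 penalty is what reduces 66 candidate variables to a handful in the study: coefficients whose partial correlation falls below the penalty threshold are set exactly to zero.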

  14. Techniques for determining physical zones of influence

    Science.gov (United States)

    Hamann, Hendrik F; Lopez-Marrero, Vanessa

    2013-11-26

    Techniques for analyzing flow of a quantity in a given domain are provided. In one aspect, a method for modeling regions in a domain affected by a flow of a quantity is provided which includes the following steps. A physical representation of the domain is provided. A grid that contains a plurality of grid-points in the domain is created. Sources are identified in the domain. Given a vector field that defines a direction of flow of the quantity within the domain, a boundary value problem is defined for each of one or more of the sources identified in the domain. Each of the boundary value problems is solved numerically to obtain a solution for the boundary value problems at each of the grid-points. The boundary problem solutions are post-processed to model the regions affected by the flow of the quantity on the physical representation of the domain.
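The grid-and-boundary-value recipe above can be sketched with a steady diffusion problem: one source pinned on a grid, a numerical solve, and a post-processing threshold for the affected region. The equation choice and the threshold are illustrative assumptions; the patent formulation is more general (it works with an arbitrary vector field defining the flow direction).

```python
import numpy as np

n = 50
u = np.zeros((n, n))
source = (10, 25)            # grid location of one identified source
u[source] = 1.0              # Dirichlet condition at the source

# Jacobi iterations for Laplace's equation, with the source pinned
# at 1 and the outer boundary held at 0.
for _ in range(2000):
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2])
    u_new[source] = 1.0      # re-impose the source value
    u = u_new

# Post-processing: take the "zone of influence" (illustratively) as
# the region where the solution exceeds a chosen threshold.
zone = u > 0.2
print(zone.sum(), "grid points in the modelled zone of influence")
```

With several sources, one such boundary value problem is solved per source and the resulting fields are compared to partition the domain.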

  15. Experimental techniques in nuclear and particle physics

    CERN Document Server

    Tavernier, Stefaan

    2009-01-01

The book is based on a course in nuclear and particle physics that the author has taught over many years to physics students, students in nuclear engineering, and students in biomedical engineering. It provides the basic understanding that any student or researcher using such instruments and techniques should have about the subject. After an introduction to the structure of matter at the subatomic scale, it covers the experimental aspects of nuclear and particle physics. Ideally complementing a theoretically oriented textbook on nuclear physics and/or particle physics, it introduces the reader to the techniques used in nuclear and particle physics to accelerate particles and to the measurement techniques (detectors) of these fields. The main subjects treated are: interactions of subatomic particles in matter; particle accelerators; basics of different types of detectors; and nuclear electronics. The book will be of interest to undergraduates, graduates and researchers in both particle and...

  16. Physical optimization of afterloading techniques

    International Nuclear Information System (INIS)

    Anderson, L.L.

    1985-01-01

    Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations. (orig.) [de
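The least-squares dwell-time optimization described here can be sketched as a small nonnegative least-squares problem. The geometry, kernel, and prescription below are hypothetical stand-ins (a bare inverse-square dose kernel, no anisotropy or attenuation), chosen only to show the structure of the fit.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical applicator: 9 dwell positions along a line, 25 dose
# points at 1 cm lateral depth, uniform unit target dose.
dwell_pos = np.linspace(0.0, 4.0, 9)           # cm along the applicator
targets = np.linspace(-0.5, 4.5, 25)           # dose calculation points
depth = 1.0                                    # cm lateral offset

r2 = (targets[:, None] - dwell_pos[None, :])**2 + depth**2
A = 1.0 / r2                                   # inverse-square dose kernel
prescription = np.full(len(targets), 1.0)      # uniform target dose

# Nonnegative least squares: dwell times cannot be negative.
t, resid = nnls(A, prescription)
achieved = A @ t
print("max relative dose error:", np.max(np.abs(achieved - 1.0)))
```

The nonnegativity constraint mirrors the physical situation (a stepping source can dwell for zero time but not negative time), which is why NNLS rather than plain `lstsq` is the natural formulation.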

  17. Experimental techniques in nuclear and particle physics

    International Nuclear Information System (INIS)

    Tavernier, Stefaan

    2010-01-01

The book is based on a course in nuclear and particle physics that the author has taught over many years to physics students, students in nuclear engineering, and students in biomedical engineering. It provides the basic understanding that any student or researcher using such instruments and techniques should have about the subject. After an introduction to the structure of matter at the subatomic scale, it covers the experimental aspects of nuclear and particle physics. Ideally complementing a theoretically oriented textbook on nuclear physics and/or particle physics, it introduces the reader to the techniques used in nuclear and particle physics to accelerate particles and to the measurement techniques (detectors) of these fields. The main subjects treated are: interactions of subatomic particles in matter; particle accelerators; basics of different types of detectors; and nuclear electronics. The book will be of interest to undergraduates, graduates and researchers in both particle and nuclear physics. For physicists it is a good introduction to all experimental aspects of nuclear and particle physics. Nuclear engineers will appreciate the nuclear measurement techniques, while biomedical engineers can learn about measuring ionising radiation and the use of accelerators for radiotherapy. What's more, worked examples, end-of-chapter exercises, and appendices with key constants, properties and relationships supplement the textual material. (orig.)

  18. New informative techniques in high energy physics

    International Nuclear Information System (INIS)

    Klimenko, S.V.; Ukhov, V.I.

    1992-01-01

A number of new informative techniques applied to high energy physics are considered. These include object-oriented programming, systems integration, UIMS, visualisation, expert systems, and neural networks. (100 refs)

  19. Physical Modeling Modular Boxes: PHOXES

    DEFF Research Database (Denmark)

    Gelineck, Steven; Serafin, Stefania

    2010-01-01

    This paper presents the development of a set of musical instruments, which are based on known physical modeling sound synthesis techniques. The instruments are modular, meaning that they can be combined in various ways. This makes it possible to experiment with physical interaction and sonic...

  20. Physical modeling of rock

    International Nuclear Information System (INIS)

    Cheney, J.A.

    1981-01-01

The problem of satisfying similarity between a physical model and its rock prototype, wherein fissures and cracks play a role in physical behavior, is explored. The need for models of large physical dimensions is explained, but testing models of the same prototype over a wide range of scales is also needed to ascertain the influence of any lack of similitude of particular parameters between prototype and model. A large-capacity centrifuge would be useful in that respect.

  1. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Becks, Karl-Heinz; Perret-Gallix, Denis

    1994-01-01

New techniques were highlighted by the "Third International Workshop on Software Engineering, Artificial Intelligence and Expert Systems for High Energy and Nuclear Physics" in Oberammergau, Bavaria, Germany, from October 4 to 8. It was the third workshop in the series; the first was held in Lyon in 1990 and the second at a France Telecom site near La Londe les Maures in 1992. This series of workshops covers a broad spectrum of problems. New, highly sophisticated experiments demand new techniques in computing, in hardware as well as in software. Software engineering techniques could in principle satisfy the needs of forthcoming accelerator experiments. The growing complexity of detector systems demands new techniques in experimental error diagnosis and repair suggestions; expert systems seem to offer a way of assisting the experimental crew during data-taking.

  2. Identification of physical models

    DEFF Research Database (Denmark)

    Melgaard, Henrik

    1994-01-01

The problem of identification of physical models is considered within the frame of stochastic differential equations. Methods for estimation of parameters of these continuous-time models based on discrete-time measurements are discussed. The important algorithms of a computer program for ML or MAP… design of experiments, which is for instance the design of an input signal that is optimal according to a criterion based on the information provided by the experiment. Model validation is also discussed. An important verification of a physical model is to compare the physical characteristics… of the model with the available prior knowledge. The methods for identification of physical models have been applied in two different case studies. One case is the identification of the thermal dynamics of building components. The work is related to a CEC research project called PASSYS (Passive Solar Components…

  3. Wave Generation in Physical Models

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

The present book describes the most important aspects of wave generation techniques in physical models. Moreover, the book serves as technical documentation for the wave generation software AwaSys 6, cf. Aalborg University (2012). In addition to the two main authors, Tue Hald and Michael…

  4. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

Full text of publication follows: A long-term joint development program for the next generation of nuclear reactor simulation tools was launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the thermal-hydraulics part of this comprehensive program. Along with the ongoing development of this new two-phase flow software platform, the physical validation of the modelling involved is a crucial issue, whatever the modelling scale, and the present paper deals with this issue. After a brief reminder about the NEPTUNE platform, the general validation strategy to be adopted is first clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and then assessing the whole modelling capability, (iii) using relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and the associated dominant basic models, (ii) an assessment of these models against the available validation information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high-priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  5. Numerical and physical testing of upscaling techniques for constitutive properties

    International Nuclear Information System (INIS)

    McKenna, S.A.; Tidwell, V.C.

    1995-01-01

This paper evaluates upscaling techniques for hydraulic conductivity measurements based on accuracy and practicality for implementation in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from the different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow-model grid-block scale. Initial results indicate that analytical techniques provide a means of upscaling constitutive properties from the point measurement scale to the flow-model grid-block scale. However, no single analytic technique proves adequate for all situations. Numerical techniques are also accurate, but they are time intensive and their accuracy depends on knowledge of the local flow regime at every grid block.
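A classic analytic upscaling result worth keeping in mind here: for a heterogeneous conductivity field, the arithmetic and harmonic means of the point values bound the effective grid-block conductivity (they are exact for flow parallel and perpendicular to layering, respectively), and the geometric mean is a common estimate for 2D random fields. The values below are hypothetical lognormal point data, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
k_point = 10 ** rng.normal(-5.0, 0.8, size=1000)   # lognormal point values, m/s

k_arith = k_point.mean()                        # upper bound on effective K
k_harm = 1.0 / np.mean(1.0 / k_point)           # lower bound on effective K
k_geom = np.exp(np.mean(np.log(k_point)))       # common 2D estimate

# Harmonic <= geometric <= arithmetic, by the AM-GM-HM inequality.
print(k_harm <= k_geom <= k_arith)              # prints True
```

Which mean (or numerical solve) is appropriate depends on the local flow regime, which is exactly the paper's point about no single technique being adequate for all situations.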

  6. Data Analysis Techniques for Physical Scientists

    Science.gov (United States)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  7. Physical protection philosophy and techniques in Sweden

    International Nuclear Information System (INIS)

    Dufva, B.

    1988-01-01

The circumstances for the protection of nuclear power plants are special in Sweden. A very important factor is that armed guards at the facilities are alien to Swedish society and are not used. The Swedish concept of physical protection accepts that an aggressor will get into the facility. With this in mind, the Swedish Nuclear Power Inspectorate (SKI) has established the policy that administrative, technical, and organizational measures will be directed toward preventing an aggressor from damaging the reactor, even if he has occupied the facility. In addition, the best conditions possible shall be established for the operator and the police to reoccupy the plant. The author believes this policy is different from that of many other countries. Therefore, he focuses on the Swedish philosophy and techniques for the physical protection of nuclear power plants.

  8. Models in physics teaching

    DEFF Research Database (Denmark)

    Kneubil, Fabiana Botelho

    2016-01-01

In this work we show an approach, based on models, to a usual subject in an introductory physics course, in order to foster discussions on the nature of physical knowledge. The introduction of elements of the nature of knowledge into physics lessons has been emphasised by many educators, and one uses… the case of metals to show the theoretical and phenomenological dimensions of physics. The discussion is made by means of four questions whose answers cannot be reached through theoretical elements or experimental measurements alone. Between these two dimensions it is necessary to realise a series… of reasoning steps to deepen the comprehension of microscopic concepts, such as electrical resistivity, drift velocity and free electrons. When this approach is highlighted, beyond the physical content, aspects of its nature become explicit and may improve the structuring of knowledge for learners…

  9. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

"Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of mode

  10. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  11. Beyond Standard Model Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bellantoni, L.

    2009-11-01

There are many recent results from searches for fundamental new physics using the Tevatron, the SLAC B-factory and HERA. This talk quickly reviewed searches for pair-produced stop, for gauge-mediated SUSY breaking, for Higgs bosons in the MSSM and NMSSM models, for leptoquarks, and for v-hadrons. There is a SUSY model which accommodates the recent astrophysical experimental results that suggest that dark matter annihilation is occurring in the center of our galaxy, and a relevant experimental result. Finally, model-independent searches at D0, CDF, and H1 are discussed.

  12. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Based on the huge amount of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling which make subdivision surfaces even more useful. Subdivision surfaces regained attention several years ago after their application in full-featured 3D animation movies such as Toy Story. Since then, and due to their attractive properties, an ever i...

  13. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  14. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Perret-Gallix, D.; Wojcik, W.

    1990-01-01

    These proceedings describe, in a pragmatic way, the use of methods and techniques of software engineering and artificial intelligence in high energy and nuclear physics. Such fundamental research can only be done through the design, building and running of equipment and systems among the most complex ever undertaken. The use of these new methods is mandatory in such an environment, yet their proper integration into real applications raises some unsolved problems. Their solution, beyond the research field, will lead to a better understanding of some fundamental aspects of software engineering and artificial intelligence. Here is a sample of subjects covered in the proceedings: software engineering in a multi-user, multi-version, multi-system environment; project management; software validation and quality control; data structure and management; object-oriented languages; multi-language applications; interactive data analysis; expert systems for diagnosis; expert systems for real-time applications; neural networks for pattern recognition; and symbolic manipulation for automatic computation of complex processes.

  15. "Statistical Techniques for Particle Physics" (2/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...
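The Neyman-Pearson lemma mentioned in the lecture outline states that the most powerful test between two simple hypotheses rejects the null when the likelihood ratio exceeds a threshold. A minimal sketch for a single Gaussian observation (the means, width, and threshold below are illustrative choices, not values from the lectures):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(x, mu0, mu1, sigma):
    """L(H1)/L(H0) for one observation; the Neyman-Pearson test cuts on this."""
    return gauss_pdf(x, mu1, sigma) / gauss_pdf(x, mu0, sigma)

def np_test(x, mu0, mu1, sigma, threshold):
    """Reject H0 when the likelihood ratio exceeds the chosen threshold."""
    return likelihood_ratio(x, mu0, mu1, sigma) > threshold
```

Because the ratio is monotone in x when mu1 > mu0, the optimal test reduces to a simple cut on the observable; multivariate discriminants can be viewed as approximations of this likelihood ratio.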

  16. "Statistical Techniques for Particle Physics" (1/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  17. "Statistical Techniques for Particle Physics" (4/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  18. "Statistical Techniques for Particle Physics" (3/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  19. Models and structures: mathematical physics

    International Nuclear Information System (INIS)

    2003-01-01

    This document gathers research activities along 5 main directions. 1) Quantum chaos and dynamical systems. Recent results concern the extension of the exact WKB method, which has led to a host of new results on the spectrum and wave functions. Progress has also been made in the description of the wave functions of chaotic quantum systems. Renormalization has been applied to the analysis of dynamical systems. 2) Combinatorial statistical physics. We see the emergence of new techniques applied to various combinatorial problems, from random walks to random lattices. 3) Integrability: from structures to applications. Techniques of conformal field theory and integrable model systems have been developed. Progress is still being made, in particular for open systems with boundary conditions, in connection with string and brane physics. Noticeable links between integrability, exact WKB quantization and 2-dimensional disordered systems have been highlighted. New correlations of eigenvalues and better connections to integrability have been formulated for random matrices. 4) Gravities and string theories. We have developed aspects of 2-dimensional string theory with a particular emphasis on its connection to matrix models, as well as non-perturbative properties of M-theory. We have also followed an alternative path known as loop quantum gravity. 5) Quantum field theory. The results obtained lately concern its foundations, in flat or curved spaces, but also applications to second-order phase transitions in statistical systems.

  20. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), the latter consisting of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
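The EnKF analysis step used above can be illustrated for a scalar state. This is a simplified sketch (identity observation operator, optional observation perturbation), not the SRNL implementation:

```python
def enkf_update(ensemble, obs, obs_err, rng=None):
    """One scalar EnKF analysis step: nudge each member toward the
    observation with a gain set by the ensemble variance.
    Pass rng (e.g. random.Random()) to perturb the observation per member,
    as in the stochastic EnKF; rng=None keeps the update deterministic."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_err ** 2)  # Kalman gain, identity obs operator
    updated = []
    for x in ensemble:
        y = obs if rng is None else obs + rng.gauss(0.0, obs_err)
        updated.append(x + gain * (y - x))
    return updated
```

With a prior ensemble [-1, 0, 1], an observation of 2, and unit observation error, the gain is 0.5 and the analysis mean moves halfway toward the observation.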

  1. Whole-body irradiation technique: physical aspects

    International Nuclear Information System (INIS)

    Venencia, D.; Bustos, S.; Zunino, S.

    1998-01-01

    The objective of this work has been to implement a total body irradiation technique that fulfills the following conditions: simplicity, repeatability, fast and comfortable positioning of the patient, dose homogeneity within 10-15%, short treatment times, and in vivo dosimetric verification. (Author)

  2. Ontology modeling in physical asset integrity management

    CERN Document Server

    Yacout, Soumaya

    2015-01-01

    This book presents cutting-edge applications of, and up-to-date research on, ontology engineering techniques in the physical asset integrity domain. Through a survey of state-of-the-art theory and methods on ontology engineering, the authors emphasize essential topics including data integration modeling, knowledge representation, and semantic interpretation. The book also reflects novel topics dealing with the advanced problems of physical asset integrity applications such as heterogeneity, data inconsistency, and interoperability existing in design and utilization. With a distinctive focus on applications relevant in heavy industry, Ontology Modeling in Physical Asset Integrity Management is ideal for practicing industrial and mechanical engineers working in the field, as well as researchers and graduate students concerned with ontology engineering in physical systems life cycles. This book also: Introduces practicing engineers, research scientists, and graduate students to ontology engineering as a modeling techniqu...

  3. Evaluation of conformal radiotherapy techniques through physics and biologic criteria

    International Nuclear Information System (INIS)

    Bloch, Jonatas Carrero

    2012-01-01

    In the fight against cancer, different irradiation techniques have been developed based on technological advances, aiming to optimize the elimination of tumor cells with the lowest damage to healthy tissues. The radiotherapy planning goal is to establish irradiation technical parameters in order to achieve the prescribed dose distribution over the treatment volumes. While dose prescription is based on the radiosensitivity of the irradiated tissues, the physical calculations in treatment planning take into account dosimetric parameters related to the radiation beam and the physical characteristics of the irradiated tissues. Incorporating tissue radiosensitivity into radiotherapy planning calculations can help individualize treatments and establish criteria to compare and select irradiation techniques, contributing to tumor control and the success of the treatment. Accordingly, biological models of cellular response to radiation have to be well established. This work aimed to study the applicability of biological models in radiotherapy planning calculations as an aid for evaluating radiotherapy techniques. Tumor control probability (TCP) was studied for two formulations of the linear-quadratic model, with and without repopulation, as a function of planning parameters, such as dose per fraction, and of radiobiological parameters, such as the α/β ratio. In addition, the use of biological criteria to compare radiotherapy techniques was tested on a prostate plan simulated with the Monte Carlo code PENELOPE. Afterwards, prostate plans for five patients from the Hospital das Clinicas da Faculdade de Medicina de Ribeirao Preto, USP, using three different techniques were compared using the tumor control probability. To that end, dose matrices from the XiO treatment planning system were converted to TCP distributions and TCP-volume histograms. The studies performed support the conclusion that radiobiological parameters can significantly influence tumor control.
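The Poisson TCP for the linear-quadratic model without repopulation can be written as TCP = exp(-N·SF), where N is the initial clonogen number and SF = exp(-n(αd + βd²)) is the surviving fraction after n fractions of dose d. A minimal sketch (parameter values in the test are illustrative, not those used in the study):

```python
import math

def tcp_lq(n_clonogens, alpha, beta, dose_per_fraction, n_fractions):
    """Poisson tumor control probability with linear-quadratic cell
    survival and no repopulation term: TCP = exp(-N * SF)."""
    d = dose_per_fraction
    surviving = math.exp(-n_fractions * (alpha * d + beta * d * d))
    return math.exp(-n_clonogens * surviving)
```

The repopulation variant multiplies the surviving fraction by a time-dependent regrowth factor; it is omitted here for brevity.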

  4. Contemporary machine learning: techniques for practitioners in the physical sciences

    Science.gov (United States)

    Spears, Brian

    2017-10-01

    Machine learning is the science of using computers to find relationships in data without explicitly knowing or programming those relationships in advance. Often without realizing it, we employ machine learning every day as we use our phones or drive our cars. Over the last few years, machine learning has found increasingly broad application in the physical sciences. This most often involves building a model relationship between a dependent, measurable output and an associated set of controllable, but complicated, independent inputs. The methods are applicable both to experimental observations and to databases of simulated output from large, detailed numerical simulations. In this tutorial, we will present an overview of current tools and techniques in machine learning - a jumping-off point for researchers interested in using machine learning to advance their work. We will discuss supervised learning techniques for modeling complicated functions, beginning with familiar regression schemes, then advancing to more sophisticated decision trees, modern neural networks, and deep learning methods. Next, we will cover unsupervised learning and techniques for reducing the dimensionality of input spaces and for clustering data. We'll show example applications from both magnetic and inertial confinement fusion. Along the way, we will describe methods for practitioners to help ensure that their models generalize from their training data to as-yet-unseen test data. We will finally point out some limitations to modern machine learning and speculate on some ways that practitioners from the physical sciences may be particularly suited to help. This work was performed by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
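The train/held-out-test discipline described in the tutorial can be shown with the simplest supervised model, a least-squares line fit; the data points below are made up for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def mse(xs, ys, a, b):
    """Mean squared error of the fit on a data set."""
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Fit on one slice of the data, then judge generalization on unseen points.
train_x, train_y = [0.0, 1.0, 2.0, 3.0], [1.1, 2.9, 5.2, 6.8]
test_x, test_y = [4.0, 5.0], [9.1, 11.0]
a, b = fit_line(train_x, train_y)
```

A low error on the held-out pair (not used in the fit) is the basic evidence that the model generalizes rather than memorizes.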

  5. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are directly measured, as in a field survey, and furthermore the structure and physical properties of the subsurface are known. The measured datasets from physical modeling are therefore very desirable as input data for testing the efficiency of various inversion algorithms. An underground structure formed by intrusion, which can often be seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. These physical modeling data are valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and estimating the capability of software for tasks such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  6. Surveillance as a Technique of Power in Physical Education

    Science.gov (United States)

    Webb, Louisa; McCaughtry, Nate; MacDonald, Doune

    2004-01-01

    This paper analyses surveillance as a technique of power in the culture of physical education, including its impact upon the health of teachers. Additionally, gendered aspects of surveillance are investigated because physical education is an important location in and through which bodies are inscribed with gendered identities. The embodied nature…

  7. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article investigates the differences among them. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on 2 criteria: notation and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serve as a basis for evaluating further modelling techniques.

  8. Standard Model physics

    CERN Multimedia

    Altarelli, Guido

    1999-01-01

    Introduction and structure of gauge theories. The QED and QCD examples. Chiral theories. The electroweak theory. Spontaneous symmetry breaking. The Higgs mechanism. Gauge boson and fermion masses. Yukawa couplings. Charged current couplings. The Cabibbo-Kobayashi-Maskawa matrix and CP violation. Neutral current couplings. The Glashow-Iliopoulos-Maiani mechanism. Gauge boson and Higgs couplings. Radiative corrections and loops. Cancellation of the chiral anomaly. Limits on the Higgs. Comparison. Problems of the Standard Model. Outlook.

  9. Quasi standard model physics

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1986-01-01

    Possible small extensions of the standard model are considered, which are motivated by the strong CP problem and by the baryon asymmetry of the Universe. Phenomenological arguments are given which suggest that imposing a PQ symmetry to solve the strong CP problem is only tenable if the scale of the PQ breakdown is much above M_W. Furthermore, an attempt is made to connect the scale of the PQ breakdown to that of the breakdown of lepton number. It is argued that in these theories the same intermediate scale may be responsible for the baryon number of the Universe, provided the Kuzmin-Rubakov-Shaposhnikov (B+L) erasing mechanism is operative. (orig.)

  10. Physics of nuclear radiations concepts, techniques and applications

    CERN Document Server

    Rangacharyulu, Chary

    2013-01-01

    Physics of Nuclear Radiations: Concepts, Techniques and Applications makes the physics of nuclear radiations accessible to students with a basic background in physics and mathematics. Rather than convince students one way or the other about the hazards of nuclear radiations, the text empowers them with tools to calculate and assess nuclear radiations and their impact. It discusses the meaning behind mathematical formulae as well as the areas in which the equations can be applied. After reviewing the physics preliminaries, the author addresses the growth and decay of nuclear radiations, the stability of nuclei or particles against radioactive transformations, and the behavior of heavy charged particles, electrons, photons, and neutrons. He then presents the nomenclature and physics reasoning of dosimetry, covers typical nuclear facilities (such as medical x-ray machines and particle accelerators), and describes the physics principles of diverse detectors. The book also discusses methods for measuring energy a...

  11. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
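The one-dimensional Schrödinger solvers discussed in the review can be illustrated with a shooting-method toy problem: integrate the discretized equation across an infinite square well in natural units (ħ = m = 1, unit well width) and bisect on the energy until the wavefunction vanishes at the far wall. This is a didactic sketch, not code from the review, and it recovers the analytic ground state E₁ = π²/2:

```python
import math

def shoot(energy, n_steps=2000):
    """Semi-implicit Euler integration of u'' = -2*E*u across a unit-width
    infinite square well with u(0) = 0; returns u at the far wall x = 1."""
    h = 1.0 / n_steps
    u, du = 0.0, 1.0  # u(0) = 0, arbitrary initial slope
    for _ in range(n_steps):
        du += h * (-2.0 * energy * u)
        u += h * du
    return u

def ground_state_energy(lo=1.0, hi=10.0, tol=1e-9):
    """Bisect on E until the integrated wavefunction vanishes at x = 1.
    The bracket [1, 10] contains only the ground state (pi^2/2 ~ 4.93)."""
    f_lo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = shoot(mid)
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```

Production solvers for quantum cascade structures replace the toy potential with the multiple-quantum-well band profile and solve the Schrödinger-Poisson system self-consistently, typically via the transfer matrix or finite difference methods named in the abstract.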

  12. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  13. Techniques for data compression in experimental nuclear physics problems

    International Nuclear Information System (INIS)

    Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.

    1984-01-01

    Techniques and methods for data compression during physical experiments are assessed. Data compression algorithms are divided into three groups: the first includes algorithms based on coding, characterized only by average indexes over data files; the second includes algorithms with data processing elements; the third, algorithms for storage of converted data. The techniques connected with data conversion are concluded to be the most promising. These techniques achieve high compression efficiency and fast response, and permit storing information close to the original.
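As an illustration of the first, coding-based group, a run-length encoder over raw bytes; this is a generic textbook example, not one of the algorithms surveyed in the report:

```python
def rle_encode(data):
    """Run-length encode a bytes object into (value, run_length) pairs."""
    pairs = []
    for b in data:
        if pairs and pairs[-1][0] == b:
            pairs[-1][1] += 1  # extend the current run
        else:
            pairs.append([b, 1])  # start a new run
    return [tuple(p) for p in pairs]

def rle_decode(pairs):
    """Invert rle_encode: expand each (value, run_length) pair."""
    return bytes(v for v, n in pairs for _ in range(n))
```

Such coding schemes compress well only when the data contain long runs, which is why the report favors conversion-based techniques for experimental data.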

  14. Physical model of Nernst element

    International Nuclear Information System (INIS)

    Nakamura, Hiroaki; Ikeda, Kazuaki; Yamaguchi, Satarou

    1998-08-01

    Generation of electric power by the Nernst effect is a new application of semiconductors. A key point of this proposal is to find materials with a high thermomagnetic figure of merit, which are called Nernst elements. In order to find candidates for the Nernst element, a physical model describing its transport phenomena is needed. As a first model, we began with a parabolic two-band model in classical statistics. According to this model, we selected InSb as a candidate Nernst element and measured its transport coefficients in magnetic fields up to 4 Tesla within a temperature region from 270 K to 330 K. In this region, we calculated transport coefficients numerically with our physical model. For InSb, experimental data coincide with theoretical values in strong magnetic fields. (author)

  15. New techniques in the physics of diagnostic radiology

    International Nuclear Information System (INIS)

    Jennings, R.J.

    1987-01-01

    The basic physics involved in the generation of X-rays and in the energy dependence of their interaction with matter is reviewed. Some applications of these ideas in both conventional X-ray imaging and in new imaging techniques are studied. Methods for the optimization of X-ray diagnostic imaging systems are discussed. (M.A.C.)

  16. Learning Physics through Project-Based Learning Game Techniques

    Science.gov (United States)

    Baran, Medine; Maskan, Abdulkadir; Yasar, Seyma

    2018-01-01

    The aim of the present study, in which project and game techniques are used together, is to examine the impact of project-based learning games on students' physics achievement. Participants of the study consisted of 34 9th grade students (N = 34). The data were collected using achievement tests and a questionnaire. Throughout the applications, the…

  17. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new...

  18. Instream Physical Habitat Modelling Types

    DEFF Research Database (Denmark)

    Conallin, John; Boegh, Eva; Krogsgaard, Jørgen

    2010-01-01

    The introduction of the EU Water Framework Directive (WFD) is providing member state water resource managers with significant challenges in relation to meeting the deadline for 'Good Ecological Status' by 2015. Overall, instream physical habitat modelling approaches have advantages and disadvantages, and water managers must choose the methods that best suit their situations. This paper analyses the potential of different methods available for water managers to assess hydrological and geomorphological impacts on the habitats of stream biota, as requested by the WFD. The review considers both conventional and new advanced research-based instream physical habitat models. In parametric and non-parametric regression models, model assumptions are often not satisfied and the models are difficult to transfer to other regions. Research-based methods such as artificial neural networks and individual-based modelling have promising potential as water...

  19. Accelerator physics and modeling: Proceedings

    International Nuclear Information System (INIS)

    Parsa, Z.

    1991-01-01

    This report contains papers on the following topics: Physics of high brightness beams; radio frequency beam conditioner for fast-wave free-electron generators of coherent radiation; wake-field and space-charge effects on high brightness beams. Calculations and measured results for BNL-ATF; non-linear orbit theory and accelerator design; general problems of modeling for accelerators; development and application of dispersive soft ferrite models for time-domain simulation; and bunch lengthening in the SLC damping rings

  20. Development of the physical model

    International Nuclear Information System (INIS)

    Liu Zunqi; Morsy, Samir

    2001-01-01

    Full text: The Physical Model was developed during Program 93+2 as a technical tool to aid enhanced information analysis and now is an integrated part of the Department's on-going State evaluation process. This paper will describe the concept of the Physical Model, including its objectives, overall structure and the development of indicators with designated strengths, followed by a brief description of using the Physical Model in implementing the enhanced information analysis. The work plan for expansion and update of the Physical Model is also presented at the end of the paper. The development of the Physical Model is an attempt to identify, describe and characterize every known process for carrying out each step necessary for the acquisition of weapons-usable material, i.e., all plausible acquisition paths for highly enriched uranium (HEU) and separated plutonium (Pu). The overall structure of the Physical Model has a multilevel arrangement. It includes at the top level all the main steps (technologies) that may be involved in the nuclear fuel cycle from the source material production up to the acquisition of weapons-usable material, and then beyond the civilian fuel cycle to the development of nuclear explosive devices (weaponization). Each step is logically interconnected with the preceding and/or succeeding steps by nuclear material flows. It contains at its lower levels every known process that is associated with the fuel cycle activities presented at the top level. For example, uranium enrichment is broken down into three branches at the second level, i.e., enrichment of UF6, UCl4 and U-metal respectively; and then further broken down at the third level into nine processes: gaseous diffusion, gas centrifuge, aerodynamic, electromagnetic, molecular laser (MLIS), atomic vapor laser (AVLIS), chemical exchange, ion exchange and plasma. Narratives are presented at each level, beginning with a general process description, then proceeding with detailed...
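The multilevel arrangement described above maps naturally onto a nested structure. The sketch below is deliberately partial and illustrative: only a few widely documented feed-form/process pairings are filled in, and it is not the Agency's actual Physical Model:

```python
# Illustrative fragment of one top-level step: feed-form branches at the
# second level, enrichment processes at the third. Only a partial,
# well-documented subset of the nine processes is assigned here.
enrichment = {
    "UF6": ["gaseous diffusion", "gas centrifuge", "aerodynamic"],
    "UCl4": ["electromagnetic"],
    "U-metal": ["atomic vapor laser (AVLIS)"],
}

def processes(branches):
    """Flatten one step's branches into its list of known processes."""
    return [p for procs in branches.values() for p in procs]
```

Walking such a tree level by level mirrors how the Physical Model links each acquisition-path step to the processes that could realize it.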

  1. Many Body Methods from Chemistry to Physics: Novel Computational Techniques for Materials-Specific Modelling: A Computational Materials Science and Chemistry Network

    Energy Technology Data Exchange (ETDEWEB)

    Millis, Andrew [Columbia Univ., New York, NY (United States). Dept. of Physics

    2016-11-17

    Understanding the behavior of interacting electrons in molecules and solids so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first principles treatment of superconducting and magnetic properties of strongly correlated materials, new techniques for existing methods including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method, a self-energy embedding theory and a new memory-function based approach to the calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials and to characterize and improve the properties of nanoscale devices.

  2. Physical models of cell motility

    CERN Document Server

    2016-01-01

    This book surveys the most recent advances in physics-inspired cell movement models. This synergetic, cross-disciplinary effort to increase the fidelity of computational algorithms will lead to a better understanding of the complex biomechanics of cell movement, and stimulate progress in research on related active matter systems, from suspensions of bacteria and synthetic swimmers to cell tissues and cytoskeleton.Cell motility and collective motion are among the most important themes in biology and statistical physics of out-of-equilibrium systems, and crucial for morphogenesis, wound healing, and immune response in eukaryotic organisms. It is also relevant for the development of effective treatment strategies for diseases such as cancer, and for the design of bioactive surfaces for cell sorting and manipulation. Substrate-based cell motility is, however, a very complex process as regulatory pathways and physical force generation mechanisms are intertwined. To understand the interplay between adhesion, force ...

  3. Physical model of reactor pulse

    International Nuclear Information System (INIS)

    Petrovic, A.; Ravnik, M.

    2004-01-01

    Pulse experiments have been performed at J. Stefan Institute TRIGA reactor since 1991. In total, more than 130 pulses have been performed. Extensive experimental information on the pulse physical characteristics has been accumulated. Fuchs-Hansen adiabatic model has been used for predicting and analysing the pulse parameters. The model is based on point kinetics equation, neglecting the delayed neutrons and assuming constant inserted reactivity in form of step function. Deficiencies of the Fuchs-Hansen model and systematic experimental errors have been observed and analysed. Recently, the pulse model was improved by including the delayed neutrons and time dependence of inserted reactivity. The results explain the observed non-linearity of the pulse energy for high pulses due to finite time of pulse rod withdrawal and the contribution of the delayed neutrons after the prompt part of the pulse. The results of the improved model are in good agreement with experimental results. (author)
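The Fuchs-Hansen adiabatic model described above reduces to two coupled equations: prompt power growth at rate (ρ0 − γE)/Λ for a step reactivity insertion, and energy accumulation dE/dt = P. As a hedged sketch (the parameter values below are illustrative stand-ins, not J. Stefan Institute TRIGA data), a direct numerical integration reproduces the model's closed-form pulse energy E_total = 2ρ0/γ and peak power ρ0²/(2γΛ):

```python
# Hedged sketch of the Fuchs-Hansen adiabatic pulse model: step reactivity
# insertion, no delayed neutrons, feedback proportional to released energy.
# All parameter values are illustrative assumptions.

def fuchs_hansen_pulse(rho0=0.007, gen_time=4e-5, gamma=7e-10,
                       p0=1.0, dt=1e-6, t_end=0.5):
    """Integrate dP/dt = (rho0 - gamma*E) * P / gen_time and dE/dt = P.

    rho0     : inserted reactivity above prompt critical (absolute units)
    gen_time : prompt neutron generation time Lambda [s]
    gamma    : adiabatic feedback per unit released energy [1/J]
    Returns (peak_power [W], total_energy [J]).
    """
    p, e, p_max = p0, 0.0, p0
    for _ in range(int(t_end / dt)):
        dp = (rho0 - gamma * e) * p / gen_time  # prompt kinetics with feedback
        e += p * dt                             # adiabatic energy accumulation
        p += dp * dt
        if p > p_max:
            p_max = p
    return p_max, e
```

With these assumed values the integration lands within a few percent of the closed-form totals E_total = 2ρ0/γ and P_max = ρ0²/(2γΛ). The improved model discussed in the abstract additionally tracks delayed neutrons and the finite rod-withdrawal time, both of which this sketch omits.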

  4. Protein Folding: Search for Basic Physical Models

    Directory of Open Access Journals (Sweden)

    Ivan Y. Torshin

    2003-01-01

    Full Text Available How a unique three-dimensional structure is rapidly formed from the linear sequence of a polypeptide is one of the important questions in contemporary science. Apart from the biological context of in vivo protein folding (which has been studied only for a few proteins), the roles of the fundamental physical forces in the in vitro folding remain largely unstudied. Despite a degree of success in using descriptions based on statistical and/or thermodynamic approaches, few of the current models explicitly include more basic physical forces (such as electrostatics and van der Waals forces). Moreover, the present-day models rarely take into account that protein folding is, essentially, a rapid process that produces a highly specific architecture. This review considers several physical models that may provide more direct links between sequence and tertiary structure in terms of the physical forces. In particular, elaboration of such simple models is likely to produce extremely effective computational techniques with value for modern genomics.

  5. Behavior change techniques in top-ranked mobile apps for physical activity.

    Science.gov (United States)

    Conroy, David E; Yang, Chih-Hsiang; Maher, Jaclyn P

    2014-06-01

    Mobile applications (apps) have potential for helping people increase their physical activity, but little is known about the behavior change techniques marketed in these apps. The aim of this study was to characterize the behavior change techniques represented in online descriptions of top-ranked apps for physical activity. Top-ranked apps (n=167) were identified on August 28, 2013, and coded using the Coventry, Aberdeen and London-Revised (CALO-RE) taxonomy of behavior change techniques during the following month. Analyses were conducted during 2013. Most descriptions of apps incorporated fewer than four behavior change techniques. The most common techniques involved providing instruction on how to perform exercises, modeling how to perform exercises, providing feedback on performance, goal-setting for physical activity, and planning social support/change. A latent class analysis revealed the existence of two types of apps, educational and motivational, based on their configurations of behavior change techniques. Behavior change techniques are not widely marketed in contemporary physical activity apps. Based on the available descriptions and functions of the observed techniques in contemporary health behavior theories, people may need multiple apps to initiate and maintain behavior change. This audit provides a starting point for scientists, developers, clinicians, and consumers to evaluate and enhance apps in this market. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  6. Service Learning In Physics: The Consultant Model

    Science.gov (United States)

    Guerra, David

    2005-04-01

    Each year thousands of students across the country and across the academic disciplines participate in service learning. Unfortunately, with no clear model for integrating community service into the physics curriculum, there are very few physics students engaged in service learning. To overcome this shortfall, a consultant-based service-learning program has been developed and successfully implemented at Saint Anselm College (SAC). As consultants, students in upper level physics courses apply their problem solving skills in the service of others. Most recently, SAC students provided technical and managerial support to a group from Girls Inc., a national empowerment program for girls in high-risk, underserved areas, who were participating in the national FIRST Lego League Robotics competition. In their role as consultants, the SAC students provided technical information through brainstorming sessions and helped the girls stay on task with project management techniques, like milestone charting. This consultant model of service-learning provides technical support to groups that may not have a great deal of resources and gives physics students a way to improve their interpersonal skills, test their technical expertise, and better define the marketable skill set they are developing through the physics curriculum.

  7. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  8. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  9. Cabin Environment Physics Risk Model

    Science.gov (United States)

    Mattenberger, Christopher J.; Mathias, Donovan Leigh

    2014-01-01

    This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of the crew-critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.

  10. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  11. Physical and measuring principles of nuclear well logging techniques

    International Nuclear Information System (INIS)

    Loetzsch, U.; Winkler, R.

    1981-01-01

    Proceeding from the general task of nuclear geophysics as a special discipline of applied geophysics, the essential physical problems of nuclear well logging techniques are considered. In particular, the quantitative relationships between measured values and the geologic parameters of interest are discussed, taking into account internal and external perturbation parameters. Resulting from this study, the technological requirements for radiation sources and their shielding, for detectors, electronic circuits in logging tools, signal transmission by cable and recording equipment are derived, and explained on the basis of examples of gamma-gamma and neutron-neutron logging. (author)

  12. Advanced detection techniques for educational experiments in cosmic ray physics

    International Nuclear Information System (INIS)

    Aiola, Salvatore; La-Rocca, Paola; Riggi, Francesco; Riggi, Simone

    2013-06-01

    In this paper we describe several detection techniques that can be employed to study cosmic ray properties and carry out training activities at high school and undergraduate level. Some of the proposed devices and instrumentation are inherited from professional research experiments, while others were especially developed and marketed for educational cosmic ray experiments. The educational impact of experiments in cosmic ray physics in high-school or undergraduate curricula is illustrated through various examples, ranging from simple experiments carried out with small Geiger counters or scintillation devices to more advanced detection instrumentation which can offer starting points for non-trivial research work. (authors)

  13. Excellence in Physics Education Award: Modeling Theory for Physics Instruction

    Science.gov (United States)

    Hestenes, David

    2014-03-01

    All humans create mental models to plan and guide their interactions with the physical world. Science has greatly refined and extended this ability by creating and validating formal scientific models of physical things and processes. Research in physics education has found that mental models created from everyday experience are largely incompatible with scientific models. This suggests that the fundamental problem in learning and understanding science is coordinating mental models with scientific models. Modeling Theory has drawn on resources of cognitive science to work out extensive implications of this suggestion and guide development of an approach to science pedagogy and curriculum design called Modeling Instruction. Modeling Instruction has been widely applied to high school physics and, more recently, to chemistry and biology, with noteworthy results.

  14. A Multivariate Model of Physics Problem Solving

    Science.gov (United States)

    Taasoobshirazi, Gita; Farley, John

    2013-01-01

    A model of expertise in physics problem solving was tested on undergraduate science, physics, and engineering majors enrolled in an introductory-level physics course. Structural equation modeling was used to test hypothesized relationships among variables linked to expertise in physics problem solving including motivation, metacognitive planning,…

  15. Techniques for nuclear and particle physics experiments. 2. rev. ed.

    International Nuclear Information System (INIS)

    Leo, W.R.

    1992-01-01

    This book is an outgrowth of an advanced laboratory course in experimental nuclear and particle physics the author gave to physics majors at the University of Geneva during the years 1978-1983. The course was offered to third and fourth year students, the latter of which had, at this point in their studies, chosen to specialize in experimental nuclear or particle physics. This implied that they would go on to do a 'diplome' thesis with one of the high- or intermediate-energy research groups in the physics department. The format of the course was such that the students were required to concentrate on only one experiment during the trimester, rather than perform a series of experiments as is more typical of a traditional course of this type. Their tasks thus included planning the experiment, learning the relevant techniques, setting up and troubleshooting the measuring apparatus, calibration, data-taking and analysis, as well as responsibility for maintaining their equipment, i.e., tasks resembling those in a real experiment. This more intensive involvement provided the students with a better understanding of the experimental problems encountered in a professional experiment and helped instill a certain independence and confidence which would prepare them for entry into a research group in the department. Teaching assistants were present to help the students during the trimester and a series of weekly lectures was also given on various topics in experimental nuclear and particle physics. This included general information on detectors, nuclear electronics, statistics, the interaction of radiation in matter, etc., and a good deal of practical information for actually doing experiments. (orig.) With 254 figs

  16. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such "short cuts" is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and mixed beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  17. Multiphysics software and the challenge to validating physical models

    International Nuclear Information System (INIS)

    Luxat, J.C.

    2008-01-01

    This paper discusses multiphysics software and validation of physical models in the nuclear industry. The major challenge is to convert a general purpose software package into a robust application-specific solution. This requires greater knowledge of the underlying solution techniques and of the limitations of the packages. Good user interfaces and neat graphics do not compensate for any deficiencies

  18. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  19. Application of physical and chemical characterization techniques to metallic powders

    International Nuclear Information System (INIS)

    Slotwinski, J. A.; Watson, S. S.; Stutzman, P. E.; Ferraris, C. F.; Peltz, M. A.; Garboczi, E. J.

    2014-01-01

    Systematic studies have been carried out on two different powder materials used for additive manufacturing: stainless steel and cobalt-chrome. The characterization of these powders is important in NIST efforts to develop appropriate measurements and standards for additive materials and to document the properties of powders used in a NIST-led additive manufacturing material round robin. An extensive array of characterization techniques was applied to these two powders, in both virgin and recycled states. The physical techniques included laser diffraction particle size analysis, X-ray computed tomography for size and shape analysis, and optical and scanning electron microscopy. Techniques sensitive to chemistry, including X-ray diffraction and energy dispersive analytical X-ray analysis using the X-rays generated during scanning electron microscopy, were also employed. Results of these analyses will be used to shed light on the question: how does virgin powder change after being exposed to and recycled from one or more additive manufacturing build cycles? In addition, these findings can give insight into the actual additive manufacturing process

  20. Different Applications of Rheological Techniques in Studies of Physical Gels

    DEFF Research Database (Denmark)

    Hvidt, Søren

    of associating protein filaments with the characteristic function of individual filaments. The proteins enable the cell to regulate the mechanical properties of the cell by sol-gel transition and a variety of crosslinking reactions. In the food industry texture of products are regulated by addition of gel…, respectively, are particularly useful for investigating slow motions in gels and long-time properties. An example of how these different techniques have been used to investigate the rheological properties of sputum [4] will be discussed. The results demonstrate that sputum is a viscoelastic material… are dominated by repulsive interactions between micelles, and oscillatory measurements allow a determination of the repulsive potential between micelles. Oscillatory bulk modulus measurements have been used to determine the dynamics of unimer-micelle motions. The strain properties of physical gels are of major…

  1. Model-implementation fidelity in cyber physical system design

    CERN Document Server

    Fabre, Christian

    2017-01-01

    This book puts in focus various techniques for checking the modeling fidelity of Cyber Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques representing different communities, from very different angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models and statistics-based validation techniques. Addresses CPS design problems such as cross-application interference, parsimonious modeling, and trustful code production; describes solutions, such as simulation for extra-functional properties, extension of cod...

  2. Physical quantities, their role and treatment in gasflow measurement techniques

    International Nuclear Information System (INIS)

    Narjes, L.

    1977-06-01

    We begin by taking a closer look at the concepts physical quantity, dimension and unit of measurement. Then a survey is given of the physical quantities applied in gasflow measurement techniques. Here the volume-, as well as the mass-flow rate, as derived quantities are of particular interest. The application of these quantities in relation to the legal units of measurement is specifically described. In addition the quantity equation and further the quantity equation adapted to the use of suitable units and their modes of application are compared. In the appendix four examples clarify these modes. Special attention is paid to the quantity equation adapted to practically oriented units. The applications of this type of equation in VDI regulations, standards and other technical guidelines for measurement of flow are mentioned. Moreover, the meaning of the standard state for the comparison of flows of gaseous fluids is illustrated. The difficulties concerning an international agreement on uniform standard temperature are explained. Starting from there, the advantages of the fundamental quantity 'amount of substance' applied to the measurement of flow are described. The use of this quantity for the thermodynamic state of ideal and real gases, respectively gas mixtures, is demonstrated in the appendix by an example. (orig.) [de

  3. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  4. Icarus: A Novel detection technique for underground physics

    International Nuclear Information System (INIS)

    Bueno, A.

    2004-01-01

    The ICARUS experiment is a liquid Argon TPC with imaging capabilities, able to produce high granularity 3D reconstruction of recorded events as well as high precision measurements over large sensitive volumes. A full test of a 600 ton detector was carried out, at shallow depth, in the year 2001 in Pavia, Italy. The successful operation of this device and the ongoing data analysis have shown that the liquid Argon technology is now mature and suitable for the construction of very massive detectors. We review some topics of a broad physics program looking for rare phenomena beyond the Standard Model. (Author) 7 refs

  5. High-energy physics software parallelization using database techniques

    International Nuclear Information System (INIS)

    Argante, E.; Van der Stok, P.D.V.; Willers, I.

    1997-01-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is for a large part transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI. (orig.)

  6. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
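One minimal way to picture this approach, sketched here with assumed distributions and a textbook linear-hardening law rather than the Sandia models or any calibrated parameters: treat the yield stress and hardening modulus as random variables whose spread stands in for the experimentally observed material variability, and draw realizations of the stress-strain response.

```python
# Hedged sketch: plasticity with random material parameters standing in for
# experimentally observed variability. The distributions and the linear
# elastic / linear hardening law are illustrative assumptions.
import random


def stress(strain, young, sy, hardening):
    """Uniaxial stress for a linear elastic / linear hardening material."""
    eps_y = sy / young                       # yield strain
    if strain <= eps_y:
        return young * strain                # elastic branch
    return sy + hardening * (strain - eps_y)  # plastic branch


def sample_curves(n, strain_grid, seed=0):
    """Draw n stress-strain realizations under assumed parameter spread."""
    rng = random.Random(seed)
    curves = []
    for _ in range(n):
        sy = rng.gauss(350e6, 20e6)          # yield stress [Pa], assumed
        hardening = rng.gauss(1.5e9, 0.2e9)  # hardening modulus [Pa], assumed
        curves.append([stress(s, 200e9, sy, hardening) for s in strain_grid])
    return curves
```

Sampling this way yields an ensemble of response curves whose spread can be compared against the distribution of experimental curves, which is the basic ingredient the UQ-based calibration and model-selection steps in the abstract build upon.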

  7. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings of the causes and effects of flooding.

  8. Experimental and Computational Techniques in Soft Condensed Matter Physics

    Science.gov (United States)

    Olafsen, Jeffrey

    2010-09-01

    1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed Systems Carl F. Schrek and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.

  9. Literature Review of Dredging Physical Models

    Science.gov (United States)

    This U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, special report presents a review of dredging physical model studies with the goal of understanding the most current state of dredging physical modeling, understanding conditions of similitude used in past studies, and determining whether the flow field around a dredging operation has been quantified. Historical physical modeling efforts have focused on

  10. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models, using, for example, regression techniques, with simplified models for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  11. Statistical classification techniques in high energy physics (SDDT algorithm)

    International Nuclear Information System (INIS)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2016-01-01

    We present our proposal of the supervised binary divergence decision tree with nested separation method based on the generalized linear models. A key insight we provide is the clustering driven only by a few selected physical variables. The proper selection consists of the variables achieving the maximal divergence measure between two different classes. Further, we apply our method to Monte Carlo simulations of physics processes corresponding to a data sample of top quark-antiquark pair candidate events in the lepton+jets decay channel. The data sample is produced in pp̅ collisions at √s = 1.96 TeV. It corresponds to an integrated luminosity of 9.7 fb⁻¹ recorded with the D0 detector during Run II of the Fermilab Tevatron Collider. The efficiency of our algorithm achieves 90% AUC in separating signal from background. We also briefly deal with the modification of statistical tests applicable to weighted data sets in order to test the homogeneity of the Monte Carlo simulations and measured data. The justification of these modified tests is proposed through the divergence tests. (paper)
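
    The two building blocks of the record above, selecting the variables with maximal divergence between two classes and scoring separation by AUC, can be sketched on toy data. This is not the paper's SDDT algorithm, only an illustration of divergence-driven variable selection; the binned divergence score and all data here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def divergence(a, b, bins=30):
    """Symmetrised, divergence-like score between two samples
    computed from binned densities (a rough Jensen-Shannon analogue)."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    pa, pb = pa + 1e-12, pb + 1e-12
    m = 0.5 * (pa + pb)
    return 0.5 * np.sum(pa * np.log(pa / m)) + 0.5 * np.sum(pb * np.log(pb / m))

# Toy "signal" and "background": variable 0 separates, variable 1 does not.
sig = np.column_stack([rng.normal(1.0, 1.0, 5000), rng.normal(0, 1, 5000)])
bkg = np.column_stack([rng.normal(-1.0, 1.0, 5000), rng.normal(0, 1, 5000)])

divs = [divergence(sig[:, j], bkg[:, j]) for j in range(2)]
best = int(np.argmax(divs))  # variable with maximal divergence

def auc(score_sig, score_bkg):
    """AUC via the rank-sum (Mann-Whitney) statistic."""
    scores = np.concatenate([score_sig, score_bkg])
    ranks = scores.argsort().argsort() + 1
    n_s, n_b = len(score_sig), len(score_bkg)
    return (ranks[:n_s].sum() - n_s * (n_s + 1) / 2) / (n_s * n_b)

sel_auc = auc(sig[:, best], bkg[:, best])
print("divergences:", divs, "selected variable:", best)
print("AUC of selected variable:", sel_auc)
```

    The separating variable receives the larger divergence score and is selected; its AUC quantifies how well it alone separates signal from background.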

  12. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained based on an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages are selected, and will be integrated for this purpose. They are PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is a relatively new technique. The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  13. Evaluating a Model of Youth Physical Activity

    Science.gov (United States)

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  14. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnels are discussed, with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  15. Design and performance evaluations of generic programming techniques in a R and D prototype of Geant4 physics

    Energy Technology Data Exchange (ETDEWEB)

    Pia, M G; Saracco, P; Sudhakar, M [INFN Sezione di Genova, Via Dodecaneso 33, 16146 Genova (Italy); Zoglauer, A [University of California at Berkeley, Berkeley, CA 94720-7450 (United States); Augelli, M [CNES, 18 Av. Edouard Belin, 31401 Toulouse (France); Gargioni, E [University Medical Center Hamburg-Eppendorf, D-20246 Hamburg (Germany); Kim, C H [Hanyang University, 17 Haengdang-dong, Seongdong-gu, Seoul, 133-791 (Korea, Republic of); Quintieri, L [INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, I-00044 Frascati (Italy); Filho, P P de Queiroz; Santos, D de Souza [IRD, Av. Salvador Allende, s/n. 22780-160, Rio de Janeiro, RJ (Brazil); Weidenspointner, G [MPI fuer extraterrestrische Physik Postfach 1603, D-85740 Garching (Germany); Begalli, M, E-mail: mariagrazia.pia@ge.infn.i [UERJ, R. Sao Francisco Xavier, 524. 20550-013, Rio de Janeiro, RJ (Brazil)

    2010-04-01

    An R and D project has been recently launched to investigate Geant4 architectural design in view of addressing new experimental issues in HEP and other related physics disciplines. In the context of this project the use of generic programming techniques, besides the conventional object-oriented ones, is investigated. Software design features and preliminary results from a new prototype implementation of Geant4 electromagnetic physics are illustrated. Performance evaluations are presented. Issues related to quality assurance in Geant4 physics modelling are discussed.

  16. A validated physical model of greenhouse climate.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the

  17. Numerical modelling in material physics

    International Nuclear Information System (INIS)

    Proville, L.

    2004-12-01

    The author first briefly presents his past research activities: investigation of a dislocation sliding in solid solution by molecular dynamics, modelling of metal film growth by phase field and Monte Carlo kinetics, phase field model for surface self-organisation, phase field model for the Al 3 Zr alloy, calculation of anharmonic phonons, mobility of bipolarons in superconductors. Then, he more precisely reports the mesoscopic modelling in phase field, and some atomistic modelling (dislocation sliding, Monte Carlo simulation of metal surface growth, anharmonic network optical spectrum modelling)

  18. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  19. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally-expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution that is being evaluated to assist with the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
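
    The surrogate-model idea described above, spending a limited budget of expensive simulation runs to fit a cheap approximation that can then be queried many times, can be sketched in a few lines. This is a toy illustration only, not the RISMC tooling; the "simulation", the polynomial surrogate, and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_simulation(x):
    """Hypothetical stand-in for a long-running physics code."""
    return np.sin(3 * x) + 0.5 * x**2

# A limited budget of full simulation runs...
x_train = np.linspace(0, 2, 15)
y_train = expensive_simulation(x_train)

# ...is used to fit a cheap surrogate (here a degree-6 polynomial).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# The surrogate is then queried at thousands of points almost for free.
x_query = rng.uniform(0, 2, 10_000)
err = np.max(np.abs(surrogate(x_query) - expensive_simulation(x_query)))
print("max surrogate error over 10k queries:", err)
```

    In a real analysis the polynomial would typically be replaced by a Gaussian process or similar model, but the workflow (train on few runs, query cheaply, check error) is the same.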

  20. Metallographic techniques for evaluation of Thermal Barrier Coatings produced by Electron Beam Physical Vapor Deposition

    International Nuclear Information System (INIS)

    Kelly, Matthew; Singh, Jogender; Todd, Judith; Copley, Steven; Wolfe, Douglas

    2008-01-01

    Thermal Barrier Coatings (TBC) produced by Electron Beam Physical Vapor Deposition (EB-PVD) are primarily applied to critical hot section turbine components. EB-PVD TBC for turbine applications exhibit a complicated structure of porous ceramic columns separated by voids that offers mechanical compliance. Currently there are no standard methods for evaluating EB-PVD TBC structure quantitatively. This paper proposes a metallographic method for preparing samples, together with evaluation techniques to measure structure quantitatively. TBC samples were produced and evaluated with the proposed metallographic technique and digital image analysis for columnar grain size and relative intercolumnar porosity. Incorporation of the proposed evaluation technique will increase knowledge of the relation between processing parameters and material properties by providing a structural link. Application of this evaluation method will directly benefit areas of quality control, microstructural model development, and reduced development time for process scaling.
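
    The two quantities the record measures by digital image analysis, relative intercolumnar porosity and columnar grain size, can be illustrated on a synthetic binary cross-section. A real analysis would start from a thresholded micrograph; the image and all dimensions below are invented for the example.

```python
import numpy as np

# Synthetic binary cross-section: 1 = ceramic column, 0 = inter-columnar void.
row = np.zeros(300, dtype=int)
for start in range(0, 300, 30):        # columns 24 px wide, voids 6 px
    row[start:start + 24] = 1
image = np.tile(row, (100, 1))

porosity = 1.0 - image.mean()          # relative inter-columnar porosity

def mean_run_length(line, value=1):
    """Mean length of consecutive runs of `value` along one image row."""
    padded = np.concatenate([[0], (line == value).astype(int), [0]])
    edges = np.diff(padded)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return (ends - starts).mean()

column_width = mean_run_length(image[0])
print(f"porosity fraction: {porosity:.2f}, mean column width: {column_width} px")
```

    On the synthetic image this recovers the 20% void fraction and 24-pixel column width by construction; on a micrograph the same run-length statistics would be averaged over many rows.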

  1. Problems in physical modeling of magnetic materials

    International Nuclear Information System (INIS)

    Della Torre, E.

    2004-01-01

    Physical modeling of magnetic materials should give insights into the basic processes involved and should be able to extrapolate results to new situations that the models were not necessarily intended to solve. Thus, for example, if a model is designed to describe a static magnetization curve, it should also be able to describe aspects of magnetization dynamics. Both micromagnetic modeling and Preisach modeling, the two most popular magnetic models, fulfill this requirement, but in the process of fulfilling this requirement, they both had to be modified in some ways. Hence, we should view physical modeling as an iterative process whereby we start with some simple assumptions and refine them as reality requires. In the process of refining these assumptions, we should try to appeal to physical arguments for the modifications, if we are to come up with good models. If, on the other hand, we consider phenomenological models, that is, axiomatic models requiring no physical justification, we can follow them logically to the end and examine the consequences of their assumptions. In this way, we can learn the properties, limitations and achievements of the particular model. Physical and phenomenological models complement each other in furthering our understanding of the behavior of magnetic materials.

  2. Statistical and particle physics: Common problems and techniques

    International Nuclear Information System (INIS)

    Bowler, K.C.; Mc Kane, A.J.

    1984-01-01

    These proceedings contain statistical mechanical studies in condensed matter physics; interfacial problems in statistical physics; string theory; general Monte Carlo methods and their application to lattice gauge theories; topological excitations in field theory; phase transformation kinetics; and studies of chaotic systems

  3. High precision Standard Model Physics

    International Nuclear Information System (INIS)

    Magnin, J.

    2009-01-01

    The main goal of the LHCb experiment, one of the four large experiments of the Large Hadron Collider, is to try to answer the question of why Nature prefers matter over antimatter. This will be done by studying the decay of b quarks and their antimatter partners, b-bar, which will be produced by the billions in 14 TeV p-p collisions at the LHC. In addition, as 'beauty' particles mainly decay into charm particles, an interesting program of charm physics will be carried out, allowing quantities such as the D⁰-D̄⁰ mixing to be measured with incredible precision.

  4. Development of object-oriented software technique in field of high energy and nuclear physics

    International Nuclear Information System (INIS)

    Ye Yanlin; Ying Jun; Chen Tao

    1997-01-01

    The background for developing object-oriented software technique in high energy and nuclear physics has been introduced. The progress made at CERN and US has been outlined. The merit and future of various software techniques have been commented

  5. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345, Physics-Based Modeling of Compressible Turbulence, Parviz Moin, Leland Stanford Junior Univ CA, Final Report, 09/13/2016. This is the final report on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011...

  6. The Physical Internet and Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Diane Poulin

    2012-06-01

    Full Text Available Building on the analogy of data packets within the Digital Internet, the Physical Internet is a concept that dramatically transforms how physical objects are designed, manufactured, and distributed. This approach is open, efficient, and sustainable beyond traditional proprietary logistical solutions, which are often plagued by inefficiencies. The Physical Internet redefines supply chain configurations, business models, and value-creation patterns. Firms are bound to be less dependent on operational scale and scope trade-offs because they will be in a position to offer novel hybrid products and services that would otherwise destroy value. Finally, logistical chains become flexible and reconfigurable in real time, thus becoming better in tune with firm strategic choices. This article focuses on the potential impact of the Physical Internet on business model innovation, both from the perspectives of Physical-Internet enabled and enabling business models.

  7. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  8. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  9. Quark models in hadron physics

    International Nuclear Information System (INIS)

    Phatak, Shashikant C.

    2007-01-01

    In this talk, we review the role played by quark models in the study of the interaction of strong, weak and electromagnetic probes with hadrons at intermediate and high momentum transfers. By hadrons, we mean individual nucleons as well as nuclei. We argue that at these momentum transfers, the structure of hadrons plays an important role. This structure arises from the underlying quark structure of hadrons, and therefore quark models play an important role in determining hadron structure. Further, the properties of hadrons are likely to change when they are placed in a nuclear medium, and this change should arise from the underlying quark structure. We shall consider some quark models to look into these aspects. (author)

  10. Physics of the Quark Model

    Science.gov (United States)

    Young, Robert D.

    1973-01-01

    Discusses the charge independence, wavefunctions, magnetic moments, and high-energy scattering of hadrons on the basis of group theory and nonrelativistic quark model with mass spectrum calculated by first-order perturbation theory. The presentation is explainable to advanced undergraduate students. (CC)

  11. Simplified Models for LHC New Physics Searches

    CERN Document Server

    Alves, Daniele; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela; Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip; Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim; Thomas, Brooks; Thomas, Scott; Toro, Natalia; Volansky, Tomer; Wacker, Jay; Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a...

  12. Short review of runoff and erosion physically based models

    Directory of Open Access Journals (Sweden)

    Gabrić Ognjen

    2015-01-01

    Full Text Available Processes of runoff and erosion are among the main research subjects in hydrological science. Based on field and laboratory measurements, and in parallel with the development of computational techniques, runoff and erosion models based on equations that describe the physics of the process have also been developed. Several models of runoff and erosion that describe the entire process of sediment genesis and transport on the catchment are described and compared.
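
    The kind of process equation such physically based runoff models solve can be illustrated with a textbook Horton-type infiltration-excess scheme. This is a generic sketch, not one of the reviewed models; all storm and soil parameters below are invented for the example.

```python
import numpy as np

dt = 1.0 / 60.0                       # time step [h]
t = np.arange(0, 2, dt)               # 2-hour storm
rainfall = np.full_like(t, 20.0)      # rainfall intensity [mm/h]

# Horton infiltration capacity: f(t) = fc + (f0 - fc) * exp(-k t)
f0, fc, k = 60.0, 5.0, 2.0            # [mm/h, mm/h, 1/h], hypothetical soil
infil_capacity = fc + (f0 - fc) * np.exp(-k * t)

infiltration = np.minimum(rainfall, infil_capacity)
runoff = rainfall - infiltration      # infiltration-excess (Hortonian) runoff

total_rain = rainfall.sum() * dt
total_runoff = runoff.sum() * dt
print("total rainfall [mm]:", total_rain)
print("total runoff   [mm]:", total_runoff)
```

    Early in the storm the soil's infiltration capacity exceeds the rainfall intensity and no runoff is generated; runoff begins once the capacity decays below the rainfall rate. Full models couple such point equations with overland-flow routing and sediment transport.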

  13. Modeling Cyber Physical War Gaming

    Science.gov (United States)

    2017-08-07

    games share similar constructs. We also provide a game-theoretic approach to mathematically analyze attacker and defender strategies in cyber war gaming. The report covers the military practice of course-of-action analysis and a game-theoretic method comprising a mathematical model and strategy selection.

  14. Physics beyond the Standard Model

    Science.gov (United States)

    Lach, Theodore

    2011-04-01

    Recent discoveries of the excited states of the Bs** meson, along with the discovery of the omega-b-minus, have brought into popular acceptance the concept of the orbiting quarks predicted by the Checker Board Model (CBM) 14 years ago. Back then the concept of orbiting quarks was not fashionable. Recent estimates of the velocities of these quarks inside the proton and neutron are in excess of 90% of the speed of light, also in agreement with the CBM model. Still, a 2D structure of the nucleus has not been accepted, nor has it been proven wrong. The CBM predicts that the masses of the up and dn quarks are 237.31 MeV and 42.392 MeV respectively, and suggests that a lighter generation of quarks, u and d, makes up the light mesons. The CBM also predicts that the T' and B' quarks do exist and are not as massive as might be expected (this would make it a five-generation world, in conflict with the SM). The details of the CB model and the prediction of quark masses can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, http://xxx.lanl.gov/.

  15. ``Wow'' is good, but ``I see'' is better - techniques for more effective Physics demonstrations

    Science.gov (United States)

    Collins, Stephen

    2008-03-01

    The use of demonstrations to assist in Physics education at all levels is commonplace, but frequently lacks optimal effectiveness. In many cases, the choice of demonstration is not at issue, but rather the manner in which it is presented to the audience. Modern educational research reveals a number of simple ways to improve instruction of this kind, including objective setting, audience evaluation, concept building, and promoting engagement. These techniques and considerations will be reviewed, explained, and modeled through a demonstration of ``Why Mr. Fork and Mr. Microwave Oven don't get along.''

  16. Ladder physics in the spin fermion model

    Science.gov (United States)

    Tsvelik, A. M.

    2017-05-01

    A link is established between the spin fermion (SF) model of the cuprates and the approach based on the analogy between the physics of doped Mott insulators in two dimensions and the physics of fermionic ladders. This enables one to use nonperturbative results derived for fermionic ladders to move beyond the large-N approximation in the SF model. It is shown that the paramagnon exchange postulated in the SF model has exactly the right form to facilitate the emergence of the fully gapped d-Mott state in the region of the Brillouin zone at the hot spots of the Fermi surface. Hence, the SF model provides an adequate description of the pseudogap.

  17. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.
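
    The deterministic component of the unsteady flow field that the record's mixing-stress analysis targets is conventionally extracted by phase-locked (ensemble) averaging over rotor revolutions. The sketch below illustrates that decomposition on a synthetic probe signal; the blade-periodic waveform, noise level, and sampling parameters are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# A probe signal sampled over many rotor revolutions: a blade-periodic
# (deterministic) component plus random turbulence.
samples_per_rev, n_revs = 128, 200
phase = np.linspace(0, 2 * np.pi, samples_per_rev, endpoint=False)
deterministic = np.sin(4 * phase)                  # 4 "blade" passings per rev
signal = deterministic + rng.normal(0, 0.5, (n_revs, samples_per_rev))

# Phase-locked (ensemble) average recovers the deterministic component;
# the residual estimates the non-deterministic (random) part.
det_est = signal.mean(axis=0)
residual = signal - det_est

err = np.max(np.abs(det_est - deterministic))
print("deterministic recovery error:", err)
print("residual rms (turbulence level estimate):", residual.std())
```

    Correlations of the recovered deterministic fluctuation (and of the residual) then supply the deterministic and random contributions to the mixing stress tensor, respectively.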

  18. Behaviour change techniques in physical activity interventions for men with prostate cancer: A systematic review.

    Science.gov (United States)

    Hallward, Laura; Patel, Nisha; Duncan, Lindsay R

    2018-02-01

    Physical activity interventions can improve prostate cancer survivors' health. Determining the behaviour change techniques used in physical activity interventions can help elucidate the mechanisms by which an intervention successfully changes behaviour. The purpose of this systematic review was to identify and evaluate behaviour change techniques in physical activity interventions for prostate cancer survivors. A total of 7 databases were searched and 15 studies were retained. The studies included a mean of 6.87 behaviour change techniques (range = 3-10), and similar behaviour change techniques were implemented in all studies. Consideration of how behaviour change techniques are implemented may help identify how behaviour change techniques enhance physical activity interventions for prostate cancer survivors.

  19. Modelling Mathematical Reasoning in Physics Education

    Science.gov (United States)

    Uhden, Olaf; Karam, Ricardo; Pietrocola, Maurício; Pospiech, Gesche

    2012-04-01

    Many findings from research as well as reports from teachers describe students' problem solving strategies as manipulation of formulas by rote. The resulting dissatisfaction with quantitative physical textbook problems seems to influence the attitude towards the role of mathematics in physics education in general. Mathematics is often seen as a tool for calculation which hinders a conceptual understanding of physical principles. However, the role of mathematics cannot be reduced to this technical aspect. Hence, instead of putting mathematics away we delve into the nature of physical science to reveal the strong conceptual relationship between mathematics and physics. Moreover, we suggest that, for both prospective teaching and further research, a focus on deeply exploring such interdependency can significantly improve the understanding of physics. To provide a suitable basis, we develop a new model which can be used for analysing different levels of mathematical reasoning within physics. It is also a guideline for shifting the attention from technical to structural mathematical skills while teaching physics. We demonstrate its applicability for analysing physical-mathematical reasoning processes with an example.

  20. Physical and chemical properties of gels. Application to protein nucleation control in the gel acupuncture technique

    Science.gov (United States)

    Moreno, Abel; Juárez-Martínez, Gabriela; Hernández-Pérez, Tomás; Batina, Nikola; Mundo, Manuel; McPherson, Alexander

    1999-09-01

    In this work, we present a new approach using analytical and optical techniques in order to determine the physical and chemical properties of silica gel, as well as the measurement of the pore size in the network of the gel by scanning electron microscopy. The gel acupuncture technique developed by García-Ruiz et al. (Mater. Res. Bull. 28 (1993) 541) and García-Ruiz and Moreno (Acta Crystallogr. D 50 (1994) 484) was used for crystal growth throughout this work. Several experiments were done in order to evaluate the nucleation control of model proteins (thaumatin I from Thaumatococcus daniellii, lysozyme from hen egg white and catalase from bovine liver) by the porous network of the gel. Finally, it is shown how the number and the size of the crystals obtained inside X-ray capillaries are controlled by the size of the porous structure of the gel.

  1. Experimental techniques and physics in a polarized storage ring

    International Nuclear Information System (INIS)

    Dueren, M.

    1995-01-01

    In May 1994 spin rotators were brought into operation at HERA and for the first time longitudinal electron polarization was produced in a high energy storage ring. A Compton polarimeter is used for empirical optimization of the polarization to values of up to 70%. HERMES makes use of the stored polarized beam with an internal polarized target. The density of a gas target is increased by a storage cell by two orders of magnitude compared to a free gas jet. Data taking begins in 1995 with measurements on polarized spin structure functions and also on semi-inclusive polarized hadron production. The inclusive physics program is in competition with experiments at CERN and SLAC. The semi-inclusive physics program promises to solve basic questions of the spin structure of matter by decomposing the spin contributions of the different quark flavors. (author) 24 figs., 3 tabs., 44 refs

  2. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in a study on building a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. A PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  3. Waste Feed Evaporation Physical Properties Modeling

    International Nuclear Information System (INIS)

    Daniel, W.E.

    2003-01-01

    This document describes the waste feed evaporator modeling work done under the Waste Feed Evaporation and Physical Properties Modeling test specification, in support of the Hanford River Protection Project (RPP) Waste Treatment Plant (WTP) project. A private database (ZEOLITE) was developed and used in this work in order to include the behavior of aluminosilicates such as NAS-gel in the OLI/ESP simulations, in addition to the development of the mathematical models. Mathematical models were developed that describe certain physical properties in the Hanford RPP-WTP waste feed evaporator process (FEP). In particular, models were developed for the feed stream to the first ultra-filtration step characterizing its heat capacity, thermal conductivity, and viscosity, as well as the density of the evaporator contents. The scope of the task was expanded to include the volume reduction factor across the waste feed evaporator (total evaporator feed volume/evaporator bottoms volume). All the physical properties were modeled as functions of the waste feed composition, temperature, and the high-level waste recycle volumetric flow rate relative to that of the waste feed. The goal for the mathematical models was to reproduce the physical property values predicted by the simulation. The simulation model approximating the FEP process used to develop the correlations was relatively complex, and not possible to duplicate within the scope of the bench-scale evaporation experiments. Therefore, simulants were made of 13 design points (a subset of the points used in the model fits) using the compositions of the ultra-filtration feed streams as predicted by the simulation model. The chemistry and physical properties of the supernate (the modeled stream) as predicted by the simulation were compared with the analytical results of the experimental simulant work as a method of validating the simulation software.

  4. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high-pressure compressor, combustor, high-pressure turbine, low-pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a two-stream turbofan engine. Special attention has been paid to developing transient capabilities throughout the model, improving the physics models, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  5. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  6. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching, and it requires many scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visible physical phenomena relating to the natural elements and the ship's behaviour are reproduced. Our swell model, based on a surface-wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...

  7. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    Full Text Available The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  8. Experimental techniques and physics in a polarized storage ring

    International Nuclear Information System (INIS)

    Dueren, M.

    1994-12-01

    In May 1994 spin rotators were brought into operation at HERA and for the first time longitudinal electron polarization was produced in a high energy storage ring. A Compton polarimeter is used for optimization of the polarization to values of up to 70%. HERMES is a new experiment designed to study the spin structure of the nucleon by deep inelastic scattering from the proton and neutron using the longitudinally polarized electron beam at HERA and internal polarized gas targets. The density of the gas targets is increased by a storage cell by two orders of magnitude compared to a free gas jet. Data taking begins in 1995 with measurements on polarized spin structure functions and also on semi-inclusive polarized hadron production. The inclusive physics program is in competition with experiments at CERN and SLAC. The semi-inclusive physics program promises to solve basic questions of the spin structure of matter by decomposing the spin contributions of the different quark flavors. (orig.)

  9. Kernel and divergence techniques in high energy physics separations

    Science.gov (United States)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We demonstrate the great potential of adaptive kernel density estimation as the nested separation method of a supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates that utilizes the Fourier transform. Further, we apply our method to a Monte Carlo data set from the DØ experiment at the Tevatron particle accelerator at Fermilab and present final top-antitop signal separation results. We achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
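
    As an illustration of the separation idea described above (a sketch of my own, not the authors' code), a one-dimensional Gaussian kernel density estimate can serve as a Bayesian signal/background separator. The feature distributions, bandwidth, and priors here are invented for illustration.

```python
import numpy as np

def kde(x, samples, h):
    """Gaussian kernel density estimate at points x from training samples."""
    u = (x[:, None] - samples[None, :]) / h        # pairwise scaled distances
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def bayes_classify(x, sig, bkg, h=0.3, p_sig=0.5):
    """Label x as signal (1) where the estimated signal posterior dominates."""
    f_s = kde(x, sig, h) * p_sig
    f_b = kde(x, bkg, h) * (1.0 - p_sig)
    return (f_s > f_b).astype(int)

rng = np.random.default_rng(0)
sig = rng.normal(+1.0, 0.5, 500)   # toy "signal" feature values
bkg = rng.normal(-1.0, 0.5, 500)   # toy "background" feature values
labels = bayes_classify(np.array([-2.0, 2.0]), sig, bkg)   # -> background, signal
```

    An adaptive estimate would additionally vary the bandwidth h with the local sample density, which is the refinement the abstract refers to.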

  10. Simplified Models for LHC New Physics Searches

    International Nuclear Information System (INIS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼ 50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  11. Simplified Models for LHC New Physics Searches

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Daniele; /SLAC; Arkani-Hamed, Nima; /Princeton, Inst. Advanced Study; Arora, Sanjay; /Rutgers U., Piscataway; Bai, Yang; /SLAC; Baumgart, Matthew; /Johns Hopkins U.; Berger, Joshua; /Cornell U., Phys. Dept.; Buckley, Matthew; /Fermilab; Butler, Bart; /SLAC; Chang, Spencer; /Oregon U. /UC, Davis; Cheng, Hsin-Chia; /UC, Davis; Cheung, Clifford; /UC, Berkeley; Chivukula, R.Sekhar; /Michigan State U.; Cho, Won Sang; /Tokyo U.; Cotta, Randy; /SLAC; D' Alfonso, Mariarosaria; /UC, Santa Barbara; El Hedri, Sonia; /SLAC; Essig, Rouven, (ed.); /SLAC; Evans, Jared A.; /UC, Davis; Fitzpatrick, Liam; /Boston U.; Fox, Patrick; /Fermilab; Franceschini, Roberto; /LPHE, Lausanne /Pittsburgh U. /Argonne /Northwestern U. /Rutgers U., Piscataway /Rutgers U., Piscataway /Carleton U. /CERN /UC, Davis /Wisconsin U., Madison /SLAC /SLAC /SLAC /Rutgers U., Piscataway /Syracuse U. /SLAC /SLAC /Boston U. /Rutgers U., Piscataway /Seoul Natl. U. /Tohoku U. /UC, Santa Barbara /Korea Inst. Advanced Study, Seoul /Harvard U., Phys. Dept. /Michigan U. /Wisconsin U., Madison /Princeton U. /UC, Santa Barbara /Wisconsin U., Madison /Michigan U. /UC, Davis /SUNY, Stony Brook /TRIUMF; /more authors..

    2012-06-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼ 50-500 pb⁻¹ of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  12. PHYSICAL EDUCATION - PHYSICAL CULTURE. TWO MODELS, TWO DIDACTICS

    Directory of Open Access Journals (Sweden)

    Manuel Vizuete Carrizosa

    2014-11-01

    The survival of these conflicting positions, with their different interests and views on education, over a lengthy period of time produced two teaching approaches and two different educational models, in which the objectives and content of education differ, and with them the forms and methods of teaching. The need to define the cultural and educational approach, in every time and place, is now a pressing need and a challenge for teacher-training processes, which are responsible for shaping an advanced physical education adjusted to the time and place, to the interests and needs of citizens, and to the democratic values of modern society.

  13. Composing Models of Geographic Physical Processes

    Science.gov (United States)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central to geographic information science; yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.
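
    The equivalence of the two description languages can be sketched with a toy example (mine, not from the paper): the one-dimensional diffusion equation ∂z/∂t = k ∂²z/∂x², rewritten as a partial difference equation and iterated on a grid — the discrete form being the kind of process component an application specialist would enter graphically.

```python
import numpy as np

def diffusion_step(z, k=0.2):
    """One explicit update of the partial difference equation
    z_i(t+1) = z_i(t) + k * (z_{i-1} - 2*z_i + z_{i+1}),
    the discrete counterpart of dz/dt = k * d2z/dx2 (boundaries held at 0)."""
    znew = z.copy()
    znew[1:-1] = z[1:-1] + k * (z[:-2] - 2 * z[1:-1] + z[2:])
    return znew

# a point disturbance spreading along a 1D transect
z = np.zeros(11)
z[5] = 1.0
for _ in range(50):
    z = diffusion_step(z)
```

    After 50 steps the initial spike has spread symmetrically about its origin, as the continuous model predicts.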

  14. Positron emission tomography, physical basis and comparison with other techniques

    International Nuclear Information System (INIS)

    Guermazi, Fadhel; Hamza, F; Amouri, W.; Charfeddine, S.; Kallel, S.; Jardak, I.

    2013-01-01

    Positron emission tomography (PET) is a medical imaging technique that measures the three-dimensional distribution of molecules marked by a positron-emitting radionuclide. PET has grown significantly in clinical fields, particularly in oncology, for diagnosis and therapeutic follow-up purposes. The technical evolution of this technique is fast; among the recent improvements is the coupling of the PET scanner with computed tomography (CT). A PET image is obtained by intravenous injection of a radioactive tracer. The marker is usually fluorine-18 (18F) embedded in a glucose molecule, forming 18F-fluorodeoxyglucose (18F-FDG). This tracer, similar to glucose, binds to tissues that consume large quantities of the sugar, such as cancerous tissue, cardiac muscle, or the brain. Detection uses scintillation crystals (BGO, LSO, LYSO) suited to high energy (511 keV) and registers the pairs of gamma photons originating from the annihilation of a positron with an electron. The detection electronics, or coincidence circuit, is based on two criteria: a time window, of about 6 to 15 ns, and an energy window. This system measures the true coincidences that correspond to the detection of two 511 keV photons from the same annihilation. Most PET devices consist of a series of elementary detectors distributed annularly around the patient. Each detector comprises a scintillation crystal matrix coupled to a small number (4 or 6) of photomultipliers. The coincidence circuit determines the projection line of the annihilation by means of two elementary detectors. The processing of this information must be extremely fast, considering the count rates encountered in practice. The information measured by the coincidence circuit is then positioned in a matrix, or sinogram, which contains a set of elements of a projection section of the object.
    Images are obtained by tomographic reconstruction on powerful computer workstations equipped with software tools allowing the analysis and
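
    The two coincidence criteria described above (a time window of a few nanoseconds and an energy window around 511 keV) can be sketched as follows. This is an illustrative toy, not vendor electronics; the specific window values are assumptions chosen within the quoted ranges.

```python
TIME_WINDOW_NS = 10.0            # assumed value, within the ~6-15 ns range above
E_MIN, E_MAX = 450.0, 650.0      # assumed keV acceptance window around 511 keV

def find_coincidences(singles):
    """singles: time-sorted list of (time_ns, energy_keV, detector_id).
    Returns index pairs accepted as true coincidences."""
    pairs = []
    for i, (t_i, e_i, d_i) in enumerate(singles):
        if not (E_MIN <= e_i <= E_MAX):
            continue                         # fails the energy window
        for j in range(i + 1, len(singles)):
            t_j, e_j, d_j = singles[j]
            if t_j - t_i > TIME_WINDOW_NS:
                break                        # list is time-sorted: no later match
            if E_MIN <= e_j <= E_MAX and d_i != d_j:
                pairs.append((i, j))         # accepted coincidence
    return pairs

singles = [
    (0.0, 511.0, 3), (4.0, 508.0, 17),      # a true coincidence
    (100.0, 511.0, 5), (103.0, 170.0, 9),   # partner lost to the energy cut
    (200.0, 510.0, 2),                      # lone single, no partner
]
```

    Each accepted pair defines one line of response between two elementary detectors, which is then binned into the sinogram.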

  15. PHYSICAL EDUCATION - PHYSICAL CULTURE. TWO MODELS, TWO DIDACTICS

    Directory of Open Access Journals (Sweden)

    Manuel Vizuete Carrizosa

    2014-10-01

    Full Text Available Physical Education is currently facing a number of problems rooted in an identity crisis prompted by the spread of the professional group, the confrontation of ideas within the scientific community, and the competing interests of different political and social areas, to which physical education has failed, or been unable, to react in time. The political and ideological confrontation that characterized the twentieth century gave us two forms, each with a consistent ideological position, in which the body as a subject of education was understood from two different positions: one set from the left and communism, and another from Western democratic societies. The survival of these conflicting positions, with their different interests and views on education, over a lengthy period of time produced two teaching approaches and two different educational models, in which the objectives and content of education differ, and with them the forms and methods of teaching. The need to define the cultural and educational approach, in every time and place, is now a pressing need and a challenge for teacher-training processes, which are responsible for shaping an advanced physical education adjusted to the time and place, to the interests and needs of citizens, and to the democratic values of modern society.

  16. Application of physical separation techniques in uranium resources processing

    International Nuclear Information System (INIS)

    Padmanabhan, N.P.H.; Sreenivas, T.

    2008-01-01

    The planned economic growth of our country and energy security considerations call for increasing the overall electricity generating capacity, with a substantial increase in the zero-carbon, clean nuclear power component. Although India is endowed with vast resources of thorium, its utilization can commence only after the successful completion of the first two stages of the nuclear power programme, which use natural uranium in the first stage and natural uranium plus plutonium in the second stage. For the successful operation of the first stage, exploration and exploitation activities for uranium should be vigorously pursued. This paper reviews the current status of physical beneficiation in the processing of uranium ores and discusses its applicability to recovering uranium from low-grade and below-cut-off-grade ores in the Indian context. (author)

  17. Model-Based Dependability Analysis of Physical Systems with Modelica

    Directory of Open Access Journals (Sweden)

    Andrea Tundis

    2017-01-01

    Full Text Available Modelica is an innovative, equation-based, acausal language that allows modeling complex physical systems made of mechanical, electrical, and electrotechnical components, and evaluating their design through simulation techniques. Unfortunately, the increasing complexity and accuracy of such physical systems require new, more powerful, and flexible tools and techniques for evaluating important system properties, in particular dependability properties such as reliability, safety, and maintainability. In this context, the paper describes some extensions of the Modelica language to support the modeling of system requirements and their relationships. Such extensions enable requirement verification analysis through native constructs in the Modelica language. Furthermore, they allow exporting a Modelica-based system design as a Bayesian Network in order to analyze its dependability by employing a probabilistic approach. The proposal is exemplified through a case study concerning the dependability analysis of a Tank System.

  18. Physical models for high burnup fuel

    International Nuclear Information System (INIS)

    Kanyukova, V.; Khoruzhii, O.; Likhanskii, V.; Solodovnikov, G.; Sorokin, A.

    2003-01-01

    In this paper some models of processes in high burnup fuel developed at the SRC of Russia Troitsk Institute for Innovation and Fusion Research (TRINITI) are presented. The emphasis is on the description of the degradation of the fuel heat conductivity, the radial profiles of burnup and plutonium accumulation, the restructuring of the pellet rim, and mechanical pellet-cladding interaction. The results demonstrate that the behaviour of high burnup fuel can be described rather accurately on the basis of simplified models within a fuel performance code, provided the models are physically grounded. The development of such models requires detailed physical analysis to serve as a test for a correct choice of allowable simplifications. This approach was applied at the SRC of Russia TRINITI to develop a set of models for WWER fuel, resulting in high reliability of predictions in simulation of high burnup fuel.

  19. The optical model in atomic physics

    International Nuclear Information System (INIS)

    McCarthy, I.E.

    1978-01-01

    The optical model for electron scattering on atoms has quite a short history in comparison with nuclear physics. The main reason for this is that there were insufficient data. Angular distributions for elastic and some inelastic scattering have now been measured for the atoms which exist in gaseous form at reasonable temperatures, the main ones being the inert gases, hydrogen, the alkalis, and mercury. The author shows that the optical model makes sense in atomic physics by considering its theory and recent history. (orig./AH) [de

  20. Simple smoothing technique to reduce data scattering in physics experiments

    International Nuclear Information System (INIS)

    Levesque, L

    2008-01-01

    This paper describes an experiment involving motorized motion and a method to reduce data scattering from data acquisition. Jitter or minute instrumental vibrations add noise to a detected signal, which often renders small modulations of a graph very difficult to interpret. Here we describe a method to reduce scattering amongst data points from the signal measured by a photodetector that is motorized and scanned in a direction parallel to the plane of a rectangular slit during a computer-controlled diffraction experiment. The smoothing technique is investigated using subsets of many data points from the data acquisition. A limit for the number of data points in a subset is determined from the results based on the trend of the small measured signal to avoid severe changes in the shape of the signal from the averaging procedure. This simple smoothing method can be achieved using any type of spreadsheet software
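
    A minimal sketch of the subset-averaging idea (my own illustration; the paper's exact procedure may differ): each point is replaced by the mean of a small subset (window) of neighbouring points, and the window must stay small relative to the signal features to avoid severe changes in the shape of the signal.

```python
def smooth(data, window=3):
    """Centered moving average over subsets of `window` points (odd width).
    Larger windows reduce scatter further but flatten small modulations."""
    half = window // 2
    out = []
    for i in range(len(data)):
        subset = data[max(0, i - half):min(len(data), i + half + 1)]
        out.append(sum(subset) / len(subset))
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # pure jitter example
smoothed = smooth(noisy, window=3)                  # scatter is reduced
```

    Choosing the subset size by inspecting the trend of the measured signal, as the paper describes, amounts to picking the largest window that leaves the physical modulation intact.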

  1. Microscale Shock Wave Physics Using Photonic Driver Techniques; TOPICAL

    International Nuclear Information System (INIS)

    SETCHELL, ROBERT E.; TROTT, WAYNE M.; CASTANEDA, JAIME N.; FARNSWORTH JR.,A. V.; BERRY, DANTE M.

    2002-01-01

    This report summarizes a multiyear effort to establish a new capability for determining dynamic material properties. By utilizing a significant reduction in experimental length and time scales, this new capability addresses both the high per-experiment costs of current methods and the inability of these methods to characterize materials having very small dimensions. Possible applications include bulk-processed materials with minimal dimensions, very scarce or hazardous materials, and materials that can only be made with microscale dimensions. Based on earlier work to develop laser-based techniques for detonating explosives, the current study examined the laser acceleration, or photonic driving, of small metal discs ("flyers") that can generate controlled, planar shock waves in test materials upon impact. Sub-nanosecond interferometric diagnostics were developed previously to examine the motion and impact of laser-driven flyers. To address a broad range of materials and stress states, photonic driving levels must be scaled up considerably from the levels used in earlier studies. Higher driving levels, however, increase concerns over laser-induced damage in optics and excessive heating of laser-accelerated materials. Sufficiently high levels require custom beam-shaping optics to ensure planar acceleration of flyers. The present study involved the development and evaluation of photonic driving systems at two driving levels, numerical simulations of flyer acceleration and impact using the CTH hydrodynamics code, design and fabrication of launch assemblies, improvements in diagnostic instrumentation, and validation experiments on both bulk and thin-film materials having well-established shock properties. The primary conclusion is that photonic driving techniques are viable additions to the methods currently used to obtain dynamic material properties.
Improvements in launch conditions and diagnostics can certainly be made, but the main challenge to future applications

  2. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  3. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

    Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and propose a way to improve the results of fluid models in the low-pressure regime. It is possible to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations when the particle distribution function is not Maxwellian. We follow this route to enhance the fluid model and then use it further in a hybrid plasma model. (paper)

  4. A validated physical model of greenhouse climate

    International Nuclear Information System (INIS)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the momentary environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the greenhouse and of the control system. The greenhouse model is based on the energy, water vapour and CO2 balances of the crop-greenhouse system. While the emphasis is on the dynamic behaviour of the greenhouse for implementation in continuous optimization, the state variables temperature, water vapour pressure and carbon dioxide concentration in the relevant greenhouse parts (crop, air, soil and cover) are calculated from the balances over these parts. To do this properly, the physical exchange processes between the system parts have to be quantified first. Therefore the greenhouse model is constructed from submodels describing these processes: a. A radiation transmission model for the modification of the outside to the inside global radiation. b. A ventilation model to describe the ventilation exchange between greenhouse and outside air. c. A description of the exchange of energy and mass between the crop and the greenhouse air. d. Calculation of the thermal radiation exchange between the various greenhouse parts. e. Quantification of the convective exchange processes between the greenhouse air and, respectively, the cover, the heating pipes and the soil surface, and between the cover and the outside air. f. Determination of the heat conduction in the soil. The various submodels are validated first and then the complete greenhouse model is verified.

  5. Topos models for physics and topos theory

    International Nuclear Information System (INIS)

    Wolters, Sander

    2014-01-01

    What is the role of topos theory in the topos models for quantum theory as used by Isham, Butterfield, Döring, Heunen, Landsman, Spitters, and others? In other words, what is the interplay between physical motivation for the models and the mathematical framework used in these models? Concretely, we show that the presheaf topos model of Butterfield, Isham, and Döring resembles classical physics when viewed from the internal language of the presheaf topos, similar to the copresheaf topos model of Heunen, Landsman, and Spitters. Both the presheaf and copresheaf models provide a “quantum logic” in the form of a complete Heyting algebra. Although these algebras are natural from a topos theoretic stance, we seek a physical interpretation for the logical operations. Finally, we investigate dynamics. In particular, we describe how an automorphism on the operator algebra induces a homeomorphism (or isomorphism of locales) on the associated state spaces of the topos models, and how elementary propositions and truth values transform under the action of this homeomorphism. Also with dynamics the focus is on the internal perspective of the topos

  6. Ladder physics in the spin fermion model

    International Nuclear Information System (INIS)

    Tsvelik, A. M.

    2017-01-01

    A link is established between the spin fermion (SF) model of the cuprates and the approach based on the analogy between the physics of doped Mott insulators in two dimensions and the physics of fermionic ladders. This enables one to use nonperturbative results derived for fermionic ladders to move beyond the large-N approximation in the SF model. Here, it is shown that the paramagnon exchange postulated in the SF model has exactly the right form to facilitate the emergence of the fully gapped d-Mott state in the region of the Brillouin zone at the hot spots of the Fermi surface. Hence, the SF model provides an adequate description of the pseudogap.

  7. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (Danish abstract not available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive, in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying…
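
    A common concrete instance of such a pairwise model is the Ising-type maximum entropy model. The sketch below recovers couplings and fields from means and correlations using the naive mean-field inversion, one of the statistical physics approximations discussed in this literature; the data are synthetic and the setup is illustrative only.

```python
import numpy as np

# Naive mean-field inversion for a pairwise (Ising-type) model:
# couplings J = -C^{-1} (off-diagonal), fields h = arctanh(m) - J m.
rng = np.random.default_rng(0)
samples = rng.integers(0, 2, size=(5000, 4)) * 2 - 1   # synthetic +/-1 "spin" data

m = samples.mean(axis=0)          # means of each element
C = np.cov(samples.T)             # connected pairwise correlations
J = -np.linalg.inv(C)             # mean-field couplings
np.fill_diagonal(J, 0.0)          # keep only pairwise terms
J = 0.5 * (J + J.T)               # enforce symmetry
h = np.arctanh(m) - J @ m         # mean-field fields
print(J.shape, h.shape)
```

Only `m` and `C` enter the fit, which is exactly the point made in the abstract: means and pairwise correlations suffice.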

  8. Mathematical and physical models and radiobiology

    International Nuclear Information System (INIS)

    Lokajicek, M.

    1980-01-01

    The hit theory of the mechanism of biological radiation effects in the cell is discussed with respect to radiotherapy. The mechanisms of biological effects and of intracellular recovery, the cumulative radiation effect and the cumulative biological effect in fractionated irradiation are described. The benefit is shown of consistent application of mathematical and physical models in radiobiology and radiotherapy. (J.P.)

  9. Dilution physics modeling: Dissolution/precipitation chemistry

    International Nuclear Information System (INIS)

    Onishi, Y.; Reid, H.C.; Trent, D.S.

    1995-09-01

    This report documents progress made to date on integrating dilution/precipitation chemistry and new physical models into the TEMPEST thermal-hydraulics computer code. Implementation of dissolution/precipitation chemistry models is necessary for predicting nonhomogeneous, time-dependent, physical/chemical behavior of tank wastes with and without a variety of possible engineered remediation and mitigation activities. Such behavior includes chemical reactions, gas retention, solids resuspension, solids dissolution and generation, solids settling/rising, and convective motion of physical and chemical species. Thus this model development is important from the standpoint of predicting the consequences of various engineered activities, such as mitigation by dilution, retrieval, or pretreatment, that can affect safe operations. The integration of a dissolution/precipitation chemistry module allows the various phase species concentrations to enter into the physical calculations that affect the TEMPEST hydrodynamic flow calculations. The yield strength model of non-Newtonian sludge correlates yield to a power function of solids concentration. Likewise, shear stress is concentration-dependent, and the dissolution/precipitation chemistry calculations develop the species concentration evolution that produces fluid flow resistance changes. Dilution of waste with pure water, molar concentrations of sodium hydroxide, and other chemical streams can be analyzed for the reactive species changes and hydrodynamic flow characteristics

  10. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  11. Physical models for classroom teaching in hydrology

    Directory of Open Access Journals (Sweden)

    A. Rodhe

    2012-09-01

    Full Text Available Hydrology teaching benefits from the fact that many important processes can be illustrated and explained with simple physical models. A set of mobile physical models has been developed and used during many years of lecturing at basic university level teaching in hydrology. One model, with which many phenomena can be demonstrated, consists of a 1.0-m-long plexiglass container containing an about 0.25-m-deep open sand aquifer through which water is circulated. The model can be used for showing the groundwater table and its influence on the water content in the unsaturated zone and for quantitative determination of hydraulic properties such as the storage coefficient and the saturated hydraulic conductivity. It is also well suited for discussions on the runoff process and the significance of recharge and discharge areas for groundwater. The flow paths of water and contaminant dispersion can be illustrated in tracer experiments using fluorescent or colour dye. This and a few other physical models, with suggested demonstrations and experiments, are described in this article. The finding from using models in classroom teaching is that it creates curiosity among the students, promotes discussions and most likely deepens the understanding of the basic processes.
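
    For example, the saturated hydraulic conductivity mentioned above can be determined from a constant-head run on such a sand tank via Darcy's law Q = K A dh / L. The numbers below are invented for illustration, not measurements from the article.

```python
# Saturated hydraulic conductivity from a constant-head experiment (Darcy's law).
def darcy_K(Q, A, dh, L):
    """Q: discharge (m^3/s), A: flow cross-section (m^2),
    dh: head difference (m), L: flow length (m). Returns K in m/s."""
    return Q * L / (A * dh)

K = darcy_K(Q=2.0e-6, A=0.05, dh=0.10, L=1.0)
print(f"{K:.1e} m/s")   # -> 4.0e-04 m/s, typical of medium sand
```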

  12. Adaptability of laser diffraction measurement technique in soil physics methodology

    Science.gov (United States)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

    There are intentions all around the world to harmonize soils' particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant etc.), PSDs from the sedimentation methods (due to different standards) are dissimilar and can hardly be harmonized with each other, either. A need therefore arose to build up a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of the statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits in LDM, the clay fraction was highly under- and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. Based on previous surveys, and relating the two datasets to each other for optimization, the clay/silt boundary of the LDM was changed. Comparing the PSD results of the pipette method with those of LDM, the modified size limits gave higher similarities for the clay and silt fractions. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and thereby changing the lower size limit of the silt fraction, makes the pipette method and LDM more easily comparable. With the modified limit, higher correlations were also found between clay content and water vapour adsorption and specific surface area. Texture classes were also found to be less dissimilar. The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking other routinely analyzed soil parameters into account.
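
    The effect of moving the clay/silt boundary can be sketched by re-reading clay, silt and sand fractions off a cumulative PSD curve at the two limits (0.002 mm conventional, 0.0066 mm modified, as proposed above). The PSD curve itself is made up for illustration.

```python
import numpy as np

# Cumulative PSD: fraction of particles finer than each size (mm).
# These sample points are invented for illustration.
sizes = np.array([0.0005, 0.002, 0.0066, 0.02, 0.05, 0.25, 2.0])   # mm
cum   = np.array([0.08,   0.18,  0.30,   0.48, 0.62, 0.90, 1.00])  # cumulative fraction

def fractions(clay_limit, silt_limit=0.05):
    """Return (clay, silt, sand) fractions for a given clay/silt boundary."""
    clay = np.interp(clay_limit, sizes, cum)
    silt = np.interp(silt_limit, sizes, cum) - clay
    sand = 1.0 - clay - silt
    return clay, silt, sand

print(fractions(0.002))    # conventional clay/silt boundary
print(fractions(0.0066))   # modified boundary
```

Shifting the boundary moves mass from the silt to the clay fraction while leaving the sand fraction unchanged, which is why it can bring LDM texture classes closer to the pipette results.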

  13. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book covers: Chapter 1, Finite element idealization: introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and antisymmetry, modeling guidelines, local analysis, example, references. Chapter 2, Static analysis: structural geometry, finite element models, analysis procedure, modeling guidelines, references. Chapter 3, Dynamic analysis: models for dynamic analysis, dynamic analysis procedures, modeling guidelines.

  14. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin with presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
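
    The diffuse-interface idea can be demonstrated with the simplest phase field equation, the 1-D Allen-Cahn model phi_t = eps^2 phi_xx + phi - phi^3, stepped with an explicit finite-difference Laplacian. This toy sketch uses an isotropic Laplacian and invented parameters; the review itself treats the full anisotropic, multicomponent case.

```python
import numpy as np

# 1-D Allen-Cahn phase field: phi_t = eps^2 * phi_xx - W'(phi),
# with double-well potential W(phi) = (1 - phi^2)^2 / 4, so -W'(phi) = phi - phi^3.
def allen_cahn(phi, dx=0.1, dt=1e-3, eps=0.5, steps=1000):
    for _ in range(steps):
        # explicit finite-difference Laplacian, periodic boundaries via np.roll
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        phi = phi + dt * (eps**2 * lap + phi - phi**3)
    return phi

x = np.linspace(-5.0, 5.0, 101)
phi = np.tanh(x)                 # initial diffuse interface between the two phases
phi = allen_cahn(phi)
print(round(float(phi.min()), 2), round(float(phi.max()), 2))
```

The order parameter stays near -1 in one phase and +1 in the other, with a smooth interface of width set by eps, which is exactly the diffuse-interface picture described above.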

  15. Report of the B-factory Group: 1, Physics and techniques

    International Nuclear Information System (INIS)

    Feldman, G.J.; Cassel, D.G.; Siemann, R.H.

    1989-01-01

    The study of B meson decay appears to offer a unique opportunity to measure basic parameters of the Standard Model, probe for interactions mediated by higher mass particles, and investigate the origin of CP violation. These opportunities have been enhanced by the results of two measurements. The first is the measurement of a long B meson lifetime. In addition to allowing a simpler identification of B mesons and a measurement of the time of their decay, this observation implies that normal decays are suppressed, making rare decays more prevalent. The second measurement is that neutral B mesons are strongly mixed. This enhances the possibilities for studying CP violation in the B system. The CESR storage ring is likely to dominate the study of B physics in e+e- annihilations for about the next five years. First, CESR has already reached a luminosity of 10^32 cm^-2 sec^-1 and has plans for improvements which may increase the luminosity by a factor of about five. Second, a second-generation detector, CLEO II, will start running in 1989. Given this background, the main focus of this working group was to ask what is needed for the mid- to late-1990s. Many laboratories are thinking about new facilities involving a variety of techniques. To help clarify the choices, we focused on one example of CP violation and estimated the luminosity required to measure it using different techniques. We will briefly describe the requirements for detectors matched to these techniques. In particular, we will give a conceptual design of a possible detector for asymmetric collisions at the Υ(4S) resonance, one of the attractive techniques which will emerge from this study. A discussion of accelerator technology issues for using these techniques forms the second half of the B-factory Group report, and it follows in these proceedings. 34 refs., 2 figs., 2 tabs

  16. Nuclear physics for applications. A model approach

    International Nuclear Information System (INIS)

    Prussin, S.G.

    2007-01-01

    Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)

  17. Proceedings of the Third National Conference on Nuclear Physics and Techniques

    International Nuclear Information System (INIS)

    Nguyen Thanh Binh; Nguyen Nhi Dien; Tran Kim Hung; Vuong Huu Tan

    2000-01-01

    The proceedings contain 130 papers by scientists from institutes, universities and enterprises nation-wide in Vietnam. Subjects include: nuclear physics, theoretical physics, science and technology of nuclear reactors, application of nuclear techniques in industry, agriculture, biology, medicine, geo-hydrology, environmental protection, nuclear equipment, radiation technology, material technology, waste management, etc.

  18. Prototyping of cerebral vasculature physical models.

    Science.gov (United States)

    Khan, Imad S; Kelly, Patrick D; Singer, Robert J

    2014-01-01

    Prototyping of cerebral vasculature models through stereolithographic methods can depict the 3D structure of complicated aneurysms with high accuracy. We describe the method to manufacture such a model and review some of its uses in the context of treatment planning, research, and surgical training. We prospectively used the data from the rotational angiography of a 40-year-old female who presented with an unruptured right paraclinoid aneurysm. The 3D virtual model was then converted to a physical life-sized model. The model constructed was shown to be a very accurate depiction of the aneurysm and its associated vasculature. It was found to be useful, among other things, for surgical training and as a patient education tool. With improving and more widespread printing options, these models have the potential to become an important part of research and training modalities.

  19. Physical aspects of quantitative particles analysis by X-ray fluorescence and electron microprobe techniques

    International Nuclear Information System (INIS)

    Markowicz, A.

    1986-01-01

    The aim of this work is to present both physical fundamentals and recent advances in quantitative particles analysis by X-ray fluorescence (XRF) and electron microprobe (EPXMA) techniques. A method of correction for the particle-size effect in XRF analysis is described and theoretically evaluated. New atomic number- and absorption correction procedures in EPXMA of individual particles are proposed. The applicability of these two correction methods is evaluated for a wide range of elemental composition, X-ray energy and sample thickness. Also, a theoretical model for the composition and thickness dependence of the Bremsstrahlung background generated in multielement bulk specimens as well as thin films and particles is presented and experimentally evaluated. Finally, the limitations and further possible improvements in quantitative particles analysis by XRF and EPXMA are discussed. 109 refs. (author)

  20. String Theory - The Physics of String-Bending and Other Electric Guitar Techniques

    Science.gov (United States)

    Grimes, David Robert

    2014-01-01

    Electric guitar playing is ubiquitous in practically all modern music genres. In the hands of an experienced player, electric guitars can sound as expressive and distinct as a human voice. Unlike other more quantised instruments where pitch is a discrete function, guitarists can incorporate micro-tonality and, as a result, vibrato and string-bending are idiosyncratic hallmarks of a player. Similarly, a wide variety of techniques unique to the electric guitar have emerged. While the mechano-acoustics of stringed instruments and vibrating strings are well studied, there has been comparatively little work dedicated to the underlying physics of unique electric guitar techniques and strings, nor the mechanical factors influencing vibrato, string-bending, fretting force and whammy-bar dynamics. In this work, models for these processes are derived and the implications for guitar and string design discussed. The string-bending model is experimentally validated using a variety of strings and vibrato dynamics are simulated. The implications of these findings on the configuration and design of guitars are also discussed. PMID:25054880
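
    A minimal sketch of the string-bending mechanics discussed above: pushing the string sideways by a distance d stretches it, raising its tension and hence its pitch f = sqrt(T/mu)/(2L). The geometry (push at the midpoint of the speaking length) and the parameters (scale length, rest tension, elastic stiffness EA) are simplifying assumptions, not the paper's validated model.

```python
import math

# Pitch rise from a lateral string bend, via string stretch and Hooke's law.
def bent_frequency(f0, d, L=0.648, T0=70.0, EA=3.0e4):
    """f0: unbent pitch (Hz); d: lateral push (m); L: scale length (m);
    T0: rest tension (N); EA: elastic stiffness of a steel string (N).
    All parameter defaults are illustrative assumptions."""
    stretch = 2.0 * math.sqrt((L / 2.0) ** 2 + d ** 2) - L  # extra length from the push
    T = T0 + EA * stretch / L                               # tension rise (Hooke's law)
    return f0 * math.sqrt(T / T0)                           # f scales as sqrt(T)

for d_mm in (0, 2, 4, 8):
    print(d_mm, "mm ->", round(bent_frequency(196.0, d_mm / 1e3), 1), "Hz")
```

Because the stretch grows quadratically in d, small bends barely move the pitch while a push of several millimetres approaches a semitone, consistent with everyday playing experience.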

  1. String theory--the physics of string-bending and other electric guitar techniques.

    Directory of Open Access Journals (Sweden)

    David Robert Grimes

    Full Text Available Electric guitar playing is ubiquitous in practically all modern music genres. In the hands of an experienced player, electric guitars can sound as expressive and distinct as a human voice. Unlike other more quantised instruments where pitch is a discrete function, guitarists can incorporate micro-tonality and, as a result, vibrato and string-bending are idiosyncratic hallmarks of a player. Similarly, a wide variety of techniques unique to the electric guitar have emerged. While the mechano-acoustics of stringed instruments and vibrating strings are well studied, there has been comparatively little work dedicated to the underlying physics of unique electric guitar techniques and strings, nor the mechanical factors influencing vibrato, string-bending, fretting force and whammy-bar dynamics. In this work, models for these processes are derived and the implications for guitar and string design discussed. The string-bending model is experimentally validated using a variety of strings and vibrato dynamics are simulated. The implications of these findings on the configuration and design of guitars are also discussed.

  3. BIOMECHANICAL MODEL OF THE GOLF SWING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2011-08-01

    Full Text Available Golf is an extremely complex game which depends on a number of interconnected factors. One of the most important elements is undoubtedly the golf swing technique. High performance of the golf swing technique is generated by: the level of motor abilities, a high degree of movement control, the level of movement structure stabilisation, morphological characteristics, inter- and intra-muscular coordination, motivation, and concentration. The golf swing technique was investigated using the biomechanical analysis method. Kinematic parameters were registered using two synchronised high-speed cameras at a frequency of 2,000 Hz. The sample of subjects consisted of three professional golf players. The study results showed a relatively high variability of the swing technique. The maximum velocity of the ball after a wood swing ranged from 227 to 233 km/h. The velocity of the ball after an iron swing was lower by 10 km/h on average. The elevation angle of the ball ranged from 11.7 to 15.3 degrees. In the final phase of the golf swing, i.e. downswing, the trunk rotators play the key role.
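
    The reported launch numbers can be turned into a rough ballistic carry estimate with drag-free projectile motion. This is a strong simplification (real golf flight is dominated by drag and Magnus lift, so actual carry differs substantially); it is included only to show how launch speed and elevation angle combine.

```python
import math

# Drag-free projectile carry from launch speed (km/h) and elevation angle (deg).
def carry_m(v_kmh, launch_deg, g=9.81):
    v = v_kmh / 3.6                                       # convert to m/s
    return v ** 2 * math.sin(2.0 * math.radians(launch_deg)) / g

# Wood swing from the abstract (233 km/h) at a mid-range elevation angle (13.5 deg).
print(round(carry_m(233.0, 13.5)), "m")
```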

  4. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  5. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
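
    The pooling idea described above can be sketched with a toy Bayesian combination of default-probability estimates from several agencies, where each agency's historical accuracy is encoded as an equivalent sample size weighting its evidence. This is an illustration of the general approach only, not the article's actual model; all names and numbers are assumptions.

```python
# Toy Bayesian pooling of agency default-probability (PD) estimates.
def pooled_default_prob(estimates, accuracies, prior_a=1.0, prior_b=1.0):
    """estimates: agency PD estimates in [0, 1];
    accuracies: equivalent sample sizes reflecting historical accuracy.
    Returns the posterior mean of a Beta(a, b) distribution."""
    a, b = prior_a, prior_b
    for p, n in zip(estimates, accuracies):
        a += n * p            # pseudo-defaults implied by the estimate
        b += n * (1.0 - p)    # pseudo-survivals implied by the estimate
    return a / (a + b)

# Two agencies disagree; the historically more accurate one carries more weight.
print(round(pooled_default_prob([0.02, 0.05], [200.0, 50.0]), 4))
```

The pooled estimate lands between the individual estimates, pulled toward the agency with the better track record, and the Beta posterior also supplies the uncertainty around it.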

  6. Assessing physical models used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Computer codes used to predict the behaviour of aerosols in water-cooled reactor containment buildings after severe accidents contain a variety of physical models. Special models are in place for describing agglomeration processes where small aerosol particles combine to form larger ones. Other models are used to calculate the rates at which aerosol particles are deposited on building structures. Condensation of steam on aerosol particles is currently a very active area in aerosol modelling. In this paper, the physical models incorporated in the currently available international codes for all of these processes are reviewed and documented. There is considerable variation in the models used in different codes, and some uncertainties exist as to which models are superior. 28 refs

  7. Testing the standard model of particle physics using lattice QCD

    International Nuclear Information System (INIS)

    Water, Ruth S van de

    2007-01-01

    Recent advances in both computers and algorithms now allow realistic calculations of Quantum Chromodynamics (QCD) interactions using the numerical technique of lattice QCD. The methods used in so-called '2+1 flavor' lattice calculations have been verified both by post-dictions of quantities that were already experimentally well-known and by predictions that occurred before the relevant experimental determinations were sufficiently precise. This suggests that the sources of systematic error in lattice calculations are under control, and that lattice QCD can now be reliably used to calculate those weak matrix elements that cannot be measured experimentally but are necessary to interpret the results of many high-energy physics experiments. These same calculations also allow stringent tests of the Standard Model of particle physics, and may therefore lead to the discovery of new physics in the future

  8. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  9. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  10. B physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Hewett, J.A.L.

    1997-12-01

    The ability of present and future experiments to test the Standard Model in the B meson sector is described. The authors examine the loop effects of new interactions in flavor changing neutral current B decays and in Z → b anti b, concentrating on supersymmetry and the left-right symmetric model as specific examples of new physics scenarios. The procedure for performing a global fit to the Wilson coefficients which describe b → s transitions is outlined, and the results of such a fit from Monte Carlo generated data is compared to the predictions of the two sample new physics scenarios. A fit to the Zb anti b couplings from present data is also given

  11. A minimal physical model for crawling cells

    Science.gov (United States)

    Tiribocchi, Adriano; Tjhung, Elsen; Marenduzzo, Davide; Cates, Michael E.

    Cell motility in higher organisms (eukaryotes) is fundamental to biological functions such as wound healing or immune response, and is also implicated in diseases such as cancer. For cells crawling on solid surfaces, considerable insights into motility have been gained from experiments replicating such motion in vitro. Such experiments show that crawling uses a combination of actin treadmilling (polymerization), which pushes the front of a cell forward, and myosin-induced stress (contractility), which retracts the rear. We present a simplified physical model of a crawling cell, consisting of a droplet of active polar fluid with contractility throughout, but treadmilling confined to a thin layer near the supporting wall. The model shows a variety of shapes and/or motility regimes, some closely resembling cases seen experimentally. Our work supports the view that cellular motility exploits autonomous physical mechanisms whose operation does not need continuous regulatory effort.

  12. LHC Higgs physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Spannowsky, M.

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown, that for the whole allowed region of the Higgs mass processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary which can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show, that the SM4G is still viable and that a fourth generation has strong impact on decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, yielding a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV) no processes for the detection of a charged Higgs boson do exist at the LHC. However, MFV is just motivated by the experimental agreement of results from flavor physics with Standard Model predictions, but not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which enhance the charged-Higgs boson production most are just bound to large values, e.g. by rare B-meson decays. Although the charged-Higgs boson cross sections beyond MFV turn out to be sizeable, only a detailed

  13. LHC Higgs physics beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Spannowsky, M.

    2007-09-22

    The Large Hadron Collider (LHC) at CERN will be able to perform proton collisions at a much higher center-of-mass energy and luminosity than any other collider. Its main purpose is to detect the Higgs boson, the last unobserved particle of the Standard Model, explaining the riddle of the origin of mass. Studies have shown that for the whole allowed region of the Higgs mass, processes exist to detect the Higgs at the LHC. However, the Standard Model cannot be a theory of everything and is not able to provide a complete understanding of physics. It is at most an effective theory up to a presently unknown energy scale. Hence, extensions of the Standard Model are necessary, and these can affect the Higgs-boson signals. We discuss these effects in two popular extensions of the Standard Model: the Minimal Supersymmetric Standard Model (MSSM) and the Standard Model with four generations (SM4G). Constraints on these models come predominantly from flavor physics and electroweak precision measurements. We show that the SM4G is still viable and that a fourth generation has a strong impact on the decay and production processes of the Higgs boson. Furthermore, we study the charged Higgs boson in the MSSM, which would yield a clear signal for physics beyond the Standard Model. For small tan β in minimal flavor violation (MFV), no processes for the detection of a charged Higgs boson exist at the LHC. However, MFV is motivated only by the experimental agreement of results from flavor physics with Standard Model predictions, not by any basic theoretical consideration. In this thesis, we calculate charged Higgs boson production cross sections beyond the assumption of MFV, where a large number of free parameters is present in the MSSM. We find that the soft-breaking parameters which most enhance charged-Higgs-boson production are strongly constrained, e.g. by rare B-meson decays. Although the charged-Higgs-boson cross sections beyond MFV turn out to be sizeable, only a detailed

  14. Looking for physics beyond the standard model

    International Nuclear Information System (INIS)

    Binetruy, P.

    2002-01-01

    Motivations for new physics beyond the Standard Model are presented. The most successful and best motivated option, supersymmetry, is described in some detail, and the associated searches performed at LEP are reviewed. These include searches for additional Higgs bosons and for supersymmetric partners of the standard particles. These searches constrain the mass of the lightest supersymmetric particle which could be responsible for the dark matter of the universe. (authors)

  15. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

    Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
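
    One family of approximate inference methods for pairwise models is naive mean-field inversion, which reads the couplings of a pairwise (Ising-type) model directly off the sample covariance. The following minimal sketch uses independently generated spins as stand-in data (assumed for illustration, not the simulated cortical networks of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary (+/-1) activity for N units over T time bins.
N, T = 5, 20000
spins = rng.choice([-1.0, 1.0], size=(T, N))

def fit_pairwise_mean_field(s):
    """Naive mean-field inversion for a pairwise model.

    Estimates couplings J and fields h from means and covariance:
        J = -inv(C) (off-diagonal),  h_i = artanh(m_i) - sum_j J_ij m_j
    """
    m = s.mean(axis=0)
    C = np.cov(s, rowvar=False)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)   # no self-couplings in a pairwise model
    h = np.arctanh(m) - J @ m
    return h, J

h, J = fit_pairwise_mean_field(spins)
# Independent units: the inferred couplings should be close to zero.
print(np.abs(J).max())
```

    For strongly correlated data this mean-field estimate is only approximate, which is why the accuracy of such methods, e.g. as a function of the chosen time bin, is worth studying.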

  16. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state-estimation feedback to control its flight. Estimating the correct state improves navigation accuracy and allows flight missions to be completed safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS), applied with either an Extended Kalman Filter (EKF) or a feedback controller. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
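
    The estimation loop described above can be sketched for a single attitude angle: a simulated gyroscope drives the prediction step, and an accelerometer-derived angle supplies the measurement update. With these linear models the EKF reduces to the ordinary Kalman filter; all signals, noise levels, and tuning values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.01, 2000
true_bias = 0.05  # simulated gyro bias [rad/s]

t = np.arange(steps) * dt
true_angle = 0.5 * np.sin(t)                        # [rad]
true_rate = 0.5 * np.cos(t)
gyro = true_rate + true_bias + rng.normal(0, 0.02, steps)
accel_angle = true_angle + rng.normal(0, 0.05, steps)

# State x = [angle, gyro_bias]; the gyro is the control input,
# the accelerometer-derived angle is the measurement.
x = np.zeros(2)
P = np.eye(2)
F = np.array([[1.0, -dt], [0.0, 1.0]])
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-5, 1e-7])
R = np.array([[0.05 ** 2]])

for k in range(steps):
    # Predict: integrate the bias-corrected gyro rate.
    x = F @ x + B * gyro[k]
    P = F @ P @ F.T + Q
    # Update: correct with the accelerometer angle.
    y = accel_angle[k] - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x)  # estimated [angle, gyro_bias]
```

    In a full AHRS, the state would contain three attitude angles (or a quaternion) plus three gyro biases and the measurement model becomes nonlinear, which is where the EKF's linearization comes in.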

  17. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models: the method of finite elements proposed by Bender et al.; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite-dimensional quantum mechanics approach proposed by Santhanam et al.; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al.; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors, based on the (n+1)-dimensional space-time lattice, where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  18. Generomak: Fusion physics, engineering and costing model

    International Nuclear Information System (INIS)

    Delene, J.G.; Krakowski, R.A.; Sheffield, J.; Dory, R.A.

    1988-06-01

    A generic fusion physics, engineering and economics model (Generomak) was developed as a means of performing consistent analysis of the economic viability of alternative magnetic fusion reactors. The original Generomak model developed at Oak Ridge by Sheffield was expanded for the analyses of the Senior Committee on Environmental Safety and Economics of Magnetic Fusion Energy (ESECOM). This report describes the Generomak code as used by ESECOM. The input data used for each of the ten ESECOM fusion plants and the Generomak code output for each case is given. 14 refs., 3 figs., 17 tabs

  19. Gyrofluid Modeling of Turbulent, Kinetic Physics

    Science.gov (United States)

    Despain, Kate Marie

    2011-12-01

    Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetics models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase mixing phenomenon, part of the E x B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously by gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and to explore novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.

  20. Agent-Based Models in Social Physics

    Science.gov (United States)

    Quang, Le Anh; Jung, Nam; Cho, Eun Sung; Choi, Jae Han; Lee, Jae Woo

    2018-06-01

    We review agent-based models (ABMs) in social physics, including econophysics. An ABM consists of agents, a system space, and an external environment. Each agent is autonomous and decides its behavior by interacting with its neighbors or the external environment according to rules of behavior. Agents are irrational because they have only limited information when they make decisions, and they adapt by learning from past memories. Agents have various attributes and are heterogeneous. An ABM is a non-equilibrium complex system that exhibits various emergence phenomena, and social-complexity ABMs describe human behavioral characteristics. Among ABMs in econophysics, we introduce the Sugarscape model and artificial market models. We review minority games and majority games in ABMs of game theory. Social-flow ABMs cover crowding, evacuation, traffic congestion, and pedestrian dynamics. We also review ABMs for opinion dynamics and the voter model. Finally, we discuss the features, advantages, and disadvantages of NetLogo, Repast, Swarm, and Mason, which are representative platforms for implementing ABMs.
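
    As a minimal illustration of the opinion-dynamics models mentioned above, the following sketch implements the voter model on a ring of agents (all parameters are arbitrary): at each update a random agent copies the opinion of a random neighbor, and the population drifts to consensus, a simple example of the emergence such models exhibit.

```python
import random

random.seed(42)
N = 20
opinion = [random.choice([0, 1]) for _ in range(N)]

steps = 0
while len(set(opinion)) > 1 and steps < 200_000:
    i = random.randrange(N)                 # pick a random agent...
    j = (i + random.choice([-1, 1])) % N    # ...and a random ring neighbor
    opinion[i] = opinion[j]                 # agent adopts neighbor's opinion
    steps += 1

print(len(set(opinion)), steps)
```

    Replacing the copy rule with "adopt the majority (or minority) opinion of the neighborhood" turns the same loop into a majority or minority game of the kind the review also covers.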

  1. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate these uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. They are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, meaning that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process that determines which parameters have minimal impact on the model response. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification

  2. A student's guide to Python for physical modeling

    CERN Document Server

    Kinder, Jesse M

    2015-01-01

    Python is a computer programming language that is rapidly gaining popularity throughout the sciences. A Student’s Guide to Python for Physical Modeling aims to help you, the student, teach yourself enough of the Python programming language to get started with physical modeling. You will learn how to install an open-source Python programming environment and use it to accomplish many common scientific computing tasks: importing, exporting, and visualizing data; numerical analysis; and simulation. No prior programming experience is assumed. This tutorial focuses on fundamentals and introduces a wide range of useful techniques, including: Basic Python programming and scripting Numerical arrays Two- and three-dimensional graphics Monte Carlo simulations Numerical methods, including solving ordinary differential equations Image processing Animation Numerous code samples and exercises—with solutions—illustrate new ideas as they are introduced. A website that accompanies this guide provides additional resourc...
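
    In the spirit of the guide's material on numerical methods, here is the kind of short simulation it teaches: Euler integration of projectile motion, checked against the closed-form range. The setup and step size are our own illustrative choices, not taken from the book.

```python
import numpy as np

# Euler integration of drag-free projectile motion, compared to the
# analytic range R = v0^2 * sin(2*theta) / g.
g, dt = 9.81, 1e-4
v0, theta = 20.0, np.radians(45)

x, y = 0.0, 0.0
vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
while y >= 0.0:
    x += vx * dt          # advance position with current velocity
    y += vy * dt
    vy -= g * dt          # gravity updates the vertical velocity

analytic_range = v0 ** 2 * np.sin(2 * theta) / g
print(x, analytic_range)
```

    Halving dt roughly halves the discretization error of the explicit Euler step, a standard convergence check for this class of method.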

  3. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  4. Modellus: Learning Physics with Mathematical Modelling

    Science.gov (United States)

    Teodoro, Vitor

    Computers are now a major tool in research and development in almost all scientific and technological fields. Despite recent developments, this is far from true for learning environments in schools and most undergraduate studies. This thesis proposes a framework for designing curricula where computers, and computer modelling in particular, are a major tool for learning. The framework, based on research on learning science and mathematics and on computer user interfaces, assumes that: 1) learning is an active process of creating meaning from representations; 2) learning takes place in a community of practice where students learn both from their own effort and from external guidance; 3) learning is a process of becoming familiar with concepts, with links between concepts, and with representations; 4) direct-manipulation user interfaces allow students to explore concrete-abstract objects such as those of physics and can be used by students with minimal computer knowledge. Physics is the science of constructing models and explanations about the physical world, and mathematical models are an important type of model that many students find difficult. These difficulties can be rooted in the fact that most students do not have an environment where they can explore functions, differential equations and iterations as primary objects that model physical phenomena--as objects-to-think-with, reifying the formal objects of physics. The framework proposes that students should be introduced to modelling at a very early stage of learning physics and mathematics, two scientific areas that must be taught in a very closely related way, as they developed together from Galileo and Newton until the beginning of our century, before the rise of overspecialisation in science. At an early stage, functions are the main type of objects used to model real phenomena, such as motions. At a later stage, rates of change and equations with rates of change play an important role. This type of equations

  5. Physics Beyond the Standard Model: Supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Nojiri, M.M.; /KEK, Tsukuba /Tsukuba, Graduate U. Adv. Studies /Tokyo U.; Plehn, T.; /Edinburgh U.; Polesello, G.; /INFN, Pavia; Alexander, John M.; /Edinburgh U.; Allanach, B.C.; /Cambridge U.; Barr, Alan J.; /Oxford U.; Benakli, K.; /Paris U., VI-VII; Boudjema, F.; /Annecy, LAPTH; Freitas, A.; /Zurich U.; Gwenlan, C.; /University Coll. London; Jager, S.; /CERN /LPSC, Grenoble

    2008-02-01

    This collection of studies on new physics at the LHC constitutes the report of the supersymmetry working group at the workshop 'Physics at TeV Colliders', Les Houches, France, 2007. The studies cover the wide spectrum of phenomenology in the LHC era, from alternative models and signatures to the extraction of relevant observables, the study of the MSSM parameter space, and finally the interplay of LHC observations with additional data expected on a similar time scale. The special feature of this collection is that, while not every study was explicitly performed jointly by theoretical and experimental LHC physicists, all of them were inspired by and discussed in this particular environment.

  6. Models in Physics, Models for Physics Learning, and Why the Distinction May Matter in the Case of Electric Circuits

    Science.gov (United States)

    Hart, Christina

    2008-01-01

    Models are important both in the development of physics itself and in teaching physics. Historically, the consensus models of physics have come to embody particular ontological assumptions and epistemological commitments. Educators have generally assumed that the consensus models of physics, which have stood the test of time, will also work well…

  7. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out over the past two years concerning different ways of achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low-power models concern the switched-power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  8. 17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2016)

    International Nuclear Information System (INIS)

    2016-01-01

    Preface: The 2016 edition of the International Workshop on Advanced Computing and Analysis Techniques in Physics Research took place on January 18-22, 2016, at the Universidad Técnica Federico Santa María (UTFSM) in Valparaíso, Chile. The present volume of IOP Conference Series is devoted to the selected scientific contributions presented at the workshop. To guarantee the scientific quality of the Proceedings, all papers were thoroughly peer-reviewed by an ad-hoc Editorial Committee with the help of many careful reviewers. The ACAT workshop series has a long tradition, starting in 1990 in Lyon, France, and takes place at intervals of a year and a half. Formerly these workshops were known under the name AIHENP (Artificial Intelligence for High Energy and Nuclear Physics). Each edition brings together experimental and theoretical physicists and computer scientists/experts from particle and nuclear physics, astronomy, and astrophysics in order to exchange knowledge and experience in computing and data analysis in physics. Three tracks cover the main topics: computing technology (languages and system architectures), data analysis (algorithms and tools), and theoretical physics (techniques and methods). Although most contributions and discussions are related to particle physics and computing, other fields such as condensed matter physics, earth physics, and biophysics are often addressed in the hope of sharing our approaches and visions. The workshop creates a forum for exchanging ideas among fields, exploring and promoting cutting-edge computing technologies, and debating hot topics. (paper)

  9. Physical model for membrane protrusions during spreading

    International Nuclear Information System (INIS)

    Chamaraux, F; Ali, O; Fourcade, B; Keller, S; Bruckert, F

    2008-01-01

    During cell spreading onto a substrate, the kinetics of the contact area is an observable quantity. This paper is concerned with a physical approach to modeling this process in the case of ameboid motility where the membrane detaches itself from the underlying cytoskeleton at the leading edge. The physical model we propose is based on previous reports which highlight that membrane tension regulates cell spreading. Using a phenomenological feedback loop to mimic stress-dependent biochemistry, we show that the actin polymerization rate can be coupled to the stress which builds up at the margin of the contact area between the cell and the substrate. In the limit of small variation of membrane tension, we show that the actin polymerization rate can be written in a closed form. Our analysis defines characteristic lengths which depend on elastic properties of the membrane–cytoskeleton complex, such as the membrane–cytoskeleton interaction, and on molecular parameters, the rate of actin polymerization. We discuss our model in the case of axi-symmetric and non-axi-symmetric spreading and we compute the characteristic time scales as a function of fundamental elastic constants such as the strength of membrane–cytoskeleton adherence

  10. Beyond the standard model with B and K physics

    International Nuclear Information System (INIS)

    Grossman, Y

    2003-01-01

    In the first part of the talk, the flavor-physics input to models beyond the standard model is described. One specific example of such a new-physics model is given: a model with bulk fermions in a non-factorizable extra dimension. In the second part of the talk, we discuss several observables that are sensitive to new physics. We explain what type of new physics can produce deviations from the standard model predictions in each of these observables

  11. Effect of wheelchair mass, tire type and tire pressure on physical strain and wheelchair propulsion technique.

    Science.gov (United States)

    de Groot, Sonja; Vegter, Riemer J K; van der Woude, Lucas H V

    2013-10-01

    The purpose of this study was to evaluate the effect of wheelchair mass, solid vs. pneumatic tires, and tire pressure on physical strain and wheelchair propulsion technique. Eleven able-bodied participants performed 14 submaximal exercise blocks on a treadmill at a fixed speed (1.11 m/s) within 3 weeks to determine the effect of tire pressure (100%, 75%, 50%, 25% of the recommended value), wheelchair mass (0 kg, 5 kg, or 10 kg extra), and tire type (pneumatic vs. solid). All test conditions (except pneumatic vs. solid) were performed with and without instrumented measurement wheels. Outcome measures were power output (PO), physical strain (heart rate (HR), oxygen uptake (VO2), gross mechanical efficiency (ME)), and propulsion technique (timing, force application). At 25% tire pressure, PO and subsequently VO2 were higher compared to 100% tire pressure. Furthermore, a higher tire pressure led to a longer cycle time and contact angle and subsequently a lower push frequency. Extra mass did not lead to an increase in PO, physical strain, or a change in propulsion technique. Solid tires led to a higher PO and physical strain, and the solid-tire effect was amplified by increased mass (tire × mass interaction). In contrast to extra mass, tire pressure and tire type do affect the PO, physical strain, and propulsion technique of steady-state wheelchair propulsion. As expected, it is important to optimize tire pressure and tire type.

  12. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  13. Systematic review of behaviour change techniques to promote participation in physical activity among people with dementia.

    Science.gov (United States)

    Nyman, Samuel R; Adamczewska, Natalia; Howlett, Neil

    2018-02-01

    The objective of this study was to systematically review the evidence for the potential promise of behaviour change techniques (BCTs) to increase physical activity among people with dementia (PWD). The PsychINFO, MEDLINE, CINAHL, and Cochrane Central Register of Controlled Trials databases were searched from 01/01/2000 to 01/12/2016. Randomized controlled/quasi-randomized trials were included if they recruited people diagnosed/suspected to have dementia, used at least one BCT in the intervention arm, and had at least one follow-up measure of physical activity/adherence. Studies were appraised using the Cochrane Collaboration Risk of Bias Tool, and BCTs were coded using the taxonomy of Michie et al. (2013, Annals of Behavioral Medicine, 46, 81). Intervention findings were narratively synthesized as 'very promising', 'quite promising', or 'non-promising', and BCTs were judged as having potential promise if they featured in at least twice as many very/quite promising as non-promising interventions (as per Gardner et al., 2016, Health Psychology Review, 10, 89). Nineteen articles from nine trials reported physical activity findings on behavioural outcomes (two very promising, one quite promising, and two non-promising) or intervention adherence (one quite promising and four non-promising). Thirteen BCTs were used across the interventions. While no BCT had potential promise to increase intervention adherence, three BCTs had potential promise for improving physical activity behaviour outcomes: goal setting (behaviour), social support (unspecified), and using a credible source. These three BCTs have potential promise for use in future interventions to increase physical activity among PWD. Statement of contribution: What is already known on this subject? While physical activity is a key lifestyle factor for enhancing and maintaining health and wellbeing among the general population, adults rarely participate at sufficient levels to obtain these benefits. Systematic reviews suggest that

  14. Pre-Service Physics Teachers' Argumentation in a Model Rocketry Physics Experience

    Science.gov (United States)

    Gürel, Cem; Süzük, Erol

    2017-01-01

    This study investigates the quality of argumentation developed by a group of pre-service physics teachers' (PSPT) as an indicator of subject matter knowledge on model rocketry physics. The structure of arguments and scientific credibility model was used as a design framework in the study. The inquiry of model rocketry physics was employed in…

  15. Physical and Chemical Environmental Abstraction Model

    International Nuclear Information System (INIS)

    Nowak, E.

    2000-01-01

    As directed by a written development plan (CRWMS M and O 1999a), Task 1, an overall conceptualization of the physical and chemical environment (P/CE) in the emplacement drift, is documented in this Analysis/Model Report (AMR). Included are the physical components of the engineered barrier system (EBS). The intended use of this descriptive conceptualization is to assist the Performance Assessment Department (PAD) in modeling the physical and chemical environment within a repository drift. It is also intended to assist PAD in providing a more integrated and complete in-drift geochemical model abstraction and to answer the key technical issues raised in the U.S. Nuclear Regulatory Commission (NRC) Issue Resolution Status Report (IRSR) for the Evolution of the Near-Field Environment (NFE), Revision 2 (NRC 1999). EBS-related features, events, and processes (FEPs) have been assembled and discussed in "EBS FEPs/Degradation Modes Abstraction" (CRWMS M and O 2000a). Reference AMRs listed in Section 6 address FEPs that have not been screened out; this conceptualization does not directly address those FEPs. Additional tasks described in the written development plan are recommended for future work in Section 7.3. To achieve the stated purpose, the scope of this document includes: (1) the role of in-drift physical and chemical environments in the Total System Performance Assessment (TSPA) (Section 6.1); (2) the configuration of engineered components (features) and critical locations in drifts (Sections 6.2.1 and 6.3, portions taken from EBS Radionuclide Transport Abstraction (CRWMS M and O 2000b)); (3) an overview and critical locations of processes that can affect the P/CE (Section 6.3); (4) couplings and relationships among features and processes in the drifts (Section 6.4); and (5) identities and uses of parameters transmitted to TSPA by some of the reference AMRs (Section 6.5). This AMR originally considered a design with backfill, and is now being updated (REV 00 ICN1) to address

  16. Relativistic nuclear physics with the spectator model

    International Nuclear Information System (INIS)

    Gross, F.

    1988-01-01

    The spectator model, a general approach to the relativistic treatment of nuclear physics problems in which spectators to nuclear interactions are put on their mass shell, is defined and described. The approach grows out of the relativistic treatment of two- and three-body systems in which one particle is off-shell, and recent numerical results for the NN interaction are presented. Two meson-exchange models, one with only 4 mesons (π, σ, ρ, ω) but with a 25% admixture of γ5 coupling for the pion, and a second with 6 mesons (π, σ, ρ, ω, δ, and η) but a pure γ5γμ pion coupling, are shown to give very good quantitative fits to NN scattering phase shifts below 400 MeV, and also a good description of the p-40Ca elastic scattering observables. 19 refs., 6 figs., 1 tab

  17. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  18. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory; Pierrard, Viviane [BELGIUM; Goldstein, Jerry [SWRI; André, Nicolas [ESTEC/ESA; Kotova, Galina A [SRI, RUSSIA; Lemaire, Joseph F [BELGIUM; Liemohn, Mike W [U OF MICHIGAN; Matsui, H [UNIV OF NEW HAMPSHIRE

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and its evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and the precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of the plasmaspheric wind and of ultra-low-frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, as well as to results of earlier models and satellite observations.

  19. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require a new theory of propulsion, specifically one that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004, so named because the Chameleon field is hidden within known physics; it represents a scalar field within and about an object, even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, so that thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass. The interactions between these systems cause a differential coupling to the local gravity environment of the earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust nearly equivalent to the conventional thrust model used in Sutton and Biblarz, Rocket Propulsion Elements, even though embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  20. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
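Static (Guyan) condensation, the first of the techniques compared above, eliminates "slave" degrees of freedom by expressing them statically in terms of retained "master" DOFs, giving a reduced stiffness matrix K_red = K_mm - K_ms K_ss^{-1} K_sm. A minimal sketch with a hypothetical 4-DOF stiffness matrix (the matrix and the master/slave split are illustrative, not taken from the book):

```python
import numpy as np

# Hypothetical 4-DOF chain stiffness matrix; DOFs 0,1 are masters, 2,3 slaves.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
m = [0, 1]   # retained (master) DOFs
s = [2, 3]   # condensed (slave) DOFs

Kmm = K[np.ix_(m, m)]
Kms = K[np.ix_(m, s)]
Ksm = K[np.ix_(s, m)]
Kss = K[np.ix_(s, s)]

# Guyan condensation: K_red = Kmm - Kms Kss^{-1} Ksm
K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
```

For static loads applied only to the master DOFs this reduction is exact: solving the 2x2 reduced system reproduces the master displacements of the full 4x4 solve, which is why static condensation serves as the baseline the dynamic methods are measured against.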

  1. Physical explanation of the SLIPI technique by the large scatterer approximation of the RTE

    International Nuclear Information System (INIS)

    Kristensson, Elias; Kristensson, Gerhard

    2017-01-01

    Visualizing the interior of turbid scattering media by means of light-based methods is not a straightforward task because of multiple light scattering, which generates image blur. To overcome this issue, a technique called Structured Laser Illumination Planar Imaging (SLIPI) was developed within the field of spray imaging. The method is based on a ‘light coding’ strategy to distinguish between directly and multiply scattered light, allowing the intensity from the latter to be suppressed by means of data post-processing. Recently, the performance of the SLIPI technique was investigated, during which deviations from theoretical predictions were discovered. In this paper, we aim to explain the origin of these deviations, and to achieve this end, we have performed several SLIPI measurements under well-controlled conditions. Our experimental results are compared with a theoretical model that is based on the large scatterer approximation of the Radiative Transfer Equation but modified according to certain constraints. Specifically, our model is designed (1) to ignore all off-axis intensity contributions, (2) to treat unperturbed and forward-scattered light equally and (3) to accept light scattered within a narrow forward cone, as we believe these are the rules governing the SLIPI technique. The comparison conclusively shows that optical measurements based on scattering and/or attenuation in turbid media can be subject to significant errors if not all aspects of light-matter interactions are considered. Our results indicate, as was expected, that forward-scattering can lead to deviations between experiments and theoretical predictions, especially when probing relatively large particles. Yet, the model also suggests that the spatial frequency of the superimposed ‘light code’ as well as the spreading of the light-probe are important factors one also needs to consider. The observed deviations from theoretical predictions could, however, potentially be exploited to
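The usual SLIPI 'light coding' records three images of the modulated light sheet with the modulation phase shifted by 2π/3 between frames; pairwise differencing then cancels the unmodulated multiply-scattered background while recovering the modulated (directly scattered) amplitude. A synthetic one-dimensional sketch of that standard three-phase reconstruction (the intensities and spatial frequency are made up, and this is the textbook formula rather than the modified model of this paper):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
nu = 2 * np.pi * 10   # spatial frequency of the superimposed 'light code'
S = 1.0               # directly scattered, modulated component
B = 5.0               # multiply scattered, unmodulated blur

# Three frames with the modulation phase stepped by 2*pi/3
phases = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
I1, I2, I3 = (S * (1 + np.sin(nu * x + p)) + B for p in phases)

# Three-phase SLIPI reconstruction: recovers S, suppresses B
slipi = (np.sqrt(2) / 3) * np.sqrt((I1 - I2)**2 + (I2 - I3)**2 + (I3 - I1)**2)

# A conventional image (the frame average) retains the blur: ~ S + B
conventional = (I1 + I2 + I3) / 3
```

The differencing step is why the technique is insensitive to the constant blur term but, as the paper argues, not to forward-scattered light that itself carries the modulation.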

  2. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to bring together researchers involved in computing for physics research, from both the physics and computer science sides, and give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating the advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  3. Working group report: Flavor physics and model building

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. Vol. ... This is the report of the flavor physics and model building working group at ... those in model building have been primarily devoted to neutrino physics. ... [12] Andrei Gritsan, ICHEP 2004, Beijing, China.

  4. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange-rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average model.
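Simple exponential smoothing, the first technique listed, forecasts the next value as an exponentially weighted average of past observations via the recursion s_t = α·y_t + (1-α)·s_{t-1}. A minimal sketch (the short series and the smoothing constant α = 0.5 are hypothetical, not the paper's Leu exchange-rate data):

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast via s_t = alpha*y_t + (1-alpha)*s_{t-1}.

    Returns the smoothed level after the last observation, which is
    the flat forecast for the next period.
    """
    s = series[0]                  # initialize the level at the first value
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

# Hypothetical daily exchange-rate values
forecast = simple_exponential_smoothing([4.20, 4.30, 4.25, 4.40], alpha=0.5)
```

Double exponential smoothing and Holt-Winters extend the same recursion with trend and seasonal components; α is typically chosen by minimizing in-sample one-step forecast error.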

  5. Fuzzy modelling of Atlantic salmon physical habitat

    Science.gov (United States)

    St-Hilaire, André; Mocq, Julien; Cunjak, Richard

    2015-04-01

    Fish habitat models typically attempt to quantify the amount of available river habitat for a given fish species under various flow and hydraulic conditions. To achieve this, information on the preferred range of values of key physical habitat variables (e.g. water level, velocity, substrate diameter) for the targeted fish species needs to be modelled. In this context, we developed several habitat suitability index sets for three Atlantic salmon life stages (young-of-the-year (YOY), parr, spawning adults) with the help of fuzzy logic modeling. Using the knowledge of twenty-seven experts from both sides of the Atlantic Ocean, we defined fuzzy sets of four variables (depth, substrate size, velocity and Habitat Suitability Index, or HSI) and associated fuzzy rules. When applied to the Romaine River (Canada), median curves of standardized Weighted Usable Area (WUA) were calculated and a confidence interval was obtained by bootstrap resampling. Despite the large range of WUA covered by the expert WUA curves, confidence intervals were relatively narrow: an average width of 0.095 (on a scale of 0 to 1) for spawning habitat, 0.155 for parr rearing habitat and 0.160 for YOY rearing habitat. When considering an environmental flow value corresponding to 90% of the maximum reached by the WUA curve, results seem acceptable for the Romaine River. Generally, this proposed fuzzy logic method seems suitable for modelling habitat availability for the three life stages, while also providing an estimate of uncertainty in salmon preferences.
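The general shape of such a model can be sketched as fuzzy membership functions over depth, velocity and substrate size, combined by a conservative "min" rule into a per-cell HSI, which is then area-weighted into a WUA. The triangular membership breakpoints and cell values below are invented for illustration; they are not the experts' actual fuzzy sets or the Romaine River data:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def hsi(depth, velocity, substrate):
    """Habitat Suitability Index for one cell via a Mamdani-style min rule."""
    mu_d = tri(depth, 0.1, 0.4, 1.0)      # depth in m (hypothetical curve)
    mu_v = tri(velocity, 0.1, 0.5, 1.2)   # velocity in m/s (hypothetical)
    mu_s = tri(substrate, 16, 100, 250)   # substrate size in mm (hypothetical)
    return min(mu_d, mu_v, mu_s)          # conservative aggregation

# Weighted Usable Area over hypothetical cells: (area m^2, depth, velocity, substrate)
cells = [(12.0, 0.4, 0.5, 100), (8.0, 0.8, 0.9, 40)]
wua = sum(area * hsi(d, v, s) for area, d, v, s in cells)
```

Repeating the calculation with each expert's fuzzy sets and bootstrap-resampling the experts is what produces the median WUA curve and its confidence interval.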

  6. Effect of wheelchair mass, tire type and tire pressure on physical strain and wheelchair propulsion technique

    NARCIS (Netherlands)

    de Groot, Sonja; Vegter, Riemer J. K.; van der Woude, Lucas H. V.

    2013-01-01

    The purpose of this study was to evaluate the effect of wheelchair mass, solid vs. pneumatic tires and tire pressure on physical strain and wheelchair propulsion technique. Eleven able-bodied participants performed 14 submaximal exercise blocks on a treadmill at a fixed speed (1.11 m/s) within 3 weeks

  7. Lifetimes of organic photovoltaics: Combining chemical and physical characterisation techniques to study degradation mechanisms

    DEFF Research Database (Denmark)

    Norrman, K.; Larsen, N.B.; Krebs, Frederik C

    2006-01-01

    Degradation mechanisms of a photovoltaic device with an Al/C-60/C-12-PSV/PEDOT:PSS/ITO/glass geometry were studied using a combination of in-plane physical and chemical analysis techniques: TOF-SIMS, AFM, SEM, interference microscopy and fluorescence microscopy. A comparison was made between...

  8. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses…

  9. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  10. Graphene growth process modeling: a physical-statistical approach

    Science.gov (United States)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since fabrication of high-quality graphene sheets requires the understanding of growth mechanisms, and methods of characterization and control of grain size of graphene flakes, analytical modeling of graphene growth process is therefore essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of graphene growth process on Cu foil.
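A confined exponential growth law can be read as growth whose velocity is proportional to the area not yet covered, so that coverage saturates as islands merge. A small sketch under that reading (the functional form A(t) = A_max·(1 − e^(−kt)) and the parameter values are illustrative; the paper's fitted model may differ in detail):

```python
import math

def island_area(t, a_max, k):
    """Confined exponential coverage: A(t) = a_max * (1 - exp(-k*t)).

    Differentiating gives dA/dt = k * (a_max - A(t)): the area growth
    velocity is proportional to the remaining uncovered area, so growth
    slows smoothly as the island field saturates.
    """
    return a_max * (1 - math.exp(-k * t))

# Growth velocity estimated by finite differences decays toward zero
a_max, k, dt = 100.0, 0.3, 1e-4
v_start = (island_area(dt, a_max, k) - island_area(0.0, a_max, k)) / dt
v_later = (island_area(5.0 + dt, a_max, k) - island_area(5.0, a_max, k)) / dt
```

The initial velocity is approximately k·a_max, and the same curve fitted per island (with the shape model applied separately) is the separation of kinetics from geometry that the abstract describes.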

  11. Development of Wireless Techniques in Data and Power Transmission - Application for Particle Physics Detectors

    CERN Document Server

    Locci, E.; Dehos, C.; De Lurgio, P.; Djurcic, Z.; Drake, G.; Gimenez, J. L. Gonzalez; Gustafsson, L.; Kim, D.W.; Roehrich, D.; Schoening, A.; Siligaris, A.; Soltveit, H.K.; Ullaland, K.; Vincent, P.; Wiednert, D.; Yang, S.; Brenner, R.

    2015-01-01

    Wireless techniques have developed extremely fast over the last decade and using them for data and power transmission in particle physics detectors is not science fiction any more. During the last years several research groups have independently thought of making it a reality. Wireless techniques became a mature field for research and new developments might have impact on future particle physics experiments. The Instrumentation Frontier was set up as a part of the SnowMass 2013 Community Summer Study [1] to examine the instrumentation R&D for particle physics research over the coming decades: «To succeed we need to make technical and scientific innovation a priority in the field». Wireless data transmission was identified as one of the innovations that could revolutionize the transmission of data out of the detector. Power delivery was another challenge mentioned in the same report. We propose a collaboration to identify the specific needs of different projects that m...

  12. Application of physical separation techniques for waste utilization and management - case studies from Indian uranium deposits

    International Nuclear Information System (INIS)

    Anand Rao, K.; Sreenivas, T.

    2013-01-01

    The importance of physical beneficiation techniques in the metallurgical industry has gradually declined due to decreasing ore grades and the very fine size dissemination of valuable minerals in the host matrix. However, this technology regained prominence in the recent past due to its utility in resource recycling, waste utilization, waste treatment and environmental remediation. Hybrid processes combining physical, chemical and biological technology are now developing such that the idea of sustainable development is implemented. The uranium ore processing industry has always been under intense public scrutiny over certain apprehensions, chiefly radioactivity, in spite of its immense energy-delivering potential. Besides this, the chemical compounds formed due to gangue mineral reactivity and their carry-over to the tailings pond added further woes. However, conscious scientific efforts are being made to contain these hazards to permissible levels by application of various remedial methods, of which the physical separation techniques too are quite prominent.

  13. Model unspecific search for new physics in pp collisions

    International Nuclear Information System (INIS)

    Malhotra, Shivali

    2013-01-01

    The model-independent analysis systematically scans the data taken by the Compact Muon Solenoid (CMS) detector for deviations from the Standard Model (SM) predictions. This approach is sensitive to a variety of models for new physics due to its minimal theoretical bias, i.e. it makes no assumptions on specific models of new physics and covers a large phase space. Possible causes of significant deviations could be insufficient understanding of the collision event generation or detector simulation, or indeed genuine new physics in the data. Thus the output of MUSiC must be seen as only the first, but important, step in the potential discovery of new physics. To get distinctive final states, events with at least one electron or muon are classified according to their content of reconstructed objects (muons, electrons, photons, jets and missing transverse energy) and sorted into event classes. A broad scan of three kinematic distributions (scalar sum of the transverse momentum, invariant mass of reconstructed objects and missing transverse energy) in those event classes is performed by identifying deviations from SM expectations, accounting for systematic uncertainties. A scanning algorithm determines the regions in the considered distributions where the measured data deviate most from the SM predictions. This search is sensitive to an excess as well as a deficit in the comparison of data and SM background. This approach has been applied to the CMS data and we have obtained preliminary results. I will talk about the details of the analysis techniques, their implementation in analyzing CMS data, the results obtained, and a discussion of the observed discrepancies.

  14. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    Science.gov (United States)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  15. A Holoinformational Model of the Physical Observer

    Science.gov (United States)

    di Biase, Francisco

    2013-09-01

    The author proposes a holoinformational view of the observer based on the holonomic theory of brain/mind function and quantum brain dynamics developed by Karl Pribram, Sir John Eccles, R.L. Amoroso, Hameroff, Jibu and Yasue, and on the quantum-holographic and holomovement theory of David Bohm. This conceptual framework is integrated with the nonlocal information properties of the Quantum Field Theory of Umezawa, with the concepts of negentropy, order, and organization developed by Shannon, Wiener, Szilard and Brillouin, and with the theories of self-organization and complexity of Prigogine, Atlan, Jantsch and Kauffman. Wheeler's "it from bit" concept of a participatory universe, and the developments of the physics of information made by Zurek and others with the concepts of statistical entropy and algorithmic entropy, related to the number of bits being processed in the mind of the observer, are also considered. This new synthesis gives a self-organizing quantum nonlocal informational basis for a new model of awareness in a participatory universe. In this synthesis, awareness is conceived as meaningful quantum nonlocal information interconnecting the brain and the cosmos by a holoinformational unified field integrating the nonlocal holistic (quantum) and the local (Newtonian) domains. We propose that the cosmology of the physical observer is this unified nonlocal quantum-holographic cosmos manifesting itself through awareness, interconnecting in a participatory, holistic and indivisible way the human mind-brain to all levels of the self-organizing holographic anthropic multiverse.

  16. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source, knowledge sharing and scientific collaboration stimulated us to think over these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for the enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  17. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  18. Mathematical models of physics problems (physics research and technology)

    CERN Document Server

    Anchordoqui, Luis Alfredo

    2013-01-01

    This textbook is intended to provide a foundation for a one-semester introductory course on the advanced mathematical methods that form the cornerstones of the hard sciences and engineering. The work is suitable for first year graduate or advanced undergraduate students in the fields of Physics, Astronomy and Engineering. This text therefore employs a condensed narrative sufficient to prepare graduate and advanced undergraduate students for the level of mathematics expected in more advanced graduate physics courses, without too much exposition on related but non-essential material. In contrast to the two semesters traditionally devoted to mathematical methods for physicists, the material in this book has been quite distilled, making it a suitable guide for a one-semester course. The assumption is that the student, once versed in the fundamentals, can master more esoteric aspects of these topics on his or her own if and when the need arises during the course of conducting research. The book focuses on two cor...

  19. Differences in spatial understanding between physical and virtual models

    Directory of Open Access Journals (Sweden)

    Lei Sun

    2014-03-01

    Full Text Available In the digital age, physical models are still used as major tools in architectural and urban design processes. The reason why designers still use physical models remains unclear. In addition, physical and 3D virtual models have yet to be differentiated. The answers to these questions are too complex to account for in all aspects. Thus, this study only focuses on the differences in spatial understanding between physical and virtual models. In particular, it emphasizes on the perception of scale. For our experiment, respondents were shown a physical model and a virtual model consecutively. A questionnaire was then used to ask the respondents to evaluate these models objectively and to establish which model was more accurate in conveying object size. Compared with the virtual model, the physical model tended to enable quicker and more accurate comparisons of building heights.

  20. Activities and trends in physical protection modeling with microcomputers

    International Nuclear Information System (INIS)

    Chapman, L.D.; Harlan, C.P.

    1985-01-01

    Sandia National Laboratories developed several models in the mid to late 1970s, including the Safeguards Automated Facility Evaluation (SAFE) method, the Estimate of Adversary Sequence Interruption (EASI), the Safeguards Network Analysis Procedure (SNAP), the Brief Adversary Threat Loss Estimator (BATLE), and others. These models were implemented on large computers such as the VAX 11/780 and the CDC machines. With the recent development and widespread use of the IBM PC and other microcomputers, it has become evident that several physical protection models should be made available for use on these microcomputers. Currently, there are programs under way to convert the EASI, SNAP and BATLE models to the IBM PC. Input and analysis using the EASI model has been designed to be very user-friendly through the utilization of menu-driven options. The SNAP modeling technique will be converted to an IBM PC/AT with many enhancements to user-friendliness. Graphical assistance for entering the model and reviewing traces of the simulated output is planned. The BATLE model is being converted to the IBM PC while preserving its interactive nature. The current status of these developments is reported in this paper.
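EASI's central output is the probability that the adversary is interrupted somewhere along a path: the sum, over detection opportunities, of the probability of first detection at that point times the probability that the response force arrives before the adversary finishes. A simplified sketch of that structure (in the full EASI model the response term comes from comparing random response-force and remaining-delay times; here the per-point response probabilities are given directly as hypothetical inputs):

```python
def p_interruption(p_detect, p_response):
    """Probability of interruption along an adversary path.

    p_detect[i]   -- probability of detection at point i
    p_response[i] -- probability the response force arrives in time,
                     given first detection occurred at point i
    P(I) = sum_i P(no detection before i) * p_detect[i] * p_response[i]
    """
    p_no_prior = 1.0   # probability the adversary reached point i undetected
    total = 0.0
    for pd, pr in zip(p_detect, p_response):
        total += p_no_prior * pd * pr
        p_no_prior *= (1.0 - pd)
    return total

# Hypothetical two-sensor path: strong early sensor, weak late one
p = p_interruption([0.9, 0.5], [0.8, 0.2])
```

The example makes EASI's main design lesson visible: detection late in the path contributes little because almost no delay remains for the response force.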

  1. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images, applying close-range photogrammetry with DSMs and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques divide into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on the data input technique (photogrammetry or laser techniques). After a detailed study of these, we give the conclusions of this research, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques for generating virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques plays a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  2. Advanced Ground Systems Maintenance Physics Models For Diagnostics Project

    Science.gov (United States)

    Perotti, Jose M.

    2015-01-01

    The project will use high-fidelity physics models and simulations to simulate real-time operations of cryogenic and other fluid systems and calculate the status/health of the systems. The project enables the delivery of system health advisories to ground system operators. The capability will also be used to conduct planning and analysis of cryogenic system operations. This project will develop and implement high-fidelity physics-based modeling techniques to simulate the real-time operation of cryogenics and other fluids systems and, when compared to the real-time operation of the actual systems, provide assessment of their state. Physics-model-calculated measurements (called “pseudo-sensors”) will be compared to the system real-time data. Comparison results will be utilized to provide systems operators with enhanced monitoring of systems' health and status, identify off-nominal trends and diagnose system/component failures. This capability can also be used to conduct planning and analysis of cryogenics and other fluid systems designs. This capability will be interfaced with the ground operations command and control system as part of the Advanced Ground Systems Maintenance (AGSM) project to help assure system availability and mission success. The initial capability will be developed for the Liquid Oxygen (LO2) ground loading systems.
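At its simplest, the pseudo-sensor comparison reduces to a residual check between a physics-model prediction and the live measurement. A hypothetical sketch (the function name, tolerance band and values are illustrative, not from the AGSM project):

```python
def check_pseudo_sensor(model_value, measured, tolerance):
    """Compare a physics-model 'pseudo-sensor' value against real telemetry.

    Returns (nominal, residual): nominal is False when the residual
    falls outside the tolerance band, flagging an off-nominal condition
    for the ground system operator.
    """
    residual = measured - model_value
    return abs(residual) <= tolerance, residual

# Hypothetical check: model predicts 90.2, the live sensor reads 93.1,
# and anything beyond +/- 1.5 of the model is treated as off-nominal
nominal, residual = check_pseudo_sensor(90.2, 93.1, tolerance=1.5)
```

In practice such residuals would be trended over time rather than checked point-wise, so that a slow drift is caught before it crosses the alarm band.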

  3. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    Science.gov (United States)

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  4. Point-of-care cardiac ultrasound techniques in the physical examination: better at the bedside.

    Science.gov (United States)

    Kimura, Bruce J

    2017-07-01

    The development of hand-carried, battery-powered ultrasound devices has created a new practice in ultrasound diagnostic imaging, called 'point-of-care' ultrasound (POCUS). Capitalising on device portability, POCUS is marked by brief and limited ultrasound imaging performed by the physician at the bedside to increase diagnostic accuracy and expediency. The natural evolution of POCUS techniques in general medicine, particularly with pocket-sized devices, may be in the development of a basic ultrasound examination similar to the use of the binaural stethoscope. This paper will specifically review how POCUS improves the limited sensitivity of the current practice of traditional cardiac physical examination by both cardiologists and non-cardiologists. Signs of left ventricular systolic dysfunction, left atrial enlargement, lung congestion and elevated central venous pressures are often missed by physical techniques but can be easily detected by POCUS and have prognostic and treatment implications. Creating a general set of repetitive imaging skills for these entities for application on all patients during routine examination will standardise and reduce heterogeneity in cardiac bedside ultrasound applications, simplify teaching curricula, enhance learning and recollection, and unify competency thresholds and practice. The addition of POCUS to standard physical examination techniques in cardiovascular medicine will result in an ultrasound-augmented cardiac physical examination that reaffirms the value of bedside diagnosis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    Full Text Available The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. Firstly, a study model must be designed by a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After construction and instrumentation of the scaled physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to satisfy than those for a failure model, but the results obtained provide only limited information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, as well as their instrumentation, brings great advantages, but involves a large amount of financial, logistic and time resources.
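The similitude relations mentioned above can be illustrated under Froude scaling, a common assumption for gravity-dominated models such as dams; the actual relations depend on the paper's dimensional analysis, and the numbers below are only illustrative.

```python
import math

def froude_scales(length_ratio):
    """Scale factors (prototype / model) for a geometric ratio, e.g. 100 for 1:100."""
    lam = float(length_ratio)
    return {
        "length": lam,
        "velocity": math.sqrt(lam),        # v ~ sqrt(g * L), g unscaled
        "time": math.sqrt(lam),            # t ~ L / v
        "frequency": 1.0 / math.sqrt(lam),
        "acceleration": 1.0,               # gravity cannot be scaled on a shake table
    }

s = froude_scales(100)
# A 2 Hz prototype earthquake must be reproduced at 20 Hz on a 1:100 model:
print(round(2.0 / s["frequency"], 6))  # -> 20.0
```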

  6. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  7. Physical and data-link security techniques for future communication systems

    CERN Document Server

    Tomasin, Stefano

    2016-01-01

     This book focuses on techniques that can be applied at the physical and data-link layers of communication systems in order to secure transmissions against eavesdroppers. Topics ranging from information theory-based security to coding for security and cryptography are discussed, with presentation of cutting-edge research and innovative results from leading researchers. The characteristic feature of all the contributions is their relevance for practical embodiments: detailed consideration is given to applications of security principles to a variety of widely used communication techniques such as multiantenna systems, ultra-wide band communication systems, power line communications, and quantum key distribution techniques. A further distinctive aspect is the attention paid to both unconditional and computational security techniques, providing a bridge between two usually distinct worlds. The book comprises extended versions of contributions delivered at the Workshop on Communication Security, held in Ancona, I...

  8. Comparing Physical Examination With Sonographic Versions of the Same Examination Techniques for Splenomegaly.

    Science.gov (United States)

    Cessford, Tara; Meneilly, Graydon S; Arishenkoff, Shane; Eddy, Christopher; Chen, Luke Y C; Kim, Daniel J; Ma, Irene W Y

    2017-12-08

    To determine whether sonographic versions of physical examination techniques can accurately identify splenomegaly, Castell's method (Ann Intern Med 1967; 67:1265-1267), the sonographic Castell's method, spleen tip palpation, and the sonographic spleen tip technique were compared with reference measurements. Two clinicians trained in bedside sonography examined patients recruited from an urban hematology clinic. Each patient was examined for splenomegaly using conventional percussion and palpation techniques (Castell's method and spleen tip palpation, respectively), as well as the sonographic versions of these maneuvers (sonographic Castell's method and sonographic spleen tip technique). Results were compared with a reference standard based on professional sonographer measurements. The sonographic Castell's method had greater sensitivity (91.7% [95% confidence interval, 61.5% to 99.8%]) than the traditional Castell's method (83.3% [95% confidence interval, 51.6% to 97.9%]) but took longer to perform (mean ± SD, 28.8 ± 18.6 versus 18.8 ± 8.1 seconds; P = .01). Palpable and positive sonographic spleen tip results were both 100% specific, but the sonographic spleen tip method was more sensitive (58.3% [95% confidence interval, 27.7% to 84.8%] versus 33.3% [95% confidence interval, 9.9% to 65.1%]). Sonographic versions of traditional physical examination maneuvers have greater diagnostic accuracy than the physical examination maneuvers from which they are derived but may take longer to perform. We recommend a combination of traditional physical examination and sonographic techniques when evaluating for splenomegaly at the bedside. © 2017 by the American Institute of Ultrasound in Medicine.
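As an aside, sensitivity figures like those quoted above can be reproduced in a few lines. The sketch below uses a Wilson score interval; the study's wider intervals suggest an exact (Clopper-Pearson) method, so the bounds here are illustrative only.

```python
import math

def sensitivity_wilson(tp, fn, z=1.96):
    """Sensitivity tp/(tp+fn) with a Wilson 95% score interval."""
    n = tp + fn
    p = tp / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, max(0.0, centre - half), min(1.0, centre + half)

# e.g. 11 of 12 splenomegaly cases detected by a bedside technique:
p, lo, hi = sensitivity_wilson(11, 1)
print(round(p, 3))  # -> 0.917
```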

  9. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for treatment of short bowel syndrome. The modification to Bianchi's technique is an alternative: the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge, no clinical or experimental report has compared both techniques, which motivated the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on by the modified technique (group B). Both groups were compared in relation to operating time, technical difficulties, cost, intestinal lengthening and anastomosis diameter. There was no statistical difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulties were lower in group B (p < 0.05). The anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical issues.

  10. Chemico-physical models of cometary atmospheres

    International Nuclear Information System (INIS)

    Huebner, W.F.; Keady, J.J.; Boice, D.C.; Schmidt, H.U.; Wegmann, R.

    1985-01-01

    Sublimation (vaporization) of the icy component of a cometary nucleus determines the initial composition of the coma gas as it streams outward and escapes. Photolytic reactions in the inner coma, escape of fast, light species such as atomic and molecular hydrogen, and solar wind interaction in the outer coma alter the chemical composition and the physical nature of the coma gas. Models that describe these interactions must include (1) chemical kinetics, (2) coma energy balance, (3) multifluid flow for the rapidly escaping light components, the heavier bulk fluid, and the plasma with separate temperatures for electrons and the remainder of the gas, (4) transition from a collision dominated inner region to free molecular flow of neutrals in the outer region, (5) pickup of cometary ions by the solar wind, (6) counter and cross streaming of neutrals with respect to the plasma which outside of the contact surface also contains solar wind ions, and (7) magnetic fields carried by the solar wind. Progress on such models is described and results including velocity, temperature, and number density profiles for important chemical species are presented and compared with observations

  11. Physical model of optical inhomogeneities of water

    Science.gov (United States)

    Shybanov, E. B.

    2017-11-01

    The paper is devoted to theoretical aspects of light scattering by water that does not contain suspended particles. Consistent with the current physical point of view, water, like any liquid, is regarded as a complex, unstable, nonergodic medium. It is proposed that, at a fixed time, water as a condensed medium has global inhomogeneities similar to linear and planar defects in a solid. The anticipated global inhomogeneities of water are approximated by a system of randomly distributed spherical clusters filling the entire water bulk. An analytical expression for the singly scattered light has been derived. The formula simultaneously describes both the high anisotropy of light scattering and the high degree of polarization, the latter close to that of molecular scattering. It is shown that at general angles there is a qualitative coincidence with the two-component Kopelevich model for light scattering by marine particles. In contrast, toward forward angles the spectral law becomes much more prominent, i.e. it corresponds to the results for a model of optically soft particles.

  12. Evaluation of an advanced physical diagnosis course using consumer preferences methods: the nominal group technique.

    Science.gov (United States)

    Coker, Joshua; Castiglioni, Analia; Kraemer, Ryan R; Massie, F Stanford; Morris, Jason L; Rodriguez, Martin; Russell, Stephen W; Shaneyfelt, Terrance; Willett, Lisa L; Estrada, Carlos A

    2014-03-01

    Current evaluation tools of medical school courses are limited by the scope of questions asked and may not fully engage the student to think on areas to improve. The authors sought to explore whether a technique to study consumer preferences would elicit specific and prioritized information for course evaluation from medical students. Using the nominal group technique (4 sessions), 12 senior medical students prioritized and weighted expectations and topics learned in a 100-hour advanced physical diagnosis course (4-week course; February 2012). Students weighted their top 3 responses (top = 3, middle = 2 and bottom = 1). Before the course, 12 students identified 23 topics they expected to learn; the top 3 were review sensitivity/specificity and high-yield techniques (percentage of total weight, 18.5%), improving diagnosis (13.8%) and reinforce usual and less well-known techniques (13.8%). After the course, students generated 22 topics learned; the top 3 were practice and reinforce advanced maneuvers (25.4%), gaining confidence (22.5%) and learn the evidence (16.9%). The authors observed no differences in the priority of responses before and after the course (P = 0.07). In a physical diagnosis course, medical students elicited specific and prioritized information using the nominal group technique. The course met student expectations regarding teaching of the evidence-based physical examination, building skills and confidence in the proper techniques and maneuvers, and experiential learning. This novel use of the technique for curriculum evaluation may be extended to other courses, especially comprehensive and multicomponent ones.
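The weighting scheme described above (top = 3, middle = 2, bottom = 1, priorities as percentages of total weight) reduces to a short aggregation; the ballot contents below are invented for illustration.

```python
from collections import Counter

def ngt_priorities(ballots):
    """ballots: one (top, middle, bottom) tuple of topic names per participant."""
    weights = Counter()
    for top, middle, bottom in ballots:
        weights[top] += 3
        weights[middle] += 2
        weights[bottom] += 1
    total = sum(weights.values())
    return {topic: 100.0 * w / total for topic, w in weights.items()}

# Invented ballots from two participants:
ballots = [("evidence", "confidence", "maneuvers"),
           ("maneuvers", "evidence", "confidence")]
print(round(ngt_priorities(ballots)["evidence"], 2))  # -> 41.67 (% of total weight)
```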

  13. A Structural Equation Model of Expertise in College Physics

    Science.gov (United States)

    Taasoobshirazi, Gita; Carr, Martha

    2009-01-01

    A model of expertise in physics was tested on a sample of 374 college students in 2 different level physics courses. Structural equation modeling was used to test hypothesized relationships among variables linked to expert performance in physics including strategy use, pictorial representation, categorization skills, and motivation, and these…

  14. A Structural Equation Model of Conceptual Change in Physics

    Science.gov (United States)

    Taasoobshirazi, Gita; Sinatra, Gale M.

    2011-01-01

    A model of conceptual change in physics was tested on introductory-level, college physics students. Structural equation modeling was used to test hypothesized relationships among variables linked to conceptual change in physics including an approach goal orientation, need for cognition, motivation, and course grade. Conceptual change in physics…

  15. Physical modelling and testing in environmental geotechnics

    International Nuclear Information System (INIS)

    Garnier, J.; Thorel, L.; Haza, E.

    2000-01-01

    The preservation of natural environment has become a major concern, which affects nowadays a wide range of professionals from local communities administrators to natural resources managers (water, wildlife, flora, etc) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains, with which the geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report some progress of developments in measurement techniques and studies of transport of pollutants in saturated and unsaturated soils in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations on decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper is analysed in INIS data base for its specific interest in nuclear industry. The other ones, concerning the energy, are analyzed in ETDE data base

  16. Physical modelling and testing in environmental geotechnics

    Energy Technology Data Exchange (ETDEWEB)

    Garnier, J.; Thorel, L.; Haza, E. [Laboratoire Central des Ponts et Chaussees a Nantes, 44 - Nantes (France)

    2000-07-01

    The preservation of natural environment has become a major concern, which affects nowadays a wide range of professionals from local communities administrators to natural resources managers (water, wildlife, flora, etc) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains, with which the geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report some progress of developments in measurement techniques and studies of transport of pollutants in saturated and unsaturated soils in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations on decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper has been analyzed in INIS data base for its specific interest in nuclear industry.

  17. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  18. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  19. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference for the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  20. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  1. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
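One of the summarized techniques, spatial interpolation, can be sketched with inverse distance weighting (IDW), a simple choice for estimating a geological interface from scattered borehole elevations (kriging and splines are common alternatives; the borehole data below are invented).

```python
def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate of z at (x, y) from (xi, yi, zi) samples."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi                    # exact hit on a data point
        w = 1.0 / d2 ** (power / 2.0)    # weight ~ 1 / distance^power
        num += w * zi
        den += w
    return num / den

# Invented borehole picks (x, y, elevation) for one geological interface:
boreholes = [(0, 0, 100.0), (10, 0, 120.0), (0, 10, 110.0)]
print(idw(0, 0, boreholes))  # -> 100.0 (honours the data point)
```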

  2. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed, capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to the different dynamics based on physical concepts is better than the soft decomposition performed using SOM.
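The conceptual model cited for the falling limb, flow recession, can be sketched as a linear reservoir with exponential decay; the parameter values below are illustrative, not fitted to the Kentucky River data.

```python
import math

def recession_flow(q0, k, t):
    """Linear-reservoir recession: Q(t) = Q0 * exp(-t / k), k = recession constant."""
    return q0 * math.exp(-t / k)

# With Q0 = 200 m3/s and an assumed recession constant k = 5 days,
# flow halves every k * ln(2) ~ 3.47 days:
print(round(recession_flow(200.0, 5.0, 5.0 * math.log(2)), 6))  # -> 100.0
```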

  3. Steam generators clogging diagnosis through physical and statistical modelling

    International Nuclear Information System (INIS)

    Girard, S.

    2012-01-01

    Steam generators are massive heat exchangers feeding the turbines of pressurised water nuclear power plants. Internal parts of steam generators foul up with iron oxides, which gradually obstruct holes intended for the passage of the fluid. This phenomenon, called clogging, causes safety issues, and means to assess it are needed to optimise the maintenance strategy. The approach investigated in this thesis is the analysis of steam generators' dynamic behaviour during power transients with a one-dimensional physical model. Two improvements to the model have been implemented: one takes into account flows orthogonal to the modelling axis; the other introduces a slip between phases, accounting for the velocity difference between liquid water and steam. These two elements increased the model's degrees of freedom and improved the agreement between the simulation and plant data. A new calibration and validation methodology has been proposed to assess the robustness of the model. The initial inverse problem was ill-posed: different clogging spatial configurations can produce identical responses. The relative importance of clogging, depending on its localisation, has been estimated by sensitivity analysis with the Sobol' method. The dimension of the model's functional output had been previously reduced by principal component analysis. Finally, the input dimension has been reduced by a technique called sliced inverse regression. Based on this new framework, a new diagnosis methodology, more robust and better understood than the existing one, has been proposed. (author)

  4. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  5. Laparoscopic cholecystectomy poses physical injury risk to surgeons: analysis of hand technique and standing position.

    Science.gov (United States)

    Youssef, Yassar; Lee, Gyusung; Godinez, Carlos; Sutton, Erica; Klein, Rosemary V; George, Ivan M; Seagull, F Jacob; Park, Adrian

    2011-07-01

    This study compares surgical techniques and surgeon's standing position during laparoscopic cholecystectomy (LC), investigating each with respect to surgeons' learning, performance, and ergonomics. Little homogeneity exists in LC performance and training. Variations in standing position (side-standing technique vs. between-standing technique) and hand technique (one-handed vs. two-handed) exist. Thirty-two LC procedures performed on a virtual reality simulator were video-recorded and analyzed. Each subject performed four different procedures: one-handed/side-standing, one-handed/between-standing, two-handed/side-standing, and two-handed/between-standing. Physical ergonomics were evaluated using Rapid Upper Limb Assessment (RULA). Mental workload assessment was acquired with the National Aeronautics and Space Administration-Task Load Index (NASA-TLX). Virtual reality (VR) simulator-generated performance evaluation and a subjective survey were analyzed. RULA scores were consistently lower (indicating better ergonomics) for the between-standing technique and higher (indicating worse ergonomics) for the side-standing technique, regardless of whether one- or two-handed. Anatomical scores overall showed side-standing to have a detrimental effect on the upper arms and trunk. The NASA-TLX showed significant association between the side-standing position and high physical demand, effort, and frustration (p<0.05). The two-handed technique in the side-standing position required more effort than the one-handed (p<0.05). No difference in operative time or complication rate was demonstrated among the four procedures. The two-handed/between-standing method was chosen as the best procedure to teach and standardize. Laparoscopic cholecystectomy poses a risk of physical injury to the surgeon. 
As LC is currently commonly performed in the United States, the left side-standing position may lead to increased physical demand and effort, resulting in ergonomically unsound conditions for

  6. Do physical activity and dietary smartphone applications incorporate evidence-based behaviour change techniques?

    Science.gov (United States)

    Direito, Artur; Dale, Leila Pfaeffli; Shields, Emma; Dobson, Rosie; Whittaker, Robyn; Maddison, Ralph

    2014-06-25

    There has been a recent proliferation in the development of smartphone applications (apps) aimed at modifying various health behaviours. While interventions that incorporate behaviour change techniques (BCTs) have been associated with greater effectiveness, it is not clear to what extent smartphone apps incorporate such techniques. The purpose of this study was to investigate the presence of BCTs in physical activity and dietary apps and determine how reliably the taxonomy checklist can be used to identify BCTs in smartphone apps. The top-20 paid and top-20 free physical activity and/or dietary behaviour apps from the New Zealand Apple App Store Health & Fitness category were downloaded to an iPhone. Four independent raters user-tested and coded each app for the presence/absence of BCTs using the taxonomy of behaviour change techniques (26 BCTs in total). The number of BCTs included in the 40 apps was calculated. Krippendorff's alpha was used to evaluate interrater reliability for each of the 26 BCTs. Apps included an average of 8.1 (range 2-18) techniques, the number being slightly higher for paid (M = 9.7, range 2-18) than free apps (M = 6.6, range 3-14). The most frequently included BCTs were "provide instruction" (83% of the apps), "set graded tasks" (70%), and "prompt self-monitoring" (60%). Techniques such as "teach to use prompts/cues", "agree on behavioural contract", "relapse prevention" and "time management" were not present in the apps reviewed. Interrater reliability coefficients ranged from 0.1 to 0.9 (Mean 0.6, SD = 0.2). Presence of BCTs varied by app type and price; however, BCTs associated with increased intervention effectiveness were in general more common in paid apps. The taxonomy checklist can be used by independent raters to reliably identify BCTs in physical activity and dietary behaviour smartphone apps.
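Krippendorff's alpha, used above for interrater reliability, can be sketched for nominal data with no missing ratings, e.g. four raters coding each app for a BCT's presence (1) or absence (0). This is a minimal illustration, not the study's code, and the ratings below are invented.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: one list of ratings per coded unit (app x BCT); >= 2 ratings each."""
    o = Counter()  # coincidence matrix o[(c, k)]
    for ratings in units:
        m = len(ratings)
        for i, c in enumerate(ratings):
            for j, k in enumerate(ratings):
                if i != j:
                    o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _k), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_obs = sum(v for (c, k), v in o.items() if c != k)
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_obs / d_exp  # undefined (zero d_exp) if only one category appears

# Perfect agreement gives alpha = 1; systematic disagreement goes negative.
print(round(krippendorff_alpha_nominal([[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 0, 0]]), 3))  # -> 0.593
```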

  7. Four discourse models of physics teacher education

    OpenAIRE

    Larsson, Johanna; Airey, John

    2017-01-01

In Sweden, as in many other countries, the education of high-school physics teachers is typically carried out in three different environments: the education department, the physics department, and the school itself during teaching practice. Trainee physics teachers are in the process of building their professional identity as they move between these three environments. Although much has been written about teacher professional identity (see overview in Beijaard, Meijer, & Verloop, 2004) little ...

  8. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

Various techniques are used to analyze individual characteristics or organizational effectiveness, such as SWOT analysis, SWOC analysis, PEST analysis, etc. These techniques provide an easy and systematic way of identifying the issues affecting a system and provide an opportunity for further development. While they offer a broad-based assessment of individual institutions and systems, they suffer limitations when applied to a business context. The success of any business model depends on ...

  9. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
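The surrogate-data scheme described above can be sketched end-to-end with a deliberately simple stand-in for a building simulation (the model, all numbers, and the grid-search calibration below are invented for illustration, not taken from the paper):

```python
import itertools

# Toy 'simulation program': monthly energy = base load + UA * degree-days.
def energy(base, ua, degree_days):
    return [base + ua * dd for dd in degree_days]

degree_days = [600, 500, 300, 100, 20, 0, 0, 10, 80, 250, 450, 580]
true_base, true_ua = 400.0, 1.5
bills = energy(true_base, true_ua, degree_days)     # surrogate utility bills

# 'True' savings of a retrofit that cuts UA by 30%:
true_savings = sum(bills) - sum(energy(true_base, 0.7 * true_ua, degree_days))

def calibrate(bills):
    """Calibration technique under test: coarse grid search on the bills."""
    grid = itertools.product([300, 350, 400, 450], [1.0, 1.25, 1.5, 1.75])
    def sse(p):
        base, ua = p
        return sum((m - b) ** 2
                   for m, b in zip(energy(base, ua, degree_days), bills))
    return min(grid, key=sse)

base_hat, ua_hat = calibrate(bills)
pred_savings = (sum(energy(base_hat, ua_hat, degree_days))
                - sum(energy(base_hat, 0.7 * ua_hat, degree_days)))

# The paper's three figures of merit:
print("1) savings prediction error:", abs(pred_savings - true_savings))
print("2) parameter errors:", abs(base_hat - true_base), abs(ua_hat - true_ua))
print("3) goodness of fit (SSE):",
      sum((m - b) ** 2
          for m, b in zip(energy(base_hat, ua_hat, degree_days), bills)))
```

Real use would substitute the calibration technique under test for the grid search and add noise or model-form error to the surrogate bills.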

  10. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

Numerical model updating is a technique for refining the numerical models of structures in civil, mechanical, automotive, marine, aerospace and other engineering applications. The basic concept behind this technique is to update the numerical model until it closely matches experimental data obtained from real or prototype test structures. The present work develops the numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The variables for the updating can be material properties of the model, geometrical properties, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close relationship between the experimental and numerical models.
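As an illustrative sketch of this approach (not the authors' code; the model, parameter range, and algorithm settings below are invented), a minimal one-parameter firefly search can update a cantilever model's Young's modulus until the computed tip deflection matches a "measured" value:

```python
import math
import random

random.seed(1)

# Cantilever tip deflection: delta = P L^3 / (3 E I). We 'update' the
# Young's modulus E of the numerical model until the computed tip
# deflection matches the measured value (all numbers are invented).
P, L, I = 1000.0, 2.0, 8.0e-6        # load [N], length [m], inertia [m^4]
E_true = 70e9                        # stiffness behind the 'measurement' [Pa]
measured = P * L**3 / (3 * E_true * I)

def error(E):
    """Discrepancy between model response and measured response."""
    return abs(P * L**3 / (3 * E * I) - measured)

def firefly_update(n=20, iters=100, beta0=1.0, gamma=1e-22, alpha=4e9):
    # Fireflies are candidate E values; brightness is low model error.
    xs = [random.uniform(30e9, 200e9) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if error(xs[j]) < error(xs[i]):      # j is brighter than i
                    beta = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (random.random() - 0.5)
        alpha *= 0.95                                # shrink the random walk
    return min(xs, key=error)

E_hat = firefly_update()
print(f"updated E = {E_hat:.4e} Pa")   # close to the 7e10 behind the data
```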

  11. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks", that are presented in great detail and includes the source code of several of the techniques p...

  12. DEVELOPMENT OF WIRELESS TECHNIQUES IN DATA AND POWER TRANSMISSION APPLICATION FOR PARTICLE-PHYSICS DETECTORS

    CERN Document Server

    Brenner, R; Dehos, C; De Lurgio, P; Djurcic, Z; Drake, G; Gonzales Gimenez, JL; Gustafsson, L; Kim, DW; Locci, E; Pfeiffer, U; Röhrich, D; Rydberg, D; Schöning, A; Siligaris, A; Soltveit, HK; Ullaland, K; Vincent, P; Vasquez, PR; Wiedner, D; Yang, S

    2017-01-01

In the WADAPT project described in this Letter of Intent, we propose to develop wireless techniques for data and power transmission in particle-physics detectors. Wireless techniques have developed extremely fast over the last decade and are now mature enough to be considered a promising alternative to cables and optical links that would revolutionize detector design. The WADAPT consortium has been formed to identify the specific needs of the different projects that might benefit from wireless techniques, with the objective of providing a common platform for research and development in order to optimize effectiveness and cost. The proposed R&D will aim at designing and testing wireless demonstrators for large instrumentation systems.

  13. CT digital radiography: Alternative technique for airway evaluation in physically disabled patients

    International Nuclear Information System (INIS)

    Mandell, G.A.; Harcke, H.T.; Brunson, G.; Delengowski, R.; Padman, R.

    1987-01-01

Evaluation of the airway for the presence of granulation tissue prior to removal of a tracheostomy is essential to prevent sudden respiratory decompensation secondary to obstruction. Airway examination in a brain and/or spinal cord injured patient is especially difficult under fluoroscopy. The patient's lack of mobility results in poor visualization of the trachea, secondary to the overlying dense osseous components of the shoulders and thoracic cage. A CT localization view (digital view), which allows manipulation and magnification of the digital data in order to see the hidden airway and detect associated obstructing lesions, is proposed as an alternative to the high-kV magnification technique. Thirteen examinations were performed satisfactorily in eleven patients with little expenditure of time, physical exertion, or radiation. The sensitivity, specificity and accuracy of the digital airway examination were 100%, 67% and 92%, respectively, with bronchoscopy used as the standard. (orig.)

  14. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    Science.gov (United States)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.

  15. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures... bound of t_u·t_q = Ω(lg^{d-1} n). For ball range searching, we get a lower bound of t_u·t_q = Ω(n^{1-1/d}). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds...

  16. Teaching Einsteinian Physics at Schools: Part 2, Models and Analogies for Quantum Physics

    Science.gov (United States)

    Kaur, Tejinder; Blair, David; Moschilla, John; Zadnik, Marjan

    2017-01-01

    The Einstein-First project approaches the teaching of Einsteinian physics through the use of physical models and analogies. This paper presents an approach to the teaching of quantum physics which begins by emphasising the particle-nature of light through the use of toy projectiles to represent photons. This allows key concepts including the…

  17. Engaging Students In Modeling Instruction for Introductory Physics

    Science.gov (United States)

    Brewe, Eric

    2016-05-01

    Teaching introductory physics is arguably one of the most important things that a physics department does. It is the primary way that students from other science disciplines engage with physics and it is the introduction to physics for majors. Modeling instruction is an active learning strategy for introductory physics built on the premise that science proceeds through the iterative process of model construction, development, deployment, and revision. We describe the role that participating in authentic modeling has in learning and then explore how students engage in this process in the classroom. In this presentation, we provide a theoretical background on models and modeling and describe how these theoretical elements are enacted in the introductory university physics classroom. We provide both quantitative and video data to link the development of a conceptual model to the design of the learning environment and to student outcomes. This work is supported in part by DUE #1140706.

  18. Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2008-06-01

Modern organizations are exposed to diverse external environmental influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align these key organizational design variables. At the higher conceptual level, however, a completely satisfactory formulation for this alignment does not exist. We develop an approach originating from the application of concepts of theoretical physics to social systems. Under this approach, the allocation of organizational resources is analyzed in terms of social entropy, social free energy and social temperature. This allows us to formalize the dynamic relationship between organizational design variables. In this paper we relate this model to Galbraith's Star Model and also suggest improvements to the procedure of the complex analytical method in organizational design.

  19. Searches for Beyond Standard Model Physics with ATLAS and CMS

    CERN Document Server

    Rompotis, Nikolaos; The ATLAS collaboration

    2017-01-01

    The exploration of the high energy frontier with ATLAS and CMS experiments provides one of the best opportunities to look for physics beyond the Standard Model. In this talk, I review the motivation, the strategy and some recent results related to beyond Standard Model physics from these experiments. The review will cover beyond Standard Model Higgs boson searches, supersymmetry and searches for exotic particles.

  20. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  1. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    Science.gov (United States)

    2017-09-19

NUWC-NPT Technical Report 12,236, 19 September 2017. Structural Acoustic Physics-Based Modeling of Curved Composite Shells, Rachel E. Hesse... The study was to use physics-based modeling (PBM) to investigate wave propagation through curved shells that are subjected to acoustic excitation. An...

  2. Nuclear measurements, techniques and instrumentation, industrial applications, plasma physics and nuclear fusion 1986-1996. International Atomic Energy Agency publications

    International Nuclear Information System (INIS)

    1997-03-01

    This catalogue lists all sales publications of the International Atomic Energy Agency dealing with Nuclear Measurements, Techniques, and Instrumentation, Industrial Applications, Plasma Physics and Nuclear Fusion, issued during the period 1986-1996. Most publications are in English. Proceedings of conferences, symposia and panels of experts may contain some papers in languages other than English (French, Russian or Spanish), but all of these papers have abstracts in English. Contents cover the three main areas of (i) Nuclear Measurements, Techniques and Instrumentation (Physics, Dosimetry Techniques, Nuclear Analytical Techniques, Research Reactor and Particle Accelerator Applications, and Nuclear Data), (ii) Industrial Applications (Radiation Processing, Radiometry, and Tracers), and (iii) Plasma Physics and Controlled Thermonuclear Fusion

  3. Implementation of behavior change techniques in mobile applications for physical activity.

    Science.gov (United States)

    Yang, Chih-Hsiang; Maher, Jaclyn P; Conroy, David E

    2015-04-01

Mobile applications (apps) for physical activity are popular and hold promise for promoting behavior change and reducing non-communicable disease risk. App marketing materials describe a limited number of behavior change techniques (BCTs), but apps may include unmarketed BCTs, which are important as well. The purpose of this study was to characterize the extent to which BCTs have been implemented in apps, based on a systematic user inspection. Top-ranked physical activity apps (N=100) were identified in November 2013 and analyzed in 2014. BCTs were coded using a contemporary taxonomy following a user inspection of apps. Users identified an average of 6.6 BCTs per app, and most BCTs in the taxonomy were not represented in any apps. The most common BCTs involved providing social support, information about others' approval, instructions on how to perform a behavior, demonstrations of the behavior, and feedback on the behavior. A latent class analysis of BCT configurations revealed that apps focused on providing support and feedback as well as support and education. Contemporary physical activity apps have implemented a limited number of BCTs and have favored BCTs with a modest evidence base over others with more established evidence of efficacy (e.g., social media integration for providing social support versus active self-monitoring by users). Social support is a ubiquitous feature of contemporary physical activity apps, and differences between apps lie primarily in whether the limited BCTs provide education or feedback about physical activity. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  4. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    Science.gov (United States)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year the motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  5. A Survey On Physical Methods For Deformation Modeling

    Directory of Open Access Journals (Sweden)

    Huda Basloom

    2015-08-01

Much effort has been dedicated to achieving realism in the simulation of deformable objects such as cloth, hair, rubber, sea water, smoke and human soft tissue in surgical simulation. The deformable objects in these simulations should exhibit physically correct behaviors, true to the behavior of real objects when any force is applied to them, and this sometimes requires real-time simulation: no matter how complex the geometry is, real-time performance is still required in some applications. Surgery simulation is an example of the need for real-time simulation. This situation has attracted the attention of a wide community of researchers, including computer scientists, mechanical engineers, biomechanics researchers and computational geometers. This paper presents a review of the existing techniques for modeling deformable objects, which have been developed within the last three decades for different interactive computer graphics applications.

  6. Physics-Based Pneumatic Hammer Instability Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Florida Turbine Technologies (FTT) proposes to conduct research necessary to develop a physics-based pneumatic hammer instability model for hydrostatic bearings...

  7. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.
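The data-driven initialization idea can be illustrated with a hedged sketch (using a simple saturation curve as a stand-in, not the homogenized energy model itself, and invented data): initial estimates are read directly off easily measured attributes of the data and then refined by a least-squares fit.

```python
import math

# Stand-in model: anhysteretic-style saturation curve M(H) = Ms tanh(H/a).
def model(H, Ms, a):
    return Ms * math.tanh(H / a)

Hs = [10.0 * k for k in range(1, 41)]
Ms_true, a_true = 5.0, 100.0
data = [model(H, Ms_true, a_true) for H in Hs]

# Data-driven initial estimates from easily measured attributes:
Ms0 = max(abs(m) for m in data)    # saturation level of the data
slope0 = data[0] / Hs[0]           # initial susceptibility, roughly Ms/a
a0 = Ms0 / slope0

def gauss_newton(Ms, a, iters=20):
    """Refine (Ms, a) by least squares using analytic derivatives."""
    for _ in range(iters):
        a11 = a12 = a22 = g1 = g2 = 0.0
        for H, m in zip(Hs, data):
            t = math.tanh(H / a)
            r = Ms * t - m                        # residual
            j1 = t                                # d model / d Ms
            j2 = -Ms * H / a**2 * (1.0 - t * t)   # d model / d a
            a11 += j1 * j1
            a12 += j1 * j2
            a22 += j2 * j2
            g1 += j1 * r
            g2 += j2 * r
        det = a11 * a22 - a12 * a12               # solve (J^T J) dx = -J^T r
        Ms -= (a22 * g1 - a12 * g2) / det
        a -= (a11 * g2 - a12 * g1) / det
    return Ms, a

Ms_hat, a_hat = gauss_newton(Ms0, a0)
print(Ms_hat, a_hat)   # recovers (5.0, 100.0) on this noiseless data
```

The point of the data-driven start is visible even in this toy: the initial guesses land within a fraction of a percent of the optimum, so the fit converges in a handful of iterations.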

  8. Flavor physics and right-handed models

    Energy Technology Data Exchange (ETDEWEB)

    Shafaq, Saba

    2010-08-20

The Standard Model of particle physics only provides a parametrization of flavor, which involves the values of the quark and lepton masses and the unitary flavor mixing matrix, i.e. the CKM (Cabibbo-Kobayashi-Masakawa) matrix for quarks. The precise determination of the elements of the CKM matrix is important for the study of the flavor sector of quarks. Here we concentrate on the matrix element |V_cb|. In particular we consider the effects on the value of |V_cb| from possible right-handed admixtures along with the usually left-handed weak currents. The Left Right Symmetric Model provides a natural basis for right-handed current contributions and has been studied extensively in the literature, but has never been discussed including flavor. In the first part of the present work an additional flavor symmetry is included in the LRSM which allows a systematic study of flavor effects. The second part deals with the practical extraction of a possible right-handed contribution. Starting from the quark-level transition b → c, we use heavy quark symmetries to relate the helicities of the quarks to experimentally accessible quantities. To this end we study the decays anti-B → D(D*) l anti-ν, which have been extensively explored close to the non-recoil point. By taking into account the SCET (Soft Collinear Effective Theory) formalism, this has been extended to the maximum recoil point, i.e. v · v' >> 1. We derive a factorization formula, where the set of form factors is reduced to a single universal form factor ξ(v · v') up to hard-scattering corrections. Symmetry relations on the form factors for the exclusive anti-B → D(D*) l anti-ν transition have been derived in terms of ξ(v · v'). These symmetries are then broken by perturbative effects. The perturbative corrections to symmetry-breaking corrections to first order in the strong

  9. Annual Report on Scientific Activities in 1997 of Department of Physics and Nuclear Techniques, Academy of Mining and Metallurgy, Cracow

    International Nuclear Information System (INIS)

    Wolny, J.; Olszynska, E.

    1998-01-01

The Annual Report 1997 is a review of the scientific activities of the Department of Nuclear Physics and Techniques (DNPT) of the Academy of Mining and Metallurgy, Cracow. The studies connected with radiometric analysis, nuclear electronics, solid state physics, elementary particles and detectors, medical physics, physics of the environment, theoretical physics, nuclear geophysics, energetics, industrial radiometry and tracer techniques are broadly presented. The full list of works published and presented at scientific conferences in 1997 by the staff of DNPT is also included.

  10. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and its compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique, in which detailed models of individual systems are processed rather than a single, plant-level model. In addition, the development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  11. Standard Model Physics at the LHC

    CERN Document Server

    CERN. Geneva

    1999-01-01

    1. Top Physics : Single top production and top polarization, D. O'Neil. Top mass determination, spin correlations and t-tbar asymmetries, L. Sonnenschein. FCNC-induced production and decays, S. Slabospitsky. MC tools for signals and backgrounds, M. Mangano. Plans for the writing of the final report, Conveners. Top physics: Discussion. 2. Electroweak physics (cont.) : Anomalous triple gauge boson couplings: analysis, strategies, and form factor considerations, M. Dobbs. Sensitivity to anomalous triple gauge boson couplings, W. Thuemmel. Drell-Yan production of W,Z with electroweak corrections, S. Dittmaier. Vector boson self-couplings and effective field theory, J.R. Pelaez. Recent theoretical progress, Z. Kunszt. Electroweak physics: Discussion. Recent theoretical progress in b production, G. Ridolf. Studies on b production, S. Gennai. Comparison of most recent b-production theoretical computations with PYTHIA, A. Kharchilava. Possibilities for b production measurements, P. Vikas. B production: Discussion....

  12. B physics in the standard model

    International Nuclear Information System (INIS)

    Takasugi, Eiichi

    1985-01-01

Before discussing beauty physics, the present status of quark mixing is reviewed. Then CP violation in K meson physics is discussed. As for the quark mixing, it is concluded that the theoretical analysis of CP violation involves various uncertainties and it seems difficult to obtain definite information on the quark mixing. As for B physics, B0 - anti-B0 mixing and some promising methods to detect CP violation in the B system are discussed, along with the two typical ways to measure it. In summary, it is concluded that B0 - anti-B0 mixing should be observed, but some luck is needed to observe CP violation in B physics. (Aoki, K.)

  13. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied to both manufacturing and service industries.
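A minimal sketch of this kind of selection problem (with invented techniques, gains, and costs; the paper's four-stage, fifty-four-technique model is not reproduced here) can be solved exactly with 0/1-knapsack dynamic programming in place of a MIP solver:

```python
# Hypothetical techniques: name -> (productivity gain, implementation cost).
# Pick the subset maximizing total gain within a budget; solved exactly by
# 0/1-knapsack dynamic programming (a stand-in for a mixed integer program).
techniques = {
    "5S workplace organization": (4.0, 3),
    "preventive maintenance":    (6.0, 5),
    "operator cross-training":   (5.0, 4),
    "setup-time reduction":      (7.0, 6),
    "quality circles":           (3.0, 2),
}
budget = 10

best = {0: (0.0, frozenset())}          # cost spent -> (best gain, chosen set)
for name, (gain, cost) in techniques.items():
    # sorted() snapshots the items, so a technique is never counted twice
    for spent, (g, chosen) in sorted(best.items(), reverse=True):
        c = spent + cost
        if c <= budget and g + gain > best.get(c, (-1.0,))[0]:
            best[c] = (g + gain, chosen | {name})

top_gain, top_set = max(best.values(), key=lambda v: v[0])
print(top_gain, sorted(top_set))
```

On the toy data the optimum spends the whole budget on the three cheapest-per-gain techniques rather than the single largest gain, which is exactly the trade-off an integer program captures.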

  14. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

Objective: To establish a carotid artery stenosis model by an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of unilateral segments of the carotid arteries were damaged with a home-made corneous guiding wire. Twenty-four carotid artery stenosis models were thus created. DSA examination was performed on post-procedural weeks 2, 4, 8 and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with this endovascular method, with pathological characteristics and hemodynamic changes similar to those in humans. This is useful for further research on new techniques and new materials for interventional treatment. (authors)

  15. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  16. Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques

    Science.gov (United States)

    Spencer, E.; Rao, A.; Horton, W.; Mays, L.

    2008-12-01

ACE measurements of the solar wind velocity, IMF and proton density are used to drive a hybrid physics/black-box model of the nightside magnetosphere. The core physics is contained in a low-order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet-based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet-based mappings of the model's output field-aligned currents onto the ground-based magnetometer measurements of the AL index and Dst index. The black-box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm and trained on the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201

  17. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  18. The influence of instructional interactions on students’ mental models about the quantization of physical observables: a modern physics course case

    Science.gov (United States)

    Didiş Körhasan, Nilüfer; Eryılmaz, Ali; Erkoç, Şakir

    2016-01-01

Mental models are coherently organized knowledge structures used to explain phenomena. They interact with social environments and evolve with that interaction. When learners lack daily experience with a phenomenon, social interaction becomes much more important. In this part of our multiphase study, we investigate how instructional interactions influenced students’ mental models about the quantization of physical observables. Class observations and interviews were analysed to study the mental models students constructed in a modern physics course during an academic semester. The research revealed that students’ mental models were influenced by (1) the manner of teaching, including the instructional methodologies and content-specific techniques used by the instructor, (2) the order of the topics and familiarity with the concepts, and (3) peers.

  19. Standard model parameters and the search for new physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1988-04-01

In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs

  20. MIANN models in medicinal, physical and organic chemistry.

    Science.gov (United States)

    González-Díaz, Humberto; Arrasate, Sonia; Sotomayor, Nuria; Lete, Esther; Munteanu, Cristian R; Pazos, Alejandro; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

Reducing costs in terms of time, animal sacrifice, and material resources with computational methods has become a promising goal in Medicinal, Biological, Physical and Organic Chemistry. There are many computational techniques that can be used in this sense. In any case, almost all of these methods focus on a few fundamental aspects, including: type (1) methods to quantify the molecular structure, type (2) methods to link the structure with the biological activity, and others. In particular, MARCH-INSIDE (MI), an acronym for Markov Chain Invariants for Networks Simulation and Design, is a well-known method for QSAR analysis useful for type (1) tasks. In addition, the bio-inspired Artificial-Intelligence (AI) algorithms called Artificial Neural Networks (ANNs) are among the most powerful type (2) methods. We can combine MI with ANNs in order to seek QSAR models, a strategy which is called herein MIANN (MI & ANN models). One of the first applications of the MIANN strategy was in the development of new QSAR models for drug discovery. The MIANN strategy has since been expanded to the QSAR study of proteins, protein-drug interactions, and protein-protein interaction networks. In this paper, we review for the first time many interesting aspects of the MIANN strategy, including its theoretical basis, implementation in web servers, and examples of applications in Medicinal and Biological Chemistry. We also report new applications of the MIANN strategy in Medicinal Chemistry, as well as the first examples in Physical and Organic Chemistry. In so doing, we developed new MIANN models for several self-assembly physicochemical properties of surfactants and for large reaction networks in organic synthesis. For some of the new examples we also present experimental results which have not been published to date.

  1. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  2. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purposes of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that the cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by the items.

  3. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)

    1994-12-31

The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co(III) porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  4. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

This study shows that chemometric techniques and modelling are an excellent tool for API assessment and for air pollution source identification and apportionment, and can underpin the design of an API monitoring network for effective air pollution resource management. Keywords: air pollutant index; chemometric; ANN; ...

  5. Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving

    Science.gov (United States)

    Rakkapao, S.; Prasitpong, S.

    2018-03-01

    This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.
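
The model analysis technique referenced above represents each student as a unit vector over candidate mental models and summarizes the class by a density matrix whose eigenvalues describe how consistently the class uses each model. A minimal numpy sketch with hypothetical response counts (the three columns are assumed model categories, e.g. expert, mixed, naive):

```python
import numpy as np

def class_density_matrix(counts):
    """Class model density matrix from per-student model-use counts.

    counts[k][m] = number of questions on which student k used model m.
    Each student contributes a unit vector with components sqrt(n_km / n_k);
    the class density matrix is the average of their outer products.
    """
    counts = np.asarray(counts, dtype=float)
    u = np.sqrt(counts / counts.sum(axis=1, keepdims=True))  # one unit row per student
    return u.T @ u / len(u)

# Hypothetical data: 4 students x 3 candidate models.
counts = [[8, 1, 1], [5, 3, 2], [2, 2, 6], [9, 0, 1]]
D = class_density_matrix(counts)
evals, evecs = np.linalg.eigh(D)
print(np.trace(D))   # a density matrix has unit trace
print(evals[-1])     # largest eigenvalue: weight of the dominant class model state
```

A largest eigenvalue near 1 indicates a consistent class model state; several comparable eigenvalues indicate the mixed states reported in the study.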

  6. Evaluation of conformal radiotherapy techniques through physics and biologic criteria; Avaliacao de tecnicas radioterapicas conformacionais utilizando criterios fisicos e biologicos

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, Jonatas Carrero

    2012-07-01

In the fight against cancer, different irradiation techniques have been developed based on technological advances, aiming to optimize the elimination of tumor cells with the least damage to healthy tissues. The goal of radiotherapy planning is to establish irradiation technical parameters that achieve the prescribed dose distribution over the treatment volumes. While the dose prescription is based on the radiosensitivity of the irradiated tissues, the physical calculations in treatment planning take into account dosimetric parameters related to the radiation beam and the physical characteristics of the irradiated tissues. Incorporating tissue radiosensitivity into radiotherapy planning calculations can help individualize treatments and establish criteria to compare and select irradiation techniques, contributing to tumor control and the success of the treatment. Accordingly, biological models of cellular response to radiation have to be well established. This work studied the applicability of biological models in radiotherapy planning calculations as an aid to evaluating radiotherapy techniques. Tumor control probability (TCP) was studied for two formulations of the linear-quadratic model, with and without repopulation, as a function of planning parameters, such as dose per fraction, and of radiobiological parameters, such as the α/β ratio. In addition, the use of biological criteria to compare radiotherapy techniques was tested on a prostate plan simulated with the Monte Carlo code PENELOPE. Afterwards, prostate plans for five patients from the Hospital das Clinicas da Faculdade de Medicina de Ribeirao Preto, USP, using three different techniques, were compared using the tumor control probability. To that end, dose matrices from the XiO treatment planning system were converted to TCP distributions and TCP-volume histograms. The studies performed support the conclusion that radiobiological parameters can significantly influence tumor control
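
The simpler of the two TCP formulations mentioned above (linear-quadratic survival without repopulation, Poisson statistics) is compact enough to sketch directly. The parameter values below are hypothetical, chosen only to illustrate the shape of the calculation:

```python
import math

def tcp_lq(n_clonogens, dose_per_fraction, n_fractions, alpha, alpha_beta):
    """Poisson tumour control probability with linear-quadratic cell survival.

    Surviving fraction per fraction: exp(-(alpha*d + beta*d^2)), with
    beta = alpha / (alpha/beta ratio).  TCP = exp(-N0 * SF_total).
    Repopulation is deliberately omitted in this sketch.
    """
    beta = alpha / alpha_beta
    d = dose_per_fraction
    sf_total = math.exp(-n_fractions * (alpha * d + beta * d * d))
    return math.exp(-n_clonogens * sf_total)

# Hypothetical parameters: 1e7 clonogens, 35 fractions of 2 Gy,
# alpha = 0.3 / Gy, alpha/beta = 10 Gy.
print(tcp_lq(1e7, 2.0, 35, 0.3, 10.0))
```

Sweeping `dose_per_fraction` or `alpha_beta` reproduces the kind of sensitivity study the abstract describes: TCP rises steeply with total dose and depends strongly on the assumed α/β ratio.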

  7. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

The construction process of detectors for the Large Hadron Collider (LHC) experiments is large in scale, heavily constrained by resource availability, and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. Facing similar problems, engineers in industry employ so-called Product Data Management (PDM) systems to control access to documented versions of designs, and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles), in use in CMS, which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  8. Physics properties of TiO_2 films produced by dip-coating technique

    International Nuclear Information System (INIS)

    Teloeken, A.C.; Alves, A.K.; Berutti, F.A.; Tabarelli, A.; Bergmann, C.P.

    2014-01-01

The use of titanium dioxide (TiO_2) as a photocatalyst to produce hydrogen has been of great interest because of its chemical stability, low cost and non-toxicity. TiO_2 occurs in three different crystal forms: rutile, anatase and brookite. Among these, the anatase phase generally exhibits the best photocatalytic behavior, while the rutile phase is the most stable. Among the various deposition techniques, the dip-coating technique produces films with good photocatalytic properties using simple and inexpensive equipment. In this work TiO_2 films were obtained by dip-coating. The films were characterized using X-ray diffraction, scanning electron microscopy, profilometry, contact angle measurements and photocurrent. The microstructure and physical properties were evaluated in relation to temperature and the addition of an additive. (author)

  9. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
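
One of the issues named above, the complex nonlinear averaging of metallurgical parameters, is commonly handled with power-law averaging, where an exponent controls how values upscale. A minimal sketch with hypothetical work-index samples (the exponent would in practice be fit to scale-up data):

```python
import math

def power_average(values, omega):
    """Power-law average for nonlinear upscaling of geometallurgical variables.

    omega = 1 gives the arithmetic mean, omega -> 0 the geometric mean,
    omega = -1 the harmonic mean; intermediate omegas interpolate.
    """
    n = len(values)
    if omega == 0:
        # geometric mean, taken as the omega -> 0 limit
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v ** omega for v in values) / n) ** (1.0 / omega)

# Hypothetical comminution (work index) samples, kWh/t.
samples = [12.0, 14.5, 9.8, 16.2]
print(power_average(samples, 1))    # arithmetic mean (linear averaging)
print(power_average(samples, -1))   # harmonic mean (strongly nonlinear averaging)
```

Grades average linearly (omega = 1), while throughput-like and comminution variables often average with omega well below 1, which is why they cannot simply be block-averaged like grades.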

  10. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

At sub-nuclear densities of about 10^14 g/cm^3, nuclear matter arranges itself into a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of the influence of fundamental interactions on the microscopic structure of the matter. (author)

  11. Particle modeling of plasmas computational plasma physics

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1991-01-01

Recently, through the development of supercomputers, a powerful new method for exploring plasmas has emerged: computer modeling of plasmas. Such modeling can duplicate many of the complex processes that go on in a plasma and allow scientists to understand which processes are important. It helps scientists gain an intuition about this complex state of matter. It allows scientists and engineers to explore new ideas on how to use plasma before building costly experiments, and to determine whether they are on the right track. It can duplicate the operation of devices and thus reduce the need to build complex and expensive devices for research and development. This is an exciting new endeavor that is in its infancy, but which can play an important role in the scientific and technological competitiveness of the US. A wide range of plasma models is in use: particle models, fluid models, and hybrid particle-fluid models. These come in many forms, such as explicit models, implicit models, reduced-dimensional models, electrostatic models, magnetostatic models, electromagnetic models, and an almost endless variety of others. Here the author discusses only particle models and gives a few examples of their use; these are taken from work done by the Plasma Modeling Group at UCLA, with which he is most familiar. It therefore gives only a small view of the wide range of work being done around the US, or for that matter around the world

  12. PWR surveillance based on correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated by the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR) [fr]

  13. Physical Modelling of Geotechnical Structures in Ports and Offshore

    Directory of Open Access Journals (Sweden)

    Bałachowski Lech

    2017-04-01

The physical modelling of subsoil behaviour and soil-structure interaction is essential for the proper design of offshore structures and port infrastructure. A brief introduction to such modelling of geoengineering problems is presented and some methods and experimental devices are described. The relationships between modelling scales are given. Some examples of penetration testing results in a centrifuge and a calibration chamber are presented. Prospects for physical modelling in geotechnics are also described.
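
The "relationships between modelling scales" mentioned above follow standard centrifuge similitude: a 1/N-scale model spun at N g reproduces prototype stresses at homologous points, while lengths scale by 1/N, diffusion-controlled times by 1/N², and forces by 1/N². A small sketch of these textbook scale factors (the quay-wall numbers are hypothetical):

```python
def centrifuge_scales(N):
    """Model/prototype scale factors for a geotechnical centrifuge test at N g.

    Standard similitude relations: length 1/N, stress and strain 1:1,
    consolidation (diffusion) time 1/N**2, force 1/N**2.
    """
    return {
        "length": 1.0 / N,
        "stress": 1.0,
        "time_diffusion": 1.0 / N ** 2,
        "force": 1.0 / N ** 2,
    }

# A hypothetical 20 m prototype quay wall modelled at 100 g:
s = centrifuge_scales(100)
print(20.0 * s["length"])  # model height in metres
# 5 years of prototype consolidation, expressed in model hours:
print(5 * 365.25 * 24 * 3600 * s["time_diffusion"] / 3600)
```

The 1/N² time scaling is what makes centrifuge testing attractive for consolidation problems: years of prototype drainage compress into hours of testing.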

  14. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  15. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions ... into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi
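
The abstract does not give the identification details, but the underlying idea of model-based R-L estimation can be sketched as a least-squares fit of the line model v ≈ R·i + L·di/dt to sampled waveforms. The signal and parameter values below are synthetic assumptions for illustration:

```python
import numpy as np

def estimate_rl(v, i, dt):
    """Least-squares estimate of line resistance R and inductance L from
    sampled voltage drop v and current i, using the model v = R*i + L*di/dt."""
    di = np.gradient(i, dt)                 # numerical derivative of the current
    A = np.column_stack([i, di])            # regressor matrix [i, di/dt]
    (R, L), *_ = np.linalg.lstsq(A, v, rcond=None)
    return R, L

# Synthetic line: R = 0.5 ohm, L = 2 mH; 50 Hz current with a 5th harmonic.
dt = 1e-5
t = np.arange(0.0, 0.04, dt)
i = 10 * np.sin(2 * np.pi * 50 * t) + 1.5 * np.sin(2 * np.pi * 250 * t)
di_exact = (10 * 2 * np.pi * 50 * np.cos(2 * np.pi * 50 * t)
            + 1.5 * 2 * np.pi * 250 * np.cos(2 * np.pi * 250 * t))
v = 0.5 * i + 2e-3 * di_exact
print(estimate_rl(v, i, dt))
```

In a converter, the excitation would come from a deliberate small disturbance (the quasi-passive aspect), and filtering would precede the fit to suppress measurement noise.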

  16. Results of Using the Take-Away Technique on Students' Achievements and Attitudes in High School Physics and Physical Science Courses

    Science.gov (United States)

    Carifio, James; Doherty, Michael

    2012-01-01

    The Take-away Technique was used in High School Physics and Physical Science courses for the unit on Newtonian mechanics in a teacher (6) by grade level (4) partially crossed design (N = 272). All classes received the same IE instructional treatment. The experimental group (classrooms) did a short Take-away after each class summarizing the key…

  17. Concepts and models in particle physics

    International Nuclear Information System (INIS)

    Paty, M.

    1977-01-01

The knowledge of Elementary Particle Physics is characterized by an object and a purpose which are both highly theoretical. This assessment is shown and analysed through some examples taken from recent achievements in the field. An attempt is also made to state some criteria for the reality of concepts and objects in this field [fr]

  18. Extracting physics from the lattice higgs model

    International Nuclear Information System (INIS)

    Neuberger, H.

    1988-05-01

The relevance and usefulness of lattice φ⁴ theory for particle physics is discussed from older and newer points of view. The talk starts with a review of the main ideas and suggestions in my past work with Dashen and proceeds to present newer developments on both the conceptual and the practical level. 12 refs

  19. An Amotivation Model in Physical Education

    Science.gov (United States)

    Shen, Bo; Wingert, Robert K.; Li, Weidong; Sun, Haichun; Rukavina, Paul Bernard

    2010-01-01

    Amotivation refers to a state in which individuals cannot perceive a relationship between their behavior and that behavior's subsequent outcome. With the belief that considering amotivation as a multidimensional construct could reflect the complexity of motivational deficits in physical education, we developed this study to validate an amotivation…

  20. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    Science.gov (United States)

    Bang, Youngsuk

Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, state variable, or output response spaces by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with the reduction errors bounded by a user-defined error tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
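
A common concrete instance of the projection idea described above is proper orthogonal decomposition (POD): collect state snapshots, take an SVD, and keep just enough singular vectors to meet a user-defined error tolerance. The sketch below uses a hypothetical rank-2 parametric snapshot family, not a reactor physics code:

```python
import numpy as np

def pod_basis(snapshots, tol):
    """POD/SVD basis whose rank is chosen from a user-defined error tolerance.

    Keeps the smallest r such that the discarded singular values satisfy
    sqrt(sum_{k>=r} s_k^2) <= tol * ||snapshots||_F, which bounds the
    Frobenius-norm projection error of the reduced representation.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    total = np.sqrt(np.sum(s ** 2))
    for r in range(len(s) + 1):
        if np.sqrt(np.sum(s[r:] ** 2)) <= tol * total:
            return U[:, :max(r, 1)]
    return U

# Hypothetical parametric snapshots: u(x; mu) = mu*sin(pi*x) + mu^2*x^2 (rank 2).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 2.0, 25)
X = np.array([m * np.sin(np.pi * x) + m ** 2 * x ** 2 for m in mus]).T
Phi = pod_basis(X, tol=1e-8)
X_rom = Phi @ (Phi.T @ X)  # project the full states onto the active subspace
print(Phi.shape[1], np.linalg.norm(X - X_rom) / np.linalg.norm(X))
```

The dissertation's contribution concerns rigorous bounds of this kind for reduction in the parameter, state, and response spaces; the singular-value tail above is the simplest example of such a bound.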

  1. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
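
A stripped-down sketch of the cumulative-residual check for ordinary linear regression: cumulate the residuals over the sorted covariate, then compare the supremum of the observed process with realizations in which each residual is multiplied by an independent standard normal (a simulation stand-in for the limiting zero-mean Gaussian process). The data below are synthetic; this is an illustration of the idea, not the paper's general GLM machinery:

```python
import numpy as np

def cumres_pvalue(x, y, n_sim=1000, seed=0):
    """P-value for a linear-model check based on cumulative residuals.

    Fits y = a + b*x by least squares, cumulates the residuals over sorted x,
    and compares sup|cumsum| with multiplier realizations of the null process.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = (y - X @ beta)[np.argsort(x)]
    stat = np.max(np.abs(np.cumsum(resid)))
    sims = [np.max(np.abs(np.cumsum(resid * rng.standard_normal(len(x)))))
            for _ in range(n_sim)]
    return float(np.mean([s >= stat for s in sims]))

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, 300)
noise = rng.normal(0.0, 0.5, 300)
ok = cumres_pvalue(x, 1.0 + 2.0 * x + noise)         # correctly specified model
bad = cumres_pvalue(x, 1.0 + 2.0 * x ** 2 + noise)   # misspecified functional form
print(ok, bad)
```

For the misspecified fit the cumulative process drifts systematically and the p-value collapses, which is exactly the functional-form check the paper advocates.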

  2. The Effect of the Group Investigation Learning Model with the Brainstorming Technique on Students' Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Astiti Kade kAyu

    2018-01-01

This study aims to determine the effect of the group investigation (GI) learning model with the brainstorming technique on students' physics learning outcomes (PLO), compared to the jigsaw learning model with the brainstorming technique. The learning outcomes in this research are outcomes in the cognitive domain. The method used is an experiment with a Randomised Posttest-Only Control Group Design. The population is all students of class XI IPA of SMA Negeri 9 Kupang in the 2015/2016 academic year. The selected sample consists of 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen by simple random sampling. The instrument used is a 13-item description test. The first hypothesis was tested using a two-tailed t-test; H0 was rejected, meaning there are differences in the students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test; H0 was rejected, meaning the students' PLO in the experimental class was higher than in the control class. Based on these results, the researchers recommend the use of the GI learning model with brainstorming techniques to improve PLO, especially in the cognitive domain.
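
The two t-tests described above compare independent group means; Welch's version (which does not assume equal variances) can be sketched directly. The score lists below are hypothetical stand-ins, not the study's data:

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical posttest scores: experimental (GI) vs control (jigsaw) class.
exp_scores = [78, 82, 75, 88, 90, 71, 85, 80, 77, 84]
ctl_scores = [70, 74, 68, 80, 72, 65, 78, 69, 73, 71]
t, df = welch_t(exp_scores, ctl_scores)
print(t, df)
```

For the one-tailed second hypothesis, H0 is rejected when t exceeds the upper critical value at the chosen significance level; the two-tailed first hypothesis uses |t| against the two-sided critical value.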

  3. A novel low-power fluxgate sensor using a macroscale optimisation technique for space physics instrumentation

    Science.gov (United States)

    Dekoulis, G.; Honary, F.

    2007-05-01

This paper describes the design of a novel low-power single-axis fluxgate sensor. Several soft magnetic alloy materials were considered, and the choice was based on the balance between maximum permeability and minimum saturation flux density. The sensor was modelled using the Finite Integration Theory (FIT) method and subjected to a custom macroscale optimisation technique that reduced the power consumption by a factor of 16. The results of the sensor optimisation will subsequently be used in the development of a cutting-edge ground-based magnetometer for the study of the complex solar wind-magnetosphere-ionosphere system.

  4. Physical models of biological information and adaptation.

    Science.gov (United States)

    Stuart, C I

    1985-04-07

    The bio-informational equivalence asserts that biological processes reduce to processes of information transfer. In this paper, that equivalence is treated as a metaphor with deeply anthropomorphic content of a sort that resists constitutive-analytical definition, including formulation within mathematical theories of information. It is argued that continuance of the metaphor, as a quasi-theoretical perspective in biology, must entail a methodological dislocation between biological and physical science. It is proposed that a general class of functions, drawn from classical physics, can serve to eliminate the anthropomorphism. Further considerations indicate that the concept of biological adaptation is central to the general applicability of the informational idea in biology; a non-anthropomorphic treatment of adaptive phenomena is suggested in terms of variational principles.

  5. Physical models of semiconductor quantum devices

    CERN Document Server

    Fu, Ying

    2013-01-01

    The science and technology relating to nanostructures continues to receive significant attention for its applications to various fields including microelectronics, nanophotonics, and biotechnology. This book describes the basic quantum mechanical principles underlining this fast developing field. From the fundamental principles of quantum mechanics to nanomaterial properties, from device physics to research and development of new systems, this title is aimed at undergraduates, graduates, postgraduates, and researchers.

  6. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

The study covers the work performed under Project No. 335-RC-TUR-5145, entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'', and is an initial part of a program for estimating runoff from Central Anatolian watersheds. The study presented herein consists of three main parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for the Guvenc basin; 2) the modification of the SCS model to be applied first to the Guvenc basin and then to other basins of Central Anatolia for predicting surface runoff from gauged and ungauged watersheds; and 3) the use of the environmental isotope technique to define the basin components of the streamflow of the Guvenc basin. 31 refs, figs and tabs

  7. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysms suitable for evaluating endovascular devices for interventional therapy. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, then embolized with a detached balloon at its origin. DSA examinations were performed 1, 2 and 3 d after the procedure. Results: Aneurysm models were created successfully in all six dogs, with the mean width and height of the aneurysms decreasing over 3 days. Conclusions: On DSA images this canine aneurysm model matches the size and shape of human cerebral bifurcation saccular aneurysms and is suitable for exploring endovascular devices for aneurysm therapy. The procedure is quick, reliable and reproducible. (authors)

  8. Model of future officers' availability to the management physical training

    Directory of Open Access Journals (Sweden)

    Olkhovy O.M.

    2012-03-01

    Full Text Available The purpose of this work is to create a model of a graduating student's readiness to handle the official duties of guiding, organizing and conducting physical training in the course of military-professional activity. More than 40 sources were analysed and 21 experts completed a questionnaire. To introduce the model into the system of students' physical training, its basic constituents were identified: theoretical-methodical readiness; functional-physical readiness; organizational-administrative readiness. It was determined that the readiness of future officers for military-professional activity presupposes assessing the level of formation of motor abilities and the development of general physical qualities.

  9. Comparison Study on Low Energy Physics Model of GEANT4

    International Nuclear Information System (INIS)

    Park, So Hyun; Jung, Won Gyun; Suh, Tae Suk

    2010-01-01

    The Geant4 simulation toolkit provides improved or renewed physics models with each version. The latest release, Geant4.9.3, applies the Livermore data library and a renewed physics model to low energy electromagnetic physics, and improves the physics factors through modified code. In this study, stopping power and CSDA (Continuous Slowing Down Approximation) range data for electrons and other particles were acquired in various materials and compared with NIST (National Institute of Standards and Technology) data. Through this comparison between Geant4 simulation and NIST data, the improvement of the low energy electromagnetic physics model in Geant4.9.3 over Geant4.9.2 was evaluated.

  10. New Physical and Mathematical Model of Radiation Heat Transmission Inside Circular Furnace

    Directory of Open Access Journals (Sweden)

    V. I. Timoshpolsky

    2004-01-01

    Full Text Available Methods of solving problems concerning heat transmission by radiation are considered in the paper. The paper shows disadvantages of the existing techniques. A physical and mathematical model of a conjugate heat exchange has been developed to eliminate the above disadvantages.

  11. A Model of the Creative Process Based on Quantum Physics and Vedic Science.

    Science.gov (United States)

    Rose, Laura Hall

    1988-01-01

    Using tenets from Vedic science and quantum physics, this model of the creative process suggests that the unified field of creation is pure consciousness, and that the development of the creative process within individuals mirrors the creative process within the universe. Rational and supra-rational creative thinking techniques are also described.…

  12. Physical startup tests for VVER-1200 of Novovoronezh NPP. Advanced technique and some results

    Energy Technology Data Exchange (ETDEWEB)

    Afanasiev, Dmitry A.; Kraynov, Yury A.; Pinegin, Anatoly A.; Tsyganov, Sergey V. [National Research Centre, Moscow (Russian Federation). Kurchatov Inst.

    2017-09-15

    The intention of the startup physics tests was to confirm the design characteristics of the core loading and their compliance with safety analysis preconditions. The program of startup tests for the lead unit is usually composed in such a way that as many neutron-physical characteristics as possible can be studied in the safest condition of zero power. State-of-the-art safety analysis includes computer codes that use three-dimensional neutron kinetics and thermohydraulics models. The substantiation, validation and verification of such models require reactor experiments that implement spatially distributed transients. We proceeded from these considerations when composing the hot zero power physical startup program for the new VVER-1200 unit of Novovoronezh NPP. Several tests unconventional for VVER were developed for that program, including measurement of the worth of each control rod group and of the worth of a single rod from the inserted groups - a test that, in a sense, models a rod ejection event.

  13. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  14. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique differs from the usual approach in that the input is arbitrary: the user is supplied with a set of variables and specifies which of them are input (unchanging); the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner that allows easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code.
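
    The arbitrary-input idea lends itself to a small sketch: declare any subset of variables as fixed inputs and solve the remaining equations for the rest. The interface and the Gauss-Newton minimizer below are illustrative assumptions, not the actual ETFOD code:

```python
import numpy as np

def solve_by_minimization(equations, variables, fixed, guess,
                          tol=1e-10, max_iter=50):
    """Hypothetical ETFOD-style interface: `equations` maps a dict of all
    variables to an array of residuals; `fixed` names the variables the
    user declares as input, and the rest are solved for by Gauss-Newton
    minimization of the residual norm."""
    free = [v for v in variables if v not in fixed]
    x = np.array([guess[v] for v in free], dtype=float)

    def residual(xv):
        vals = dict(fixed)
        vals.update(zip(free, xv))
        return np.asarray(equations(vals), dtype=float)

    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        # numerical Jacobian of the residuals w.r.t. the free variables
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            h = 1e-7 * max(1.0, abs(x[j]))
            xp = x.copy(); xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        x -= np.linalg.lstsq(J, r, rcond=None)[0]
    return dict(fixed, **dict(zip(free, x)))
```

    Swapping which names appear in `fixed` changes which variables are input and which are output, without rewriting the equations themselves.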

  15. [Preparation of simulate craniocerebral models via three dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    Three dimensional (3D) printing was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. The 3D-printed craniocerebral models allowed neurosurgeons to prepare complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, helping to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.

  16. Physical modeling of joule heated ceramic glass melters for high level waste immobilization

    International Nuclear Information System (INIS)

    Quigley, M.S.; Kreid, D.K.

    1979-03-01

    This study developed physical modeling techniques and apparatus suitable for experimental analysis of joule heated ceramic glass melters designed for immobilizing high level waste. The physical modeling experiments can give qualitative insight into the design and operation of prototype furnaces and, if properly verified with prototype data, the physical models could be used for quantitative analysis of specific furnaces. Based on evaluation of the results of this study, it is recommended that the following actions and investigations be undertaken: It was not shown that the isothermal boundary conditions imposed by this study established prototypic heat losses through the boundaries of the model. Prototype wall temperatures and heat fluxes should be measured to provide better verification of the accuracy of the physical model. The VECTRA computer code is a two-dimensional analytical model. Physical model runs which are isothermal in the Y direction should be made to provide two-dimensional data for more direct comparison to the VECTRA predictions. The ability of the physical model to accurately predict prototype operating conditions should be proven before the model can become a reliable design tool. This will require significantly more prototype operating and glass property data than were available at the time of this study. A complete set of measurements covering power input, heat balances, wall temperatures, glass temperatures, and glass properties should be attempted for at least one prototype run. The information could be used to verify both physical and analytical models. Particle settling and/or sludge buildup should be studied directly by observing the accumulation of the appropriate size and density particles during feeding in the physical model. New designs should be formulated and modeled to minimize the potential problems with melter operation identified by this study

  17. The use of production management techniques in the construction of large scale physics detectors

    International Nuclear Information System (INIS)

    Bazan, A.; Chevenier, G.; Estrella, F.

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. Faced with similar problems, engineers in industry employ so-called Product Data Management (PDM) systems to control access to documented versions of designs, and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS, which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction.

  18. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  19. Physical and mathematical models of communication systems

    International Nuclear Information System (INIS)

    Verkhovskaya, E.P.; Yavorskij, V.V.

    2006-01-01

    Theoretical relations connecting the resources of communication systems with the characteristics of their channels are derived. A model of such systems is considered from the standpoint of quasi-classical thermodynamics. (author)

  20. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.
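
    The photon-transport core of such a Monte Carlo model can be sketched in one dimension: photons take exponentially distributed steps and, at each interaction, are either absorbed (a candidate fluorescence excitation site) or scattered. The isotropic scattering and the coefficients `mu_a`/`mu_s` below are illustrative assumptions, not the authors' layered skin model:

```python
import numpy as np

def mc_absorption_depths(n_photons, mu_a, mu_s, rng=None):
    """Minimal 1D Monte Carlo photon transport: exponential free paths
    with total attenuation mu_t = mu_a + mu_s; at each interaction the
    photon is absorbed with probability mu_a / mu_t, otherwise scattered
    isotropically. Returns the depths at which photons are absorbed."""
    rng = np.random.default_rng(rng)
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        z, cos_theta = 0.0, 1.0      # start at the surface, heading down
        while True:
            z += cos_theta * rng.exponential(1.0 / mu_t)
            if z < 0.0:              # escaped back through the surface
                break
            if rng.random() < mu_a / mu_t:
                depths.append(z)     # absorbed: excitation site recorded
                break
            cos_theta = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return np.array(depths)
```

    A histogram of the returned depths approximates where excitation (and hence fluorescence emission) is deposited; a layered model would simply switch `mu_a` and `mu_s` with depth.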

  1. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
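
    The Latin Hypercube Sampling step described above can be sketched generically: each parameter dimension is divided into equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently across dimensions, which is why the model count need not grow as parameters are added. This is a minimal generic LHS, not the GRAPE implementation:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw Latin Hypercube samples: exactly one sample falls in each of
    n_samples equal-probability strata per dimension, with the strata
    paired randomly across dimensions."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)   # shape (n_dims, 2)
    n_dims = bounds.shape[0]
    # one uniform point inside each of the n_samples strata on [0, 1)
    u = (rng.random((n_samples, n_dims))
         + np.arange(n_samples)[:, None]) / n_samples
    # independently shuffle the strata in each dimension
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    # scale unit-cube samples to the parameter bounds
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
```

    Each row is one parameter sample (one filter model); a resample would call the function again around the current parameter estimate.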

  2. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  3. Nuclear measurements, techniques and instrumentation industrial applications plasma physics and nuclear fusion. 1980-1994. International Atomic Energy Agency publications

    International Nuclear Information System (INIS)

    1995-04-01

    This catalogue lists all sales publications of the International Atomic Energy Agency dealing with Nuclear Measurements, Techniques and Instrumentation, with Industrial Applications (of Nuclear Physics and Engineering), and with Plasma Physics and Nuclear Fusion, issued during the period 1980-1994. Most publications are in English. Proceedings of conferences, symposia, and panels of experts may contain some papers in other languages (French, Russian, or Spanish), but all papers have abstracts in English. Price quotes are in Austrian Schillings, do not include local taxes, and are subject to change without notice. Contents cover the three main categories of (i) Nuclear Measurements, Techniques and Instrumentation (Physics, Chemistry, Dosimetry Techniques, Nuclear Analytical Techniques, Research Reactors and Particle Accelerator Applications, Nuclear Data); (ii) Industrial Applications (Radiation Processing, Radiometry, Tracers); and (iii) Plasma Physics and Nuclear Fusion

  4. Nuclear measurements, techniques and instrumentation industrial applications plasma physics and nuclear fusion, 1980-1993. International Atomic Energy Agency publications

    International Nuclear Information System (INIS)

    1994-01-01

    This catalogue lists all sales publications of the International Atomic Energy Agency dealing with Nuclear Measurements, Techniques and Instrumentation, with Industrial Applications (of Nuclear Physics and Engineering), and with Plasma Physics and Nuclear Fusion, issued during the period 1980-1993. Most publications are in English. Proceedings of conferences, symposia, and panels of experts may contain some papers in other languages (French, Russian, or Spanish), but all papers have abstracts in English. Price quotes are in Austrian Schillings, do not include local taxes, and are subject to change without notice. Contents cover the three main categories of (i) Nuclear Measurements, Techniques and Instrumentation (Physics, Chemistry, Dosimetry Techniques, Nuclear Analytical Techniques, Research Reactors and Particle Accelerator Applications, Nuclear Data); (ii) Industrial Applications (Radiation Processing, Radiometry, Tracers); and (iii) Plasma Physics and Nuclear Fusion

  5. Searching for Physics Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Catterall, Simon [Syracuse Univ., NY (United States)

    2016-12-01

    This final report summarizes the work carried out by the Syracuse component of a multi-institutional SciDAC grant led by USQCD. This grant supported software development for theoretical high energy physics. The Syracuse component specifically targeted the development of code for the numerical simulation of N=4 super Yang-Mills theory. The report includes this work and a summary of the results achieved in exploring the structure of this theory. It also describes the personnel - students and a postdoc - who were directly or indirectly involved in the project. A list of publications is also included.

  6. A physical model of the evaporating meniscus

    International Nuclear Information System (INIS)

    Mirzamoghadam, A.; Catton, I.

    1985-01-01

    Transport phenomena associated with the heating of a stationary fluid near saturation by an inclined, partially submerged copper plate were studied analytically. Under steady state evaporation, the meniscus profile was derived using an appropriate liquid film velocity and temperature distribution in an integral approach. The solution was then back-substituted in order to identify the regions of influence of various physical phenomena, given the fluid properties, wall superheat and plate tilt. The degree of superheat and the wall tilt were seen to control instability in the meniscus. This instability, connected to the experimental observation of meniscus oscillation, was credited to contributions from liquid inertia and Marangoni convection.

  7. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1978-01-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  8. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1979-02-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  9. Learning about physical parameters: the importance of model discrepancy

    International Nuclear Information System (INIS)

    Brynjarsdóttir, Jenný; O'Hagan, Anthony

    2014-01-01

    Science-based simulation models are widely used to predict the behavior of complex physical systems. It is also common to use observations of the physical system to solve the inverse problem, that is, to learn about the values of parameters within the model, a process which is often called calibration. The main goal of calibration is usually to improve the predictive performance of the simulator, but the values of the parameters in the model may also be of intrinsic scientific interest in their own right. In order to make appropriate use of observations of the physical system it is important to recognize model discrepancy, the difference between reality and the simulator output. We illustrate through a simple example that an analysis that does not account for model discrepancy may lead to biased and over-confident parameter estimates and predictions. The challenge with incorporating model discrepancy in statistical inverse problems is that it is confounded with the calibration parameters, a confounding that can only be resolved with meaningful priors. For our simple example, we model the model discrepancy via a Gaussian process and demonstrate that by accounting for model discrepancy our prediction within the range of the data is correct. However, only with realistic priors on the model discrepancy do we uncover the true parameter values. Through theoretical arguments we show that these findings are typical of the general problem of learning about physical parameters and the underlying physical system using science-based mechanistic models. (paper)

  10. Engineered Barrier System: Physical and Chemical Environment Model

    International Nuclear Information System (INIS)

    Jolley, D. M.; Jarek, R.; Mariner, P.

    2004-01-01

    The conceptual and predictive models documented in this Engineered Barrier System: Physical and Chemical Environment Model report describe the evolution of the physical and chemical conditions within the waste emplacement drifts of the repository. The modeling approaches and model output data will be used in the total system performance assessment (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. These models evaluate the range of potential water compositions within the emplacement drifts, resulting from the interaction of introduced materials and minerals in dust with water seeping into the drifts and with aqueous solutions forming by deliquescence of dust (as influenced by atmospheric conditions), and from thermal-hydrological-chemical (THC) processes in the drift. These models also consider the uncertainty and variability in water chemistry inside the drift and the compositions of introduced materials within the drift. This report develops and documents a set of process- and abstraction-level models that constitute the engineered barrier system: physical and chemical environment model. Where possible, these models use information directly from other process model reports as input, which promotes integration among process models used for total system performance assessment. Specific tasks and activities of modeling the physical and chemical environment are included in the technical work plan ''Technical Work Plan for: In-Drift Geochemistry Modeling'' (BSC 2004 [DIRS 166519]). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system analysis model reports

  11. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
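
    The per-iteration level-set update can be illustrated with a minimal explicit scheme for phi_t + v |grad phi| = 0 on a uniform grid, using the standard upwind gradient. The velocity here is a plain stand-in for the one the authors derive from the shape derivative, and the scheme is a generic sketch rather than their reservoir implementation:

```python
import numpy as np

def level_set_step(phi, velocity, dt):
    """One explicit update of phi_t + v |grad phi| = 0 on a uniform 2D
    grid (unit spacing), with the upwind gradient magnitude chosen
    according to the sign of the velocity."""
    # forward and backward differences in each direction
    dx_f = np.roll(phi, -1, axis=0) - phi
    dx_b = phi - np.roll(phi, 1, axis=0)
    dy_f = np.roll(phi, -1, axis=1) - phi
    dy_b = phi - np.roll(phi, 1, axis=1)
    # upwind gradient magnitudes for outward (v>0) and inward (v<0) motion
    grad_plus = np.sqrt(np.maximum(dx_b, 0)**2 + np.minimum(dx_f, 0)**2
                        + np.maximum(dy_b, 0)**2 + np.minimum(dy_f, 0)**2)
    grad_minus = np.sqrt(np.minimum(dx_b, 0)**2 + np.maximum(dx_f, 0)**2
                         + np.minimum(dy_b, 0)**2 + np.maximum(dy_f, 0)**2)
    return phi - dt * (np.maximum(velocity, 0) * grad_plus
                       + np.minimum(velocity, 0) * grad_minus)
```

    With a positive speed the region {phi < 0} (the current facies estimate) expands; a sign-varying velocity field built from the shape derivative grows or shrinks the facies locally so that the data misfit decreases.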

  12. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  13. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  14. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile
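    The core idea, varying an additional effective diffusivity until the predicted profile matches the measured one, can be sketched with a toy steady-state slab model. This is not the paleoclassical model, FACETS::Core, or DAKOTA; the model and all numbers below are illustrative assumptions.

```python
def profile(D, flux=1.0, n_edge=0.5, L=1.0, m=21):
    """Steady-state slab with constant flux: -D dn/dx = flux, n(L) = n_edge."""
    return [n_edge + flux * (L - i * L / (m - 1)) / D for i in range(m)]

def misfit(D_add, target, D_model=0.3):
    """Sum-of-squares mismatch between the model with extra diffusivity and the target."""
    pred = profile(D_model + D_add)
    return sum((p - t) ** 2 for p, t in zip(pred, target))

# "Experimental" profile generated with a true total diffusivity of 1.0, so the
# optimal additional diffusivity on top of the model's D_model = 0.3 is 0.7.
target = profile(1.0)
best = min((misfit(d / 100, target), d / 100) for d in range(1, 201))
print(best[1])  # recovered additional diffusivity
```

    The paper's workflow replaces the brute-force scan with DAKOTA's optimization and the slab formula with time-dependent FACETS::Core simulations, but the additive-diffusivity matching principle is the same.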

  15. Physical evaluations of Co-Cr-Mo parts processed using different additive manufacturing techniques

    Science.gov (United States)

    Ghani, Saiful Anwar Che; Mohamed, Siti Rohaida; Harun, Wan Sharuzi Wan; Noar, Nor Aida Zuraimi Md

    2017-12-01

    In recent years, additive manufacturing with a high degree of design customization has become an important fabrication technique in the aerospace and medical fields. Despite the ability of the process to produce complex components with highly controlled geometrical features, maintaining part accuracy, fabricating fully dense and fully functional components, and avoiding inferior surface quality remain the major obstacles to producing final parts by additive manufacturing for any selected application. This study aims to evaluate the physical properties of cobalt chrome molybdenum (Co-Cr-Mo) alloy parts fabricated by different additive manufacturing techniques. Fully dense Co-Cr-Mo parts were produced by Selective Laser Melting (SLM) and Direct Metal Laser Sintering (DMLS) with default process parameters. The density and relative density of the samples were calculated using Archimedes' principle, while the surface roughness on the top and side surfaces was measured using a surface profiler. The roughness average (Ra) of the top surface is 3.4 µm for SLM parts and 2.83 µm for DMLS parts; the Ra of the side surfaces is 4.57 µm for SLM parts and 9.0 µm for DMLS parts. The higher Ra values on the side surfaces compared with the top faces for both manufacturing techniques were due to the balling effect phenomenon. The relative density of the Co-Cr-Mo parts produced by both SLM and DMLS is 99.3%. Higher energy density contributed to the higher density of the samples produced by the SLM and DMLS processes. The findings of this work demonstrate that SLM and DMLS with default process parameters effectively produced fully dense Co-Cr-Mo parts with high density, good geometrical accuracy and good surface finish. Although both processes yielded components of high density, the current findings show that the SLM technique produces components with a smoother surface than DMLS.
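    The Archimedes measurement used here reduces to a standard two-weighing formula. A minimal sketch follows, with illustrative masses and an assumed nominal Co-Cr-Mo density; the paper's raw weighings are not given in the abstract.

```python
def archimedes_density(m_air, m_fluid, rho_fluid=0.9982):
    """Sample density (g/cm^3) from weighings in air and immersed in a fluid.

    rho = m_air * rho_fluid / (m_air - m_fluid), with rho_fluid the density
    of the immersion fluid (here water near 20 C).
    """
    return m_air * rho_fluid / (m_air - m_fluid)

# Illustrative weighings in grams (hypothetical numbers).
rho_theory = 8.3                       # g/cm^3, assumed nominal Co-Cr-Mo density
rho = archimedes_density(10.000, 8.789)
rel = 100.0 * rho / rho_theory         # relative density in percent
print(round(rho, 3), round(rel, 1))
```

    Relative density is then simply the measured density as a percentage of the alloy's theoretical density, which is how the 99.3% figure above would be obtained.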

  16. Towards LHC physics with nonlocal Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Tirthabir, E-mail: tbiswas@loyno.edu [Department of Physics, Loyola University, 6363 St. Charles Avenue, Box 92, New Orleans, LA 70118 (United States); Okada, Nobuchika, E-mail: okadan@ua.edu [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487-0324 (United States)

    2015-09-15

    We take a few steps towards constructing a string-inspired nonlocal extension of the Standard Model. We start by illustrating how quantum loop calculations can be performed in nonlocal scalar field theory. In particular, we show the potential to address the hierarchy problem in the nonlocal framework. Next, we construct a nonlocal abelian gauge model and derive modifications of the gauge interaction vertex and field propagators. We apply the modifications to a toy version of the nonlocal Standard Model and investigate collider phenomenology. We find the lower bound on the scale of nonlocality from the 8 TeV LHC data to be 2.5–3 TeV.

  17. Darwin model in plasma physics revisited

    International Nuclear Information System (INIS)

    Xie, Huasheng; Zhu, Jia; Ma, Zhiwei

    2014-01-01

    Dispersion relations from the Darwin (a.k.a., magnetoinductive or magnetostatic) model are given and compared with those of the full electromagnetic model. Analytical and numerical solutions show that the errors from the Darwin approximation can be large even if phase velocity for a low-frequency wave is close to or larger than the speed of light. Besides missing two wave branches associated mainly with the electron dynamics, the coupling branch of the electrons and ions in the Darwin model is modified to become a new artificial branch that incorrectly represents the coupling dynamics of the electrons and ions. (paper)

  18. Principles of Physical Modelling of Unsaturated Soils

    OpenAIRE

    CAICEDO, Bernardo; THOREL, Luc

    2014-01-01

    Centrifuge modelling has been widely used to simulate the performance of a variety of geotechnical works, most of them focusing on saturated clays or dry sands. On the other hand, the performance of some geotechnical works depends on the behaviour of shallow layers in the soil deposit where it is frequently unsaturated. Centrifuge modelling could be a powerful tool to study the performance of shallow geotechnical works. However all the experimental complexities related to unsaturated soils, w...

  19. A physics department's role in preparing physics teachers: The Colorado learning assistant model

    Science.gov (United States)

    Otero, Valerie; Pollock, Steven; Finkelstein, Noah

    2010-11-01

    In response to substantial evidence that many U.S. students are inadequately prepared in science and mathematics, we have developed an effective and adaptable model that improves the education of all students in introductory physics and increases the numbers of talented physics majors becoming certified to teach physics. We report on the Colorado Learning Assistant model and discuss its effectiveness at a large research university. Since its inception in 2003, we have increased the pool of well-qualified K-12 physics teachers by a factor of approximately three, engaged scientists significantly in the recruiting and preparation of future teachers, and improved the introductory physics sequence so that students' learning gains are typically double the traditional average.

  20. Searches for physics beyond the Standard Model at the Tevatron

    Indian Academy of Sciences (India)

    Beyond Standard Model Physics, Volume 79, Issue 4, October 2012, pp. 703-717 ... a centre-of-mass energy of 1.96 TeV that the CDF and D0 Collaborations have scrutinized looking for new physics in a wide range of final states.

  1. The Dawn of physics beyond the standard model

    CERN Multimedia

    Kane, Gordon

    2003-01-01

    "The Standard Model of particle physics is at a pivotal moment in its history: it is both at the height of its success and on the verge of being surpassed [...] A new era in particle physics could soon be heralded by the detection of supersymmetric particles at the Tevatron collider at Fermi National Accelerator Laboratory in Batavia, Ill." (8 pages)

  2. Simple suggestions for including vertical physics in oil spill models

    International Nuclear Information System (INIS)

    D'Asaro, Eric; University of Washington, Seattle, WA

    2001-01-01

    Current models of oil spills include no vertical physics. They neglect the effect of vertical water motions on the transport and concentration of floating oil. Some simple ways to introduce vertical physics are suggested here. The major suggestion is to routinely measure the density stratification of the upper ocean during oil spills in order to develop a database on the effect of stratification. (Author)

  3. The Standard Model and Higgs physics

    Science.gov (United States)

    Torassa, Ezio

    2018-05-01

    The Standard Model is a consistent and computable theory that successfully describes the elementary particle interactions. The strong, electromagnetic and weak interactions have been included in the theory by exploiting the relation between group symmetries and group generators, in order to smartly introduce the force carriers. The group properties lead to constraints between boson masses and couplings. All the measurements performed at the LEP, Tevatron, LHC and other accelerators proved the consistency of the Standard Model. A key element of the theory is the Higgs field, which, together with the spontaneous symmetry breaking, gives mass to the vector bosons and to the fermions. Unlike the case of the vector bosons, the theory does not provide a prediction for the Higgs boson mass. The LEP experiments, while providing very precise measurements of the Standard Model theory, searched for evidence of the Higgs boson until the year 2000. The discovery of the top quark in 1994 by the Tevatron experiments and of the Higgs boson in 2012 by the LHC experiments were considered the completion of the fundamental particle list of the Standard Model theory. Nevertheless, neutrino oscillations, dark matter and the baryon asymmetry of the Universe are evidence that we need a new, extended model. The Standard Model also has some unattractive theoretical aspects, such as the divergent loop corrections to the Higgs boson mass and the very small Yukawa couplings needed to describe the neutrino masses. For all these reasons, the hunt for discrepancies between the Standard Model and data is still going on, with the aim of finally describing the new extended theory.

  4. Algebraic fermion models and nuclear structure physics

    International Nuclear Information System (INIS)

    Troltenier, Dirk; Blokhin, Andrey; Draayer, Jerry P.; Rompf, Dirk; Hirsch, Jorge G.

    1996-01-01

    Recent experimental and theoretical developments are generating renewed interest in the nuclear SU(3) shell model, and this extends to the symplectic model, with its Sp(6,R) symmetry, which is a natural multi-ħω extension of the SU(3) theory. First and foremost, an understanding of how the dynamics of a quantum rotor is embedded in the shell model has established it as the model of choice for describing strongly deformed systems. Second, the symplectic model extension of the 0ħω theory can be used to probe additional degrees of freedom, like core polarization and vorticity modes that play a key role in providing a full description of quadrupole collectivity. Third, the discovery and understanding of pseudo-spin has allowed for an extension of the theory from light (A≤40) to heavy (A≥100) nuclei. Fourth, a user-friendly computer code for calculating reduced matrix elements of operators that couple SU(3) representations is now available. And finally, since the theory is designed to cope with deformation in a natural way, microscopic features of deformed systems can be probed; for example, the theory is now being employed to study double beta decay and thereby serves to probe the validity of the standard model of particles and their interactions. A subset of these topics will be considered in this course. Examples cited include: a consideration of the origin of pseudo-spin symmetry; an SU(3)-based interpretation of the coupled-rotor model; early results of double beta decay studies; and some recent developments on the pseudo-SU(3) theory. Nothing will be said about other fermion-based theories; students are referred to reviews in the literature for reports on developments in these related areas.

  5. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique: detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell.

  6. Analysis of EBR-II neutron and photon physics by multidimensional transport-theory techniques

    International Nuclear Information System (INIS)

    Jacqmin, R.P.; Finck, P.J.; Palmiotti, G.

    1994-01-01

    This paper contains a review of the challenges specific to the EBR-II core physics, a description of the methods and techniques which have been developed for addressing these challenges, and the results of some validation studies relative to power-distribution calculations. Numerical tests have shown that the VARIANT nodal code yields eigenvalue and power predictions as accurate as finite difference and discrete ordinates transport codes, at a small fraction of the cost. Comparisons with continuous-energy Monte Carlo results have proven that the errors introduced by the use of the diffusion-theory approximation in the collapsing procedure to obtain broad-group cross sections, kerma factors, and photon-production matrices, have a small impact on the EBR-II neutron/photon power distribution

  7. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique

  8. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVM models that accept five input parameters have been developed. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
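    The 5-fold cross-validation scheme mentioned above can be sketched in a few lines. The fold construction below is a generic illustration, not the authors' implementation; the sample size and seed are arbitrary.

```python
import random

def k_fold_indices(n, k=5, seed=7):
    """Split indices 0..n-1 into k shuffled, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(100)
# Each fold serves once as the validation set; the remaining folds form the
# training set, so every sample is validated exactly once across the k runs.
for test_fold in folds:
    train = [j for f in folds if f is not test_fold for j in f]
    assert len(train) + len(test_fold) == 100
print(len(folds), [len(f) for f in folds])  # 5 [20, 20, 20, 20, 20]
```

    Averaging the validation error over the k runs is what suppresses the "bad local behaviors" the abstract refers to: no single unlucky train/validation split dominates the model assessment.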

  9. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently desired in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography is recognized as a significant deterrent for women considering this examination. Thus, quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to aid understanding of the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result showed a six-layer hierarchy whose top node was the explanation of the entire mammography procedure. Male technologists were identified as a negative factor. Factors concerned with explanation were at the upper nodes. Particular attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.
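    The ISM step, building a reachability matrix from pairwise influence judgments and then peeling off hierarchy levels, can be sketched as follows. The four-factor influence graph is a made-up illustration, not the study's 14 mammography factors.

```python
def ism_levels(adj):
    """Peel off ISM hierarchy levels from a binary influence matrix.

    Builds the reachability matrix with Warshall's algorithm, then repeatedly
    extracts the elements whose reachability set (within the remaining
    elements) is contained in their antecedent set: the classic ISM level
    partition.
    """
    n = len(adj)
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    remaining, levels = set(range(n)), []
    while remaining:
        level = sorted(i for i in remaining
                       if all(reach[j][i] for j in remaining if reach[i][j]))
        levels.append(level)
        remaining -= set(level)
    return levels

# Toy influence graph (hypothetical): factor 0 influences 1,
# and factors 1 and 2 both influence 3.
adj = [[0, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(ism_levels(adj))  # top level first: [[3], [1, 2], [0]]
```

    The first extracted level is the most dependent element (the top of the ISM digraph), which in the study corresponds to the explanation of the entire procedure.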

  10. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in everyday life experiences. Although there are methods available for achieving this goal, teaching is more effective if cultural flavor is also added to the techniques, so that the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students use science as a tool to solve problems they face, and consequently they would feel that science is a part of their lives. In this paper, it is described how simple models and cultural communication techniques can be used effectively to demonstrate important scientific concepts to students of secondary and higher secondary levels, using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  11. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection, yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004, Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology, as, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, with implications for dark matter/energy and the acceleration of the universe, and implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. In this model, changes in the density of these fields are related to the acceleration of matter within an object; these density changes in turn change how the object couples to the surrounding density fields. Thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model.

  12. Ultrasonic techniques for measuring physical properties of fluids in harsh environments

    Science.gov (United States)

    Pantea, Cristian

    Ultrasonic-based measurement techniques, either in the time domain or in the frequency domain, include a wide range of experimental methods for investigating physical properties of materials. This discussion is specifically focused on ultrasonic methods and instrumentation development for the determination of liquid properties at conditions typically found in subsurface environments (in the U.S., more than 80% of total energy needs are provided by subsurface energy sources). Such sensors require materials that can withstand harsh conditions of high pressure, high temperature and corrosiveness. These include the piezoelectric material, electrically conductive adhesives, sensor housings/enclosures, and the signal carrying cables, to name a few. A complete sensor package was developed for operation at high temperatures and pressures characteristic to geothermal/oil-industry reservoirs. This package is designed to provide real-time, simultaneous measurements of multiple physical parameters, such as temperature, pressure, salinity and sound speed. The basic principle for this sensor's operation is an ultrasonic frequency domain technique, combined with transducer resonance tracking. This multipurpose acoustic sensor can be used at depths of several thousand meters, temperatures up to 250 °C, and in a very corrosive environment. In the context of high precision measurement of sound speed, the determination of acoustic nonlinearity of liquids will also be discussed, using two different approaches: (i) the thermodynamic method, in which precise and accurate frequency domain sound speed measurements are performed at high pressure and high temperature, and (ii) a modified finite amplitude method, requiring time domain measurements of the second harmonic at room temperature. Efforts toward the development of an acoustic source of collimated low-frequency (10-150 kHz) beam, with applications in imaging, will also be presented.
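    The frequency-domain technique described here tracks cavity resonances; for a simple plane resonator the peaks are spaced by Δf = c/(2L), so the sound speed follows directly from the measured spacing. A minimal sketch with illustrative numbers follows; the actual sensor geometry and calibration are not given in the abstract.

```python
def sound_speed_resonance(f_k, f_next, path_len):
    """Sound speed from adjacent cavity resonances: c = 2 * L * (f_{k+1} - f_k).

    f_k, f_next: two adjacent resonance frequencies (Hz),
    path_len:    acoustic path length L of the fluid cavity (m).
    """
    return 2.0 * path_len * (f_next - f_k)

# Illustrative values (hypothetical): a 10 mm fluid path with resonance peaks
# spaced 74.1 kHz apart gives a water-like sound speed near 1482 m/s.
c = sound_speed_resonance(500e3, 574.1e3, 0.010)
print(c)
```

    In practice the resonance frequencies shift with temperature, pressure and salinity, which is why the sensor package tracks the transducer resonance and measures those parameters simultaneously.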

  13. Simple mathematical models of symmetry breaking. Application to particle physics

    International Nuclear Information System (INIS)

    Michel, L.

    1976-01-01

    Some mathematical facts relevant to symmetry breaking are presented. A first mathematical model deals with the smooth action of compact Lie groups on real manifolds, a second model considers linear action of any group on real or complex finite dimensional vector spaces. Application of the mathematical models to particle physics is considered. (B.R.H.)

  14. Standard model Higgs physics at colliders

    International Nuclear Information System (INIS)

    Rosca, A.

    2007-01-01

    In this report we briefly review the experimental status and prospects to verify the Higgs mechanism of spontaneous symmetry breaking. The focus is on the most relevant aspects of the phenomenology of the Standard Model Higgs boson at current (Tevatron) and future (Large Hadron Collider, LHC and International Linear Collider, ILC) particle colliders. We review the Standard Model searches: searches at the Tevatron, the program planned at the LHC and prospects at the ILC. Emphasis is put on what follows after a candidate discovery at the LHC: the various measurements which are necessary to precisely determine what the properties of this Higgs candidate are. (author)

  15. Physics-Based Pneumatic Hammer Instability Model, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to develop a physics-based pneumatic hammer instability model that accurately predicts the stability of hydrostatic bearings...

  16. Overview of the Higgs and Standard Model physics at ATLAS

    CERN Document Server

    Vazquez Schroeder, Tamara; The ATLAS collaboration

    2018-01-01

    This talk presents selected aspects of recent physics results from the ATLAS collaboration in the Standard Model and Higgs sectors, with a focus on the recent evidence for the associated production of the Higgs boson and a top quark pair.

  17. Can plane wave modes be physical modes in soliton models?

    International Nuclear Information System (INIS)

    Aldabe, F.

    1995-08-01

    I show that plane waves may not be used as asymptotic states in soliton models because they describe unphysical states. When the asymptotic states are taken to be physical ones, there is no T-matrix of O(1). (author). 9 refs

  18. Physical characterization and kinetic modelling of matrix tablets of ...

    African Journals Online (AJOL)

    release mechanisms were characterized by kinetic modeling. Analytical ... findings demonstrate that both the desired physical characteristics and drug release profiles were obtained ..... on the compression, mechanical, and release properties.

  19. Standard model status (in search of ''new physics'')

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1993-03-01

    A perspective on successes and shortcomings of the standard model is given. The complementarity between direct high energy probes of new physics and lower energy searches via precision measurements and rare reactions is described. Several illustrative examples are discussed

  20. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  1. Data re-arranging techniques leading to proper variable selections in high energy physics

    Science.gov (United States)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called ’data re-arranging’ and it enables variable selection by means of the classical statistical homogeneity tests such as Kolmogorov-Smirnov, Anderson-Darling, or Pearson’s chi-square divergence test. P-values of our variants of homogeneity tests are investigated, and empirical verification on 46-dimensional high energy particle physics data sets is accomplished under the newly proposed (equiprobable) quantile binning. In particular, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the particle accelerator Tevatron in Fermilab at the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.
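    A stripped-down version of the binned homogeneity test, equiprobable bins from pooled quantiles followed by a two-sample Pearson chi-square, can be sketched as follows. This uses unweighted samples and Gaussian toy data, unlike the weighted Tevatron sets in the paper.

```python
import random

def quantile_edges(sample, bins):
    """Equiprobable bin edges from a sample's empirical quantiles."""
    s = sorted(sample)
    return [s[int(k * len(s) / bins)] for k in range(1, bins)]

def chi2_homogeneity(mc, data, bins=10):
    """Pearson chi-square between two unweighted samples, binned so that each
    bin holds roughly 1/bins of the pooled sample (a simplification of the
    paper's weighted setting)."""
    edges = quantile_edges(mc + data, bins)
    def hist(x):
        h = [0] * bins
        for v in x:
            h[sum(v >= e for e in edges)] += 1
        return h
    h1, h2 = hist(mc), hist(data)
    n1, n2 = len(mc), len(data)
    stat = 0.0
    for o1, o2 in zip(h1, h2):
        if o1 + o2:
            stat += (o1 * n2 - o2 * n1) ** 2 / (n1 * n2 * (o1 + o2))
    return stat

# Toy check: two samples from the same distribution should score much lower
# than a pair where one sample is shifted.
rng = random.Random(0)
a = [rng.gauss(0.0, 1.0) for _ in range(2000)]
b = [rng.gauss(0.0, 1.0) for _ in range(2000)]
c = [rng.gauss(0.5, 1.0) for _ in range(2000)]
print(chi2_homogeneity(a, b) < chi2_homogeneity(a, c))
```

    Equiprobable quantile binning keeps the expected count per bin roughly constant, which is what makes the chi-square statistic well behaved even in sparsely populated tails.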

  2. Assessment of Robotic Patient Simulators for Training in Manual Physical Therapy Examination Techniques

    Science.gov (United States)

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard. PMID:25923719

  3. Assessment of robotic patient simulators for training in manual physical therapy examination techniques.

    Directory of Open Access Journals (Sweden)

    Shun Ishikawa

    Full Text Available Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.

  4. Assessment of robotic patient simulators for training in manual physical therapy examination techniques.

    Science.gov (United States)

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.

  5. Physically-Based Interactive Flow Visualization Based on Schlieren and Interferometry Experimental Techniques

    KAUST Repository

    Brownlee, C.

    2011-11-01

    Understanding fluid flow is a difficult problem and of increasing importance as computational fluid dynamics (CFD) produces an abundance of simulation data. Experimental flow analysis has employed techniques such as shadowgraph, interferometry, and schlieren imaging for centuries, which allow empirical observation of inhomogeneous flows. Shadowgraphs provide an intuitive way of looking at small changes in flow dynamics through caustic effects while schlieren cutoffs introduce an intensity gradation for observing large scale directional changes in the flow. Interferometry tracks changes in phase-shift resulting in bands appearing. The combination of these shading effects provides an informative global analysis of overall fluid flow. Computational solutions for these methods have proven too complex until recently due to the fundamental physical interaction of light refracting through the flow field. In this paper, we introduce a novel method to simulate the refraction of light to generate synthetic shadowgraph, schlieren and interferometry images of time-varying scalar fields derived from computational fluid dynamics data. Our method computes physically accurate schlieren and shadowgraph images at interactive rates by utilizing a combination of GPGPU programming, acceleration methods, and data-dependent probabilistic schlieren cutoffs. Applications of our method to multifield data and custom application-dependent color filter creation are explored. Results comparing this method to previous schlieren approximations are finally presented. © 2011 IEEE.

  6. Model uncertainties in top-quark physics

    CERN Document Server

    Seidel, Markus

    2014-01-01

    The ATLAS and CMS collaborations at the Large Hadron Collider (LHC) are studying the top quark in pp collisions at 7 and 8 TeV. Due to the large integrated luminosity, precision measurements of production cross-sections and properties are often limited by systematic uncertainties. An overview of the modeling uncertainties for simulated events is given in this report.

  7. Introduction to physics beyond the Standard Model

    CERN Document Server

    Giudice, Gian Francesco

    1998-01-01

These lectures will give an introductory review of the main ideas behind the attempts to extend the standard-model description of elementary particle interactions. After analysing the conceptual motivations that lead us to believe in the existence of an underlying fundamental theory, we will discuss the present status of various theoretical constructs: grand unification, supersymmetry and technicolour.

  8. A new cerebral vasospasm model established with endovascular puncture technique

    International Nuclear Information System (INIS)

    Tu Jianfei; Liu Yizhi; Ji Jiansong; Zhao Zhongwei

    2011-01-01

Objective: To investigate a method of establishing cerebral vasospasm (CVS) models in rabbits by using an endovascular puncture technique. Methods: The endovascular puncture procedure was performed in 78 New Zealand white rabbits to produce subarachnoid hemorrhage (SAH). The surviving rabbits were randomly divided into seven groups (3 h, 12 h, 1 d, 2 d, 3 d, 7 d and 14 d), with five rabbits in each group for both the study (SAH) group and the control group. Cerebral CT scanning was carried out in all rabbits both before and after the operation. The inner diameter and wall thickness of both the posterior communicating artery (PcoA) and the basilar artery (BA) were determined after the animals were sacrificed, and the results were analyzed. Results: Of the 78 experimental rabbits, a CVS model was successfully established in 45, including 35 in the SAH group and 10 in the control group. The technical success rate was 57.7%. Twelve hours after the procedure, the inner diameters of the PcoA and BA in the SAH group were decreased by 45.6% and 52.3%, respectively, compared with those in the control group. The vascular narrowing showed biphasic changes: the inner diameter markedly decreased again on the 7th day, when the decrease peaked at 31.2% and 48.6%, respectively. Conclusion: The endovascular puncture technique is an effective method for establishing CVS models in rabbits. The death rate of experimental animals can be decreased if new interventional materials are used and the manipulation is performed carefully. (authors)

  9. A Comparison between Different Meta-Heuristic Techniques in Power Allocation for Physical Layer Security

    Directory of Open Access Journals (Sweden)

    N. Okati

    2017-12-01

Full Text Available Node cooperation can protect wireless networks from eavesdropping by using the physical characteristics of wireless channels rather than cryptographic methods. Allocating the proper amount of power to cooperative nodes is a challenging task. In this paper, we use three cooperative nodes: one as a relay to increase throughput at the destination and two friendly jammers to degrade the eavesdropper's link. For this scenario, the secrecy rate function is a non-linear, non-convex problem, so exact optimization methods can only achieve a suboptimal solution. In this paper, we applied different meta-heuristic optimization techniques: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Bee Algorithm (BA), Tabu Search (TS), Simulated Annealing (SA) and Teaching-Learning-Based Optimization (TLBO). They are compared with each other to obtain a solution for power allocation in a wiretap wireless network. Although all these techniques find suboptimal solutions, they appear superior to exact optimization methods. Finally, we define a Figure of Merit (FOM) as a rule of thumb to determine the best meta-heuristic algorithm. This FOM considers quality of solution, number of iterations required to converge, and CPU time.
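A sketch of one of the listed meta-heuristics, simulated annealing, applied to a toy secrecy-rate maximization: allocate a power budget between a relay and two jammers so that the destination rate minus the eavesdropper rate is maximized. The channel gains, budget, and rate model below are illustrative assumptions, not the paper's system model:

```python
import math
import random

random.seed(1)

P_TOTAL = 10.0          # total power budget (arbitrary units, assumed)
G_D, G_E = 1.0, 0.8     # hypothetical relay->destination / relay->eavesdropper gains
H1, H2 = 0.5, 0.7       # hypothetical jammer->eavesdropper gains

def secrecy_rate(p):
    pr, pj1, pj2 = p
    rate_dest = math.log2(1.0 + G_D * pr)
    # Jamming power raises the interference floor seen by the eavesdropper.
    rate_eave = math.log2(1.0 + G_E * pr / (1.0 + H1 * pj1 + H2 * pj2))
    return max(rate_dest - rate_eave, 0.0)

def project(p):
    # Clip to non-negative powers and rescale onto the total budget.
    p = [max(x, 1e-9) for x in p]
    s = sum(p)
    return [x * P_TOTAL / s for x in p]

def anneal(iters=20000, t0=1.0):
    cur = project([random.random() for _ in range(3)])
    f_cur = secrecy_rate(cur)
    best, f_best = cur, f_cur
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-6          # linear cooling schedule
        cand = project([x + random.gauss(0.0, 0.3) for x in cur])
        f_cand = secrecy_rate(cand)
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            cur, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = cur, f_cur
    return best, f_best

alloc, rate = anneal()
print(alloc, rate)
```

Even on this toy landscape, splitting power between the relay and the jammers clearly beats allocating everything to the relay, which mirrors the paper's motivation for friendly jamming.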

  10. Physical simulation technique on the behaviour of oil spills in grease ice under wave actions

    International Nuclear Information System (INIS)

    Li, Z.; Hollebone, B.; Fingas, M.; Fieldhouse, B.

    2008-01-01

Light or medium oil spilled on ice tends to rise to and remain on the surface of unconsolidated frazil or grease ice. This study sought a new system for studying oil emulsion in grease ice under experimental conditions. A physical simulation technique was designed to test the effect of wave energy on the spilled oil-grease ice emulsion. The newly developed test system is able to perform simulation tests with wave, wave-ice, wave-oil and wave-ice-oil combinations. This paper presented the design concept of the developed test system and introduced the experimental verification of its capability in terms of temperature control, wave-making and grease ice-making. The key feature of the technique is a mini wave flume which derives its wave-making power from an oscillator in a chemical laboratory. Video cameras record the wave action in the flume in order to obtain wave parameters. The wave-making capability tests in this study were used to determine the relation of wave height, length and frequency to oscillator power transfer, oscillator frequency and the depth of the water flume. 16 refs., 10 figs

  11. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
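The conduction-only baseline that the case study compares against can be illustrated with a minimal 1-D explicit finite-difference transient conduction sketch; the wall thickness, diffusivity, and boundary temperatures below are rough assumptions for illustration, not the validated shipper model:

```python
import numpy as np

# 1-D transient conduction through an insulation wall (explicit FTCS scheme).
# Material numbers are rough polyurethane-foam values, purely illustrative.
alpha = 1.4e-6             # thermal diffusivity, m^2/s (assumed)
L, nx = 0.05, 51           # ~2-inch (5 cm) wall, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # FTCS stability requires r = alpha*dt/dx^2 <= 0.5

T = np.full(nx, 5.0)       # wall starts at 5 C throughout
T_outside, T_product = 35.0, 5.0

r = alpha * dt / dx**2     # = 0.4 by construction
for _ in range(int(3600 / dt)):          # simulate one hour of exposure
    T[0], T[-1] = T_outside, T_product   # fixed-temperature boundaries
    T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T.max(), T.min())    # profile stays bounded by the two boundary values
```

A coupled conduction-convection model of the kind the authors validate would add a fluid domain on top of this solid-conduction core, which is exactly where the conduction-only approach loses accuracy.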

  12. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

Nonconventional wells, which include horizontal, deviated, multilateral and "smart" wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  13. Fixed-site physical protection system modeling

    International Nuclear Information System (INIS)

    Chapman, L.D.

    1975-01-01

    An evaluation of a fixed-site safeguard security system must consider the interrelationships of barriers, alarms, on-site and off-site guards, and their effectiveness against a forcible adversary attack whose intention is to create an act of sabotage or theft. A computer model has been developed at Sandia Laboratories for the evaluation of alternative fixed-site security systems. Trade-offs involving on-site and off-site response forces and response times, perimeter alarm systems, barrier configurations, and varying levels of threat can be analyzed. The computer model provides a framework for performing inexpensive experiments on fixed-site security systems for testing alternative decisions, and for determining the relative cost effectiveness associated with these decision policies

  14. Monte Carlo technique for very large ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T=1.4* T c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M( t = 0) = 1 initially.
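For readers unfamiliar with the underlying method, a plain single-spin Metropolis sweep on a small 2-D lattice (not Rebbi's multispin coding, and far from the 600*600*600 system of the paper) can be sketched as follows; lattice size, temperature, and sweep count are illustrative choices:

```python
import math
import random

random.seed(2)
N = 16                   # small 2-D lattice for illustration only
T = 1.5                  # below the 2-D critical temperature T_c ~ 2.269
spins = [[1] * N for _ in range(N)]   # start fully magnetised, as in the paper

def sweep():
    # One Monte Carlo step per spin: N*N single-spin Metropolis updates.
    for _ in range(N * N):
        i, j = random.randrange(N), random.randrange(N)
        nb = (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
              + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])
        dE = 2 * spins[i][j] * nb     # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

for _ in range(50):
    sweep()

m = abs(sum(sum(row) for row in spins)) / (N * N)
print(m)   # below T_c the magnetisation stays close to 1
```

Multispin coding packs many such spins into the bits of one machine word and updates them in parallel with bitwise operations, which is what makes lattices of the quoted size tractable.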

  15. Application of aesthetic, recreational forms of physical culture in the organizational and pedagogical techniques of school physical education

    Directory of Open Access Journals (Sweden)

    Roters T.T.

    2010-06-01

Full Text Available The results of a methodological analysis are presented concerning the introduction of organizational and pedagogical technologies for conducting scheduled, extracurricular and out-of-school classes. Ways to increase schoolchildren's interest in and motivation for physical exercise through the use of aesthetically attractive forms of physical culture are shown. It is indicated that modern organizational and pedagogical technologies are the determining factor in satisfying the needs and interests of schoolchildren. It is established that aesthetic and recreational forms of physical culture help to increase interest in and motivation for physical exercise.

  16. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

Full Text Available One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc. for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge with ABMS, however, is the difficulty of their validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, the attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  17. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge with ABMS, however, is the difficulty of their validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, the attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.
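One common ABM validation tactic, comparing an emergent statistic against a known analytic result, can be sketched on a deliberately tiny toy model: independent 2-D lattice random walkers, whose mean squared displacement after t steps is analytically equal to t. All parameters here are arbitrary illustration choices:

```python
import random

random.seed(3)

STEPS, AGENTS = 100, 2000
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # 4-neighbour lattice moves

# Each "agent" performs an independent random walk from the origin.
sq_disp = []
for _ in range(AGENTS):
    x = y = 0
    for _ in range(STEPS):
        dx, dy = random.choice(MOVES)
        x, y = x + dx, y + dy
    sq_disp.append(x * x + y * y)

# Validation check: the emergent mean squared displacement should match
# the analytic expectation E[r^2] = STEPS (up to sampling noise).
msd = sum(sq_disp) / AGENTS
print(msd)
```

Real ABMS rarely admit closed-form expectations, which is exactly why the paper surveys alternative validation techniques; but where a limiting analytic case exists, this kind of "docking to theory" test is a cheap first check.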

  18. Evaluating nuclear physics inputs in core-collapse supernova models

    Science.gov (United States)

    Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.

    Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core collapse supernova models using the spherically-symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino--nucleon interactions.

  19. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of the Weibull distribution.
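The Miner's-rule step of such a procedure can be sketched as follows; the S-N curve constants and the cycle spectrum (of the kind Rainflow counting would produce) are illustrative assumptions, not the solder-joint model of the paper:

```python
# Miner's-rule damage accumulation over a counted stress-cycle spectrum.
# Assumed Basquin-style S-N curve: cycles to failure N(S) = C * S**(-m).
C, m = 1e12, 3.0

def cycles_to_failure(s):
    return C * s ** (-m)

# (stress range, cycles per year) bins, e.g. as output by Rainflow counting:
spectrum = [(50.0, 2.0e5), (80.0, 3.0e4), (120.0, 2.0e3)]

# Miner's rule: damage = sum of n_i / N_i; failure is predicted at damage = 1.
damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum)
life_years = 1.0 / damage_per_year
print(life_years)
```

In the paper's framework this deterministic life at unit damage is then related to the characteristic (63.2th-percentile) life of a fitted Weibull distribution.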

  20. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    Science.gov (United States)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
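The emulation idea can be sketched by fitting a fast regressor to samples of an "expensive" function standing in for a sub-grid parametrisation; the function, sample sizes, and choice of model below are arbitrary assumptions for illustration, not anything from the Unified Model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

def expensive_physics(x):
    # Stand-in for a costly sub-grid parametrisation (purely illustrative):
    # a smooth nonlinear map from 3 "state" inputs to one tendency output.
    return np.sin(x[:, 0]) + x[:, 1] * x[:, 2]

# Generate training data by running the "expensive" scheme offline.
X_train = rng.uniform(0.0, 1.0, size=(5000, 3))
y_train = expensive_physics(X_train)

# The emulator: a fast statistical model fitted to those samples.
emulator = RandomForestRegressor(n_estimators=100, random_state=0)
emulator.fit(X_train, y_train)

# At "forecast time" the emulator replaces the expensive call.
X_test = rng.uniform(0.0, 1.0, size=(1000, 3))
err = np.abs(emulator.predict(X_test) - expensive_physics(X_test)).mean()
print(err)   # mean absolute emulation error on held-out inputs
```

The operational question the paper raises is whether such an emulator stays accurate enough, over many coupled timesteps, to replace the real "physics" without degrading the forecast.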

  1. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  2. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  3. Characterizing, modeling, and addressing gender disparities in introductory college physics

    Science.gov (United States)

    Kost-Smith, Lauren Elizabeth

    2011-12-01

    The underrepresentation and underperformance of females in physics has been well documented and has long concerned policy-makers, educators, and the physics community. In this thesis, we focus on gender disparities in the first- and second-semester introductory, calculus-based physics courses at the University of Colorado. Success in these courses is critical for future study and careers in physics (and other sciences). Using data gathered from roughly 10,000 undergraduate students, we identify and model gender differences in the introductory physics courses in three areas: student performance, retention, and psychological factors. We observe gender differences on several measures in the introductory physics courses: females are less likely to take a high school physics course than males and have lower standardized mathematics test scores; males outscore females on both pre- and post-course conceptual physics surveys and in-class exams; and males have more expert-like attitudes and beliefs about physics than females. These background differences of males and females account for 60% to 70% of the gender gap that we observe on a post-course survey of conceptual physics understanding. In analyzing underlying psychological factors of learning, we find that female students report lower self-confidence related to succeeding in the introductory courses (self-efficacy) and are less likely to report seeing themselves as a "physics person". Students' self-efficacy beliefs are significant predictors of their performance, even when measures of physics and mathematics background are controlled, and account for an additional 10% of the gender gap. Informed by results from these studies, we implemented and tested a psychological, self-affirmation intervention aimed at enhancing female students' performance in Physics 1. Self-affirmation reduced the gender gap in performance on both in-class exams and the post-course conceptual physics survey. Further, the benefit of the self

  4. Exotic smoothness and physics differential topology and spacetime models

    CERN Document Server

    Asselmeyer-Maluga, T

    2007-01-01

    The recent revolution in differential topology related to the discovery of non-standard ("exotic") smoothness structures on topologically trivial manifolds such as R4 suggests many exciting opportunities for applications of potentially deep importance for the spacetime models of theoretical physics, especially general relativity. This rich panoply of new differentiable structures lies in the previously unexplored region between topology and geometry. Just as physical geometry was thought to be trivial before Einstein, physicists have continued to work under the tacit - but now shown to be incorrect - assumption that differentiability is uniquely determined by topology for simple four-manifolds. Since diffeomorphisms are the mathematical models for physical coordinate transformations, Einstein's relativity principle requires that these models be physically inequivalent. This book provides an introductory survey of some of the relevant mathematics and presents preliminary results and suggestions for further app...

  5. Physical plausibility of cold star models satisfying Karmarkar conditions

    Energy Technology Data Exchange (ETDEWEB)

    Fuloria, Pratibha [Kumaun University, Physics Dept., Almora (India); Pant, Neeraj [N.D.A., Maths Dept., Khadakwasla, Pune (India)

    2017-11-15

    In the present article, we have obtained a new well-behaved solution to Einstein's field equations in the background of Karmarkar spacetime. The solution has been used for stellar modelling consistent with current observational evidence. All the physical parameters are well behaved inside the stellar interior, and our model satisfies all the conditions required to be physically realizable. The obtained compactness parameter is within the Buchdahl limit, i.e. 2M/R ≤ 8/9. The TOV equation is well maintained inside the fluid spheres. The stability of the models has been further confirmed using Herrera's cracking method. The models proposed in the present work are compatible with observational data for the compact objects 4U1608-52 and PSRJ1903+327. The necessary graphs are shown to authenticate the physical viability of our models. (orig.)
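The Buchdahl bound quoted above, 2M/R ≤ 8/9 (equivalently 2GM/(Rc²) ≤ 8/9 in SI units), can be checked numerically for any candidate mass-radius pair. A minimal sketch; the mass and radius figures are assumptions of roughly the right order for 4U1608-52, not values taken from the paper:

```python
# Illustrative numerical check of the Buchdahl compactness bound
# 2GM/(R c^2) <= 8/9. The mass/radius figures below are assumptions,
# not values from the paper.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def compactness(mass_kg, radius_m):
    """Dimensionless compactness parameter u = 2GM/(R c^2)."""
    return 2.0 * G * mass_kg / (radius_m * C ** 2)

def satisfies_buchdahl(mass_kg, radius_m):
    """True if the configuration respects the Buchdahl limit u <= 8/9."""
    return compactness(mass_kg, radius_m) <= 8.0 / 9.0

# Assumed ~1.7 solar masses and ~9.5 km radius
u = compactness(1.7 * M_SUN, 9.5e3)
ok = satisfies_buchdahl(1.7 * M_SUN, 9.5e3)
```

Configurations above the bound cannot correspond to static, physically realizable fluid spheres under Buchdahl's assumptions.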

  6. Physical plausibility of cold star models satisfying Karmarkar conditions

    International Nuclear Information System (INIS)

    Fuloria, Pratibha; Pant, Neeraj

    2017-01-01

    In the present article, we have obtained a new well-behaved solution to Einstein's field equations in the background of Karmarkar spacetime. The solution has been used for stellar modelling consistent with current observational evidence. All the physical parameters are well behaved inside the stellar interior, and our model satisfies all the conditions required to be physically realizable. The obtained compactness parameter is within the Buchdahl limit, i.e. 2M/R ≤ 8/9. The TOV equation is well maintained inside the fluid spheres. The stability of the models has been further confirmed using Herrera's cracking method. The models proposed in the present work are compatible with observational data for the compact objects 4U1608-52 and PSRJ1903+327. The necessary graphs are shown to authenticate the physical viability of our models. (orig.)

  7. Physical plausibility of cold star models satisfying Karmarkar conditions

    Science.gov (United States)

    Fuloria, Pratibha; Pant, Neeraj

    2017-11-01

    In the present article, we have obtained a new well-behaved solution to Einstein's field equations in the background of Karmarkar spacetime. The solution has been used for stellar modelling consistent with current observational evidence. All the physical parameters are well behaved inside the stellar interior, and our model satisfies all the conditions required to be physically realizable. The obtained compactness parameter is within the Buchdahl limit, i.e. 2M/R ≤ 8/9. The TOV equation is well maintained inside the fluid spheres. The stability of the models has been further confirmed using Herrera's cracking method. The models proposed in the present work are compatible with observational data for the compact objects 4U1608-52 and PSRJ1903+327. The necessary graphs are shown to authenticate the physical viability of our models.

  8. A Continuous Dynamic Traffic Assignment Model From Plate Scanning Technique

    Energy Technology Data Exchange (ETDEWEB)

    Rivas, A.; Gallego, I.; Sanchez-Cambronero, S.; Ruiz-Ripoll, L.; Barba, R.M.

    2016-07-01

    This paper presents a methodology for the dynamic estimation of traffic flows on all links of a network from observable field data, assuming the first-in-first-out (FIFO) hypothesis. The traffic flow intensities recorded at the exit of the scanned links are propagated to obtain the flow waves on unscanned links. To do so, the model calculates the flow-cost functions from information registered with the plate scanning technique. The model also addresses the concern that the parameters of the flow-cost functions must be of sufficient quality to replicate real traffic flow behaviour: it includes a new algorithm that adjusts the parameter values to the link characteristics when their quality is questionable. This requires an a priori study of the location of the scanning devices, so that all path flows can be identified and travel times measured on all links. A synthetic network is used to illustrate the proposed method and to demonstrate its usefulness and feasibility. (Author)
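The FIFO propagation step described above (flows recorded at the exit of a scanned link are shifted in time by a link travel time given by a flow-cost function) can be sketched as follows. The linear flow-cost function and all numerical values are illustrative assumptions, not the functions calibrated in the paper:

```python
# Minimal sketch of FIFO flow-wave propagation over a single link.
# The linear flow-cost (travel time) function tau = t0 + a*q is an
# assumption for illustration; the paper calibrates such functions
# from plate-scanning data.

def travel_time(inflow, t0=60.0, a=0.5):
    """Assumed linear flow-cost function (seconds) for a given inflow."""
    return t0 + a * inflow

def propagate_fifo(entry_profile):
    """Map (entry_time, flow) pairs to (exit_time, flow) pairs under FIFO.

    FIFO requires exit times to be non-decreasing in entry time; we
    check that the assumed cost function preserves this ordering.
    """
    exits = [(t_in + travel_time(q), q) for t_in, q in entry_profile]
    assert all(t1 <= t2 for (t1, _), (t2, _) in zip(exits, exits[1:])), \
        "FIFO violated by the assumed flow-cost function"
    return exits

# A toy entry profile: time (s), flow intensity (veh/min)
profile = [(0, 10), (60, 20), (120, 15)]
exit_profile = propagate_fifo(profile)
```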

  9. The limitations of mathematical modeling in high school physics education

    Science.gov (United States)

    Forjan, Matej

    The theme of this doctoral dissertation falls within the scope of the didactics of physics. A theoretical analysis is presented of the key constraints that arise when the mathematical modeling of dynamical systems is transferred into secondary school physics education. To explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on how the various stages of modeling are represented in the solved tasks in the textbooks and on how certain simplifications and idealizations, which are frequently used in high school physics, are presented. We show that one of the textbooks in most cases presents the simplifications fairly and reasonably, while the other two leave half of the analyzed simplifications unexplained. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly state the model assumptions, from which we can conclude that high school physics does not sufficiently develop students' sense for simplification and idealization, a key part of the conceptual phase of modeling. Students' prior knowledge is also important for introducing the modeling of dynamical systems, so we performed an empirical study of the extent to which high school students are able to understand the time evolution of some dynamical systems in physics. The results show that students have a very weak understanding of the dynamics of systems in which feedbacks are present, independent of their year of study or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations resulting from students' lack of mathematical knowledge, since they do not know how to solve differential equations analytically. We show that when dealing with one-dimensional dynamical systems

  10. Method of modelization assistance with bond graphs and application to qualitative diagnosis of physical systems

    International Nuclear Information System (INIS)

    Lucas, B.

    1994-05-01

    After recalling the usual diagnosis techniques (failure index, decision tree) and those based on an artificial intelligence approach, the author reports research aimed at exploring knowledge- and model-generation techniques. He focuses on the design of a model-generation aid and a diagnosis aid. The bond graph technique is shown to be well suited to aiding model generation, and is then adapted to aiding diagnosis. The developed tool is applied to three projects: DIADEME (a diagnosis system based on physical models), the improvement of the SEXTANT diagnosis system (an expert system for transient analysis), and the investigation of an Ariane 5 launcher component. Notably, the author uses the Reiter and Greiner algorithm.

  11. Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence

    CERN Document Server

    Hutter, Kolumban

    2004-01-01

    The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.

  12. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  13. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  14. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    Science.gov (United States)

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning. © 2013 American Association of Anatomists.

  15. On the Reliability of Nonlinear Modeling using Enhanced Genetic Programming Techniques

    Science.gov (United States)

    Winkler, S. M.; Affenzeller, M.; Wagner, S.

    The use of genetic programming (GP) in nonlinear system identification enables the automated search for mathematical models that are evolved by an evolutionary process using the principles of selection, crossover and mutation. Due to the stochastic element that is intrinsic to any evolutionary process, GP cannot guarantee the generation of similar or even equal models in each GP process execution; still, if there is a physical model underlying the analyzed data, then GP is expected to find these structures and produce broadly similar results. In this paper we define a function for measuring the syntactic similarity of mathematical models represented as structure trees; using this similarity function we compare the results produced by GP techniques for a data set representing measurement data of a BMW Diesel engine.
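A similarity function of the kind described can be sketched for models represented as structure trees (here, nested tuples). The specific measure below, the fraction of matching nodes under a simultaneous traversal, is an illustrative assumption rather than the function defined in the paper:

```python
# Minimal sketch of a syntactic similarity measure between two
# mathematical models represented as structure trees: nested tuples
# (operator, child, child, ...) with symbols or constants as leaves.
# The measure (fraction of matching nodes when walking both trees in
# lockstep) is an illustrative assumption, not the paper's definition.

def node_matches(a, b):
    """Return (matching, total) node-pair counts for two trees."""
    if not isinstance(a, tuple) or not isinstance(b, tuple):
        return (1 if a == b else 0), 1
    match = 1 if a[0] == b[0] else 0  # compare the operators
    total = 1
    for ca, cb in zip(a[1:], b[1:]):
        m, t = node_matches(ca, cb)
        match, total = match + m, total + t
    return match, total

def similarity(a, b):
    """Fraction of nodes that agree between the two structure trees."""
    m, t = node_matches(a, b)
    return m / t

# y = x + 2*x  vs  y = x + 3*x : identical structure, one differing leaf
model_a = ('+', 'x', ('*', 2, 'x'))
model_b = ('+', 'x', ('*', 3, 'x'))
sim = similarity(model_a, model_b)
```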

  16. Predictive modeling of coupled multi-physics systems: II. Illustrative application to reactor physics

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel; Badea, Madalina Corina

    2014-01-01

    Highlights:
    • We applied the PMCMPS methodology to a paradigm neutron diffusion model.
    • We underscore the main steps in applying PMCMPS to treat very large coupled systems.
    • PMCMPS reduces the uncertainties in the optimally predicted responses and model parameters.
    • PMCMPS is for sequentially treating coupled systems that cannot be treated simultaneously.
    Abstract: This work presents paradigm applications to reactor physics of the innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS)” developed by Cacuci (2014). This methodology enables the assimilation of experimental and computational information and computes optimally predicted responses and model parameters with reduced predicted uncertainties, taking fully into account the coupling terms between the multi-physics systems, but using only the computational resources that would be needed to perform predictive modeling on each system separately. The paradigm examples presented in this work are based on a simple neutron diffusion model, chosen so as to enable closed-form solutions with clear physical interpretations. These paradigm examples also illustrate the computational efficiency of the PMCMPS, which enables the assimilation of additional experimental information, with a minimal increase in computational resources, to reduce the uncertainties in predicted responses and best-estimate values for uncertain model parameters, thus illustrating how very large systems can be treated without loss of information in a sequential rather than simultaneous manner.
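The essence of assimilating computational and experimental information into optimally predicted responses with reduced uncertainty can be illustrated by a scalar inverse-variance-weighted update. This is a generic textbook sketch, not the PMCMPS formalism itself:

```python
# Minimal scalar sketch of the idea behind best-estimate predictive
# modeling: combine a computed response (with uncertainty) and a
# measured response (with uncertainty) into an optimal estimate whose
# variance is smaller than either input's. This is the standard
# inverse-variance-weighted update, not the full PMCMPS methodology.

def assimilate(x_model, var_model, x_meas, var_meas):
    """Inverse-variance-weighted best estimate and its reduced variance."""
    w = var_meas / (var_model + var_meas)   # weight given to the model value
    x_best = w * x_model + (1.0 - w) * x_meas
    var_best = var_model * var_meas / (var_model + var_meas)
    return x_best, var_best

# Toy numbers: model predicts 1.00 (variance 0.04), experiment 1.10 (variance 0.01)
x_best, var_best = assimilate(1.00, 0.04, 1.10, 0.01)
```

The posterior variance is always below the smaller of the two input variances, mirroring the reduced predicted uncertainties described in the abstract.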

  17. PREFACE: Physics-Based Mathematical Models for Nanotechnology

    Science.gov (United States)

    Voon, Lok C. Lew Yan; Melnik, Roderick; Willatzen, Morten

    2008-03-01

    stain-resistant clothing, but with thousands more anticipated. The focus of this interdisciplinary workshop was on determining what kind of new theoretical and computational tools will be needed to advance the science and engineering of nanomaterials and nanostructures. Thanks to the stimulating environment of the BIRS, participants of the workshop had plenty of opportunity to exchange new ideas on one of the main topics of this workshop—physics-based mathematical models for the description of low-dimensional semiconductor nanostructures (LDSNs) that are becoming increasingly important in technological innovations. The main objective of the workshop was to bring together some of the world leading experts in the field from each of the key research communities working on different aspects of LDSNs in order to (a) summarize the state-of-the-art models and computational techniques for modeling LDSNs, (b) identify critical problems of major importance that require solution and prioritize them, (c) analyze feasibility of existing mathematical and computational methodologies for the solution of some such problems, and (d) use some of the workshop working sessions to explore promising approaches in addressing identified challenges. With the possibility of growing practically any shape and size of heterostructures, it becomes essential to understand the mathematical properties of quantum-confined structures including properties of bulk states, interface states, and surface states as a function of shape, size, and internal strain. This workshop put strong emphasis on discussions of the new mathematics needed in nanotechnology especially in relation to geometry and material-combination optimization of device properties such as electronic, optical, and magnetic properties. 
The problems that were addressed at this meeting are of immense importance in determining such quantum-mechanical properties and the group of invited participants covered very well all the relevant disciplines

  18. Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    Science.gov (United States)

    Clancy, Thomas C.; Gates, Thomas S.

    2005-01-01

    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of introducing direct chemical bonding between the polymer matrix and the nanotube and hence provides a load transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of the functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined and the mechanical properties of these models are calculated using standard techniques.

  19. Technical Manual for the SAM Physical Trough Model

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, M. J.; Gilman, P.

    2011-06-01

    NREL, in conjunction with Sandia National Lab and the U.S. Department of Energy, developed the System Advisor Model (SAM) analysis tool for renewable energy system performance and economic analysis. This paper documents the technical background and engineering formulation for one of the two parabolic trough system models in SAM. The Physical Trough model calculates performance relationships based on physical first principles where possible, allowing the modeler to predict electricity production for a wider range of component geometries than is possible in the Empirical Trough model. This document describes the major parabolic trough plant subsystems in detail, including the solar field, power block, thermal storage, piping, auxiliary heating, and control systems. This model makes use of both existing subsystem performance modeling approaches and new approaches developed specifically for SAM.

  20. Physically representative atomistic modeling of atomic-scale friction

    Science.gov (United States)

    Dong, Yalin

    interesting physical process is buried between the two contact interfaces, thus making a direct measurement more difficult. Atomistic simulation is able to simulate the process with dynamic information on each single atom, and therefore provides valuable interpretations for experiments. In this work, we systematically apply Molecular Dynamics (MD) simulation to model the Atomic Force Microscopy (AFM) measurement of atomic friction. Furthermore, we employ MD simulation to correlate atomic dynamics with the friction behavior observed in experiments. For instance, ParRep dynamics (an accelerated molecular dynamics technique) is introduced to investigate the velocity dependence of atomic friction; we also employ MD simulation to "see" how the reconstruction of the gold surface modulates friction, and to study the friction enhancement mechanism at a graphite step edge. Atomic stick-slip friction can be treated as a rate process. Instead of running a direct simulation of the process, we can apply transition state theory to predict its properties. We give a rigorous derivation of the velocity and temperature dependence of friction based on the Prandtl-Tomlinson model as well as transition state theory, obtaining a more accurate relation for predicting the velocity and temperature dependence. Furthermore, we include the instrumental noise inherent in AFM measurement to interpret two experimental discoveries: the suppression of friction at low temperature and the discrepancy in attempt frequency between AFM measurements and theoretical predictions. We also discuss the possibility of treating wear as a rate process.
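Treating atomic stick-slip as a thermally activated rate process yields the familiar creep scaling for the Prandtl-Tomlinson model, F(v, T) ≈ F_c - b[(T/T0) ln(v0/v)]^(2/3). A sketch with purely illustrative parameters (not values derived in the thesis):

```python
# Sketch of the thermally activated (rate-process) picture of atomic
# stick-slip friction: below the athermal critical force F_c, slips
# are thermally assisted, giving the creep scaling
#   F(v, T) = F_c - b * [(T/T0) * ln(v0/v)]^(2/3),  for 0 < v < v0.
# F_c, b, v0 and T0 are illustrative parameters, not fitted values.
import math

def friction_force(v, T, F_c=1.0, b=0.1, v0=1.0, T0=300.0):
    """Mean friction force vs sliding speed v and temperature T (creep regime)."""
    if not 0.0 < v < v0:
        raise ValueError("creep regime assumes 0 < v < v0")
    return F_c - b * ((T / T0) * math.log(v0 / v)) ** (2.0 / 3.0)

# Friction increases with speed and decreases with temperature:
f_slow = friction_force(1e-3, 300.0)
f_fast = friction_force(1e-1, 300.0)
f_cold = friction_force(1e-2, 100.0)
f_warm = friction_force(1e-2, 300.0)
```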

  1. Snyder-de Sitter model from two-time physics

    International Nuclear Information System (INIS)

    Carrisi, M. C.; Mignemi, S.

    2010-01-01

    We show that the symplectic structure of the Snyder model on a de Sitter background can be derived from two-time physics in seven dimensions and propose a Hamiltonian for a free particle consistent with the symmetries of the model.

  2. Physical and Model Uncertainty for Fatigue Design of Composite Material

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    The main aim of the present report is to establish stochastic models for the uncertainties related to fatigue design of composite materials. The uncertainties considered are the physical uncertainty related to the static and fatigue strength and the model uncertainty related to Miner's rule...

  3. A model for the physical adsorption of atomic hydrogen

    NARCIS (Netherlands)

    Bruch, L.W.; Ruijgrok, Th.W.

    1979-01-01

    The formation of the holding potential of physical adsorption is studied with a model in which a hydrogen atom interacts with a perfectly imaging substrate bounded by a sharp planar surface; the exclusion of the atomic electron from the substrate is an important boundary condition in the model. The

  4. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  5. Rock.XML - Towards a library of rock physics models

    Science.gov (United States)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating the physical properties of rocks and their constituents with the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different rock types. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications of rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks into a representative model for a particular rock. The format is based on the Extensible Markup Language XML, making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages for documenting and exchanging models unambiguously, both between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.
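Since the abstract does not reproduce the actual Rock.XML schema, the element and attribute names below are hypothetical; the sketch merely illustrates how constituents and a combining theory could be stored in XML and read back by software:

```python
# Hypothetical sketch of what a Rock.XML model description might look
# like and how software could consume it. All element and attribute
# names are invented for illustration; the real Rock.XML schema is
# defined by Jensen et al. and is not reproduced here.
import xml.etree.ElementTree as ET

ROCK_XML = """
<rockModel name="stiff-sand" theory="hertz-mindlin">
  <constituent name="quartz" fraction="0.8" bulkModulus="37.0" shearModulus="44.0"/>
  <constituent name="clay"   fraction="0.2" bulkModulus="21.0" shearModulus="7.0"/>
  <fluid name="brine" bulkModulus="2.8" density="1.09"/>
</rockModel>
"""

root = ET.fromstring(ROCK_XML)
theory = root.get("theory")
fractions = {c.get("name"): float(c.get("fraction"))
             for c in root.findall("constituent")}

# Sanity check: mineral fractions of the solid frame should sum to 1
total_fraction = sum(fractions.values())
```

Storing the combining theory alongside the constituents is what makes such a file self-describing and exchangeable between tools.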

  6. Some aspects of continuum physics used in fuel pin modeling

    International Nuclear Information System (INIS)

    Bard, F.E.

    1975-06-01

    The mathematical formulation used in fuel pin modeling is described. Fuel pin modeling is not a simple extension of the experimental and interpretative methods used in classical mechanics. New concepts are needed to describe materials in a reactor environment. Some aspects of continuum physics used to develop these new constitutive equations for fuel pins are presented. (U.S.)

  7. The composition-explicit distillation curve technique: Relating chemical analysis and physical properties of complex fluids.

    Science.gov (United States)

    Bruno, Thomas J; Ott, Lisa S; Lovestead, Tara M; Huber, Marcia L

    2010-04-16

    The analysis of complex fluids such as crude oils, fuels, vegetable oils and mixed waste streams poses significant challenges arising primarily from the multiplicity of components, the different properties of the components (polarity, polarizability, etc.) and matrix properties. We have recently introduced an analytical strategy that simplifies many of these analyses, and provides the added potential of linking compositional information with physical property information. This aspect can be used to facilitate equation-of-state development for complex fluids. In addition to chemical characterization, the approach provides the ability to calculate thermodynamic properties for such complex heterogeneous streams. The technique is based on the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. The analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. By far the most widely used of these techniques with the ADC is gas chromatography. This has enabled us to study finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oil streams (used automotive and transformer oils). In this special issue of the Journal of Chromatography, specifically dedicated to extraction technologies, we describe the essential features of the advanced distillation curve metrology as an analytical strategy for complex fluids. Published by Elsevier B.V.

  8. Engineered Barrier System: Physical and Chemical Environment Model

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley; R. Jarek; P. Mariner

    2004-02-09

    The conceptual and predictive models documented in this Engineered Barrier System: Physical and Chemical Environment Model report describe the evolution of the physical and chemical conditions within the waste emplacement drifts of the repository. The modeling approaches and model output data will be used in the total system performance assessment (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. These models evaluate the range of potential water compositions within the emplacement drifts, resulting from the interaction of introduced materials and minerals in dust with water seeping into the drifts and with aqueous solutions forming by deliquescence of dust (as influenced by atmospheric conditions), and from thermal-hydrological-chemical (THC) processes in the drift. These models also consider the uncertainty and variability in water chemistry inside the drift and the compositions of introduced materials within the drift. This report develops and documents a set of process- and abstraction-level models that constitute the engineered barrier system: physical and chemical environment model. Where possible, these models use information directly from other process model reports as input, which promotes integration among process models used for total system performance assessment. Specific tasks and activities of modeling the physical and chemical environment are included in the technical work plan ''Technical Work Plan for: In-Drift Geochemistry Modeling'' (BSC 2004 [DIRS 166519]). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system analysis model reports.

  9. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    Regolith is the weathered, typically mineral-rich layer extending from fresh bedrock to the land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below this is the weathered C horizon, which retains at least some of the original rocky fabric and structure. At its base is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding its composition and characteristics, especially at depth, where the challenge of collecting reliable data increases. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local-scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high-intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine-resolution regolith features using several geophysical techniques, including multiple-frequency, multiple-coil electromagnetic induction and high-resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m, or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of water content, texture, electrical conductivity, and weathered state. 
After preprocessing (filtering, geo

  10. Data assimilation techniques and modelling uncertainty in geosciences

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available "You cannot step into the same river twice". Perhaps this ancient quote best describes the dynamic nature of the earth system. If we regard the earth as several coupled systems, we want to know the state of the system at any time. The state may be time-evolving and complex (such as the atmosphere) or simple, and determining the current state requires complete knowledge of all aspects of the system. On one hand, measurements (in situ and satellite data) are often incomplete and subject to error. On the other hand, modelling cannot be exact; therefore, the optimal combination of the measurements with the model information is the best choice for estimating the true state of the system. Data assimilation (DA) methods are powerful tools for combining observations with a numerical model. DA is, in effect, an interaction between uncertainty analysis, physical modelling and mathematical algorithms. DA improves knowledge of past, present or future system states, provides forecasts of the state of complex systems, and gives better scientific understanding of calibration, validation, data errors and their probability distributions. Nowadays, the high performance and capabilities of DA have led to its extensive use in different sciences such as meteorology, oceanography, hydrology and nuclear core engineering. In this paper, after a brief overview of the history of DA and a comparison with conventional statistical methods, we investigate the accuracy and computational efficiency of the two main classes of classical DA algorithms, stochastic DA (BLUE and the Kalman filter) and variational DA (3D- and 4D-Var), and then evaluate the quantification and modelling of errors. Finally, some applications of DA in the geosciences and the challenges facing DA are discussed.
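The scalar case of the stochastic (BLUE/Kalman) branch surveyed above can be sketched in a few lines: a toy linear model propagates the state, and each noisy observation corrects it. The model, noise levels and observations below are invented purely for illustration.

```python
# Hedged sketch of sequential data assimilation with a 1-D Kalman filter.

def forecast(state, var, model_gain=0.9, model_noise=0.5):
    """Propagate the state with a linear model; uncertainty grows by model error."""
    return model_gain * state, model_gain ** 2 * var + model_noise

def analysis(state, var, obs, obs_var):
    """BLUE/Kalman update: weight forecast and observation by their error variances."""
    gain = var / (var + obs_var)                  # Kalman gain
    return state + gain * (obs - state), (1.0 - gain) * var

state, var = 10.0, 4.0                            # initial guess and its variance
for obs in [9.5, 8.3, 7.9]:                       # stream of observations (obs_var = 1.0)
    state, var = forecast(state, var)
    state, var = analysis(state, var, obs, obs_var=1.0)
```

After a few cycles the analysis variance settles well below both the initial uncertainty and the observation variance, which is the practical payoff of combining model and data.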

  11. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Naets, Frank

    2018-01-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system...... by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we...
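The projection step described above, solving the full system only through a reduced space, can be illustrated on a toy linear system. The matrices and basis below are invented and far simpler than a vibro-acoustic FE model.

```python
# Hedged sketch of projection-based model order reduction: a full system
# K x = f is projected onto a low-dimensional basis V, the small reduced
# system is solved, and the full-space solution is approximated as x ~ V x_r.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Full-order (3 DOF) stiffness-like system
K = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
f = [1.0, 0.0, 1.0]

# Reduction basis: two column vectors spanning the expected response space
V = [[1.0, 1.0], [1.0, 0.0], [1.0, -1.0]]

Vt = transpose(V)
K_r = matmul(matmul(Vt, K), V)     # reduced 2x2 operator
f_r = matvec(Vt, f)                # reduced right-hand side
x_r = solve2(K_r, f_r)             # cheap solve in the reduced space
x_approx = matvec(V, x_r)          # lift back to the full space
```

The offline cost the abstract refers to is building a good basis `V`; once it exists, every parameter evaluation only requires the small reduced solve.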

  12. A physical data model for fields and agents

    Science.gov (United States)

    de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek

    2016-04-01

    Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. 
Our conceptual data model is capable of representing the traditional feature data
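One way the object/field unification described above might look in code is sketched below. Class and method names are invented for illustration and are not the actual data model of De Bakker et al.

```python
# Hedged sketch: agents (discrete, bounded objects) and fields (continuous
# properties on a grid) given simple high-level types so a model can mix both.

class Agent:
    """A bounded object with its own state, e.g. an animal or a house."""
    def __init__(self, position, state):
        self.position = position          # (x, y) world coordinate
        self.state = dict(state)

class Field:
    """A continuous property discretised on a regular grid."""
    def __init__(self, origin, cell_size, values):
        self.origin = origin              # (x0, y0) of the grid corner
        self.cell_size = cell_size
        self.values = values              # row-major list of lists

    def sample(self, position):
        """Nearest-cell lookup of the field value at a world coordinate."""
        col = int((position[0] - self.origin[0]) / self.cell_size)
        row = int((position[1] - self.origin[1]) / self.cell_size)
        return self.values[row][col]

# An elevation field and an agent reading from it
elevation = Field((0.0, 0.0), 10.0, [[5.0, 6.0], [7.0, 8.0]])
goat = Agent((15.0, 5.0), {"energy": 1.0})
height_under_goat = elevation.sample(goat.position)
```

The point of the higher-level model is exactly this kind of interaction: agent rules can read from (and write to) fields without the modeller juggling two unrelated data formats.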

  13. Physical and numerical modeling of Joule-heated melters

    Energy Technology Data Exchange (ETDEWEB)

    Eyler, L.L.; Skarda, R.J.; Crowder, R.S. III; Trent, D.S.; Reid, C.R.; Lessor, D.L.

    1985-10-01

    The Joule-heated ceramic-lined melter is an integral part of the high level waste immobilization process under development by the US Department of Energy. Scaleup and design of this waste glass melting furnace requires an understanding of the relationships between melting cavity design parameters and the furnace performance characteristics such as mixing, heat transfer, and electrical requirements. Developing empirical models of these relationships through actual melter testing with numerous designs would be a very costly and time consuming task. Additionally, the Pacific Northwest Laboratory (PNL) has been developing numerical models that simulate a Joule-heated melter for analyzing melter performance. This report documents the method used and results of this modeling effort. Numerical modeling results are compared with the more conventional, physical modeling results to validate the approach. Also included are the results of numerically simulating an operating research melter at PNL. Physical Joule-heated melters modeling results used for qualifying the simulation capabilities of the melter code included: (1) a melter with a single pair of electrodes and (2) a melter with a dual pair (two pairs) of electrodes. The physical model of the melter having two electrode pairs utilized a configuration with primary and secondary electrodes. The principal melter parameters (the ratio of power applied to each electrode pair, modeling fluid depth, electrode spacing) were varied in nine tests of the physical model during FY85. Code predictions were made for five of these tests. Voltage drops, temperature field data, and electric field data varied in their agreement with the physical modeling results, but in general were judged acceptable. 14 refs., 79 figs., 17 tabs.

  14. Physical and numerical modeling of Joule-heated melters

    International Nuclear Information System (INIS)

    Eyler, L.L.; Skarda, R.J.; Crowder, R.S. III; Trent, D.S.; Reid, C.R.; Lessor, D.L.

    1985-10-01

    The Joule-heated ceramic-lined melter is an integral part of the high level waste immobilization process under development by the US Department of Energy. Scaleup and design of this waste glass melting furnace requires an understanding of the relationships between melting cavity design parameters and the furnace performance characteristics such as mixing, heat transfer, and electrical requirements. Developing empirical models of these relationships through actual melter testing with numerous designs would be a very costly and time consuming task. Additionally, the Pacific Northwest Laboratory (PNL) has been developing numerical models that simulate a Joule-heated melter for analyzing melter performance. This report documents the method used and results of this modeling effort. Numerical modeling results are compared with the more conventional, physical modeling results to validate the approach. Also included are the results of numerically simulating an operating research melter at PNL. Physical Joule-heated melters modeling results used for qualifying the simulation capabilities of the melter code included: (1) a melter with a single pair of electrodes and (2) a melter with a dual pair (two pairs) of electrodes. The physical model of the melter having two electrode pairs utilized a configuration with primary and secondary electrodes. The principal melter parameters (the ratio of power applied to each electrode pair, modeling fluid depth, electrode spacing) were varied in nine tests of the physical model during FY85. Code predictions were made for five of these tests. Voltage drops, temperature field data, and electric field data varied in their agreement with the physical modeling results, but in general were judged acceptable. 14 refs., 79 figs., 17 tabs

  15. New physics beyond the standard model of particle physics and parallel universes

    Energy Technology Data Exchange (ETDEWEB)

    Plaga, R. [Franzstr. 40, 53111 Bonn (Germany)]. E-mail: rainer.plaga@gmx.de

    2006-03-09

    It is shown that if, and only if, 'parallel universes' exist, an electroweak vacuum that is expected to have decayed since the big bang with a high probability might exist. It would neither necessarily render our existence unlikely nor could it be observed. In this special case the observation of certain combinations of Higgs-boson and top-quark masses, for which the standard model predicts such a decay, cannot be interpreted as evidence for new physics at low energy scales. The question of whether parallel universes exist is of interest to our understanding of the standard model of particle physics.

  16. Comparison of ethylcellulose matrix characteristics prepared by solid dispersion technique or physical mixing

    Directory of Open Access Journals (Sweden)

    Fatemeh Sadeghi

    2003-07-01

    Full Text Available The characteristics of ethylcellulose matrices prepared from solid dispersion systems were compared with those prepared from physical mixtures of drug and polymer. Sodium diclofenac was used as a model drug, and the effects of the drug:polymer ratio and the method of matrix production on tablet crushing strength, friability, drug release profile and drug release mechanism were evaluated. The results showed that increasing the polymer content in the matrices increased the crushing strength of the tablets, whereas friability was independent of polymer content. Drug release rate was greatly affected by the amount of polymer in the matrices, and a considerable decrease in release rate was observed with increasing polymer content. It was also found that the type of mixture used for matrix production had a great influence on tablet crushing strength and drug release rate. Matrices prepared from physical mixtures of drug and polymer were harder than those prepared from solid dispersion systems, but their release rates were considerably faster. This phenomenon was attributed to the encapsulation of drug particles by the polymer in matrices prepared from solid dispersion systems, which caused a great delay in diffusion of the drug through the polymer and made diffusion the rate-retarding step in the drug release mechanism.

  17. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the identification of adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work proposes a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as rank correlation, were also explored but removed when their information was found to be largely duplicative of other metrics. While equal weights are applied here, the weights could be altered to favour preferred metrics. Two examples are shown comparing ocean model currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context, and will be briefly reported.
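The weighted tally and consolidation idea can be sketched as follows. Metric values, weights and model names are invented, and lower metric values are assumed to be better.

```python
# Hedged sketch of a multi-metric weighted ranking: each model is ranked on
# every metric (1 = best), ranks are weighted and summed, lowest tally wins.

def rank_models(scores, weights):
    """scores: {model: {metric: value}}, lower is better; returns best-first order."""
    models = list(scores)
    tally = {m: 0.0 for m in models}
    for metric, weight in weights.items():
        ordered = sorted(models, key=lambda m: scores[m][metric])
        for rank, model in enumerate(ordered, start=1):
            tally[model] += weight * rank        # weighted rank contribution
    return sorted(models, key=lambda m: tally[m])

scores = {
    "model_A": {"abs_error": 1.2, "bias": 0.3, "rmsd": 1.5},
    "model_B": {"abs_error": 1.0, "bias": 0.5, "rmsd": 1.4},
    "model_C": {"abs_error": 2.0, "bias": 0.9, "rmsd": 2.2},
}
weights = {"abs_error": 1.0, "bias": 1.0, "rmsd": 1.0}   # equal weights, as in the abstract
ranking = rank_models(scores, weights)
```

Because only ordinal ranks enter the tally, the scheme stays simple for management decisions while still consolidating many metric types.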

  18. Female role models in physics education in Ireland

    Science.gov (United States)

    Chormaic, Síle Nic; Fee, Sandra; Tobin, Laura; Hennessy, Tara

    2013-03-01

    In this paper we consider the statistics on undergraduate student representation in Irish universities and look at student numbers in secondary (high) schools in one region in Ireland. There seems to be no significant change in female participation in physics from 2002 to 2011. Additionally, we have studied the influence of an educator's gender on the prevalence of girls studying physics in secondary schools in Co. Louth, Ireland, and at the postgraduate level in Irish universities. It would appear that strong female role models have a positive influence and lead to an increase in girls' participation in physics.

  19. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available A porous silicon (PS) sensor based on a capacitive technique for measuring relative humidity has the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and instability. An adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating these errors and compensating for them. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It enables online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
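The ADALINE approach described in the abstract, a third-order polynomial whose coefficients are adapted online, can be sketched with the least-mean-squares rule. The sensor curve below is synthetic, not measured PS data.

```python
# Hedged sketch: fit a cubic polynomial sensor response with the ADALINE/LMS
# weight-update rule, so the same loop could keep tracking drift online.

def features(h):
    return [1.0, h, h ** 2, h ** 3]   # third-order polynomial basis

def lms_train(samples, lr=0.1, epochs=3000):
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        for h, target in samples:
            x = features(h)
            y = sum(wi * xi for wi, xi in zip(w, x))
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]   # ADALINE update
    return w

# Synthetic capacitance-vs-humidity curve (humidity scaled to [0, 1])
def true_response(h):
    return 0.2 + 0.5 * h + 0.1 * h ** 2 + 0.3 * h ** 3

samples = [(h / 10.0, true_response(h / 10.0)) for h in range(11)]
w = lms_train(samples)
prediction = sum(wi * xi for wi, xi in zip(w, features(0.55)))
```

If the sensor drifts with age, continuing the same update loop on fresh readings re-adapts the weights, which is the online-compensation property the abstract emphasises.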

  20. Proceedings of VII International Symposium on Nuclear and Related Techniques. XIII Workshop on Nuclear Physics. WONP-NURT 2011

    International Nuclear Information System (INIS)

    2011-02-01

    This year the XIII Workshop on Nuclear Physics (WONP) and the VII Symposium on Nuclear and Related Techniques (NURT) are organized jointly by the Instituto Superior de Tecnologias y Ciencias Aplicadas and the Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear. Both events gather scientists from several countries with top research work on nuclear physics and its applications. WONP has been held since 1994, promoting a continuing exchange between professionals of various nuclear and applied physics fields, including those related to environmental and health care. NURT has been one of the key Cuban scientific meetings since 1997, dealing with the peaceful applications of nuclear techniques in several domains of society. WONP and NURT provide a unique opportunity for the national and international scientific community to meet outstanding researchers and discuss current trends in several areas of theoretical, experimental and applied nuclear physics and related topics. The papers submitted to this event are presented on this CD-ROM

  1. Proceedings of VI International Symposium on Nuclear and Related Techniques. XII Workshop on Nuclear Physics. WONP-NURT 2009

    International Nuclear Information System (INIS)

    2009-02-01

    This year the XII Workshop on Nuclear Physics (WONP) and the VI Symposium on Nuclear and Related Techniques (NURT) are organized jointly by the Instituto Superior de Tecnologias y Ciencias Aplicadas and the Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear. Both events gather scientists from several countries with top research work on nuclear physics and its applications. WONP has been held since 1994, promoting a continuing exchange between professionals of various nuclear and applied physics fields, including those related to environmental and health care. NURT has been one of the key Cuban scientific meetings since 1997, dealing with the peaceful applications of nuclear techniques in several domains of society. WONP and NURT provide a unique opportunity for the national and international scientific community to meet outstanding researchers and discuss current trends in several areas of theoretical, experimental and applied nuclear physics and related topics. The papers submitted to this event are presented on this CD-ROM

  2. VLF surface-impedance modelling techniques for coal exploration

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.; Thiel, D.; O' Keefe, S. [Central Queensland University, Rockhampton, Qld. (Australia). Faculty of Engineering and Physical Systems

    2000-10-01

    New and efficient computational techniques are required for geophysical investigations of coal. This will allow automated inverse analysis procedures to be used for interpretation of field data. In this paper, a number of methods of modelling electromagnetic surface impedance measurements are reviewed, particularly as applied to typical coal seam geology found in the Bowen Basin. At present, the Impedance method and the finite-difference time-domain (FDTD) method appear to offer viable solutions although both have problems. The Impedance method is currently slightly inaccurate, and the FDTD method has large computational demands. In this paper both methods are described and results are presented for a number of geological targets. 17 refs., 14 figs.
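Of the two methods reviewed, the FDTD approach can be sketched in one dimension. This free-space toy (normalised units, invented grid and source parameters) only illustrates the leapfrog update; a real surface-impedance model would be 2-D/3-D with lossy ground layers.

```python
# Hedged sketch of the 1-D FDTD leapfrog scheme: interleaved E and H fields
# updated in turn, with a soft Gaussian source injected at the grid centre.
import math

N, STEPS = 200, 150
e = [0.0] * N      # electric field
h = [0.0] * N      # magnetic field (staggered half a cell)
C = 0.5            # Courant number <= 1 for stability

for step in range(STEPS):
    for i in range(N - 1):
        h[i] += C * (e[i + 1] - e[i])                     # update H from curl of E
    e[100] += math.exp(-((step - 30) ** 2) / 100.0)       # soft Gaussian source
    for i in range(1, N):
        e[i] += C * (h[i] - h[i - 1])                     # update E from curl of H

peak = max(abs(v) for v in e)
```

The large computational demand the review mentions comes from exactly this structure: every cell is updated every time step, and realistic geology multiplies the cell count enormously.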

  3. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control systems approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
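A receding-horizon sketch of MPC applied to dynamic pricing is shown below. This is not the paper's active-suspension analogy; the demand curve, capacity and price grid are invented.

```python
# Hedged sketch of MPC for dynamic pricing: at each step, enumerate candidate
# price sequences over a short horizon, pick the one maximising predicted
# revenue under a capacity limit, and apply only its first price.
import itertools

def demand(price):
    """Invented linear demand curve: higher price, fewer units sold."""
    return max(0.0, 100.0 - 2.0 * price)

def mpc_price(capacity_left, horizon, candidates):
    best_price, best_revenue = None, -1.0
    for plan in itertools.product(candidates, repeat=horizon):
        cap, revenue = capacity_left, 0.0
        for p in plan:
            q = min(demand(p), cap)       # cannot sell beyond remaining capacity
            revenue += p * q
            cap -= q
        if revenue > best_revenue:
            best_revenue, best_price = revenue, plan[0]
    return best_price                      # receding horizon: apply first move only

# Closed loop: re-optimise after each step as capacity is consumed
capacity = 150.0
prices = []
for _ in range(3):
    p = mpc_price(capacity, horizon=2, candidates=[20.0, 30.0, 40.0])
    prices.append(p)
    capacity -= min(demand(p), capacity)
```

The closed-loop behaviour shows the expected pattern: as remaining capacity shrinks, the controller raises the price, which is the revenue-management intuition the abstract describes in control-theoretic terms.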

  4. The Cosmological Standard Model and Its Implications for Beyond the Standard Model of Particle Physics

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    While the cosmological standard model has many notable successes, it assumes 95% of the mass-energy density of the universe is dark and of unknown nature, and there was an early stage of inflationary expansion driven by physics far beyond the range of the particle physics standard model. In the colloquium I will discuss potential particle-physics implications of the standard cosmological model.

  5. Model-Independent and Quasi-Model-Independent Search for New Physics at CDF

    OpenAIRE

    CDF Collaboration

    2007-01-01

    Data collected in Run II of the Fermilab Tevatron are searched for indications of new electroweak scale physics. Rather than focusing on particular new physics scenarios, CDF data are analyzed for discrepancies with respect to the standard model prediction. A model-independent approach (Vista) considers the gross features of the data, and is sensitive to new large cross section physics. A quasi-model-independent approach (Sleuth) searches for a significant excess of events with large summed t...

  6. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  7. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  8. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces the uncertainties in predicted model responses and parameters. • PMCMPS efficiently treats very large coupled systems. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology fully takes into account the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the multi-physics models' parameters and responses, along with correspondingly reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially
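The core Bayesian combination step that such predictive-modeling methodologies build on can be illustrated in the simplest scalar Gaussian case. The numbers are invented, and the full maximum-entropy and saddle-point machinery of PMCMPS is not reproduced here.

```python
# Hedged sketch: a Gaussian a priori distribution for one parameter is
# combined via Bayes' theorem with a Gaussian likelihood from a measured
# response, giving a posterior with a reduced variance.

def gaussian_posterior(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate Gaussian update: precisions add, means are precision-weighted."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

# Parameter: a priori mean 5.0 with variance 4.0; measurement 6.0 with variance 1.0
mean, var = gaussian_posterior(5.0, 4.0, 6.0, 1.0)
```

The posterior variance (0.8) is smaller than both input variances, which is the "reduced predicted uncertainties" property highlighted in the abstract, here in its most elementary form.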

  9. Physics at the LHC - From Standard Model measurements to Searches for New Physics

    Energy Technology Data Exchange (ETDEWEB)

    Jakobs, Karl [Freiburg University (Germany)

    2014-07-01

    The successful operation of the Large Hadron Collider (LHC) during the past two years has allowed particle interactions to be explored in a new energy regime. Measurements of important Standard Model processes like the production of high-p{sub T} jets, W and Z bosons, and top and b-quarks were performed by the LHC experiments. In addition, the high collision energy allowed searches for new particles in so-far unexplored mass regions. Important constraints on the existence of new particles predicted in many models of physics beyond the Standard Model could be established. With integrated luminosities reaching values around 5 fb{sup −1} in 2011, the experiments also reached sensitivity to probe the existence of the Standard Model Higgs boson over a large mass range. In the present report the major physics results obtained by the two general-purpose experiments ATLAS and CMS are summarized.

  10. A mathematical look at a physical power prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Riso National Lab., Roskilde (Denmark)

    1997-12-31

    This paper takes a mathematical look at a physical model used to predict the power produced from wind farms. The reason is to see whether simple mathematical expressions can replace the original equations, and to give guidelines as to where the simplifications can be made and where they can not. This paper shows that there is a linear dependence between the geostrophic wind and the wind at the surface, but also that great care must be taken in the selection of the models since physical dependencies play a very important role, e.g. through the dependence of the turning of the wind on the wind speed.
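The kind of simplification examined in the paper, a roughly linear link between the geostrophic wind and the surface wind, can be sketched with the textbook neutral geostrophic drag law and logarithmic profile. The constants and parameters below are standard values, not taken from the paper.

```python
# Hedged sketch: solve the neutral geostrophic drag law for the friction
# velocity u*, then evaluate the logarithmic profile near the surface.
import math

KAPPA, A, B = 0.4, 1.8, 4.5            # von Karman constant and drag-law constants

def friction_velocity(G, f=1e-4, z0=0.05, iters=50):
    """Fixed-point iteration of the geostrophic drag law for u*."""
    ustar = 0.05 * G                    # initial guess
    for _ in range(iters):
        term = math.log(ustar / (f * z0)) - A
        ustar = KAPPA * G / math.sqrt(term ** 2 + B ** 2)
    return ustar

def surface_wind(G, z=10.0, f=1e-4, z0=0.05):
    """Neutral logarithmic wind profile evaluated at height z."""
    ustar = friction_velocity(G, f, z0)
    return (ustar / KAPPA) * math.log(z / z0)

u10_weak, u10_strong = surface_wind(5.0), surface_wind(10.0)
```

Doubling the geostrophic wind roughly doubles the 10 m wind in this sketch, consistent with the near-linear dependence the paper identifies, while the weak dependence of the drag-law term on `u*` is where the nonlinear corrections (e.g. the turning of the wind) enter.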

  11. A mathematical look at a physical power prediction model

    DEFF Research Database (Denmark)

    Landberg, L.

    1998-01-01

    This article takes a mathematical look at a physical model used to predict the power produced from wind farms. The reason is to see whether simple mathematical expressions can replace the original equations and to give guidelines as to where simplifications can be made and where they cannot....... The article shows that there is a linear dependence between the geostrophic wind and the local wind at the surface, but also that great care must be taken in the selection of the simple mathematical models, since physical dependences play a very important role, e.g. through the dependence of the turning...

  12. A Comparison between Physics-based and Polytropic MHD Models for Stellar Coronae and Stellar Winds of Solar Analogs

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, O. [Lowell Center for Space Science and Technology, University of Massachusetts, Lowell, MA 01854 (United States)

    2017-02-01

    The development of the Zeeman–Doppler Imaging (ZDI) technique has provided synoptic observations of surface magnetic fields of low-mass stars. This led the stellar astrophysics community to adopt modeling techniques that have been used in solar physics using solar magnetograms. However, many of these techniques have been neglected by the solar community due to their failure to reproduce solar observations. Nevertheless, some of these techniques are still used to simulate the coronae and winds of solar analogs. Here we present a comparative study between two MHD models for the solar corona and solar wind. The first type of model is a polytropic wind model, and the second is the physics-based AWSOM model. We show that while the AWSOM model consistently reproduces many solar observations, the polytropic model fails to reproduce many of them, and in the cases where it does, its solutions are unphysical. Our recommendation is that polytropic models, which are used to estimate mass-loss rates and other parameters of solar analogs, must first be calibrated with solar observations. Alternatively, these models can be calibrated with models that capture more detailed physics of the solar corona (such as the AWSOM model) and that can reproduce solar observations in a consistent manner. Without such a calibration, the results of the polytropic models cannot be validated, but they can be wrongly used by others.

  13. Nonlinear waves in Bose–Einstein condensates: physical relevance and mathematical techniques

    International Nuclear Information System (INIS)

    Carretero-González, R; Frantzeskakis, D J; Kevrekidis, P G

    2008-01-01

    The aim of this review is to introduce the reader to some of the physical notions and the mathematical methods that are relevant to the study of nonlinear waves in Bose–Einstein condensates (BECs). Upon introducing the general framework, we discuss the prototypical models that are relevant to this setting for different dimensions and different potentials confining the atoms. We analyse some of the model properties and explore their typical wave solutions (plane wave solutions, bright, dark, gap solitons as well as vortices). We then offer a collection of mathematical methods that can be used to understand the existence, stability and dynamics of nonlinear waves in such BECs, either directly or starting from different types of limits (e.g. the linear or the nonlinear limit or the discrete limit of the corresponding equation). Finally, we consider some special topics involving more recent developments, and experimental setups in which there is still considerable need for developing mathematical as well as computational tools. (invited article)

  14. The link between physics and chemistry in track modelling

    International Nuclear Information System (INIS)

    Green, N.J.B.; Bolton, C.E.; Spencer-Smith, R.D.

    1999-01-01

    The physical structure of a radiation track provides the initial conditions for the modelling of radiation chemistry. These initial conditions are not perfectly understood, because there are important gaps between what is provided by a typical track structure model and what is required to start the chemical model. This paper addresses the links between the physics and chemistry of tracks, with the intention of identifying those problems that need to be solved in order to obtain an accurate picture of the initial conditions for the purposes of modelling chemistry. These problems include the reasons for the increased yield of ionisation relative to homolytic bond breaking in comparison with the gas phase. A second area of great importance is the physical behaviour of low-energy electrons in condensed matter (including thermalisation and solvation). Many of these processes are not well understood, but they can have profound effects on the transient chemistry in the track. Several phenomena are discussed, including the short distance between adjacent energy loss events, the molecular nature of the underlying medium, dissociative attachment resonances and the ability of low-energy electrons to excite optically forbidden molecular states. Each of these phenomena has the potential to modify the transient chemistry substantially and must therefore be properly characterised before the physical model of the track can be considered to be complete. (orig.)

  15. Symmetry and the Standard Model mathematics and particle physics

    CERN Document Server

    Robinson, Matthew

    2011-01-01

    While elementary particle physics is an extraordinarily fascinating field, the huge amount of knowledge necessary to perform cutting-edge research poses a formidable challenge for students. The leap from the material contained in the standard graduate course sequence to the frontiers of M-theory, for example, is tremendous. To make substantial contributions to the field, students must first confront a long reading list of texts on quantum field theory, general relativity, gauge theory, particle interactions, conformal field theory, and string theory. Moreover, waves of new mathematics are required at each stage, spanning a broad set of topics including algebra, geometry, topology, and analysis. Symmetry and the Standard Model: Mathematics and Particle Physics, by Matthew Robinson, is the first volume of a series intended to teach math in a way that is catered to physicists. Following a brief review of classical physics at the undergraduate level and a preview of particle physics from an experimentalist's per...

  16. Models for physics of the very small and very large

    CERN Document Server

    Buckholtz, Thomas J

    2016-01-01

    This monograph tackles three challenges. First, show math that matches known elementary particles. Second, apply the math to match other known physics data. Third, predict future physics data. The math features solutions to isotropic pairs of isotropic quantum harmonic oscillators. This monograph matches some solutions to known elementary particles. Matched properties include spin and types of interactions in which the particles partake. Other solutions point to possible elementary particles. This monograph applies the math and the extended particle list. Results narrow gaps between physics data and theory. Results pertain to elementary particles, astrophysics, and cosmology. For example, this monograph predicts properties for beyond-the-Standard-Model elementary particles, proposes descriptions of dark matter and dark energy, provides new relationships between known physics constants, includes theory that dovetails with the ratio of dark matter to ordinary matter, includes math that dovetails with the number of ...

  17. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Prediction of shallow-landslide-susceptible locations is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical models or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measured data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of the models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
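The pixel-by-pixel verification step described above can be sketched with invented toy maps. The indices below (sensitivity, false-alarm rate, True Skill Statistic) are common ROC-plane choices and stand in for, rather than reproduce, the eight GOF indices of the NewAge-JGrass package:

```python
import numpy as np

# Toy binary maps (invented): 1 marks a landslide / unstable pixel.
observed  = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0]])  # landslide inventory
predicted = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])  # model stability map

# Pixel-by-pixel confusion matrix.
tp = np.sum((predicted == 1) & (observed == 1))  # true positives
fp = np.sum((predicted == 1) & (observed == 0))  # false positives
fn = np.sum((predicted == 0) & (observed == 1))  # false negatives
tn = np.sum((predicted == 0) & (observed == 0))  # true negatives

sensitivity = tp / (tp + fn)     # hit rate (y-axis of the ROC plane)
false_alarm = fp / (fp + tn)     # false-positive rate (x-axis of the ROC plane)
tss = sensitivity - false_alarm  # True Skill Statistic, one possible GOF index

print(sensitivity, false_alarm, round(tss, 3))
```

In practice the two maps would come from the GIS components, and the chosen index would drive the automatic calibration loop.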

  18. Absorbing systematic effects to obtain a better background model in a search for new physics

    International Nuclear Information System (INIS)

    Caron, S; Horner, S; Sundermann, J E; Cowan, G; Gross, E

    2009-01-01

    This paper presents a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility with data is reached. In this way, systematic effects are absorbed into the new background model. The same correction is then applied on the Monte Carlo prediction in the signal region. Comparing this method to other background estimation techniques shows improvements with respect to statistical and systematic uncertainties. The proposed method can also be applied in other fields beyond high energy physics.
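A minimal sketch of the idea, with invented numbers: fit successively more complex polynomial correction functions to the data/MC ratio in a signal-free control region, and stop when a crude chi-square criterion signals compatibility. The stopping rule and uncertainties here are placeholders, not the statistical tests used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)                # observable bins (control region)
mc = 100.0 * np.exp(-2.0 * x)                # nominal Monte Carlo prediction
data = mc * (1.0 + 0.3 * x) + rng.normal(0.0, 3.0, x.size)  # slope mismodelling
sigma = np.sqrt(np.maximum(data, 1.0))       # rough per-bin uncertainty

def fit_correction(degree):
    """Fit data/mc with a polynomial; weight so residuals are in units of sigma."""
    coeffs = np.polyfit(x, data / mc, degree, w=mc / sigma)
    corrected = mc * np.polyval(coeffs, x)
    chi2 = np.sum(((data - corrected) / sigma) ** 2)
    return corrected, chi2 / (x.size - (degree + 1))

# Increase complexity until the corrected prediction is compatible with data.
for degree in range(0, 4):
    corrected, chi2_ndof = fit_correction(degree)
    if chi2_ndof < 1.5:                      # crude compatibility criterion
        break

# The fitted correction would then be applied to the MC in the signal region.
print(degree, round(chi2_ndof, 2))
```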

  19. Electromagnetic physical modeling. 10; Denji yudoho no model jikken. 10

    Energy Technology Data Exchange (ETDEWEB)

    Noguchi, K; Endo, M; Yoshimori, M [Waseda University, Tokyo (Japan). School of Science and Engineering; Saito, A [Mitsui Mineral Development Engineering Co. Ltd., Tokyo (Japan)

    1996-10-01

    The model experiment of a borehole electromagnetic (EM) method was carried out using the prepared waterproof sensor and materials with conductivity of 10{sup 0}-10{sup 2}S/m as the medium. The 2-layered ground model was prepared by filling a water tank with saturated brine of nearly 20S/m up to 30cm. Square-wave current was sent from an amplifier to a transmitter coil, and the electromotive force induced in a receiver coil was measured. Although numerical simulation is widely used for the EM method, analog model experiments are also effective. For the receiver coil installed in brine, measures to prevent short-circuiting and water ingress were taken. The electromotive force was measured at receiver intervals of 1cm and at 0-10cm in depth using a carbon bar model immersed in brine to a depth of 5cm, under a resistivity contrast of 1000 times. In addition, to reduce the resistivity contrast between the brine and the body, the model experiment was repeated using an immersed thin metallic sheet structure with conductivity similar to that of ore, under a resistivity contrast of 250 times. The effect of the medium on both models was thus clarified. 4 refs., 10 figs.

  20. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in bone research, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by injecting tumor cells directly into the femur of SD rats with an inoculation needle. In the first step of the experiment, 2x10(5) to 1x10(6) UMR106 cells in 50 microl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8x10(5) tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring the cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  1. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    International Nuclear Information System (INIS)

    Apostolakis, J; Burkhardt, H; Ivanchenko, V N; Asai, M; Bagulya, A; Grichine, V; Brown, J M C; Chikuma, N; Cortes-Giraldo, M A; Elles, S; Jacquemier, J; Guatelli, S; Incerti, S; Kadri, O; Maire, M; Urban, L; Pandola, L; Sawkey, D; Toshito, T; Yamashita, T

    2015-01-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed. (paper)

  2. Model-independent and quasi-model-independent search for new physics at CDF

    International Nuclear Information System (INIS)

    Aaltonen, T.; Maki, T.; Mehtala, P.; Orava, R.; Osterberg, K.; Saarikko, H.; van Remortel, N.; Abulencia, A.; Budd, S.; Ciobanu, C. I.; Errede, D.; Errede, S.; Gerberich, H.; Grundler, U.; Junk, T. R.; Kraus, J.; Marino, C. P.; Neubauer, M. S.; Norniella, O.; Pitts, K.

    2008-01-01

    Data collected in run II of the Fermilab Tevatron are searched for indications of new electroweak-scale physics. Rather than focusing on particular new physics scenarios, CDF data are analyzed for discrepancies with respect to the standard model prediction. A model-independent approach (Vista) considers the gross features of the data and is sensitive to new large-cross-section physics. A quasi-model-independent approach (Sleuth) searches for a significant excess of events with large summed transverse momentum and is particularly sensitive to new electroweak-scale physics that appears predominantly in one final state. This global search for new physics in over 300 exclusive final states in 927 pb⁻¹ of pp̄ collisions at √(s)=1.96 TeV reveals no significant indication of physics beyond the standard model.
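A toy version of the Sleuth-style scan described above, with invented event counts: for each candidate threshold on the summed transverse momentum, compute the Poisson probability of observing at least as many events as seen, and keep the most discrepant threshold. The real analysis also corrects the p-value for the number of final states and thresholds tried (the trials factor), which is omitted here:

```python
import math

# Invented pseudo-data: SM expectation and observed counts above each
# summed-pT threshold (GeV). These numbers do not come from CDF.
expected = {100: 50.0, 200: 12.0, 300: 3.0, 400: 0.8}
observed = {100: 54,   200: 15,   300: 7,   400: 1}

def poisson_pvalue(n_obs, mu):
    """P(N >= n_obs) for N ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(n_obs))

# Scan thresholds and keep the smallest (pre-trials) p-value.
pvals = {t: poisson_pvalue(observed[t], expected[t]) for t in expected}
best_threshold = min(pvals, key=pvals.get)
print(best_threshold, round(pvals[best_threshold], 4))
```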

  3. Comparison of Grid Nudging and Spectral Nudging Techniques for Dynamical Climate Downscaling within the WRF Model

    Science.gov (United States)

    Fan, X.; Chen, L.; Ma, Z.

    2010-12-01

    Climate downscaling has been an active research and application area in the past several decades, focusing on regional climate studies. Dynamical downscaling, in addition to statistical methods, has been widely used as advanced numerical weather and regional climate models have emerged. The use of numerical models ensures that a full set of climate variables is generated in the downscaling process, dynamically consistent due to the constraints of physical laws. While generating high-resolution regional climate, the large-scale climate patterns should be retained. To serve this purpose, nudging techniques, including grid analysis nudging and spectral nudging, have been used in different models. There are studies demonstrating the benefits and advantages of each nudging technique; however, the results are sensitive to many factors, such as the nudging coefficients and the amount of information nudged to, and the conclusions are thus controversial. In a companion work we develop approaches for quantitative assessment of the downscaled climate; in this study, the two nudging techniques are subjected to extensive experiments in the Weather Research and Forecasting (WRF) model. Using the same model provides fair comparability, and applying the quantitative assessments provides objectivity of comparison. Three types of downscaling experiments were performed for a selected month. The first type serves as a baseline, in which large-scale information is communicated through the lateral boundary conditions only; the second uses grid analysis nudging; and the third uses spectral nudging. Emphasis is given to experiments with different nudging coefficients and with nudging applied to different variables in the grid analysis nudging; in spectral nudging, we focus on testing the nudging coefficients and on nudging different wave numbers on different model levels.
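The difference between the two techniques can be illustrated in one dimension (all fields, coefficients, and the cutoff below are invented): grid nudging relaxes the model field toward the driving field at every grid point, while spectral nudging relaxes only the low-wavenumber part of the difference, leaving the model's small scales free:

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
driving = np.sin(x)                          # large-scale driving field
model = np.sin(x) + 0.5 * np.sin(8.0 * x)    # model adds small-scale detail

g = 0.1          # nudging coefficient per step
cutoff = 3       # retain wavenumbers k <= cutoff in the nudging term

def grid_nudge(field):
    """Relax toward the driving field everywhere (analysis nudging)."""
    return field + g * (driving - field)

def spectral_nudge(field):
    """Relax only the low-wavenumber part of the difference."""
    diff = np.fft.rfft(driving - field)
    diff[cutoff + 1:] = 0.0                  # drop small-scale components
    return field + g * np.fft.irfft(diff, n)

gsol = grid_nudge(model)
ssol = spectral_nudge(model)
small_scale = lambda f: np.abs(np.fft.rfft(f)[8]) / (n / 2)  # k=8 amplitude

# Grid nudging damps the model's small-scale wave; spectral nudging keeps it.
print(round(small_scale(gsol), 3), round(small_scale(ssol), 3))
```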

  4. Physics Laboratory Investigation of Vocational High School Field Stone and Concrete Construction Techniques in the Central Java Province (Indonesia)

    Science.gov (United States)

    Purwandari, Ristiana Dyah

    2015-01-01

    The aims of this study were to document the infrastructure and physics laboratories of vocational high schools in the Stone and Concrete Construction Techniques expertise field, or Teknik Konstruksi Batu dan Beton (TKBB), in Purwokerto, Central Java Province, and to map the Vocational High School or Sekolah Menengah Kejuruan…

  5. Electronic equipment and radioisotope devices developed in the Institute of Physics and Nuclear Techniques in the years 1975-1983

    International Nuclear Information System (INIS)

    1983-01-01

    A short review of 43 devices developed in the Institute of Physics and Nuclear Techniques of Academy of Mining and Metallurgy in the years 1975-1983 is given. 20 radioisotope arrangements, 14 electronic devices, 7 detectors for gas chromatography and X-ray detection as well as 2 arrangements for sample preparation are presented. (author)

  6. Modern elementary particle physics explaining and extending the standard model

    CERN Document Server

    Kane, Gordon

    2017-01-01

    This book is written for students and scientists wanting to learn about the Standard Model of particle physics. Only an introductory course knowledge about quantum theory is needed. The text provides a pedagogical description of the theory, and incorporates the recent Higgs boson and top quark discoveries. With its clear and engaging style, this new edition retains its essential simplicity. Long and detailed calculations are replaced by simple approximate ones. It includes introductions to accelerators, colliders, and detectors, and several main experimental tests of the Standard Model are explained. Descriptions of some well-motivated extensions of the Standard Model prepare the reader for new developments. It emphasizes the concepts of gauge theories and Higgs physics, electroweak unification and symmetry breaking, and how force strengths vary with energy, providing a solid foundation for those working in the field, and for those who simply want to learn about the Standard Model.

  7. NATO Advanced Study Institute on Advanced Physical Oceanographic Numerical Modelling

    CERN Document Server

    1986-01-01

    This book is a direct result of the NATO Advanced Study Institute held in Banyuls-sur-mer, France, June 1985. The Institute had the same title as this book. It was held at Laboratoire Arago. Eighty lecturers and students from almost all NATO countries attended. The purpose was to review the state of the art of physical oceanographic numerical modelling including the parameterization of physical processes. This book represents a cross-section of the lectures presented at the ASI. It covers elementary mathematical aspects through large scale practical aspects of ocean circulation calculations. It does not encompass every facet of the science of oceanographic modelling. We have, however, captured most of the essence of mesoscale and large-scale ocean modelling for blue water and shallow seas. There have been considerable advances in modelling coastal circulation which are not included. The methods section does not include important material on phase and group velocity errors, selection of grid structures, advanc...

  8. Rock physics model of glauconitic greensand from the North Sea

    DEFF Research Database (Denmark)

    Hossain, Zakir; Mukerji, Tapan; Dvorkin, Jack

    2011-01-01

    Results of rock-physics modeling and thin-section observations indicate that variations in the elastic properties of greensand can be explained by two main diagenetic phases: silica cementation and berthierine cementation. These diagenetic phases dominate the elastic properties of the greensand reservoir… …-stiff-sand or a stiff-sand model. Berthierine cement has different growth patterns in different parts of the greensand, resulting in a soft-sand model and an intermediate-stiff-sand model. © 2012 Society of Exploration Geophysicists.

  9. Investigation of some physical properties of ZnO nanofilms synthesized by micro-droplet technique

    Directory of Open Access Journals (Sweden)

    N. Hamzaoui

    Full Text Available In this paper, ZnO nanocrystals were synthesized using a simple micro-droplet technique from a solution prepared by dissolving zinc acetate di-hydrate [Zn(CH3COO)2·2H2O] in methanol. Micro-droplets were deposited on glass substrates heated at 100 °C, and the obtained ZnO film samples were investigated by XRD, AES, AFM, ellipsometry and PL. XRD patterns reveal the wurtzite structure of ZnO, where the lattice parameters a and c, calculated from the XRD signals, show the nanometric character of the ZnO nanoparticles. The chemical composition of the ZnO film surfaces was verified by Auger electron spectroscopy (AES). In the Auger signals, the oxygen (O-KLL) and zinc (Zn-LMM) Auger transitions clearly indicate the presence of Zn-O bonding. The surface topography of the samples was measured by atomic force microscopy (AFM), where ZnO nanoparticles with average size ranging between 20 and 80 nm were determined. Some optical properties, such as the dielectric constants, refractive index, extinction coefficient and optical band gap, were determined from ellipsometry analysis. The dispersion of the refractive index was discussed in terms of both the Cauchy parameters and the Wemple & DiDomenico single-oscillator model. The photoluminescence (PL) measurements exhibited two emission peaks. The first, at 338 nm, corresponding to the band gap of ZnO, is due to excitonic emission, while the second, around 400 nm, is attributed to singly ionized oxygen vacancies. Keywords: ZnO nanoparticles, Micro-droplet technique, AFM, Auger spectroscopy, Ellipsometry, Photoluminescence (PL)
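For illustration, the two dispersion descriptions named in the abstract can be written as short functions; the parameter values below are invented placeholders, not the fitted values of the paper:

```python
def cauchy_n(lam_nm, A=1.90, B=2.0e4):
    """Cauchy model: n(lambda) = A + B / lambda^2, with lambda in nm."""
    return A + B / lam_nm**2

def wemple_didomenico_n(E_eV, E0=6.4, Ed=17.0):
    """Single-oscillator model: n^2 - 1 = E0*Ed / (E0^2 - E^2), energies in eV."""
    return (1.0 + E0 * Ed / (E0**2 - E_eV**2)) ** 0.5

n_550 = cauchy_n(550.0)           # refractive index at 550 nm
n_2eV = wemple_didomenico_n(2.0)  # index at a photon energy of 2 eV
print(round(n_550, 3), round(n_2eV, 3))
```

Fitting A, B (or E0, Ed) to the ellipsometric n(λ) data is a simple least-squares problem once the measured dispersion curve is available.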

  10. Structure and physical properties of bio membranes and model membranes

    International Nuclear Information System (INIS)

    Tibor Hianik

    2006-01-01

    Bio membranes belong to the most important structures of the cell and the cell organelles. They not only play the structural role of a barrier separating the interior of the cell from its external environment, but also contain various functional molecules, like receptors, ionic channels, carriers and enzymes. The cell membrane also preserves the non-equilibrium state of the cell, which is crucial for maintaining its excitability and other signaling functions. The growing interest in bio membranes is also due to their unique physical properties. From a physical point of view, bio membranes, which are composed of a lipid bilayer into which integral proteins are incorporated and on whose surface peripheral proteins and polysaccharides are anchored, represent a liquid crystal of smectic type. Bio membranes are characterized by anisotropy of their structural and physical properties. The complex structure of bio membranes makes the study of their physical properties rather difficult. Therefore, several model systems that mimic the structure of bio membranes were developed. Among them, lipid monolayers at an air-water interface, bilayer lipid membranes, supported bilayer lipid membranes and liposomes are best known. This work is focused on an introduction to the physical world of bio membranes and their models. After an introduction to membrane structure and the history of its elucidation, the physical properties of bio membranes and their models are presented stepwise. The main focus is on the properties of lipid monolayers, bilayer lipid membranes, supported bilayer lipid membranes and liposomes, which have been studied in most detail.
This lecture has a tutorial character that may be useful for undergraduate and graduate students in the areas of biophysics, biochemistry, molecular biology and bioengineering; however, it also contains original work of the author, his co-workers and PhD students, which may also be useful for specialists working in the field of bio membranes and model

  11. Plasma physics modeling and the Cray-2 multiprocessor

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-01-01

    The importance of computer modeling in the magnetic fusion energy research program is discussed. The need for the most advanced supercomputers is described. To meet the demand for more powerful scientific computers to solve larger and more complicated problems, the computer industry is developing multiprocessors. The role of the Cray-2 in plasma physics modeling is discussed with some examples. 28 refs., 2 figs., 1 tab

  12. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with potential for estimating phosphorus losses, for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  13. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  14. Weak interactions physics: from its birth to the electroweak model

    International Nuclear Information System (INIS)

    Lopes, J.L.

    1987-01-01

    A review of the evolution of weak interaction physics from its beginning (Fermi-Majorana-Perrin) to the electroweak model (Glashow-Weinberg-Salam). Contributions from Brazilian physicists are specially mentioned, as well as the first prediction of electroweak unification, of the neutral intermediate vector boson Z0, and the first approximate value of the mass of the W bosons. (Author) [pt

  15. Measuring damage in physical model tests of rubble mounds

    NARCIS (Netherlands)

    Hofland, B.; Rosa-Santos, Paulo; Taveira-Pinto, Francisco; Lemos, Rute; Mendonça, A.; Juana Fortes, C

    2017-01-01

    This paper studies novel ways to evaluate armour damage in physical models of coastal structures. High-resolution damage data for reference rubble mound breakwaters obtained under the HYDRALAB+ joint-research project are analysed and discussed. These tests are used to analyse the way to describe

  16. Physical and numerical modelling of low mach number compressible flows

    International Nuclear Information System (INIS)

    Paillerre, H.; Clerc, S.; Dabbene, F.; Cueto, O.

    1999-01-01

    This article reviews various physical models that may be used to describe compressible flow at low Mach numbers, as well as the numerical methods developed at DRN to discretize the different systems of equations. A selection of thermal-hydraulic applications illustrate the need to take into account compressibility and multidimensional effects as well as variable flow properties. (authors)

  17. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out at DTU in Task 4, Physical modelling - validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, entitled Enhanced Framework for Forging Design using Reliable Three-Dimensional Simulation (EFFORTS). The report

  18. PHYSICAL AND NUMERICAL MODELING OF ASD EXHAUST DISPERSION AROUND HOUSES

    Science.gov (United States)

    The report discusses the use of a wind tunnel to physically model the dispersion of exhaust plumes from active soil depressurization (ASD) radon mitigation systems in houses. The testing studied the effects of exhaust location (grade level vs. above the eave), as well as house height, roo...

  19. On Practising in Physical Education: Outline for a Pedagogical Model

    Science.gov (United States)

    Aggerholm, K.; Standal, O.; Barker, D. M.; Larsson, H.

    2018-01-01

    Background: Models-based approaches to physical education have in recent years developed as a way for teachers and students to concentrate on a manageable number of learning objectives, and align pedagogical approaches with learning subject matter and context. This paper draws on Hannah Arendt's account of "vita activa" to map existing…

  20. Particle dark matter from physics beyond the standard model

    International Nuclear Information System (INIS)

    Matchev, Konstantin

    2004-01-01

    In this talk I contrast three different particle dark matter candidates, all motivated by new physics beyond the Standard Model: supersymmetric dark matter, Kaluza-Klein dark matter, and scalar dark matter. I then discuss the prospects for their discovery and identification in both direct detection as well as collider experiments

  1. Digital image technology and a measurement tool in physical models

    CSIR Research Space (South Africa)

    Phelp, David

    2006-05-01

    Full Text Available Advances in digital image technology have allowed us to use accurate but relatively cost-effective technology to measure a number of varied activities in physical models. The capturing and manipulation of high resolution digital images can be used...

  2. Speedminton: Using the Tactical Games Model in Secondary Physical Education

    Science.gov (United States)

    Oh, Hyun-Ju; Bullard, Susan; Hovatter, Rhonda

    2011-01-01

    Teaching and learning of sport and sports-related games dominates the curriculum in most secondary physical education programs in America. For many secondary school students, playing games can be exciting and lead to a lifetime of participation in sport-related activities. Using the Tactical Games Model (TGM) (Mitchell et al., 2006) to teach the…

  3. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    Science.gov (United States)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
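The offline/online split at the heart of projection-based MOR can be sketched on a toy undamped system (all dimensions and values below are invented): full solves at a few snapshot frequencies build the projection basis V, and the reduced system is then solved cheaply at the remaining frequencies. The adaptive method of the paper instead grows V during the optimization and monitors a cheap error indicator, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(1)
nfull = 200
L = rng.normal(size=(nfull, nfull))
K = L @ L.T + nfull * np.eye(nfull)   # SPD "stiffness" matrix (toy)
M = np.eye(nfull)                     # "mass" matrix (toy)
f = rng.normal(size=nfull)            # load vector

def full_solve(w):
    """Full-order frequency-response solve at angular frequency w."""
    return np.linalg.solve(K - w**2 * M, f)

# Offline: snapshots at a few frequencies; orthonormal basis V via QR.
snapshots = np.column_stack([full_solve(w) for w in (1.0, 5.0, 10.0)])
V, _ = np.linalg.qr(snapshots)

# Online: project once, then solve a 3x3 system per frequency.
Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f

def reduced_solve(w):
    q = np.linalg.solve(Kr - w**2 * Mr, fr)
    return V @ q                      # lift back to the full space

# Check the reduced model at a frequency between the snapshots.
w_test = 6.0
u_full = full_solve(w_test)
err = np.linalg.norm(u_full - reduced_solve(w_test)) / np.linalg.norm(u_full)
print(err)
```

Each online solve involves a 3x3 system instead of a 200x200 one, which is the source of the speed-up reported for the hearing-aid model.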

  4. Physical-Socio-Economic Modeling of Climate Change

    Science.gov (United States)

    Chamberlain, R. G.; Vatan, F.

    2008-12-01

    Because of the global nature of climate change, any assessment of the effects of plans, policies, and response to climate change demands a model that encompasses the entire Earth System, including socio-economic factors. Physics-based climate models of the factors that drive global temperatures, rainfall patterns, and sea level are necessary but not sufficient to guide decision making. Actions taken by farmers, industrialists, environmentalists, politicians, and other policy makers may result in large changes to economic factors, international relations, food production, disease vectors, and beyond. These consequences will not be felt uniformly around the globe or even across a given region. Policy models must comprehend all of these considerations. Combining physics-based models of the Earth's climate and biosphere with societal models of population dynamics, economics, and politics is a grand challenge with high stakes. We propose to leverage our recent advances in modeling and simulation of military stability and reconstruction operations to models that address all these areas of concern. Following over twenty years' experience of successful combat simulation, JPL has started developing Minerva, which will add demographic, economic, political, and media/information models to capabilities that already exist. With these new models, for which we have design concepts, it will be possible to address a very wide range of potential national and international problems that were previously inaccessible. Our climate change model builds on Minerva and expands the geographical horizon from playboxes containing regions and neighborhoods to the entire globe. This system consists of a collection of interacting simulation models that specialize in different aspects of the global situation. They will each contribute to and draw from a pool of shared data. The basic models are: the physical model; the demographic model; the political model; the economic model; and the media

  5. Physical modelling of flow and dispersion over complex terrain

    Science.gov (United States)

    Cermak, J. E.

    1984-09-01

    Atmospheric motion and dispersion over topography characterized by irregular (or regular) hill-valley or mountain-valley distributions are strongly dependent upon three general sets of variables. These are variables that describe topographic geometry, synoptic-scale winds and surface-air temperature distributions. In addition, pollutant concentration distributions also depend upon location and physical characteristics of the pollutant source. Overall fluid-flow complexity and variability from site to site have stimulated the development and use of physical modelling for determination of flow and dispersion in many wind-engineering applications. Models with length scales as small as 1:12,000 have been placed in boundary-layer wind tunnels to study flows in which forced convection by synoptic winds is of primary significance. Flows driven primarily by forces arising from temperature differences (gravitational or free convection) have been investigated by small-scale physical models placed in an isolated space (gravitational convection chamber). Similarity criteria and facilities for both forced and gravitational-convection flow studies are discussed. Forced-convection modelling is illustrated by application to dispersion of air pollutants by unstable flow near a paper mill in the state of Maryland and by stable flow over Point Arguello, California. Gravitational-convection modelling is demonstrated by a study of drainage flow and pollutant transport from a proposed mining operation in the Rocky Mountains of Colorado. Other studies in which field data are available for comparison with model data are reviewed.

  6. Application of physical scaling towards downscaling climate model precipitation data

    Science.gov (United States)

    Gaur, Abhishek; Simonovic, Slobodan P.

    2018-04-01

    The physical scaling (SP) method downscales climate model data to local or regional scales, taking into consideration the physical characteristics of the area under analysis. In this study, multiple SP method based models are tested for their effectiveness in downscaling North American regional reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation based indices. Results indicate that SP method based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best performing models are thereafter used to downscale future precipitation projections made by three global circulation models (GCMs) following two emission scenarios: representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.
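
Before indices such as those above are computed, model precipitation is commonly bias-corrected against station records. As one illustrative step, a sketch of empirical quantile mapping, a standard statistical correction technique (not the SP method itself), with synthetic gamma-distributed data standing in for station and model series:

```python
# Empirical quantile mapping: map model values onto the observed distribution
# by matching empirical quantiles. Data here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.8, scale=6.0, size=5000)    # station precipitation
model = rng.gamma(shape=0.8, scale=4.0, size=5000)  # biased model output

def quantile_map(x, model_ref, obs_ref):
    """For each value in x, find its quantile in the model reference
    distribution and return the observed value at the same quantile."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    q = np.clip(q, 0.0, 1.0 - 1e-9)
    return np.quantile(obs_ref, q)

corrected = quantile_map(model, model, obs)
print(f"model mean {model.mean():.2f} -> corrected {corrected.mean():.2f} "
      f"(obs {obs.mean():.2f})")
```

The corrected series reproduces the observed distribution by construction; in a real application the reference distributions would be fitted on a calibration period and applied to an independent validation period.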

  7. Rock-physics modelling of the North Sea greensand

    DEFF Research Database (Denmark)

    Hossain, Zakir

    cemented, whereas Ty Formation is characterized by microcrystalline quartz cement. A series of laboratory experiments including core analysis, capillary pressure measurements, NMR T2 measurements, acoustic velocity measurements, electrical properties measurements and CO2 injection experiments were done...... cementation and berthierine cementation. Initially greensand is a mixture of mainly quartz and glauconite; when weakly cemented, it has relatively low elastic modulus and can be modeled by a Hertz-Mindlin contact model of two types of grains. Silica-cemented greensand has a relatively high elastic modulus...... and can be modeled by an intermediate-stiff-sand or a stiff-sand model. Berthierine cement has a different growth patterns in different part of the greensand, resulting in a soft-sand model and an intermediate-stiff-sand model. The second rock-physical model predicts Vp-Vs relations and AVO of a greensand...

  8. PHYSICS

    CERN Multimedia

    Joe Incandela

    There have been two plenary physics meetings since the December CMS week. The year started with two workshops, one on the measurements of the Standard Model necessary for “discovery physics” as well as one on the Physics Analysis Toolkit (PAT). Meanwhile the tail of the “2007 analyses” is going through the last steps of approval. It is expected that by the end of January all analyses will have converted to using the data from CSA07 – which include the effects of miscalibration and misalignment. January Physics Days The first Physics Days of 2008 took place on January 22-24. The first two days were devoted to comprehensive reports from the Detector Performance Groups (DPG) and Physics Objects Groups (POG) on their planning and readiness for early data-taking followed by approvals of several recent studies. Highlights of POG presentations are included below while the activities of the DPGs are covered elsewhere in this bulletin. January 24th was devo...

  9. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran

    2017-01-01

    Full Text Available Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire (n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  10. Neighborhood Design, Physical Activity, and Wellbeing: Applying the Walkability Model.

    Science.gov (United States)

    Zuniga-Teran, Adriana A; Orr, Barron J; Gimblett, Randy H; Chalfoun, Nader V; Guertin, David P; Marsh, Stuart E

    2017-01-13

    Neighborhood design affects lifestyle physical activity, and ultimately human wellbeing. There are, however, a limited number of studies that examine neighborhood design types. In this research, we examine four types of neighborhood designs: traditional development, suburban development, enclosed community, and cluster housing development, and assess their level of walkability and their effects on physical activity and wellbeing. We examine significant associations through a questionnaire ( n = 486) distributed in Tucson, Arizona using the Walkability Model. Among the tested neighborhood design types, traditional development showed significant associations and the highest value for walkability, as well as for each of the two types of walking (recreation and transportation) representing physical activity. Suburban development showed significant associations and the highest mean values for mental health and wellbeing. Cluster housing showed significant associations and the highest mean value for social interactions with neighbors and for perceived safety from crime. Enclosed community did not obtain the highest means for any wellbeing benefit. The Walkability Model proved useful in identifying the walkability categories associated with physical activity and perceived crime. For example, the experience category was strongly and inversely associated with perceived crime. This study provides empirical evidence of the importance of including vegetation, particularly trees, throughout neighborhoods in order to increase physical activity and wellbeing. Likewise, the results suggest that regular maintenance is an important strategy to improve mental health and overall wellbeing in cities.

  11. Nuclear physics aspects in the parton model of Feynman

    International Nuclear Information System (INIS)

    Pauchy Hwang, W.Y.

    1995-01-01

    The basic fact that pions couple strongly to nucleons has dominated nuclear physics thinking since the birth of the field more than sixty years ago. The parton model of Feynman, in which the structure of a nucleon (or a hadron) is characterized by a set of parton distributions, was proposed originally in the late 1960's to treat high energy deep inelastic scattering, and later many other high energy physics experiments involving hadrons. Introduction of the concept of parton distributions signifies the departure of particle physics from nuclear physics. Following the suggestion that the sea quark distributions in a nucleon, at low and moderate Q² (at least up to a few GeV²), can be attributed primarily to the probability of finding such quarks or antiquarks in the mesons (or recoiling baryons) associated with the nucleon, the author examines how nuclear physics aspects offer quantitative understanding of several recent experimental results, including the observed violation of the Gottfried sum rule and the so-called “proton spin crisis”. These results suggest that determination of parton distributions of a hadron at Q² of a few GeV² (and at small x) must in general take into account nuclear physics aspects. Implications of these results for other high-energy reactions, such as semi-inclusive hadron production in deep inelastic scattering, are also discussed.

  12. PHYSICS

    CERN Multimedia

    Guenther Dissertori

    The time period between the last CMS week and this June was one of intense activity with numerous get-together targeted at addressing specific issues on the road to data-taking. The two series of workshops, namely the “En route to discoveries” series and the “Vertical Integration” meetings continued.   The first meeting of the “En route to discoveries” sequence (end 2007) had covered the measurements of the Standard Model signals as necessary prerequisite to any claim of signals beyond the Standard Model. The second meeting took place during the Feb CMS week and concentrated on the commissioning of the Physics Objects, whereas the third occurred during the April Physics Week – and this time the theme was the strategy for key new physics signatures. Both of these workshops are summarized below. The vertical integration meetings also continued, with two DPG-physics get-togethers on jets and missing ET and on electrons and photons. ...

  13. PHYSICS

    CERN Multimedia

    Chris Hill

    2012-01-01

    The months that have passed since the last CMS Bulletin have been a very busy and exciting time for CMS physics. We have gone from observing the very first 8 TeV collisions produced by the LHC to collecting a dataset of the collisions that already exceeds that recorded in all of 2011. All in just a few months! Meanwhile, the analysis of the 2011 dataset and publication of the subsequent results has continued. These results come from all the PAGs in CMS, including searches for the Higgs boson and other new phenomena, that have set the most stringent limits on an ever increasing number of models of physics beyond the Standard Model including dark matter, Supersymmetry, and TeV-scale gravity scenarios, top-quark physics where CMS has overtaken the Tevatron in the precision of some measurements, and bottom-quark physics where CMS made its first discovery of a new particle, the Ξb*0 baryon (candidate event pictured below). Image 2:  A Ξb*0 candidate event At the same time POGs and PAGs...

  14. Physical injury assessment of male versus female chiropractic students when learning and performing various adjustive techniques: a preliminary investigative study

    Directory of Open Access Journals (Sweden)

    Huber Laura L

    2006-08-01

    Full Text Available Abstract Background Reports of musculoskeletal injuries that some chiropractic students experienced while in the role of adjustor became increasingly evident and developed into the basis of this study. The main objective of this study was to survey a select student population and identify, by gender, the specific types of musculoskeletal injuries they experienced when learning adjustive techniques in the classroom, and performing them in the clinical setting. Methods A survey was developed to record musculoskeletal injuries that students reported to have sustained while practicing chiropractic adjustment set-ups and while delivering adjustments. The survey was modeled from similar instruments used in the university's clinic as well as those used in professional practice. Stratified sampling was used to obtain participants for the study. Data reported the anatomical areas of injury, adjustive technique utilized, the type of injury received, and the recovery time from sustained injuries. The survey also inquired as to the type and area of any past physical injuries as well as the mechanism(s) of injury. Results Data obtained from the study identified injuries of the shoulder, wrist, elbow, neck, low back, and mid-back. The low back was the most common injury site reported by females, and the neck was the most common site reported by males. Wrist injuries were reported by 17% of female respondents and 1% of male respondents. A total of 13% of female respondents reported shoulder injuries, whereas less than 1% of male respondents indicated similar complaints. Conclusion The data collected from the project indicated that obtaining further information on the subject would be worthwhile, and could provide an integral step toward developing methods of behavior modification in an attempt to reduce and/or prevent the incidence of musculoskeletal injuries.

  15. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET)

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET) has been applied to model turbine tower vibration to good effect, providing an understanding of the tower vibration dynamic characteristics and the main factors influencing these. The developed tower vibration model comprises two different parts: a sub-model used for below rated wind speed; and another for above rated wind speed. Supervisory control and data acquisition system (SCADA) data from a single wind turbine collected from March to April 2006 is used in the modeling. Model validation has been subsequently undertaken and is presented. This research has demonstrated the effectiveness of the NSET approach to tower vibration; in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry that is a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
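
The core of NSET can be sketched in a few lines: a memory matrix of historical healthy states and a nonlinear similarity operator produce, for each new observation, an expected state, and the residual between observed and expected values is the monitored quantity. The similarity kernel, the choice of state variables, and all numbers below are illustrative assumptions, not the SCADA data used in the paper.

```python
# Minimal NSET sketch: x_hat = D (D' (*) D)^-1 (D' (*) x), where (*) is a
# nonlinear similarity operator and D is the memory matrix of healthy states.
import numpy as np

def similarity(a, b):
    """Nonlinear operator (*): closeness of two state vectors."""
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def nset_estimate(D, x, ridge=1e-8):
    """Expected state for observation x; a small ridge term keeps the
    similarity matrix numerically invertible."""
    m = D.shape[1]
    G = np.array([[similarity(D[:, i], D[:, j]) for j in range(m)]
                  for i in range(m)])                      # D' (*) D
    a = np.array([similarity(D[:, i], x) for i in range(m)])  # D' (*) x
    w = np.linalg.solve(G + ridge * np.eye(m), a)
    return D @ w

# Memory matrix of healthy states: wind speed, rotor speed, tower vibration
wind = np.linspace(4.0, 12.0, 20)
D = np.vstack([wind, 0.9 * wind, 0.05 * wind**2])

x_normal = np.array([8.0, 7.2, 3.2])   # consistent with the healthy states
x_fault = np.array([8.0, 7.2, 6.0])    # abnormally high tower vibration
resid_normal = np.linalg.norm(x_normal - nset_estimate(D, x_normal))
resid_fault = np.linalg.norm(x_fault - nset_estimate(D, x_fault))
print(f"healthy residual {resid_normal:.2f}, faulty residual {resid_fault:.2f}")
```

In practice the memory matrix is built from SCADA records taken during confirmed healthy operation, and a threshold on the residual (derived from the training-residual distribution) triggers the condition-monitoring alarm.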

  16. Modelling Question Difficulty in an A Level Physics Examination

    Science.gov (United States)

    Crisp, Victoria; Grayson, Rebecca

    2013-01-01

    "Item difficulty modelling" is a technique used for a number of purposes such as to support future item development, to explore validity in relation to the constructs that influence difficulty and to predict the difficulty of items. This research attempted to explore the factors influencing question difficulty in a general qualification…

  17. The use of production management techniques in the construction of a large high energy physics detector

    International Nuclear Information System (INIS)

    Murray, St.

    2000-01-01

    The lifetime of a particle detector can be divided into four periods: design, engineering drawing, manufacturing, and operation-maintenance. Some computer systems and software handle the management of design and engineering drawing, others handle data acquisition and processing, but no systems exist for the manufacturing phase, where ad hoc solutions are devised every time a problem arises. New-generation detectors to be built for the experiments at the future LHC (Large Hadron Collider) will be ten times more complex than today's detectors and are expected to operate for around 15 years. This complexity, combined with a longer operating life, calls for an efficient and evolutive information system to handle the manufacturing phase. In this work the author applied the techniques of production management to the manufacturing phase of the Compact Muon Solenoid (CMS), with three aims: 1) to propose a model describing each component and its manufacturing process, 2) to develop graphic software for viewing the assembly of the different components of the detector, and 3) to evaluate the production resources of such a device, that is, to balance demand with capacity and to optimize the use of production means. (A.C.)

  18. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.

  19. Physical modeling of spent-nuclear-fuel container

    Directory of Open Access Journals (Sweden)

    Wang Liping

    2012-11-01

    Full Text Available A new physical simulation model was developed to simulate the casting process of the ductile iron heavy section spent-nuclear-fuel container. In this physical simulation model, a heating unit with DR24 Fe-Cr-Al heating wires was used to compensate the heat loss across the non-natural surfaces of the sample, and a precise and reliable casting temperature controlling/monitoring system was employed to ensure the thermal behavior of the simulated casting to be similar to the actual casting. Also, a mould system was designed, in which changeable mould materials can be used for both the outside and inside moulds for different applications. The casting test was carried out with the designed mould and the cooling curves of central and edge points at different isothermal planes of the casting were obtained. Results show that for most isothermal planes, the temperature control system can keep the temperature differences within 6 ℃ between the edge points and the corresponding center points, indicating that this new physical simulation model has high simulation accuracy, and the mould developed can be used for optimization of casting parameters of spent-nuclear-fuel container, such as composition of ductile iron, the pouring temperature, the selection of mould material and design of cooling system. In addition, to maintain the spheroidalization of the ductile iron, the force-chilling should be used for the current physical simulation to ensure the solidification of casting in less than 2 h.

  20. Undergraduate students’ challenges with computational modelling in physics

    Directory of Open Access Journals (Sweden)

    Simen A. Sørby

    2012-12-01

    Full Text Available In later years, computational perspectives have become essential parts in several of the University of Oslo’s natural science studies. In this paper we discuss some main findings from a qualitative study of the computational perspectives’ impact on the students’ work with their first course in physics – mechanics – and their learning and meaning making of its contents. Discussions of the students’ learning of physics are based on sociocultural theory, which originates in Vygotsky and Bakhtin, and subsequent physics education research. Results imply that the greatest challenge for students when working with computational assignments is to combine knowledge from previously known, but separate contexts. Integrating knowledge of informatics, numerical and analytical mathematics and conceptual understanding of physics appears as a clear challenge for the students. We also observe a lack of awareness concerning the limitations of physical modelling. The students need help with identifying the appropriate knowledge system or “tool set” for the different tasks at hand; they need help to create a plan for their modelling and to become aware of its limits. In light of this, we propose that an instructive and dialogic text as a basis for the exercises, in which the emphasis is on specification, clarification and elaboration, would be of potentially great aid for students who are new to computational modelling.

  1. Model Independent Search For New Physics At The Tevatron

    Energy Technology Data Exchange (ETDEWEB)

    Choudalakis, Georgios [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2008-04-01

    The Standard Model of elementary particles cannot be the final theory. There are theoretical reasons to expect the appearance of new physics, possibly at the energy scale of a few TeV. Several possible theories of new physics have been proposed, each with unknown probability to be confirmed. Instead of arbitrarily choosing to examine one of those theories, this thesis is about searching for any sign of new physics in a model-independent way. This search is performed at the Collider Detector at Fermilab (CDF). The Standard Model prediction is implemented in all final states simultaneously, and an array of statistical probes is employed to search for significant discrepancies between data and prediction. The probes are sensitive to overall population discrepancies, shape disagreements in distributions of kinematic quantities of final particles, excesses of events of large total transverse momentum, and local excesses of data expected from resonances due to new massive particles. The result of this search, first in 1 fb⁻¹ and then in 2 fb⁻¹, is null, namely no considerable evidence of new physics was found.
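
The population-discrepancy probe described above amounts to comparing observed event counts against Standard Model predictions in many exclusive final states and correcting the most significant local excess for the number of channels examined (the trials factor). A toy sketch with invented channel names and counts (not the actual CDF channels or probes):

```python
# Toy model-independent scan: find the most discrepant final state and apply
# a crude trials-factor correction assuming independent channels.
import math

channels = {            # final state: (SM expectation, observed count)
    "e mu":        (52.0, 49),
    "mu mu + jet": (310.0, 330),
    "e e + met":   (97.0, 124),
    "3 jets":      (1204.0, 1180),
}

def poisson_p_excess(mu, n):
    """One-sided p-value P(N >= n) for Poisson mean mu, summed in log space
    to avoid overflow for large counts."""
    p_lower = sum(math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))
                  for k in range(n))
    return max(1.0 - p_lower, 0.0)

best = min(channels, key=lambda c: poisson_p_excess(*channels[c]))
p_local = poisson_p_excess(*channels[best])
p_global = 1.0 - (1.0 - p_local) ** len(channels)   # crude trials correction
print(f"most discrepant: {best}, local p = {p_local:.3g}, "
      f"global p = {p_global:.3g}")
```

A real analysis of this kind scans hundreds of final states and several kinematic probes, and estimates the trials factor with pseudo-experiments rather than the independence approximation used here.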

  2. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    in the estimated/predicted property values, how to assess the quality and reliability of the estimated/predicted property values? The paper will review a class of models for prediction of physical and thermodynamic properties of organic chemicals and their mixtures based on the combined group contribution – atom......Physical and thermodynamic property in the form of raw data or estimated values for pure compounds and mixtures are important pre-requisites for performing tasks such as, process design, simulation and optimization; computer aided molecular/mixture (product) design; and, product-process analysis...
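
A group-contribution model of the kind reviewed here expresses a property as a function of counts of functional groups, each carrying a fitted contribution. A minimal sketch in the style of the Joback method for normal boiling point; the contribution values shown are illustrative stand-ins, not a vetted parameter table:

```python
# Toy group-contribution property estimation (Joback-style boiling point).
# Contribution values are illustrative assumptions.
GROUP_TB = {"CH3": 23.6, "CH2": 22.9, "OH": 92.9}   # K per group occurrence

def boiling_point_estimate(groups):
    """Joback-style estimate: Tb = 198 K + sum of group contributions."""
    return 198.0 + sum(GROUP_TB[g] * n for g, n in groups.items())

# 1-propanol: CH3-CH2-CH2-OH
tb = boiling_point_estimate({"CH3": 1, "CH2": 2, "OH": 1})
print(f"estimated normal boiling point ~ {tb:.0f} K")
```

The combined group-contribution plus atom-connectivity models discussed in the paper refine this idea with higher-order groups and connectivity indices, and, importantly, attach uncertainty estimates to the predicted values.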

  3. TU Electric reactor physics model verification: Power reactor benchmark

    International Nuclear Information System (INIS)

    Willingham, C.E.; Killgore, M.R.

    1988-01-01

    Power reactor benchmark calculations using the advanced code package CASMO-3/SIMULATE-3 have been performed for six cycles of Prairie Island Unit 1. The reload fuel designs for the selected cycles included gadolinia as a burnable absorber, natural uranium axial blankets and increased water-to-fuel ratio. The calculated results for both startup reactor physics tests (boron endpoints, control rod worths, and isothermal temperature coefficients) and full power depletion results were compared to measured plant data. These comparisons show that the TU Electric reactor physics models accurately predict important measured parameters for power reactors

  4. The strong interactions beyond the standard model of particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Bergner, Georg [Muenster Univ. (Germany). Inst. for Theoretical Physics

    2016-11-01

    SuperMUC is one of the most convenient high performance machines for our project since it offers high performance and flexibility across different applications. This is of particular importance for investigations of new theories, where on the one hand the parameters and systematic uncertainties have to be estimated in smaller simulations, and on the other hand a large computational performance is needed for estimating the scale at zero temperature. Our project is just a first investigation of new physics beyond the standard model of particle physics, and we hope to proceed with our studies towards more involved Technicolour candidates, supersymmetric QCD, and extended supersymmetry.

  5. Detecting physics beyond the Standard Model with the REDTOP experiment

    Science.gov (United States)

    González, D.; León, D.; Fabela, B.; Pedraza, M. I.

    2017-10-01

    REDTOP is an experiment at its proposal stage. It belongs to the High Intensity class of experiments. REDTOP will use a 1.8 GeV continuous proton beam impinging on a fixed target. It is expected to produce about 10¹³ η mesons per year. The main goal of REDTOP is to look for physics beyond the Standard Model by detecting rare η decays. The detector is designed with innovative technologies based on the detection of prompt Cherenkov light, such that interesting events can be observed and the background events are efficiently rejected. The experimental design, the physics program and the running plan of the experiment is presented.

  6. Future high precision experiments and new physics beyond Standard Model

    International Nuclear Information System (INIS)

    Luo, Mingxing.

    1993-01-01

    High precision (< 1%) electroweak experiments that have been done, or are likely to be done in this decade, are examined. On the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties to the one-loop level, the implications of the corresponding experimental measurements for various types of possible new physics entering at the tree or loop level are investigated. Certain experiments appear to have special promise as probes of the new physics considered here.

  7. Noise stabilization effects in models of interdisciplinary physics

    International Nuclear Information System (INIS)

    Spagnolo, B; Augello, G; Caldara, P; Fiasconaro, A; La Cognata, A; Pizzolato, N; Valenti, D; Dubkov, A A; Pankratov, A L

    2009-01-01

    Metastability is a generic feature of many nonlinear systems, and the problem of the lifetime of metastable states involves fundamental aspects of nonequilibrium statistical mechanics. The investigation of noise-induced phenomena in far-from-equilibrium systems is one of the approaches used to understand the behaviour of physical and biological complex systems. The enhancement of the lifetime of metastable states through the noise enhanced stability effect, and the role played by the resonant activation phenomenon, will be discussed in models of interdisciplinary physics: (i) polymer translocation dynamics; (ii) the transient regime of the FitzHugh-Nagumo model; (iii) market stability in a nonlinear Heston model; (iv) the dynamics of Josephson junctions; (v) metastability in a quantum bistable system.
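
    The escape-from-a-metastable-well setting that underlies effects such as noise enhanced stability and resonant activation can be illustrated with a minimal Euler-Maruyama simulation. The sketch below is an illustration only, not the authors' model: the quartic double-well potential, time step, and noise intensities are all assumed for demonstration. It estimates the mean first-passage time out of a metastable well, which grows sharply as the noise intensity is reduced.

```python
import numpy as np

def mean_escape_time(noise_intensity, n_paths=100, dt=1e-3, t_max=30.0, seed=0):
    """Mean first-passage time out of the left well of U(x) = x**4/4 - x**2/2
    (minima at x = +/-1, barrier at x = 0), via the Euler-Maruyama scheme.
    Paths that never escape within t_max are counted as t_max (censored)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, -1.0)              # start in the metastable left well
    escaped_at = np.full(n_paths, t_max)    # censored value for surviving paths
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    while t < t_max and alive.any():
        drift = x[alive] - x[alive] ** 3    # -U'(x)
        noise = np.sqrt(2.0 * noise_intensity * dt) * rng.standard_normal(alive.sum())
        x[alive] += drift * dt + noise
        t += dt
        crossed = alive & (x > 0.5)         # past the barrier, into the right well
        escaped_at[crossed] = t
        alive &= ~crossed
    return escaped_at.mean()
```

    With these assumed parameters, weak noise (e.g. intensity 0.05) yields a much longer mean lifetime than strong noise (e.g. 0.5); this Kramers-type regime is the baseline against which lifetime-enhancement effects are typically measured.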

  8. Systems and models with anticipation in physics and its applications

    International Nuclear Information System (INIS)

    Makarenko, A

    2012-01-01

    Investigations of recent physical processes and real applications require increasingly refined models that incorporate new properties. One such property is anticipation (that is, taking advanced effects into account). A special kind of advanced system is considered, namely the strong anticipatory systems introduced by D. Dubois. Some definitions, examples, and peculiarities of the solutions are described; the main feature is the presumable multivaluedness of the solutions. Possible physical examples of such systems are proposed: self-organization problems; dynamical chaos; synchronization; advanced potentials; structures at the micro-, meso-, and macro-levels; cellular automata; computing; and neural network theory. Applications to modelling social, economical, technical, and natural systems are also described.

  9. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B−L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs, and fit the new physics coupling to the currently available isotope shift measurements.
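
    The King-plot construction behind this test can be sketched numerically: modified isotope shifts of two transitions, plotted against each other across isotope pairs, fall on a straight line in the absence of new physics, so a residual nonlinearity exceeding the measurement uncertainty would signal a new interaction. The numbers below are invented purely for illustration.

```python
import numpy as np

# Hypothetical modified isotope shifts (e.g. in GHz*amu) for two transitions
# measured over four isotope pairs; the values are made up for illustration.
mshift_1 = np.array([1000.0, 1010.0, 1020.0, 1030.0])   # transition 1
mshift_2 = 2.5 * mshift_1 + 300.0                       # transition 2, exactly linear

# Fit the King line and measure the residual nonlinearity.
slope, intercept = np.polyfit(mshift_1, mshift_2, 1)
nonlinearity = np.max(np.abs(mshift_2 - (slope * mshift_1 + intercept)))
# In the linear (Standard Model, factorized electronic/nuclear) case the
# residual vanishes; a new boson coupling to electrons and neutrons would
# lift it above the experimental uncertainty.
```

    In a real analysis the fit is weighted by the measurement uncertainties and the bound on the new physics coupling follows from how large a nonlinearity the data still allow.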

  10. Physical model and calculation code for fuel coolant interactions

    International Nuclear Information System (INIS)

    Goldammer, H.; Kottowski, H.

    1976-01-01

    A physical model is proposed to describe fuel-coolant interactions in shock-tube geometry. Guided by the experimental results, an interaction model which divides each cycle into three phases is proposed: the first phase is the fuel-coolant contact, the second is the ejection and re-entry of the coolant, and the third is the impact and fragmentation. The physical background of these phases is illustrated in the first part of this paper; mathematical expressions of the model are given in the second part. A principal feature of the computational method is the consistent application of the Fourier equation throughout the whole interaction process. The results of some calculations, performed for different conditions, are compiled in the attached figures. (Aoki, K.)

  11. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area, due to their novel properties and functionalities, which differ from those of the bulk materials, and also due to their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials when aiming to implement nanotechnology, control their behaviour, and design new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, where the properties of the NPs cannot be studied without knowledge of their size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two/four-point resistance measurements of NP films, and also of the electrical conductivity of NP films using the interdigitated microarray (IDA) electrode. However, microwave techniques such as the open-ended coaxial probe (OCP) and the microwave dielectric resonator (DR) are much more accurate and effective for the electrical characterisation of metallic NPs than these traditional techniques, because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power), and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, which are appealing materials for nano-scale electronics, using the aforementioned microwave techniques. The ease of synthesis, the relatively low cost, the unique catalytic activities, and the control over size and shape were the main considerations in choosing Pt and Au NPs for the present study.
The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full wave simulation using the commercially available Ansoft

  12. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer, and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  13. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator in order to use NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  14. Utilisation of transparent synthetic soil surrogates in geotechnical physical models: A review

    Directory of Open Access Journals (Sweden)

    Abideen Adekunle Ganiyu

    2016-08-01

    Prior to the advent of transparent soils, efforts to obtain non-intrusive measurements of deformations and spatial flow within a soil mass had perceptible limitations. A transparent soil is a two-phase medium composed of synthetic aggregate and fluid components of identical refractive indices, so that the resulting soil is transparent. The transparency facilitates real-life visualisation of the soil continuum in physical models. When applied in conjunction with advanced photogrammetry and image-processing techniques, transparent soils enable the quantification of spatial deformation, displacement, and multi-phase flow in physical model tests. Transparent synthetic soils have been successfully employed as soil surrogates in geotechnical model tests, based on testing results showing that their geotechnical properties replicate those of natural soils. This paper presents a review of transparent synthetic soils and their numerous applications in geotechnical physical models. The properties of the aggregate materials are outlined, and the features of the various transparent clays and sands available in the literature are described. The merits of transparent soil are highlighted, and the need to broaden its application in geotechnical physical model research is emphasised. This paper will serve as a concise compendium on the subject of transparent soils for future researchers in this field.

  15. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bonvini, Marco [Whisker Labs, Oakland, CA (United States); Piette, Mary Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Page, Janie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lin, Guanjing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hu, R. Lilly [Univ. of California, Berkeley, CA (United States)

    2017-08-11

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings, often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or, more broadly, Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models, to characterize system and building behavior. In contrast, physics-based modeling uses first principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches for cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; the advantages and disadvantages with respect to purely data-driven approaches; and the practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential of such tools and future research opportunities.
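
    The hybrid idea of checking measured operation against a physics-based expectation can be sketched in a few lines. The quadratic part-load curve, its coefficients, and the tolerance below are hypothetical placeholders for illustration, not the tool's actual models.

```python
def expected_chiller_kw(load_fraction, rated_kw=100.0, coeffs=(0.15, 0.55, 0.30)):
    """Engineering-style part-load power curve, P = rated * (c0 + c1*plr + c2*plr**2).
    The coefficients here are illustrative, not fitted to any real chiller."""
    c0, c1, c2 = coeffs
    return rated_kw * (c0 + c1 * load_fraction + c2 * load_fraction ** 2)

def flag_fault(measured_kw, load_fraction, tolerance=0.15):
    """FDD rule: flag when measured power deviates from the physics-model
    expectation by more than the given fractional tolerance."""
    expected = expected_chiller_kw(load_fraction)
    return abs(measured_kw - expected) / expected > tolerance
```

    A data-driven layer would typically fit the curve coefficients from historical operation, while the physics-based structure of the curve constrains the fit; the FDD rule then compares live meter data against the resulting expectation.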

  16. Physics Bus: An Innovative Model for Public Engagement

    Science.gov (United States)

    Fox, Claire

    The Physics Bus is about doing science for fun. It is an innovative model for science outreach whose mission is to awaken joy and excitement in physics for all ages and walks of life - especially those underserved by science enrichment. It is a mobile exhibition of upcycled appliances - reimagined by kids - that showcase captivating physics phenomena. Inside our spaceship-themed school bus, visitors will find: a microwave ionized-gas disco party, fog rings that shoot from a wheelbarrow tire, a TV whose electron beam is controlled by a toy keyboard, and over 20 other themed exhibits. The Physics Bus serves a wide range of the public in diverse locations, from local neighborhoods, urban parks, and rural schools to cross-country destinations. Its approachable, friendly, and relaxed environment allows for self-paced and self-directed interactions, providing a positive and engaging experience with science. We believe that this environment enriches lives and inspires people. In this presentation we will talk about the nuts and bolts that make this model work, how the project got started, and the resources that keep it going. We will talk about the advantages of being a grassroots, community-based organization, and how programs like this can best interface with universities. We will explain the benefits of focusing on direct interactions and why our model avoids "teaching" physics content with words. Situating our approach within a body of research on the value of informal science, we will discuss our success in capturing and engaging our audience. By the end of this presentation we hope to broaden your perception of what makes a successful outreach program and encourage you to value and support alternative outreach models such as this one. In collaboration with: Eva Luna, Cornell University; Erik Herman, Cornell University; Christopher Bell, Ithaca City School District.

  17. Integrated modelling of physical, chemical and biological weather

    DEFF Research Database (Denmark)

    Kurganskiy, Alexander

    … This is an online-coupled meteorology-chemistry model where chemical constituents and different types of aerosols are an integrated part of the dynamical model, i.e., these constituents are transported in the same way as, e.g., water vapor and cloud water, and, at the same time, the aerosols can interactively impact radiation and cloud micro-physics. The birch pollen modelling study has been performed for domains covering Europe and western Russia. Verification of the simulated birch pollen concentrations against in-situ observations showed good agreement, obtaining the best score for two Danish sites…

  18. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    A real-time, physically derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated, while practical parameters are exposed for a user or game engine to vary. Listening tests reveal that, for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than the others, and it is speculated that one cause may be linked to the difference between the expectation of a sound and the actual sound for a given object.
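
    For a cylindrical object such as a rod or cable, the dominant component of the swoosh, the Aeolian tone, follows the standard vortex-shedding relation f = St·U/d, with Strouhal number St ≈ 0.2 over a wide Reynolds-number range. The sketch below shows only this one relation, not the paper's full synthesis model:

```python
def aeolian_tone_hz(speed_m_s, diameter_m, strouhal=0.2):
    """Vortex-shedding (Aeolian tone) frequency for a cylinder: f = St * U / d.
    St ~ 0.2 holds roughly for Reynolds numbers from about 1e3 to 2e5."""
    return strouhal * speed_m_s / diameter_m

# Example: a 1 cm diameter rod swung at 20 m/s sheds vortices near 400 Hz,
# so the synthesized swoosh is centred around that frequency and sweeps as
# the swing speed changes.
```

    In a real-time synthesizer this frequency would drive a narrowband noise source whose centre tracks the instantaneous swing speed supplied by the user or game engine.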

  19. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of it discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code.
Analysis of this video data showed that the students' programming practices were highly influenced by

  20. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    Science.gov (United States)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre, and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is therefore of great help, since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review in which both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques for assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed, both for retrieving canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on the one hand, and model and user requirements on the other. This could be achieved by