WorldWideScience

Sample records for rigorous numerical test

  1. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    Science.gov (United States)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and materials research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation based on realistic experimental parameters and signal extraction procedures. By directly comparing with experiments as well as other simulation efforts, our methods offer a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.
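The signal extraction procedure the abstract emphasizes can be illustrated with a toy model (our own construction, not the authors' simulation): in s-SNOM the tip oscillates at a frequency Ω, and the detected signal is demodulated at harmonics nΩ to suppress background and isolate the near-field contribution. Here a hypothetical, strongly nonlinear tip-sample response is demodulated numerically.

```python
import math

# Toy s-SNOM-style demodulation (illustrative, not the paper's model):
# the detected signal is a nonlinear function of the tip height
# z(t) = 1 + cos(w*t); demodulating at harmonics n*w isolates the
# rapidly varying near-field part.

def demodulate(signal, n):
    # magnitude of the n-th Fourier coefficient over one period
    T = len(signal)
    re = sum(s * math.cos(2 * math.pi * n * k / T) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * n * k / T) for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / T

T = 1024
# hypothetical scattered signal, nonlinear in the tip height z(t)
sig = [1.0 / (1.0 + (1.0 + math.cos(2 * math.pi * k / T))) for k in range(T)]
s2, s3 = demodulate(sig, 2), demodulate(sig, 3)
assert s2 > s3 > 0.0  # harmonic amplitudes decay but stay nonzero
```

The geometric decay of the harmonic amplitudes with n mirrors the experimental practice of demodulating at n = 2, 3, 4 to improve background suppression.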

  2. Rigor or mortis: best practices for preclinical research in neuroscience.

    Science.gov (United States)

    Steward, Oswald; Balice-Gordon, Rita

    2014-11-05

    Numerous recent reports document a lack of reproducibility of preclinical studies, raising concerns about potential lack of rigor. Examples of lack of rigor have been extensively documented and proposals for practices to improve rigor are appearing. Here, we discuss some of the details and implications of previously proposed best practices and consider some new ones, focusing on preclinical studies relevant to human neurological and psychiatric disorders. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Structural performance of an IP2 package in free drop test conditions: Numerical and experimental evaluations

    International Nuclear Information System (INIS)

    Lo Frano, Rosa; Pugliese, Giovanni; Nasta, Marco

    2014-01-01

Highlights: • Vertical free drop test. • Qualification of an IP2-type Italian packaging. • Numerical and experimental investigation of the package integrity. • Demonstration that the Italian packaging meets safety requirements. - Abstract: The casks or packaging systems used for the transportation of nuclear materials, especially spent fuel elements, have to be designed according to rigorous acceptance requirements, such as those of the IAEA, in order to protect human beings and the environment against radiation exposure and contamination. This study deals with the free drop test of an Italian-designed packaging system to be used for the transportation of low and intermediate level radioactive wastes. Impact drop experiments were performed in the Scalbatraio laboratory of the DICI – University of Pisa. Dynamic analyses were also carried out, using refined models of both the cask and the target surface, to predict the effects of the impact shock (vertical drop) on the package. The experimental tests and numerical analyses are thoroughly compared, presented and discussed. The numerical approach proves suitable for reproducing the test conditions and results with good reliability.

  4. Estimation of the breaking of rigor mortis by myotonometry.

    Science.gov (United States)

    Vain, A; Kauppila, R; Vuori, E

    1996-05-31

Myotonometry was used to detect the breaking of rigor mortis. The myotonometer is a new instrument that measures the decaying oscillations of a muscle after a brief mechanical impact. The method gives two numerical parameters for rigor mortis, namely the period and the decrement of the oscillations, both of which depend on the time elapsed after death. When rigor mortis was broken by lengthening the muscle, both the oscillation period and the decrement decreased, whereas shortening the muscle caused the opposite changes. Fourteen hours after breaking, the stiffness characteristics (oscillation periods) of the right and left m. biceps brachii had equalized. However, the values for the decrement of the muscle, reflecting the dissipation of mechanical energy, maintained their differences.

  5. Long persistence of rigor mortis at constant low temperature.

    Science.gov (United States)

    Varetto, Lorenzo; Curto, Ombretta

    2005-01-06

We studied the persistence of rigor mortis by physical manipulation. We tested the mobility of the knee in 146 corpses kept under refrigeration at Torino's city mortuary at a constant temperature of +4 degrees C. We found a persistence of complete rigor lasting 10 days in all the cadavers kept under observation, and in one case rigor lasted for 16 days. Between the 11th and the 17th days, a progressively increasing number of corpses showed a change from complete to partial rigor (characterized by partial bending of the articulation). After the 17th day, all the remaining corpses showed partial rigor, and in the two cadavers kept under observation "à outrance" the absolute resolution of rigor mortis occurred on the 28th day. Our results prove that it is possible to find a persistence of rigor mortis much longer than expected when environmental conditions resemble average outdoor winter temperatures in temperate zones. This datum must therefore be considered when a corpse is found in such conditions, so that the long persistence of rigor mortis does not mislead the estimation of the time of death.

  6. Scientific rigor through videogames.

    Science.gov (United States)

    Treuille, Adrien; Das, Rhiju

    2014-11-01

    Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Putrefactive rigor: apparent rigor mortis due to gas distension.

    Science.gov (United States)

    Gill, James R; Landi, Kristen

    2011-09-01

    Artifacts due to decomposition may cause confusion for the initial death investigator, leading to an incorrect suspicion of foul play. Putrefaction is a microorganism-driven process that results in foul odor, skin discoloration, purge, and bloating. Various decompositional gases including methane, hydrogen sulfide, carbon dioxide, and hydrogen will cause the body to bloat. We describe 3 instances of putrefactive gas distension (bloating) that produced the appearance of inappropriate rigor, so-called putrefactive rigor. These gases may distend the body to an extent that the extremities extend and lose contact with their underlying support surface. The medicolegal investigator must recognize that this is not true rigor mortis and the body was not necessarily moved after death for this gravity-defying position to occur.

  8. Rigorous numerical study of strong microwave photon-magnon coupling in all-dielectric magnetic multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Maksymov, Ivan S., E-mail: ivan.maksymov@uwa.edu.au [School of Physics M013, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia); ARC Centre of Excellence for Nanoscale BioPhotonics, School of Applied Sciences, RMIT University, Melbourne, VIC 3001 (Australia); Hutomo, Jessica; Nam, Donghee; Kostylev, Mikhail [School of Physics M013, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia)

    2015-05-21

We demonstrate theoretically a ∼350-fold local enhancement of the intensity of the in-plane microwave magnetic field in multilayered structures made from a magneto-insulating yttrium iron garnet (YIG) layer sandwiched between two non-magnetic layers with a high dielectric constant matching that of YIG. The enhancement is predicted for the excitation regime when the microwave magnetic field is induced inside the multilayer by the transducer of a stripline Broadband Ferromagnetic Resonance (BFMR) setup. By means of a rigorous numerical solution of the Landau-Lifshitz-Gilbert equation consistently with Maxwell's equations, we investigate the magnetisation dynamics in the multilayer. We reveal a strong photon-magnon coupling, which manifests itself as anti-crossing of the ferromagnetic resonance magnon mode supported by the YIG layer and the electromagnetic resonance mode supported by the whole multilayered structure. The frequency of the magnon mode depends on the external static magnetic field, which in our case is applied tangentially to the multilayer in the direction perpendicular to the microwave magnetic field induced by the stripline of the BFMR setup. The frequency of the electromagnetic mode is independent of the static magnetic field. Consequently, the predicted photon-magnon coupling is sensitive to the applied magnetic field and thus can be used in magnetically tuneable metamaterials based on simultaneously negative permittivity and permeability achievable thanks to the YIG layer. We also suggest that the predicted photon-magnon coupling may find applications in microwave quantum information systems.

  9. A rigorous test for a new conceptual model for collisions

    International Nuclear Information System (INIS)

    Peixoto, E.M.A.; Mu-Tao, L.

    1979-01-01

A rigorous theoretical foundation for the previously proposed model is formulated and applied to electron scattering by H2 in the gas phase. A rigorous treatment of the interaction potential between the incident electron and the hydrogen molecule is carried out to calculate differential cross sections for 1 keV electrons, using Glauber's approximation and Wang's molecular wave function for the ground electronic state of H2. Moreover, it is shown for the first time that, when adequately done, the omission of two-center terms does not adversely influence the results of molecular calculations. It is shown that the new model is far superior to the Independent Atom Model (or Independent Particle Model). The accuracy and simplicity of the new model suggest that it may be fruitfully applied to the description of other collision phenomena (e.g., in molecular beam experiments and nuclear physics). A new technique is presented for calculations involving two-center integrals within the framework of the Glauber approximation for scattering. (Author)

  10. Advanced productivity forecast using petrophysical wireline data calibrated with MDT tests and numerical reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Andre, Carlos de [PETROBRAS, Rio de Janeiro, RJ (Brazil); Canas, Jesus A.; Low, Steven; Barreto, Wesley [Schlumberger, Houston, TX (United States)

    2004-07-01

This paper describes an integrated and rigorous approach to productivity evaluation of viscous and medium oil reservoirs using petrophysical models calibrated with permeability derived from mini-tests (Dual Packer) and Vertical Interference Tests (VIT) from open-hole wireline testers (MDT SLB TM). It describes the process from Dual Packer Test and VIT pre-job design, through evaluation via analytical and inverse simulation modeling, calibration and upscaling of petrophysical data into a numerical model, to history matching of Dual Packer Tests and VIT with numerical simulation modeling. Finally, after developing a dynamically calibrated model, we perform productivity forecasts of different well configurations (vertical, horizontal and multilateral wells) for several deep offshore oil reservoirs in order to support well testing activities and future development strategies. The objective was to characterize formation static and dynamic properties early in the field development process, to optimize well testing design and extended well tests (EWT), and to support the development strategies in deep offshore viscous oil reservoirs. This type of oil has limited ability to flow naturally to the surface, and special lifting equipment is required for smooth, optimal well testing and production. The integrated analysis gave a good overall picture of the formation, including permeability anisotropy and fluid dynamics. Subsequent analysis of different well configurations and lifting schemes allows formation productivity to be maximized. The simulation and calibration results are compared to measured well test data. Results from this work show that if the various petrophysical and fluid property sources are integrated properly, an accurate well productivity model can be achieved. If done early in the field development program, this time/knowledge gain could reduce the risk and maximize the development profitability of new blocks (value of the information). (author)

  11. Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation

    Science.gov (United States)

    Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe

    2018-04-01

    In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.
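The radii polynomial approach mentioned above can be illustrated with a one-dimensional toy problem (a sketch of our own; the paper works in a weighted ℓ1 Banach space of Fourier coefficients): given a numerical approximation xbar of a zero of f, one checks a Newton-Kantorovich-type inequality p(r) = Y + Z·r − r < 0, which guarantees a true zero within radius r of xbar.

```python
# One-dimensional sketch of the radii polynomial / Newton-Kantorovich
# argument (illustration only; the paper works in a Banach space of
# space-time Fourier coefficients). We "prove" that f(x) = x**2 - 2 has
# a true zero within radius r of the numerical approximation xbar.

def radii_polynomial_check(xbar, r):
    f = xbar ** 2 - 2.0            # residual of the approximate solution
    A = 1.0 / (2.0 * xbar)         # approximate inverse of Df(xbar)
    Y = abs(A * f)                 # Y bound: size of A*f(xbar)
    Z = abs(A) * 2.0 * r           # Z bound: |1 - A*Df(x)| for |x - xbar| <= r
    # p(r) = Y + Z*r - r < 0 guarantees a unique zero in the closed ball
    return Y + Z * r - r < 0.0

xbar = 1.41421                     # crude numerical root of x**2 = 2
assert radii_polynomial_check(xbar, 1e-4)      # proof succeeds on this ball
assert not radii_polynomial_check(xbar, 1e-7)  # ball smaller than the residual
```

In the actual computer-assisted proofs the bounds Y and Z are evaluated with interval arithmetic so that floating-point rounding is also controlled.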

  12. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    Science.gov (United States)

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  13. Experimental evaluation of rigor mortis. V. Effect of various temperatures on the evolution of rigor mortis.

    Science.gov (United States)

    Krompecher, T

    1981-01-01

Objective measurements were carried out to study the evolution of rigor mortis in rats at various temperatures. Our experiments showed that: (1) at 6 degrees C rigor mortis reaches full development between 48 and 60 hours post mortem, and is resolved at 168 hours post mortem; (2) at 24 degrees C rigor mortis reaches full development at 5 hours post mortem, and is resolved at 16 hours post mortem; (3) at 37 degrees C rigor mortis reaches full development at 3 hours post mortem, and is resolved at 6 hours post mortem; (4) the intensity of rigor mortis grows with increasing temperature (difference between values obtained at 24 degrees C and 37 degrees C); and (5) at 6 degrees C a "cold rigidity" was found, in addition to and independent of rigor mortis.

  14. A methodology for the rigorous verification of plasma simulation codes

    Science.gov (United States)

    Riva, Fabio

    2016-10-01

The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise aimed at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
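Solution verification by Richardson extrapolation, as mentioned above, can be sketched in a few lines (a generic illustration, not the GBS workflow): from a quantity computed at three systematically refined resolutions, one estimates the observed order of accuracy and an extrapolated value whose distance from the finest-grid result quantifies the numerical error.

```python
import math

# Generic solution-verification sketch: estimate the observed order of
# accuracy p and a Richardson-extrapolated value from solutions computed
# at step sizes h, h/2 and h/4. Demonstrated on a second-order central
# difference, whose exact limit (cos(1)) is known.

def central_diff(f, x, h):
    # second-order accurate approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(S_h, S_h2, S_h4, ratio=2.0):
    # observed order and extrapolated value from three refinements
    p = math.log(abs((S_h - S_h2) / (S_h2 - S_h4))) / math.log(ratio)
    S_exact = S_h4 + (S_h4 - S_h2) / (ratio ** p - 1.0)
    return p, S_exact

S = [central_diff(math.sin, 1.0, h) for h in (0.1, 0.05, 0.025)]
p, S_ex = richardson(*S)
assert abs(p - 2.0) < 0.01            # recovers the second-order scheme
assert abs(S_ex - math.cos(1.0)) < 1e-6
```

In a V&V study, an observed order matching the formal order of the scheme is the evidence that the discretization error behaves as expected.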

  15. Rigorous approximation of stationary measures and convergence to equilibrium for iterated function systems

    International Nuclear Information System (INIS)

    Galatolo, Stefano; Monge, Maurizio; Nisoli, Isaia

    2016-01-01

We study the problem of the rigorous computation of the stationary measure and of the rate of convergence to equilibrium of an iterated function system described by a stochastic mixture of two or more dynamical systems that are either all uniformly expanding on the interval or all contracting. In the expanding case, the associated transfer operators satisfy a Lasota–Yorke inequality, and we show how to compute rigorous approximations of the stationary measure in the L¹ norm together with an estimate for the rate of convergence. The rigorous computation requires a computer-aided proof of the contraction of the transfer operators for the maps, and we show that this property propagates to the transfer operators of the IFS. In the contracting case we perform a rigorous approximation of the stationary measure in the Wasserstein–Kantorovich distance, with a rate of convergence, using the same functional analytic approach. We show that a finite computation can produce a realistic computation of all contraction rates for the whole parameter space. We conclude with a description of the implementation and numerical experiments. (paper)
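The basic object being approximated can be illustrated with a toy contracting IFS (our own sketch, without the validated error bounds of the paper): iterating a discretized transfer operator for the maps x ↦ x/2 and x ↦ x/2 + 1/2, each applied with probability 1/2, whose stationary measure is Lebesgue measure on [0, 1].

```python
# Toy sketch (not the authors' validated implementation): approximate the
# stationary measure of the contracting IFS {x/2, x/2 + 1/2}, each map
# chosen with probability 1/2, by iterating a transfer operator on a
# measure discretized over n grid cells. The exact stationary measure is
# the uniform (Lebesgue) measure on [0, 1].

def iterate_measure(weights, n_iter):
    n = len(weights)
    for _ in range(n_iter):
        new = [0.0] * n
        for i, w in enumerate(weights):
            x = (i + 0.5) / n                 # cell midpoint
            for branch in (x / 2.0, x / 2.0 + 0.5):
                j = min(int(branch * n), n - 1)
                new[j] += 0.5 * w             # each map has probability 1/2
        weights = new
    return weights

n = 64
w = [0.0] * n
w[0] = 1.0                                    # start from a point mass at 0
w = iterate_measure(w, 30)
mean = sum(((i + 0.5) / n) * wi for i, wi in enumerate(w))
assert abs(sum(w) - 1.0) < 1e-12              # mass is conserved
assert abs(mean - 0.5) < 0.01                 # uniform measure has mean 1/2
```

The paper's contribution is to turn this kind of iteration into a proof, by bounding the discretization error in the Wasserstein–Kantorovich distance and certifying the contraction rates.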

  16. The Rigor Mortis of Education: Rigor Is Required in a Dying Educational System

    Science.gov (United States)

    Mixon, Jason; Stuart, Jerry

    2009-01-01

    In an effort to answer the "Educational Call to Arms", our national public schools have turned to Advanced Placement (AP) courses as the predominate vehicle used to address the lack of academic rigor in our public high schools. Advanced Placement is believed by many to provide students with the rigor and work ethic necessary to…

  17. Realizing rigor in the mathematics classroom

    CERN Document Server

    Hull, Ted H (Henry); Balka, Don S

    2014-01-01

Rigor put within reach! Rigor: The Common Core has made it policy, and this first-of-its-kind guide takes math teachers and leaders through the process of making it reality. Using the Proficiency Matrix as a framework, the authors offer proven strategies and practical tools for successful implementation of the CCSS mathematical practices, with rigor as a central objective. You'll learn how to: define rigor in the context of each mathematical practice; identify and overcome potential issues, including differentiating instruction and using data

  18. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems.

    Science.gov (United States)

    Araújo, Luciano V; Malkowski, Simon; Braghetto, Kelly R; Passos-Bueno, Maria R; Zatz, Mayana; Pu, Calton; Ferreira, João E

    2011-12-22

Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through a relational database implementation, and 2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proven the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using simple end-user interfaces.

  19. Application of the rigorous method to x-ray and neutron beam scattering on rough surfaces

    International Nuclear Information System (INIS)

    Goray, Leonid I.

    2010-01-01

The paper presents a comprehensive numerical analysis of x-ray and neutron scattering from finitely conducting rough surfaces, performed within the framework of the boundary integral equation method in a rigorous formulation for high ratios of characteristic dimension to wavelength. The single integral equation obtained involves boundary integrals of the single- and double-layer potentials. A more general treatment of the energy conservation law applicable to absorption gratings and rough mirrors is considered. In order to compute the scattering intensity of rough surfaces using the forward electromagnetic solver, Monte Carlo simulation is employed to average the deterministic diffraction grating efficiency of individual surfaces over an ensemble of realizations. Some rules appropriate for numerical implementation of the theory at small wavelength-to-period ratios are presented. The difference between the rigorous approach and approximations can be clearly seen in the specular reflectances of Au mirrors with different roughness parameters at wavelengths where grazing incidence occurs close to or beyond the critical angle. This difference may give rise to wrong estimates of rms roughness and correlation length if they are obtained by comparing experimental data with calculations. Besides, the rigorous approach permits taking into account any known roughness statistics and allows exact computation of diffuse scattering.
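The Monte Carlo averaging step described above can be caricatured in a few lines (an illustrative sketch, not Goray's rigorous solver): the specular reflectance of a rough mirror is estimated by averaging a per-realization complex amplitude over random Gaussian surface heights, and cross-checked against the Debye-Waller-type factor R = R0·exp(−(4πσ·sinθ/λ)²) predicted by approximate theories. The wavelength, rms roughness and grazing angle below are made-up values.

```python
import math
import random

# Illustrative Monte Carlo ensemble average (not the rigorous solver):
# each "realization" contributes a specular amplitude with a phase set by
# a Gaussian surface height h; averaging over realizations and squaring
# gives the coherent reflectance, compared with the Debye-Waller factor.
random.seed(1)
lam, sigma, theta = 1.0e-9, 0.5e-9, math.radians(1.0)  # assumed geometry
k = 2.0 * math.pi / lam
phase_per_height = 2.0 * k * math.sin(theta)  # path-difference phase factor

n = 100000
acc = complex(0.0, 0.0)
for _ in range(n):
    h = random.gauss(0.0, sigma)              # one surface-height realization
    acc += complex(math.cos(phase_per_height * h),
                   math.sin(phase_per_height * h))
R_mc = abs(acc / n) ** 2                      # ensemble-averaged reflectance (R0 = 1)
R_dw = math.exp(-(4.0 * math.pi * sigma * math.sin(theta) / lam) ** 2)
assert abs(R_mc - R_dw) < 0.01
```

The rigorous approach differs precisely where this single-point caricature fails: it retains the full surface statistics and multiple-scattering effects instead of the exponential factor alone.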

  20. Increased scientific rigor will improve reliability of research and effectiveness of management

    Science.gov (United States)

    Sells, Sarah N.; Bassing, Sarah B.; Barker, Kristin J.; Forshee, Shannon C.; Keever, Allison; Goerz, James W.; Mitchell, Michael S.

    2018-01-01

    Rigorous science that produces reliable knowledge is critical to wildlife management because it increases accurate understanding of the natural world and informs management decisions effectively. Application of a rigorous scientific method based on hypothesis testing minimizes unreliable knowledge produced by research. To evaluate the prevalence of scientific rigor in wildlife research, we examined 24 issues of the Journal of Wildlife Management from August 2013 through July 2016. We found 43.9% of studies did not state or imply a priori hypotheses, which are necessary to produce reliable knowledge. We posit that this is due, at least in part, to a lack of common understanding of what rigorous science entails, how it produces more reliable knowledge than other forms of interpreting observations, and how research should be designed to maximize inferential strength and usefulness of application. Current primary literature does not provide succinct explanations of the logic behind a rigorous scientific method or readily applicable guidance for employing it, particularly in wildlife biology; we therefore synthesized an overview of the history, philosophy, and logic that define scientific rigor for biological studies. A rigorous scientific method includes 1) generating a research question from theory and prior observations, 2) developing hypotheses (i.e., plausible biological answers to the question), 3) formulating predictions (i.e., facts that must be true if the hypothesis is true), 4) designing and implementing research to collect data potentially consistent with predictions, 5) evaluating whether predictions are consistent with collected data, and 6) drawing inferences based on the evaluation. Explicitly testing a priori hypotheses reduces overall uncertainty by reducing the number of plausible biological explanations to only those that are logically well supported. Such research also draws inferences that are robust to idiosyncratic observations and

  1. Experimental evaluation of rigor mortis. VI. Effect of various causes of death on the evolution of rigor mortis.

    Science.gov (United States)

    Krompecher, T; Bergerioux, C; Brandt-Casadevall, C; Gujer, H R

    1983-07-01

The evolution of rigor mortis was studied in cases of nitrogen asphyxia, drowning and strangulation, as well as in fatal intoxications due to strychnine, carbon monoxide and curariform drugs, using a modified method of measurement. Our experiments demonstrated that: (1) strychnine intoxication hastens the onset and passing of rigor mortis; (2) CO intoxication delays the resolution of rigor mortis; (3) the intensity of rigor may vary depending upon the cause of death; and (4) if the stage of rigidity is to be used to estimate the time of death, it is necessary (a) to perform a succession of objective measurements of rigor mortis intensity, and (b) to verify the possible presence of factors that could modify its development.

  2. Numerical simulations of rubber bearing tests and shaking table tests

    International Nuclear Information System (INIS)

    Hirata, K.; Matsuda, A.; Yabana, S.

    2002-01-01

Test data concerning rubber bearing tests and shaking table tests of a base-isolated model conducted by CRIEPI are provided to the participants of the Coordinated Research Program (CRP) on 'Intercomparison of Analysis Methods for predicting the behaviour of Seismically Isolated Nuclear Structure', organized by the International Atomic Energy Agency (IAEA), for a comparison study of numerical simulations of base-isolated structures. In this paper, outlines of the test data provided and the numerical simulations of the bearing tests and shaking table tests are described. Using the computer code ABAQUS, numerical simulations of rubber bearing tests are conducted for NRBs and LRBs (data provided by CRIEPI) and for HDRs (data provided by ENEA/ENEL and KAERI). Several strain energy functions are specified according to the rubber material test corresponding to each rubber bearing. As for the lead plug material in the LRB, its mechanical characteristics are re-evaluated and made use of. Simulation results for these rubber bearings show satisfactory agreement with the test results. The shaking table test conducted by CRIEPI is of a base-isolated rigid mass supported by LRBs. Acceleration time histories and displacement time histories of the isolators, as well as cyclic loading test data of the LRB used for the shaking table test, are provided to the participants of the CRP. Simulations of the shaking table tests are conducted for this rigid mass, and also for the steel frame model test conducted by ENEL/ENEA. In the simulation of the rigid mass model test, where LRBs are used, the isolators are modeled either by a bilinear model or a polylinear model. In both cases, the simulation results show good agreement with the test results. In the case of the steel frame model, where HDRs are used as isolators, bilinear and polylinear models are also used for modeling the isolators. The response of the model is simulated comparatively well in the low frequency range of the floor response; however, in
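The bilinear isolator model mentioned above is commonly implemented as an elastic/post-yield force-displacement law with path-dependent hysteresis. A minimal sketch follows (generic textbook form with made-up parameter values, not CRIEPI's ABAQUS model): elastic stiffness k1 up to the yield force fy, post-yield stiffness k2, realized by clamping an elastic trial force between two hardening bounds.

```python
# Generic bilinear hysteresis sketch for a lead-rubber bearing (LRB):
# elastic stiffness k1, post-yield stiffness k2, yield force fy.
# Each step takes an elastic trial force and clamps it to the moving
# post-yield bounds f = +/-fy + k2*d (kinematic-hardening form).

def bilinear_force_history(displacements, k1, k2, fy):
    forces, f, d_prev = [], 0.0, 0.0
    for d in displacements:
        f_trial = f + k1 * (d - d_prev)   # elastic trial step
        f_upper = fy + k2 * d             # upper post-yield bound
        f_lower = -fy + k2 * d            # lower post-yield bound
        f = min(max(f_trial, f_lower), f_upper)
        forces.append(f)
        d_prev = d
    return forces

# one loading cycle: push past yield, unload, reverse (made-up values)
disp = [0.0, 0.005, 0.01, 0.02, 0.01, 0.0, -0.01]
F = bilinear_force_history(disp, k1=2.0e7, k2=2.0e6, fy=1.0e5)
assert F[1] == 2.0e7 * 0.005                       # still elastic: F = k1*d
assert abs(F[3] - (1.0e5 + 2.0e6 * 0.02)) < 1e-6   # on the post-yield branch
```

A polylinear model generalizes this by replacing the single post-yield slope with several segments fitted to the cyclic loading test data.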

  3. Rigorous simulations of a helical core fiber by the use of transformation optics formalism.

    Science.gov (United States)

    Napiorkowski, Maciej; Urbanczyk, Waclaw

    2014-09-22

We report for the first time on rigorous numerical simulations of a helical-core fiber using a full vectorial method based on the transformation optics formalism. We modeled the dependence of the circular birefringence of the fundamental mode on the helix pitch and analyzed the birefringence increase caused by the mode displacement induced by the core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first-order modes, including polarization and intensity distribution. Finally, we show that the rigorous vectorial method predicts the confinement loss of the guided modes better than approximate methods based on equivalent in-plane bending models.

  4. Rigorous Science: a How-To Guide

    Directory of Open Access Journals (Sweden)

    Arturo Casadevall

    2016-11-01

Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word “rigor” is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education.

  5. Differential algebras with remainder and rigorous proofs of long-term stability

    International Nuclear Information System (INIS)

    Berz, Martin

    1997-01-01

    It is shown how, in addition to determining Taylor maps of general optical systems, it is possible to obtain rigorous interval bounds for the remainder term of the n-th order Taylor expansion. To this end, the three elementary operations of addition, multiplication, and differentiation in the Differential Algebraic approach are augmented by suitable interval operations in such a way that a remainder bound of the sum, product, and derivative is obtained from the Taylor polynomial and remainder bound of the operands. The method can be used to obtain bounds for the accuracy with which a Taylor map represents the true map of the particle optical system. In a more general sense, it is also useful for a variety of other numerical problems, including rigorous global optimization of highly complex functions. Combined with methods to obtain pseudo-invariants of repetitive motion and extensions of the Lyapunov and Nekhoroshev stability theories, the latter can be used to guarantee stability for storage rings and other weakly nonlinear systems.
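The idea of augmenting polynomial arithmetic with remainder bounds can be illustrated by a toy one-variable Taylor model on the domain |x| ≤ 1. This sketch is not the Differential Algebraic implementation described above: it tracks only a remainder halfwidth and ignores floating-point rounding, which a truly rigorous implementation would also enclose with outward-rounded interval arithmetic.

```python
N = 2  # truncation order of the Taylor polynomials

def tm_add(a, b):
    """Sum of two Taylor models, each a (coefficients, remainder) pair."""
    (pa, ra), (pb, rb) = a, b
    return ([x + y for x, y in zip(pa, pb)], ra + rb)

def tm_mul(a, b):
    """Product of two Taylor models.

    The coefficient lists are convolved; every term of degree > N, and
    every cross term involving a remainder, is bounded on |x| <= 1 by
    summing absolute values and pushed into the new remainder.
    """
    (pa, ra), (pb, rb) = a, b
    prod = [0.0] * (2 * N + 1)
    for i, ci in enumerate(pa):
        for j, cj in enumerate(pb):
            prod[i + j] += ci * cj
    overflow = sum(abs(c) for c in prod[N + 1:])   # truncated high orders
    ba = sum(abs(c) for c in pa)                   # crude bound for |P_a|
    bb = sum(abs(c) for c in pb)                   # crude bound for |P_b|
    return (prod[:N + 1], overflow + ba * rb + bb * ra + ra * rb)
```

For example, squaring the model for x yields a zero polynomial at order 2 with remainder 1, a valid enclosure of x⁴ on |x| ≤ 1.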

  6. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.

    Science.gov (United States)

    Kelly, David; Majda, Andrew J; Tong, Xin T

    2015-08-25

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.
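For orientation, a minimal perturbed-observation ensemble Kalman filter analysis step is sketched below. This is a generic textbook form, not the authors' forecast model: it uses a naive sample covariance and matrix inverse, with no inflation or localization, and all names and dimensions are illustrative.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One perturbed-observation EnKF analysis step.

    X: (n, m) forecast ensemble of m state vectors of dimension n
    y: (p,) observation vector
    H: (p, n) linear observation operator
    R: (p, p) observation error covariance
    """
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (m - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # each member assimilates its own perturbed copy of the observation
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)                     # analysis ensemble
```

With a small observation error covariance, the analysis ensemble mean is pulled almost all the way to the observation, which is the behaviour catastrophic filter divergence violates in the paper's example.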

  7. Experimental evaluation of rigor mortis. VII. Effect of ante- and post-mortem electrocution on the evolution of rigor mortis.

    Science.gov (United States)

    Krompecher, T; Bergerioux, C

    1988-01-01

    The influence of electrocution on the evolution of rigor mortis was studied in rats. Our experiments showed that: (1) Electrocution hastens the onset of rigor mortis: after an electrocution of 90 s, complete rigor develops as early as 1 h post-mortem (p.m.), compared with 5 h p.m. in the controls. (2) Electrocution hastens the passing of rigor mortis: after an electrocution of 90 s, the first significant decrease occurs at 3 h p.m. (8 h p.m. in the controls). (3) These modifications in the evolution of rigor mortis are less pronounced in the limbs not directly touched by the electric current. (4) In cases of post-mortem electrocution, the changes are slightly less pronounced, the resistance is higher and the absorbed energy is lower than in the ante-mortem electrocution cases. The results are complemented by two practical observations on human electrocution cases.

  8. Rigorous Multicomponent Reactive Separations Modelling: Complete Consideration of Reaction-Diffusion Phenomena

    International Nuclear Information System (INIS)

    Ahmadi, A.; Meyer, M.; Rouzineau, D.; Prevost, M.; Alix, P.; Laloue, N.

    2010-01-01

    This paper gives the first step of the development of a rigorous multicomponent reactive separation model. Such a model is essential for further optimization of acid gas removal plants (CO2 capture, gas treating, etc.) in terms of size and energy consumption, since chemical solvents are conventionally used. Firstly, two main modelling approaches are presented: the equilibrium-based and the rate-based approaches. Secondly, an extended rate-based model with a rigorous modelling methodology for diffusion-reaction phenomena is proposed. The film theory and the generalized Maxwell-Stefan equations are used in order to characterize multicomponent interactions. The complete chain of chemical reactions is taken into account. The reactions can be kinetically controlled or at chemical equilibrium, and they are considered in both the liquid film and the liquid bulk. Thirdly, the method of numerical resolution is described. Coupling the generalized Maxwell-Stefan equations with chemical equilibrium equations leads to a highly non-linear differential-algebraic equation system known as DAE index 3. The set of equations is discretized with finite differences, as its integration by the Gear method is complex. The resulting algebraic system is solved by the Newton-Raphson method. Finally, the present model and the associated methods of numerical resolution are validated on the example of the esterification of methanol. This archetypal non-electrolytic system permits an interesting analysis of the reaction impact on mass transfer, especially near the phase interface. The numerical resolution of the model by the Newton-Raphson method gives good results in terms of calculation time and convergence. The simulations show that the impact on mass transfer of reactions at chemical equilibrium and that of kinetically controlled reactions with fast kinetics is relatively similar. Moreover, Fick's law is less adapted for multicomponent mixtures where some abnormalities such as counter
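The final algebraic solve described above relies on Newton-Raphson iteration. A generic dense sketch is shown below; the finite-difference Jacobian is a simplification for illustration (the abstract does not state how the Jacobian is formed), and the tolerance and step size are assumed values.

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-10, max_iter=50):
    """Solve the nonlinear system f(x) = 0 by Newton-Raphson iteration,
    approximating the Jacobian with forward finite differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x
        J = np.empty((len(fx), len(x)))
        h = 1e-7                       # finite-difference step (assumed)
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h # column j of the Jacobian
        x = x - np.linalg.solve(J, fx) # Newton update
    raise RuntimeError("Newton-Raphson did not converge")
```

For instance, solving the coupled system x₀² = 2, x₁ = x₀ from the starting point (1, 1) converges to (√2, √2) in a handful of iterations.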

  9. Numerical Modelling and Measurement in a Test Secondary Settling Tank

    DEFF Research Database (Denmark)

    Dahl, C.; Larsen, Torben; Petersen, O.

    1994-01-01

    A numerical model and measurements of flow and settling in activated sludge suspension are presented. The numerical model is an attempt to describe the complex and interrelated hydraulic and sedimentation phenomena by describing the turbulent flow field and the transport/dispersion of suspended sludge. Phenomena such as free and hindered settling and the Bingham plastic characteristic of activated sludge suspensions are included in the numerical model. Further characterisation and test tank experiments are described. The characterisation experiments were designed to measure calibration parameters for the model description of settling and density differences. In the test tank experiments, flow velocities and suspended sludge concentrations were measured with different tank inlet geometry and hydraulic and sludge loads. The test tank experiments provided results for the calibration of the numerical model.

  10. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors

    Directory of Open Access Journals (Sweden)

    Spiros Pagiatakis

    2009-10-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at −40 °C, −20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
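A first-order Gauss-Markov process has autocorrelation σ²·exp(−|τ|/T), so its correlation time T can be recovered from the lag-1 sample autocorrelation of stationary static data. The sketch below illustrates only that basic idea; it is not the paper's AR-based estimator, and the sampling interval `dt` is an assumed input.

```python
import numpy as np

def gm_correlation_time(x, dt):
    """Estimate the correlation time T of a first-order Gauss-Markov
    process sampled at interval dt, using the discrete-time relation
        x[k+1] = exp(-dt/T) * x[k] + w[k]
    so that T = -dt / ln(phi1), with phi1 the lag-1 autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)   # remove the mean
    phi1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)   # lag-1 autocorrelation
    return -dt / np.log(phi1)
```

Applied to simulated static data with a known correlation time, the estimate recovers T to within a few percent for long records, mirroring how such parameters would feed a GM process noise model in an INS/GPS filter.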

  12. Numerical analysis

    CERN Document Server

    Scott, L Ridgway

    2011-01-01

    Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...

  13. Numerical simulation of small scale soft impact tests

    International Nuclear Information System (INIS)

    Varpasuo, Pentti

    2008-01-01

    This paper describes small-scale soft missile impact tests. The purpose of the test program is to provide data for the calibration of numerical simulation models for impact simulation. In the experiments, both dry and fluid-filled missiles are used. The tests with fluid-filled missiles investigate the release speed and the droplet size of the fluid release. These data are important in quantifying the fire hazard of flammable liquid after the release. The spray release velocity and droplet size are also input data for analytical and numerical simulation of the liquid spread in the impact. The behaviour of the impact target is the second investigative goal of the test program. The response of reinforced and pre-stressed concrete walls is studied with the aid of displacement and strain monitoring. (authors)

  14. Development of rigor mortis is not affected by muscle volume.

    Science.gov (United States)

    Kobayashi, M; Ikegaya, H; Takase, I; Hatanaka, K; Sakurada, K; Iwase, H

    2001-04-01

    There is a hypothesis suggesting that rigor mortis progresses more rapidly in small muscles than in large muscles. We measured rigor mortis as tension determined isometrically in rat musculus erector spinae that had been cut into muscle bundles of various volumes. The muscle volume did not influence either the progress or the resolution of rigor mortis, which contradicts the hypothesis. Differences in pre-rigor load on the muscles influenced the onset and resolution of rigor mortis in a few pairs of samples, but did not influence the time taken for rigor mortis to reach its full extent after death. Moreover, the progress of rigor mortis in this muscle was biphasic; this may reflect the early rigor of red muscle fibres and the late rigor of white muscle fibres.

  15. Theory and applications of numerical analysis

    CERN Document Server

    Phillips, G M

    1996-01-01

    This text is a self-contained Second Edition, providing an introductory account of the main topics in numerical analysis. The book emphasizes both the theorems, which show the underlying rigorous mathematics, and the algorithms, which define precisely how to program the numerical methods. Both theoretical and practical examples are included. Features: a unique blend of theory and applications; two brand new chapters on eigenvalues and splines; inclusion of formal algorithms; numerous fully worked examples; a large number of problems, many with solutions.

  16. Steel Fibers Reinforced Concrete Pipes - Experimental Tests and Numerical Simulation

    Science.gov (United States)

    Doru, Zdrenghea

    2017-10-01

    The paper presents in the first part a state-of-the-art review of reinforced concrete pipes used in micro tunnelling realised through the pipe jacking method, and design methods for steel fibre reinforced concrete. In the second part, experimental tests are presented on pipes with inner diameters of 1410 mm and 2200 mm, and on specimens (100x100x500 mm) of concrete reinforced with steel fibres (35 kg/m3). The results obtained are analysed, and the residual flexural tensile strengths, which characterise the post-cracking behaviour of steel fibre reinforced concrete, are calculated. In the third part, numerical simulations of the tests of pipes and specimens are presented. The model adopted for the pipe tests was three-dimensional, and the loads considered were those obtained in the experimental tests at the breaking forces. The tensile stresses determined were compared with the mean flexural tensile strength. To validate the tensile parameters of steel fibre reinforced concrete, the experimental tests of the specimens were modelled with the MIDAS program to reproduce the flexural breaking behaviour. To simulate the post-cracking behaviour, the σ-ε method based on the stress-strain relationship was used, according to RILEM TC 162-TDF. For the specimens tested, F-δ diagrams were plotted and superimposed for comparison with the similar diagrams of the experimental tests. The comparison of experimental results with those obtained from numerical simulation leads to the following conclusions: - the maximum forces obtained by numerical calculation have higher values than the experimental values for the same tensile stresses; - forces corresponding to residual strengths have very similar values between the experimental and numerical calculations; - generally the numerical model estimates a breaking force greater
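Residual flexural tensile strengths of the kind mentioned above are commonly obtained from the recorded loads with the elastic bending formula for a notched beam in three-point bending, f_R = 3·F·L / (2·b·hsp²), in the RILEM TC 162-TDF convention. The sketch below assumes that formula; the dimensions in the example are illustrative, not those of the tested specimens.

```python
def residual_flexural_strength(f_load, span, width, hsp):
    """Residual flexural tensile strength of a notched beam in
    three-point bending (RILEM TC 162-TDF style):

        f_R = 3 * F * L / (2 * b * hsp**2)

    f_load: load F at the prescribed deflection/CMOD [N]
    span:   loading span L [mm]
    width:  specimen width b [mm]
    hsp:    distance from the notch tip to the top of the beam [mm]
    Returns the nominal stress in MPa (N/mm^2).
    """
    return 3.0 * f_load * span / (2.0 * width * hsp ** 2)
```

For example, a residual load of 10 kN on a 500 mm span with b = 150 mm and hsp = 125 mm corresponds to a residual strength of 3.2 MPa.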

  17. A case of instantaneous rigor?

    Science.gov (United States)

    Pirch, J; Schulz, Y; Klintschar, M

    2013-09-01

    The question of whether instantaneous rigor mortis (IR), the hypothetic sudden occurrence of stiffening of the muscles upon death, actually exists has been controversially debated over the last 150 years. While modern German forensic literature rejects this concept, the contemporary British literature is more willing to embrace it. We present the case of a young woman who suffered from diabetes and who was found dead in an upright standing position with back and shoulders leaned against a punchbag and a cupboard. Rigor mortis was fully established, livor mortis was strong and according to the position the body was found in. After autopsy and toxicological analysis, it was stated that death most probably occurred due to a ketoacidotic coma with markedly increased values of glucose and lactate in the cerebrospinal fluid as well as acetone in blood and urine. Whereas the position of the body is most unusual, a detailed analysis revealed that it is a stable position even without rigor mortis. Therefore, this case does not further support the controversial concept of IR.

  18. Mathematical Rigor in Introductory Physics

    Science.gov (United States)

    Vandyke, Michael; Bassichis, William

    2011-10-01

    Calculus-based introductory physics courses intended for future engineers and physicists are often designed and taught in the same fashion as those intended for students of other disciplines. A more mathematically rigorous curriculum should be more appropriate and, ultimately, more beneficial for the student in his or her future coursework. This work investigates the effects of mathematical rigor on student understanding of introductory mechanics. Using a series of diagnostic tools in conjunction with individual student course performance, a statistical analysis will be performed to examine student learning of introductory mechanics and its relation to student understanding of the underlying calculus.

  19. "Rigor mortis" in a live patient.

    Science.gov (United States)

    Chakravarthy, Murali

    2010-03-01

    Rigor mortis is conventionally a postmortem change. Its occurrence suggests that death occurred at least a few hours ago. The authors report a case of "rigor mortis" in a live patient after cardiac surgery. The likely factors that may have predisposed to such premortem muscle stiffening in the reported patient are an intense low cardiac output state, the use of unusually high doses of inotropic and vasopressor agents, and likely sepsis. Such an event may be of importance while determining the time of death in individuals such as the one described in the report. It also suggests that patients with muscle stiffening require careful examination prior to the declaration of death. This report is being published to point out the likely controversies that might arise out of muscle stiffening, which should not always be termed rigor mortis and/or postmortem.

  20. Classroom Talk for Rigorous Reading Comprehension Instruction

    Science.gov (United States)

    Wolf, Mikyung Kim; Crosson, Amy C.; Resnick, Lauren B.

    2004-01-01

    This study examined the quality of classroom talk and its relation to academic rigor in reading-comprehension lessons. Additionally, the study aimed to characterize effective questions to support rigorous reading comprehension lessons. The data for this study included 21 reading-comprehension lessons in several elementary and middle schools from…

  1. Summary of Numerical Modeling for Underground Nuclear Test Monitoring Symposium

    International Nuclear Information System (INIS)

    Taylor, S.R.; Kamm, J.R.

    1993-01-01

    This document contains the Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium held in Durango, Colorado on March 23-25, 1993. The symposium was sponsored by the Office of Arms Control and Nonproliferation of the United States Department of Energy and hosted by the Source Region Program of Los Alamos National Laboratory. The purpose of the meeting was to discuss state-of-the-art advances in numerical simulations of nuclear explosion phenomenology for the purpose of test ban monitoring. Another goal of the symposium was to promote discussion between seismologists and explosion source-code calculators. Presentation topics include the following: numerical model fits to data, measurement and characterization of material response models, applications of modeling to monitoring problems, explosion source phenomenology, numerical simulations and seismic sources

  2. Fast and Rigorous Assignment Algorithm Multiple Preference and Calculation

    Directory of Open Access Journals (Sweden)

    Ümit Çiftçi

    2010-03-01

    The goal of this paper is to develop an algorithm that evaluates students and then places them according to their ranked preferences. The developed algorithm is also used to implement software. The success and accuracy of the software, as well as of the algorithm, are tested by applying them to the ability test at Beykent University. This ability test is repeated several times in order to fill all available places in the Fine Arts Faculty departments in every academic year. It has been shown that this algorithm is very fast and rigorous after application in the 2008-2009 and 2009-2010 academic years. Key Words: Assignment algorithm, student placement, ability test
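The abstract does not spell out the algorithm, but a common approach to score-ranked placement with limited capacities is a greedy "serial dictatorship": process applicants in descending score order and give each their best-ranked department that still has a free place. The sketch below is written under that assumption; all names and numbers are illustrative.

```python
def assign(applicants, capacities):
    """Greedy score-order assignment.

    applicants: list of (name, score, preference list) tuples
    capacities: dict mapping department -> number of free places
    Returns a dict mapping each applicant to a department or None.
    """
    remaining = dict(capacities)          # don't mutate the caller's dict
    placement = {}
    for name, score, prefs in sorted(applicants, key=lambda a: -a[1]):
        for dept in prefs:                # walk the ranked preferences
            if remaining.get(dept, 0) > 0:
                remaining[dept] -= 1
                placement[name] = dept
                break
        else:                             # every preferred dept is full
            placement[name] = None
    return placement
```

Running the placement repeatedly on the remaining unplaced applicants, as the abstract describes for successive sittings of the ability test, amounts to calling this routine again with the leftover capacities.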

  3. A rigorous phenomenological analysis of the ππ scattering lengths

    International Nuclear Information System (INIS)

    Caprini, I.; Dita, P.; Sararu, M.

    1979-11-01

    The constraining power of the present experimental data, combined with the general theoretical knowledge about ππ scattering, upon the scattering lengths of this process, is investigated by means of a rigorous functional method. We take as input the experimental phase shifts and make no hypotheses about the high energy behaviour of the amplitudes, using only absolute bounds derived from axiomatic field theory and exact consequences of crossing symmetry. In the simplest application of the method, involving only the π⁰π⁰ S-wave, we explored numerically a number of values proposed by various authors for the scattering lengths a₀ and a₂ and found that none appears to be especially favoured. (author)

  4. Experimental evaluation of rigor mortis. III. Comparative study of the evolution of rigor mortis in different sized muscle groups in rats.

    Science.gov (United States)

    Krompecher, T; Fryc, O

    1978-01-01

    The use of new methods and an appropriate apparatus has allowed us to make successive measurements of rigor mortis and a study of its evolution in the rat. By a comparative examination on the front and hind limbs, we have determined the following: (1) The muscular mass of the hind limbs is 2.89 times greater than that of the front limbs. (2) In the initial phase rigor mortis is more pronounced in the front limbs. (3) The front and hind limbs reach maximum rigor mortis at the same time and this state is maintained for 2 hours. (4) Resolution of rigor mortis is accelerated in the front limbs during the initial phase, but both front and hind limbs reach complete resolution at the same time.

  5. [Experimental study of restiffening of rigor mortis].

    Science.gov (United States)

    Wang, X; Li, M; Liao, Z G; Yi, X F; Peng, X M

    2001-11-01

    To observe the changes in sarcomere length in rats upon restiffening, we measured the sarcomere length of the quadriceps in 40 rats under different conditions by scanning electron microscopy. The sarcomere length in undisturbed rigor mortis is obviously shorter than that after restiffening. The sarcomere length is negatively correlated with the intensity of rigor mortis. Measuring the sarcomere length can determine the intensity of rigor mortis and provide evidence for the estimation of time since death.

  6. [Rigor mortis -- a definite sign of death?].

    Science.gov (United States)

    Heller, A R; Müller, M P; Frank, M D; Dressler, J

    2005-04-01

    In recent years an ongoing controversial debate has existed in Germany regarding the quality of the coroner's inquest and the declaration of death by physicians. We report the case of a 90-year-old female who was found an unknown time after a suicide attempt with benzodiazepine. The examination of the patient showed livores (mortis?) on the left forearm and left lower leg. Moreover, rigor (mortis?) of the left arm was apparent, which prevented arm flexion and extension. The hypothermic patient with insufficient respiration was intubated and mechanically ventilated. Chest compressions were not performed because central pulses were (hardly) palpable and a sinus bradycardia of 45/min (second-degree AV block and isolated premature ventricular complexes) was present. After placement of an intravenous line (17 G, external jugular vein) the hemodynamic situation was stabilized with intermittent boli of epinephrine and with sodium bicarbonate. With improved circulation, livores and rigor disappeared. In the present case a minimal central circulation was noted which could be stabilized, despite the presence of certain signs of death (livores and rigor mortis). Considering the finding of an abrogated peripheral perfusion (livores), we postulate a centripetal collapse of glycogen and ATP supply in the patient's left arm (rigor), which was restored after resuscitation and reperfusion. Thus, it appears that livores and rigor are not sensitive enough to exclude a vita minima, in particular in hypothermic patients with intoxications. Consequently, a careful ABC check should be performed even in the presence of apparently certain signs of death, to avoid underdiagnosing a vita minima. Additional ECG monitoring is required to reduce the rate of false positive declarations of death. To what extent basic life support should be commenced by paramedics when rigor and livores are present, pending a physician's decision, deserves further discussion.

  7. Parent Management Training-Oregon Model: Adapting Intervention with Rigorous Research.

    Science.gov (United States)

    Forgatch, Marion S; Kjøbli, John

    2016-09-01

    Parent Management Training-Oregon Model (PMTO®) is a set of theory-based parenting programs with status as evidence-based treatments. PMTO has been rigorously tested in efficacy and effectiveness trials in different contexts, cultures, and formats. Parents, the presumed agents of change, learn core parenting practices, specifically skill encouragement, limit setting, monitoring/supervision, interpersonal problem solving, and positive involvement. The intervention effectively prevents and ameliorates children's behavior problems by replacing coercive interactions with positive parenting practices. Delivery format includes sessions with individual families in agencies or families' homes, parent groups, and web-based and telehealth communication. Mediational models have tested parenting practices as mechanisms of change for children's behavior and found support for the theory underlying PMTO programs. Moderating effects include children's age, maternal depression, and social disadvantage. The Norwegian PMTO implementation is presented as an example of how PMTO has been tailored to reach diverse populations as delivered by multiple systems of care throughout the nation. An implementation and research center in Oslo provides infrastructure and promotes collaboration between practitioners and researchers to conduct rigorous intervention research. Although evidence-based and tested within a wide array of contexts and populations, PMTO must continue to adapt to an ever-changing world. © 2016 Family Process Institute.

  8. Polynomial model inversion control: numerical tests and applications

    OpenAIRE

    Novara, Carlo

    2015-01-01

    A novel control design approach for general nonlinear systems is described in this paper. The approach is based on the identification of a polynomial model of the system to control and on the on-line inversion of this model. Extensive simulations are carried out to test the numerical efficiency of the approach. Numerical examples of applicative interest are presented, concerned with control of the Duffing oscillator, control of a robot manipulator and insulin regulation in a type 1 diabetic p...

  9. Monitoring muscle optical scattering properties during rigor mortis

    Science.gov (United States)

    Xia, J.; Ranasinghesagara, J.; Ku, C. W.; Yao, G.

    2007-09-01

    Sarcomere is the fundamental functional unit in skeletal muscle for force generation. In addition, sarcomere structure is also an important factor that affects the eating quality of muscle food, the meat. The sarcomere structure is altered significantly during rigor mortis, which is the critical stage involved in transforming muscle to meat. In this paper, we investigated optical scattering changes during the rigor process in Sternomandibularis muscles. The measured optical scattering parameters were analyzed along with the simultaneously measured passive tension, pH value, and histology analysis. We found that the temporal changes of optical scattering, passive tension, pH value and fiber microstructures were closely correlated during the rigor process. These results suggested that sarcomere structure changes during rigor mortis can be monitored and characterized by optical scattering, which may find practical applications in predicting meat quality.

  10. Confidence in Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  11. New rigorous asymptotic theorems for inverse scattering amplitudes

    International Nuclear Information System (INIS)

    Lomsadze, Sh.Yu.; Lomsadze, Yu.M.

    1984-01-01

    The rigorous asymptotic theorems, of both integral and local types, obtained earlier and establishing logarithmic and in some cases even power correlations between the real and imaginary parts of the scattering amplitudes F±, are extended to the inverse amplitudes 1/F±. One also succeeds in establishing power correlations of a new type between the real and imaginary parts, both for the amplitudes themselves and for the inverse ones. All of the assertions obtained can conveniently be tested in high energy experiments where the amplitudes show asymptotic behaviour.

  12. Transient productivity index for numerical well test simulations

    Energy Technology Data Exchange (ETDEWEB)

    Blanc, G.; Ding, D.Y.; Ene, A. [Institut Francais du Petrole, Pau (France)]; and others

    1997-08-01

    The most difficult aspect of numerical simulation of well tests is the treatment of the Bottom Hole Flowing (BHF) Pressure. In full field simulations, this pressure is derived from the Well-block Pressure (WBP) using a numerical productivity index which accounts for the grid size and permeability, and for the well completion. This productivity index is calculated assuming a pseudo-steady state flow regime in the vicinity of the well and is therefore constant during the well production period. Such a pseudo-steady state assumption is no longer valid for the early time of a well test simulation as long as the pressure perturbation has not reached several grid-blocks around the well. This paper offers two different solutions to this problem: (1) The first one is based on the derivation of a Numerical Transient Productivity Index (NTPI) to be applied to Cartesian grids; (2) The second one is based on the use of a Corrected Transmissibility and Accumulation Term (CTAT) in the flow equation. The representation of the pressure behavior given by both solutions is far more accurate than the conventional one as shown by several validation examples which are presented in the following pages.
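    In the pseudo-steady-state case the abstract starts from, the numerical productivity index relating well-block pressure to bottom-hole flowing pressure is classically given by Peaceman's well model. The sketch below is illustrative only: it implements the standard Peaceman index, not the paper's transient NTPI or CTAT corrections, and assumes a consistent unit system.

    ```python
    import math

    def peaceman_index(kx, ky, dx, dy, h, rw, mu=1.0, skin=0.0):
        """Steady-state numerical productivity index for a vertical well in a
        Cartesian grid block (Peaceman's model).  kx, ky: permeabilities;
        dx, dy: grid-block sizes; h: block thickness; rw: wellbore radius;
        mu: viscosity; consistent units are assumed throughout."""
        # Peaceman's equivalent well-block radius for an anisotropic medium
        r0 = 0.28 * math.sqrt(math.sqrt(ky / kx) * dx ** 2
                              + math.sqrt(kx / ky) * dy ** 2) \
             / ((ky / kx) ** 0.25 + (kx / ky) ** 0.25)
        k_eff = math.sqrt(kx * ky)
        return 2.0 * math.pi * k_eff * h / (mu * (math.log(r0 / rw) + skin))
    ```

    For an isotropic square block this reduces to the familiar equivalent radius r0 ≈ 0.2 Δx; the paper's point is precisely that this constant index is only valid once the pressure perturbation has grown beyond a few grid blocks.
    
    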

  13. Evaluating Rigor in Qualitative Methodology and Research Dissemination

    Science.gov (United States)

    Trainor, Audrey A.; Graue, Elizabeth

    2014-01-01

    Despite previous and successful attempts to outline general criteria for rigor, researchers in special education have debated the application of rigor criteria, the significance or importance of small n research, the purpose of interpretivist approaches, and the generalizability of qualitative empirical results. Adding to these complications, the…

  14. An ultramicroscopic study on rigor mortis.

    Science.gov (United States)

    Suzuki, T

    1976-01-01

    Gastrocnemius muscles taken from decapitated mice at various intervals after death, and from mice killed by 2,4-dinitrophenol or mono-iodoacetic acid injection to induce rigor mortis soon after death, were observed by electron microscopy. The prominent appearance of many fine cross striations in the myofibrils (occurring about every 400 Å) was considered to be characteristic of rigor mortis. These striations were caused by minute granules studded along the surfaces of both thick and thin filaments; they appeared to be bridges connecting the two kinds of filaments and accounted for the hardness and rigidity of the muscle.

  15. Numerical study of thermal test of a cask of transportation for radioactive material

    International Nuclear Information System (INIS)

    Vieira, Tiago A.S.; Santos, André A.C. dos; Vidal, Guilherme A.M.; Silva Junior, Geraldo E.

    2017-01-01

    In this study, numerical simulations of a transport cask for radioactive material were performed and the numerical results were compared with experimental results of tests carried out on two different occasions. A mesh study was also made of the previously designed geometry of the same cask, in order to evaluate its impact on the stability of the numerical results for this type of problem. The comparison of the numerical and experimental results made it possible to evaluate the need to plan and carry out a new test in order to validate the CFD codes used in the numerical simulations.

  16. Tenderness of pre- and post rigor lamb longissimus muscle.

    Science.gov (United States)

    Geesink, Geert; Sujang, Sadi; Koohmaraie, Mohammad

    2011-08-01

    Lamb longissimus muscle (n=6) sections were cooked at different times post mortem (prerigor, at rigor, 1 day p.m., and 7 days p.m.) using two cooking methods. Using a boiling water bath, samples were either cooked to a core temperature of 70 °C or boiled for 3 h. The latter method was meant to reflect the traditional cooking method employed in countries where preparation of prerigor meat is practiced. The time postmortem at which the meat was prepared had a large effect on the tenderness (shear force) of the meat. Cooking prerigor and at-rigor meat to 70 °C resulted in higher shear force values than their post rigor counterparts at 1 and 7 days p.m. (9.4 and 9.6 vs. 7.2 and 3.7 kg, respectively). The differences in tenderness between the treatment groups could be largely explained by a difference in contraction status of the meat after cooking and the effect of ageing on tenderness. Cooking pre- and at-rigor meat resulted in severe muscle contraction, as evidenced by the differences in sarcomere length of the cooked samples. Mean sarcomere lengths in the pre- and at-rigor samples ranged from 1.05 to 1.20 μm. The mean sarcomere length in the post rigor samples was 1.44 μm. Cooking for 3 h at 100 °C did improve the tenderness of pre- and at-rigor prepared meat as compared with cooking to 70 °C, but not to the extent that ageing did. It is concluded that additional intervention methods are needed to improve the tenderness of prerigor cooked meat. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Fast rigorous numerical method for the solution of the anisotropic neutron transport problem and the NITRAN system for fusion neutronics application. Pt. 1

    International Nuclear Information System (INIS)

    Takahashi, A.; Rusch, D.

    1979-07-01

    Some recent neutronics experiments for fusion reactor blankets show that the precise treatment of anisotropic secondary emissions for all types of neutron scattering is needed for neutron transport calculations. In the present work new rigorous methods, i.e. based on non-approximative microscopic neutron balance equations, are applied to treat the anisotropic collision source term in transport equations. The collision source calculation is free from approximations except for the discretization of energy, angle and space variables and includes the rigorous treatment of nonelastic collisions, as far as nuclear data are given. Two methods are presented: first the Ii-method, which relies on existing nuclear data files, and then, as an ultimate goal, the I*-method, which aims at the use of future double-differential cross section data, but which is also applicable to the present single-differential data basis to allow a smooth transition to the new data type. An application of the Ii-method is given in the code system NITRAN, which employs the S_N method to solve the transport equations. Both rigorous methods, the Ii- and the I*-method, are applicable to all radiation transport problems and they can also be used in the Monte Carlo method to solve the transport problem. (orig./RW) [de

  18. Rigorous numerical approximation of Ruelle–Perron–Frobenius operators and topological pressure of expanding maps

    International Nuclear Information System (INIS)

    Terhesiu, Dalia; Froyland, Gary

    2008-01-01

    It is well known that for different classes of transformations, including the class of piecewise C² expanding maps T : [0, 1] ↺, Ulam's method is an efficient way to numerically approximate the absolutely continuous invariant measure of T. We develop a new extension of Ulam's method and prove that this extension can be used for the numerical approximation of the Ruelle–Perron–Frobenius operator associated with T and the potential φ_β = −β log |T′|, where β ∈ ℝ. In particular, we prove that our extended Ulam's method is a powerful tool for computing the topological pressure P(T, φ_β) and the density of the equilibrium state.
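    The classical Ulam scheme that the paper extends can be sketched in a few lines. The map, bin count and sub-sampling below are illustrative choices, not the paper's: for the doubling map the invariant density is Lebesgue, so the leading eigenvector of the Ulam matrix should be (nearly) uniform.

    ```python
    import numpy as np

    def ulam_matrix(T, n_bins, pts_per_bin=400):
        """Ulam discretisation of the transfer operator of an interval map T:
        P[i, j] is the fraction of bin i that T sends into bin j, estimated
        from midpoints of fine sub-intervals of each bin."""
        P = np.zeros((n_bins, n_bins))
        for i in range(n_bins):
            x = (i + (np.arange(pts_per_bin) + 0.5) / pts_per_bin) / n_bins
            j = np.minimum((T(x) * n_bins).astype(int), n_bins - 1)
            np.add.at(P[i], j, 1.0 / pts_per_bin)
        return P

    def invariant_density(P):
        """Leading left eigenvector of P, normalised to a probability vector."""
        vals, vecs = np.linalg.eig(P.T)
        v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return v / v.sum()

    T = lambda x: (2.0 * x) % 1.0   # doubling map: invariant measure is Lebesgue
    pi_hat = invariant_density(ulam_matrix(T, 50))
    ```

    The paper's extension weights each transition by the potential φ_β before taking the leading eigenvalue, whose logarithm approximates the topological pressure; the unweighted sketch above recovers only the β = 0 (invariant-measure) case.
    
    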

  19. Numerical Simulation and Experimental Validation of the Inflation Test of Latex Balloons

    OpenAIRE

    Bustos, Claudio; Herrera, Claudio García; Celentano, Diego; Chen, Daming; Cruchaga, Marcela

    2016-01-01

    Abstract Experiments and modeling aimed at assessing the mechanical response of latex balloons in the inflation test are presented. To this end, the hyperelastic Yeoh material model is firstly characterized via a tensile test and then used to numerically simulate, via finite elements, the stress-strain evolution during the inflation test. The numerical pressure-displacement curves are validated with those obtained experimentally. Moreover, this analysis is extended to a biomedical problem of an eyeball under glaucoma conditions.

  20. Confidence in Numerical Simulations

    International Nuclear Information System (INIS)

    Hemez, Francois M.

    2015-01-01

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to "forecast," that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists "think." This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions, and then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. "Confidence" derives, not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  1. Numerical Simulation and Experimental Validation of the Inflation Test of Latex Balloons

    Directory of Open Access Journals (Sweden)

    Claudio Bustos

    Full Text Available Abstract Experiments and modeling aimed at assessing the mechanical response of latex balloons in the inflation test are presented. To this end, the hyperelastic Yeoh material model is firstly characterized via tensile test and, then, used to numerically simulate via finite elements the stress-strain evolution during the inflation test. The numerical pressure-displacement curves are validated with those obtained experimentally. Moreover, this analysis is extended to a biomedical problem of an eyeball under glaucoma conditions.

  2. Emergency cricothyrotomy for trismus caused by instantaneous rigor in cardiac arrest patients.

    Science.gov (United States)

    Lee, Jae Hee; Jung, Koo Young

    2012-07-01

    Instantaneous rigor as muscle stiffening occurring in the moment of death (or cardiac arrest) can be confused with rigor mortis. If trismus is caused by instantaneous rigor, orotracheal intubation is impossible and a surgical airway should be secured. Here, we report 2 patients who had emergency cricothyrotomy for trismus caused by instantaneous rigor. This case report aims to help physicians understand instantaneous rigor and to emphasize the importance of securing a surgical airway quickly on the occurrence of trismus. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Rigorous vector wave propagation for arbitrary flat media

    Science.gov (United States)

    Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.

    2017-08-01

    Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection at flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10⁻¹⁶ for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10⁻⁸. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.
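    For the kind of cross-check the abstract closes with (a beam reflecting off flat aluminium at normal incidence), the Fresnel amplitude coefficient is sufficient. The complex refractive index below is an illustrative textbook-style value for aluminium near 633 nm, not a number from the paper.

    ```python
    def fresnel_normal(n1, n2):
        """Complex amplitude reflection coefficient at normal incidence,
        r = (n1 - n2) / (n1 + n2); the reflectance is |r|**2."""
        return (n1 - n2) / (n1 + n2)

    n_al = 1.37 + 7.62j   # illustrative complex index of aluminium near 633 nm
    R = abs(fresnel_normal(1.0, n_al)) ** 2   # reflectance, roughly 0.91
    ```

    A rigorous vector propagation code should reproduce this scalar limit to machine precision for an ideal flat mirror, which is one way such codes are validated.
    
    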

  4. Numerical simulation of the tip aerodynamics and acoustics test

    Science.gov (United States)

    Tejero E, F.; Doerffer, P.; Szulc, O.; Cross, J. L.

    2016-04-01

    The application of an efficient flow control system on helicopter rotor blades may lead to improved aerodynamic performance. Recently, our invention of Rod Vortex Generators (RVGs) has been analyzed for helicopter rotor blades in hover with success. As a step forward, the study has been extended to forward flight conditions. For this reason, a validation of the numerical modelling for a reference helicopter rotor (without flow control) is needed. The article presents a study of the flow-field of the AH-1G helicopter rotor in low-, medium- and high-speed forward flight. The CFD code FLOWer from DLR has proven to be a suitable tool for the aerodynamic analysis of the two-bladed rotor without any artificial wake modelling. It solves the URANS equations with LEA (Linear Explicit Algebraic stress) k-ω model using the chimera overlapping grids technique. Validation of the numerical model uses comparison with the detailed flight test data gathered by Cross J. L. and Watts M. E. during the Tip Aerodynamics and Acoustics Test (TAAT) conducted at NASA in 1981. Satisfactory agreements for all speed regimes and a presence of significant flow separation in high-speed forward flight suggest a possible benefit from the future implementation of RVGs. The numerical results based on the URANS approach are presented not only for a popular, low-speed case commonly used in rotorcraft community for CFD codes validation but preferably for medium- and high-speed test conditions that have not been published to date.

  5. Rigorous Quantum Field Theory A Festschrift for Jacques Bros

    CERN Document Server

    Monvel, Anne Boutet; Iagolnitzer, Daniel; Moschella, Ugo

    2007-01-01

    Jacques Bros has greatly advanced our present understanding of rigorous quantum field theory through numerous fundamental contributions. This book arose from an international symposium held in honour of Jacques Bros on the occasion of his 70th birthday, at the Department of Theoretical Physics of the CEA in Saclay, France. The impact of the work of Jacques Bros is evident in several articles in this book. Quantum fields are regarded as genuine mathematical objects, whose various properties and relevant physical interpretations must be studied in a well-defined mathematical framework. The key topics in this volume include analytic structures of Quantum Field Theory (QFT), renormalization group methods, gauge QFT, stability properties and extension of the axiomatic framework, QFT on models of curved spacetimes, QFT on noncommutative Minkowski spacetime. Contributors: D. Bahns, M. Bertola, R. Brunetti, D. Buchholz, A. Connes, F. Corbetta, S. Doplicher, M. Dubois-Violette, M. Dütsch, H. Epstein, C.J. Fewster, K....

  6. Statistical mechanics rigorous results

    CERN Document Server

    Ruelle, David

    1999-01-01

    This classic book marks the beginning of an era of vigorous mathematical progress in equilibrium statistical mechanics. Its treatment of the infinite system limit has not been superseded, and the discussion of thermodynamic functions and states remains basic for more recent work. The conceptual foundation provided by the Rigorous Results remains invaluable for the study of the spectacular developments of statistical mechanics in the second half of the 20th century.

  7. High and low rigor temperature effects on sheep meat tenderness and ageing.

    Science.gov (United States)

    Devine, Carrick E; Payne, Steven R; Peachey, Bridget M; Lowe, Timothy E; Ingram, John R; Cook, Christian J

    2002-02-01

    Immediately after electrical stimulation, the paired m. longissimus thoracis et lumborum (LT) of 40 sheep were boned out and wrapped tightly with a polyethylene cling film. One of the paired LTs was chilled in 15°C air to reach a rigor mortis (rigor) temperature of 18°C and the other side was placed in a water bath at 35°C and achieved rigor at this temperature. Wrapping reduced rigor shortening and mimicked meat left on the carcass. After rigor, the meat was aged at 15°C for 0, 8, 26 and 72 h and then frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values were obtained from a 1×1 cm cross-section. The shear force values of meat for 18 and 35°C rigor were similar at zero ageing, but as ageing progressed, the 18°C rigor meat aged faster and became more tender than meat that went into rigor at 35°C. The shear force values at each ageing time were significantly different, and after ageing those of the 35°C rigor meat were still significantly greater. Thus the toughness of 35°C meat was not a consequence of muscle shortening and appears to be due to both a faster rate of tenderisation and the meat tenderising to a greater extent at the lower temperature. The cook loss at 35°C rigor (30.5%) was greater than that at 18°C rigor (28.4%) (P<0.01) and the colour Hunter L values were higher at 35°C (P<0.01) compared with 18°C, but there were no significant differences in a or b values.

  8. Physiological studies of muscle rigor mortis in the fowl

    International Nuclear Information System (INIS)

    Nakahira, S.; Kaneko, K.; Tanaka, K.

    1990-01-01

    A simple system was developed for continuous measurement of muscle contraction during rigor mortis. Longitudinal muscle strips dissected from the Peroneus Longus were suspended in a plastic tube containing liquid paraffin. Mechanical activity was transmitted to a strain-gauge transducer connected to a potentiometric pen-recorder. At the onset of measurement 1.2 g was loaded on the muscle strip. This model was used to study the muscle response to various treatments during rigor mortis. All measurements were carried out under anaerobic conditions at 17°C, except where otherwise stated. 1. The present system was found to be quite useful for continuous measurement of the muscle rigor course. 2. Muscle contraction under the anaerobic condition at 17°C reached a peak about 2 hours after the onset of measurement and thereafter relaxed at a slow rate. In contrast, the aerobic condition under high humidity resulted in a strong rigor, about three times stronger than that in the anaerobic condition. 3. Ultrasonic treatment (37,000-47,000 Hz) at 25°C for 10 minutes resulted in a moderate muscle rigor. 4. Treatment of the muscle strip with 2 mM EGTA at 30°C for 30 minutes led to relaxation of the muscle. 5. The muscle from birds killed during anesthesia with pentobarbital sodium showed a slow rate of rigor, whereas birds killed one day after hypophysectomy showed a quick muscle rigor, as seen in intact controls. 6. A slight muscle rigor was observed when the muscle strip was placed in a refrigerator at 0°C for 18.5 hours and thereafter the temperature was kept at 17°C. (author)

  9. Direct integration of the S-matrix applied to rigorous diffraction

    International Nuclear Information System (INIS)

    Iff, W; Lindlein, N; Tishchenko, A V

    2014-01-01

    A novel Fourier method for rigorous diffraction computation at periodic structures is presented. The procedure is based on a differential equation for the S-matrix, which allows direct integration of the S-matrix blocks. This results in a new method in Fourier space, which can be considered as a numerically stable and well-parallelizable alternative to the conventional differential method based on T-matrix integration and subsequent conversions from the T-matrices to S-matrix blocks. Integration of the novel differential equation in implicit manner is expounded. The applicability of the new method is shown on the basis of 1D periodic structures. It is clear however, that the new technique can also be applied to arbitrary 2D periodic or periodized structures. The complexity of the new method is O(N³), similar to the conventional differential method, with N being the number of diffraction orders. (fast track communication)

  10. Rigorous simulation: a tool to enhance decision making

    Energy Technology Data Exchange (ETDEWEB)

    Neiva, Raquel; Larson, Mel; Baks, Arjan [KBC Advanced Technologies plc, Surrey (United Kingdom)

    2012-07-01

    The world refining industries continue to be challenged by population growth (increased demand), regional market changes and the pressure of regulatory requirements to operate a 'green' refinery. Environmental regulations are reducing the value and use of heavy fuel oils, leading refiners to convert more of the heavier products or even heavier crude into lighter products while meeting increasingly stringent transportation fuel specifications. As a result, actions are required to establish a sustainable advantage for future success. Rigorous simulation provides a key advantage, improving the timely and efficient use of capital investment and maximizing profitability. Sustainably maximizing profit through rigorous modeling is achieved through enhanced performance monitoring and improved Linear Programme (LP) model accuracy. This paper contains examples of these two items. The combination of both increases overall rates of return. As refiners consider optimizing existing assets and expanding projects, the process agreed to achieve these goals is key to a successful profit improvement. The benefit of rigorous kinetic simulation with detailed fractionation allows for optimizing existing asset utilization while focusing the capital investment in the new unit(s), and therefore optimizing the overall strategic plan and return on investment. Individual process unit monitoring works as a mechanism for validating and optimizing plant performance. Unit monitoring is important to rectify poor performance and increase profitability. The key to a good LP relies upon the accuracy of the data used to generate the LP sub-model data. The value of rigorous unit monitoring is that the results are heat- and mass-balanced consistently, and are unique to a refiner's unit/refinery. With the improved match of the refinery operation, the rigorous simulation models will allow capturing more accurately the nonlinearity of those process units and therefore provide correct

  11. Rigorous solution to Bargmann-Wigner equation for integer spin

    CERN Document Server

    Huang Shi Zhong; Wu Ning; Zheng Zhi Peng

    2002-01-01

    A rigorous method is developed to solve the Bargmann-Wigner equation for arbitrary integer spin in coordinate representation, step by step. The Bargmann-Wigner equation is first transformed to a form easier to solve; the new equations are then solved rigorously in coordinate representation, and the wave functions in a closed form are thus derived.

  12. A numerical test method of California bearing ratio on graded crushed rocks using particle flow modeling

    Directory of Open Access Journals (Sweden)

    Yingjun Jiang

    2015-04-01

    Full Text Available In order to better understand the mechanical properties of graded crushed rocks (GCRs and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR test on GCRs. The effects of different testing conditions and micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied. The reliability of the numerical technique is verified. The numerical results suggest that the influences of the loading rate and Poisson's ratio on the CBR numerical test results are not significant. As such, a loading rate of 1.0–3.0 mm/min, a piston diameter of 5 cm, a specimen height of 15 cm and a specimen diameter of 15 cm are adopted for the CBR numerical test. The numerical results reveal that the CBR values increase with the friction coefficient at the contact and shear modulus of the rocks, while the influence of Poisson's ratio on the CBR values is insignificant. The close agreement between the CBR numerical results and experimental results suggests that the numerical simulation of the CBR values is promising to help assess the mechanical properties of GCRs and to optimize the grading design. Besides, the numerical study can provide useful insights on the mesoscopic mechanism.

  13. Numerical modelling of concentrated leak erosion during Hole Erosion Tests

    OpenAIRE

    Mercier, F.; Bonelli, S.; Golay, F.; Anselmet, F.; Philippe, P.; Borghi, R.

    2015-01-01

    This study focuses on the numerical modelling of concentrated leak erosion of a cohesive soil by a turbulent flow in axisymmetrical geometry, with application to the Hole Erosion Test (HET). The numerical model is based on adaptive remeshing of the water/soil interface to ensure accurate description of the mechanical phenomena occurring near the soil/water interface. The erosion law governing the interface motion is based on two erosion parameters: the critical shear stress and the erosion coefficient.

  14. Testing the accuracy and stability of spectral methods in numerical relativity

    International Nuclear Information System (INIS)

    Boyle, Michael; Lindblom, Lee; Pfeiffer, Harald P.; Scheel, Mark A.; Kidder, Lawrence E.

    2007-01-01

    The accuracy and stability of the Caltech-Cornell pseudospectral code is evaluated using the Kidder, Scheel, and Teukolsky (KST) representation of the Einstein evolution equations. The basic 'Mexico City tests' widely adopted by the numerical relativity community are adapted here for codes based on spectral methods. Exponential convergence of the spectral code is established, apparently limited only by numerical roundoff error or by truncation error in the time integration. A general expression for the growth of errors due to finite machine precision is derived, and it is shown that this limit is achieved here for the linear plane-wave test
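    The exponential (spectral) convergence the abstract establishes, limited only by roundoff, is easy to reproduce on a toy problem: Fourier differentiation of a smooth periodic function, with the error collapsing toward machine precision as the grid is refined. The test function and grid sizes below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def spectral_derivative(u):
        """Derivative of samples of a periodic function on [0, 2*pi) via the FFT."""
        n = len(u)
        ik = 1j * np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers times i
        return np.real(np.fft.ifft(ik * np.fft.fft(u)))

    def max_error(n):
        x = 2.0 * np.pi * np.arange(n) / n
        u = np.exp(np.sin(x))                    # smooth periodic test function
        exact = np.cos(x) * u                    # analytic derivative
        return float(np.max(np.abs(spectral_derivative(u) - exact)))

    errors = [max_error(n) for n in (8, 16, 32)]  # drops to roundoff level
    ```

    Doubling the resolution here squares (roughly) the error, mirroring the paper's observation that spectral accuracy ends up limited by finite machine precision or time-integration truncation error rather than spatial resolution.
    
    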

  15. RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT.

    Science.gov (United States)

    Meltzer, S J; Auer, J

    1908-01-01

    Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause nearly an immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with a destroyed cord. If M/8 solutions (nearly equimolecular to "physiological" solutions of sodium chloride) are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case, as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor; only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium also hastens the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle.

  16. Numerical and field tests of hydraulic transients at Piva power plant

    International Nuclear Information System (INIS)

    Giljen, Z

    2014-01-01

    In 2009, a sophisticated field investigation was undertaken and later, in 2011, numerical tests were completed on all three turbine units at the Piva hydroelectric power plant. These tests were made in order to assist in making decisions about the necessary scope of the reconstruction and modernisation of the Piva hydroelectric power plant, a plant originally constructed in the mid-1970s. More specifically, the investigation included several hydraulic conditions, including the start-up and stopping of each unit, load rejection under governor control from different initial powers, as well as emergency shut-down. Numerical results were obtained using the method of characteristics in a representation that included the full flow system and the characteristics of each associated Francis turbine. The impact of load rejection and emergency shut-down on the penstock pressure and turbine speed changes is reported, and numerical and experimental results are compared, showing close agreement.
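    The peak penstock pressure rise that such load-rejection transients produce is bounded by the classical Joukowsky estimate, which a method-of-characteristics model reproduces when the flow change is faster than the wave round-trip time. The density, wave speed and velocity change below are generic illustrative values, not Piva data.

    ```python
    def joukowsky_surge(rho, a, dv):
        """Peak pressure rise (Pa) for an instantaneous velocity change dv (m/s)
        in a pipe with pressure-wave speed a (m/s) and fluid density rho (kg/m3):
        the Joukowsky formula delta_p = rho * a * dv."""
        return rho * a * dv

    # Water in a steel penstock, full rejection of a 4 m/s flow (illustrative)
    dp = joukowsky_surge(1000.0, 1200.0, 4.0)   # pressure rise in Pa
    ```

    Here dp is 4.8 MPa, roughly 490 m of additional head, which is why governor-controlled closure rates matter and why the field tests exercised both controlled load rejection and emergency shut-down.
    
    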

  17. Numerical simulations of capillary barrier field tests

    International Nuclear Information System (INIS)

    Morris, C.E.; Stormont, J.C.

    1997-01-01

    Numerical simulations of two capillary barrier systems tested in the field were conducted to determine if an unsaturated flow model could accurately represent the observed results. The field data was collected from two 7-m long, 1.2-m thick capillary barriers built on a 10% grade that were being tested to investigate their ability to laterally divert water downslope. One system had a homogeneous fine layer, while the fine soil of the second barrier was layered to increase its ability to laterally divert infiltrating moisture. The barriers were subjected first to constant infiltration while minimizing evaporative losses and then were exposed to ambient conditions. The continuous infiltration period of the field tests for the two barrier systems was modelled to determine the ability of an existing code to accurately represent capillary barrier behavior embodied in these two designs. Differences between the field test and the model data were found, but in general the simulations appeared to adequately reproduce the response of the test systems. Accounting for moisture retention hysteresis in the layered system will potentially lead to more accurate modelling results and is likely to be important when developing reasonable predictions of capillary barrier behavior
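    Unsaturated-flow models of capillary barriers rest on a soil-water retention curve linking suction to water content; a common choice in such codes is van Genuchten's model. The parameters below are illustrative values for a fine soil, not values from the study.

    ```python
    def van_genuchten_theta(psi, theta_r=0.05, theta_s=0.40, alpha=2.0, n=2.0):
        """Volumetric water content at suction head psi (m) under the
        van Genuchten retention model.  theta_r/theta_s are residual and
        saturated water contents; alpha (1/m) and n are illustrative
        fitting parameters."""
        if psi <= 0.0:                  # zero or negative suction: saturated
            return theta_s
        m = 1.0 - 1.0 / n
        s_e = (1.0 + (alpha * psi) ** n) ** (-m)   # effective saturation
        return theta_r + (theta_s - theta_r) * s_e
    ```

    The contrast between the retention curves of the fine and coarse layers is what keeps infiltrating water in the fine layer and produces the downslope lateral diversion the field tests measured; hysteresis in this curve is the effect the authors flag as important for the layered design.
    
    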

  18. Numerical study of propagation effects in a wireless mesh test bed

    CSIR Research Space (South Africa)

    Lysko, AA

    2008-07-01

    Full Text Available The present layout of the indoor wireless mesh network test-bed built at the Meraka Institute is introduced. This is followed by a description of a numerical electromagnetic model for the complete test-bed, including the coupling and diffraction...

  19. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
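    The idea scales down to a toy check: estimate a quantity with a known exact value by Monte Carlo and flag the sampler if the z statistic against the truth is implausibly large. The estimator, sample size and threshold here are illustrative, not the paper's procedure.

    ```python
    import math
    import random

    def mc_quarter_circle(n, seed=12345):
        """Monte Carlo estimate of pi/4 (area of the unit quarter circle)
        together with its binomial standard error."""
        rng = random.Random(seed)
        hits = sum(rng.random() ** 2 + rng.random() ** 2 < 1.0 for _ in range(n))
        p = hits / n
        return p, math.sqrt(p * (1.0 - p) / n)

    p, se = mc_quarter_circle(200_000)
    z = (p - math.pi / 4.0) / se   # |z| far beyond ~3 would suggest a bug
    ```

    A deterministic unit test would demand exact equality and fail almost surely here; the hypothesis-testing view instead accepts any estimate whose deviation is statistically consistent with the known standard error.
    
    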

  20. Numerical simulation of impact tests on reinforced concrete beams

    International Nuclear Information System (INIS)

    Jiang, Hua; Wang, Xiaowo; He, Shuanhai

    2012-01-01

    Highlights: ► Predictions using an advanced concrete model compare well with the impact test results. ► Several important behaviors of concrete are discussed. ► Two ways of incorporating rebar into the concrete mesh are also discussed. ► Gives an example of using the EPDC model and references for developing new constitutive models. -- Abstract: This paper focuses on numerical simulation of impact tests of reinforced concrete (RC) beams with the LS-DYNA finite element (FE) code. In the FE model, the elasto-plastic damage cap (EPDC) model, which is based on continuum damage mechanics in combination with plasticity theory, is used for concrete, and the reinforcement is assumed to be elasto-plastic. The numerical results compare well with the experimental values reported in the literature in terms of impact force history, mid-span deflection history and crack patterns of RC beams. By comparing the numerical and experimental results, several important aspects of the concrete material behavior are investigated: the damage variable that describes the strain-softening section of the stress–strain curve; the cap surface that describes the plastic volume change; the shape of the meridian and deviatoric planes that describe the yield surface; and two methods of incorporating rebar into the concrete mesh. This study gives a good example of using the EPDC model and can be utilized for the development of new constitutive models for concrete in the future.
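The damage-variable idea mentioned in the abstract can be illustrated with a one-dimensional softening law: stress is the elastic response scaled by (1 − d), where the damage d grows once strain passes the elastic limit. The exponential form and all parameter values below are generic textbook choices for illustration, not the EPDC model itself.

```python
import math

def damage_stress(eps, E=30e9, eps0=1e-4, alpha=300.0):
    """1D damage law: sigma = (1 - d) * E * eps. Purely elastic below eps0;
    beyond it the damage variable d grows so that stress softens.
    E, eps0 and alpha are illustrative parameters only."""
    if eps <= eps0:
        return E * eps
    d = 1.0 - (eps0 / eps) * math.exp(-alpha * (eps - eps0))  # d in [0, 1)
    return (1.0 - d) * E * eps  # = E * eps0 * exp(-alpha * (eps - eps0))
```

The curve rises linearly to a peak of E·eps0 at the elastic limit and then decays, which is the strain-softening branch the damage variable is there to capture.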

  1. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret; Yu, Yi-Hsiang

    2016-08-01

    The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  2. NUMERICAL SIMULATION OF AN AGRICULTURAL SOIL SHEAR STRESS TEST

    Directory of Open Access Journals (Sweden)

    Andrea Formato

    2007-03-01

    Full Text Available In this work, a numerical simulation of agricultural soil shear stress tests was performed using soil shear strength data obtained with a soil shearometer. We used a commercially available soil shearometer to measure soil shear stress and constructed special equipment that enabled automated detection of soil shear stress. It was connected to a data acquisition system that displayed and recorded soil shear stress during the full field tests. A soil shearometer unit was used for in situ measurements of soil shear stress under full field conditions for different types of soils located on the right side of the Sele river, about 1 km apart from each other, along the perpendicular to the river in the direction of the sea. Full field tests with the shearometer unit were performed alongside the collection of characteristic parameter data for the soils considered. These parameter values were derived from hydrostatic compression and triaxial tests performed on the soil samples; each set of tests was repeated 4 times, and the difference between the maximum and minimum values detected for every set never exceeded 4%. The full field shear tests were simulated with the Abaqus code using three soil material models commonly used in the literature: the Mohr-Coulomb, Drucker-Prager and Cam-Clay models. We then compared the numerical simulation results with those from the experimental tests, discussed the simulation results obtained with the different material models, and selected the best material model for each soil considered, to be used in tyre/soil contact simulation or in soil compaction studies.

  3. The interior of axisymmetric and stationary black holes: Numerical and analytical studies

    International Nuclear Information System (INIS)

    Ansorg, Marcus; Hennig, Joerg

    2011-01-01

    We investigate the interior hyperbolic region of axisymmetric and stationary black holes surrounded by a matter distribution. First, we treat the corresponding initial value problem of the hyperbolic Einstein equations numerically in terms of a single-domain fully pseudo-spectral scheme. Thereafter, a rigorous mathematical approach is given, in which soliton methods are utilized to derive an explicit relation between the event horizon and an inner Cauchy horizon. This horizon arises as the boundary of the future domain of dependence of the event horizon. Our numerical studies provide strong evidence for the validity of the universal relation A⁺A⁻ = (8πJ)², where A⁺ and A⁻ are the areas of the event and inner Cauchy horizons, respectively, and J denotes the angular momentum. With our analytical considerations we are able to prove this relation rigorously.

  4. Rigorous bounds on the free energy of electron-phonon models

    NARCIS (Netherlands)

    Raedt, Hans De; Michielsen, Kristel

    1997-01-01

    We present a collection of rigorous upper and lower bounds to the free energy of electron-phonon models with linear electron-phonon interaction. These bounds are used to compare different variational approaches. It is shown rigorously that the ground states corresponding to the sharpest bounds do

  5. Accelerating Biomedical Discoveries through Rigor and Transparency.

    Science.gov (United States)

    Hewitt, Judith A; Brown, Liliana L; Murphy, Stephanie J; Grieder, Franziska; Silberberg, Shai D

    2017-07-01

    Difficulties in reproducing published research findings have garnered a lot of press in recent years. As a funder of biomedical research, the National Institutes of Health (NIH) has taken measures to address underlying causes of low reproducibility. Extensive deliberations resulted in a policy, released in 2015, to enhance reproducibility through rigor and transparency. We briefly explain what led to the policy, describe its elements, provide examples and resources for the biomedical research community, and discuss the potential impact of the policy on translatability with a focus on research using animal models. Importantly, while increased attention to rigor and transparency may lead to an increase in the number of laboratory animals used in the near term, it will lead to more efficient and productive use of such resources in the long run. The translational value of animal studies will be improved through more rigorous assessment of experimental variables and data, leading to better assessments of the translational potential of animal models, for the benefit of the research community and society. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  6. Numerical Simulation of Polynomial-Speed Convergence Phenomenon

    Science.gov (United States)

    Li, Yao; Xu, Hui

    2017-11-01

    We provide a hybrid method that captures the polynomial speed of convergence and polynomial speed of mixing for Markov processes. The hybrid method that we introduce is based on the coupling technique and renewal theory. We propose to replace some estimates in classical results about the ergodicity of Markov processes by numerical simulations when the corresponding analytical proof is difficult. After that, all remaining conclusions can be derived from rigorous analysis. Then we apply our results to seek numerical justification for the ergodicity of two 1D microscopic heat conduction models. The mixing rates of these two models are expected to be polynomial but are very difficult to prove. In both examples, our numerical results match the expected polynomial mixing rate well.

  7. Recent Development in Rigorous Computational Methods in Dynamical Systems

    OpenAIRE

    Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł

    2009-01-01

    We highlight selected results of recent development in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different ways of approach and we provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...
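The interval-arithmetic core of such rigorous computations is easy to sketch: every operation rounds its result bounds outward, so a computed enclosure is guaranteed to contain the true value. The minimal class below (the class name and the √2 example are ours, not from the paper; `math.nextafter` requires Python 3.9+) uses this to certify a sign change of x² − 2 on [1, 2].

```python
import math

class Interval:
    """Closed interval [lo, hi]; bounds are rounded outward after each op."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)

    @staticmethod
    def _outward(lo, hi):
        # One-ulp outward rounding keeps enclosures rigorous in floating point
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, o):
        return Interval._outward(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval._outward(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval._outward(min(p), max(p))

def f(x):  # f(x) = x^2 - 2, evaluated in interval arithmetic
    return x * x - Interval(2.0)

# Rigorous sign change on [1, 2]: with continuity, this certifies that
# sqrt(2) lies in [1, 2] -- a (toy) computer-assisted proof.
assert f(Interval(1.0)).hi < 0.0 and f(Interval(2.0)).lo > 0.0
```

Topological methods such as those surveyed in the record scale this same certification idea up from sign changes to index and Conley-index arguments for dynamical systems.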

  8. The effect of rigor mortis on the passage of erythrocytes and fluid through the myocardium of isolated dog hearts.

    Science.gov (United States)

    Nevalainen, T J; Gavin, J B; Seelye, R N; Whitehouse, S; Donnell, M

    1978-07-01

    The effect of normal and artificially induced rigor mortis on the vascular passage of erythrocytes and fluid through isolated dog hearts was studied. Increased rigidity of 6-mm thick transmural sections through the centre of the posterior papillary muscle was used as an indication of rigor. The perfusibility of the myocardium was tested by injecting 10 ml of 1% sodium fluorescein in Hanks solution into the circumflex branch of the left coronary artery. In prerigor hearts (20 minute incubation) fluorescein perfused the myocardium evenly whether or not it was preceded by an injection of 10 ml of heparinized dog blood. Rigor mortis developed in all hearts after 90 minutes incubation or within 20 minutes of perfusing the heart with 50 ml of 5 mM iodoacetate in Hanks solution. Fluorescein injected into hearts in rigor did not enter the posterior papillary muscle and adjacent subendocardium whether or not it was preceded by heparinized blood. Thus the vascular occlusion caused by rigor in the dog heart appears to be so effective that it prevents flow into the subendocardium of small soluble ions such as fluorescein.

  9. Trends: Rigor Mortis in the Arts.

    Science.gov (United States)

    Blodget, Alden S.

    1991-01-01

    Outlines how past art education provided a refuge for students from the rigors of other academic subjects. Observes that in recent years art education has become "discipline based." Argues that art educators need to reaffirm their commitment to a humanistic way of knowing. (KM)

  10. Fast numerical solution of KKR-CPA equations: Testing new algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, E.; Florio, G.M.; Ginatempo, B.; Giuliano, E.S. (Universita di Messina (Italy))

    1994-04-01

    Some numerical methods for the solution of the KKR-CPA equations are discussed and tested. New, efficient computational algorithms are proposed, allowing a remarkable reduction in computing time and good reliability in evaluating spectral quantities. 16 refs., 7 figs.

  11. Using grounded theory as a method for rigorously reviewing literature

    NARCIS (Netherlands)

    Wolfswinkel, J.; Furtmueller-Ettinger, Elfriede; Wilderom, Celeste P.M.

    2013-01-01

    This paper offers guidance to conducting a rigorous literature review. We present this in the form of a five-stage process in which we use Grounded Theory as a method. We first probe the guidelines explicated by Webster and Watson, and then we show the added value of Grounded Theory for rigorously

  12. Rigor, vigor, and the study of health disparities.

    Science.gov (United States)

    Adler, Nancy; Bush, Nicole R; Pantell, Matthew S

    2012-10-16

    Health disparities research spans multiple fields and methods and documents strong links between social disadvantage and poor health. Associations between socioeconomic status (SES) and health are often taken as evidence for the causal impact of SES on health, but alternative explanations, including the impact of health on SES, are plausible. Studies showing the influence of parents' SES on their children's health provide evidence for a causal pathway from SES to health, but have limitations. Health disparities researchers face tradeoffs between "rigor" and "vigor" in designing studies that demonstrate how social disadvantage becomes biologically embedded and results in poorer health. Rigorous designs aim to maximize precision in the measurement of SES and health outcomes through methods that provide the greatest control over temporal ordering and causal direction. To achieve precision, many studies use a single SES predictor and single disease. However, doing so oversimplifies the multifaceted, entwined nature of social disadvantage and may overestimate the impact of that one variable and underestimate the true impact of social disadvantage on health. In addition, SES effects on overall health and functioning are likely to be greater than effects on any one disease. Vigorous designs aim to capture this complexity and maximize ecological validity through more complete assessment of social disadvantage and health status, but may provide less-compelling evidence of causality. Newer approaches to both measurement and analysis may enable enhanced vigor as well as rigor. Incorporating both rigor and vigor into studies will provide a fuller understanding of the causes of health disparities.

  13. Tests of numerical simulation algorithms for the Kubo oscillator

    International Nuclear Information System (INIS)

    Fox, R.F.; Roy, R.; Yu, A.W.

    1987-01-01

    Numerical simulation algorithms for multiplicative noise (white or colored) are tested for accuracy against closed-form expressions for the Kubo oscillator. Direct white noise simulations lead to spurious decay of the modulus of the oscillator amplitude. A straightforward colored noise algorithm greatly reduces this decay and also provides highly accurate results in the white noise limit.
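The modulus artefact described above is easy to reproduce. For the Kubo oscillator ȧ = i(ω + ξ(t))a, every exact step is a pure phase rotation, so |a| = 1 is conserved along each realization, while a naive Euler step multiplies |a| by √(1 + Δφ²) and drifts (here it grows; the direction of the artefact depends on the discretization — this sketch is not the paper's specific algorithm). Step size and noise strength below are illustrative.

```python
import cmath
import math
import random

def kubo_moduli(n_steps=1000, dt=1e-3, omega=1.0, D=1.0, seed=1):
    """Integrate one realization of the Kubo oscillator two ways and
    return the final modulus of each; the exact value is 1."""
    rng = random.Random(seed)
    a_euler = a_exact = 1.0 + 0.0j
    for _ in range(n_steps):
        dphi = omega * dt + rng.gauss(0.0, math.sqrt(D * dt))  # phase increment
        a_euler *= 1.0 + 1j * dphi        # naive Euler step: |a| drifts
        a_exact *= cmath.exp(1j * dphi)   # exact phase rotation: |a| stays 1
    return abs(a_euler), abs(a_exact)

mod_euler, mod_exact = kubo_moduli()
# the exponential update conserves the modulus to rounding accuracy,
# while the Euler modulus drifts roughly like exp(D * t / 2)
```

Closed-form results for the ensemble average, ⟨a(t)⟩ = e^{iωt − Dt/2} in this convention, are what make the Kubo oscillator such a convenient accuracy benchmark.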

  14. Numerical distribution functions of fractional unit root and cointegration tests

    DEFF Research Database (Denmark)

    MacKinnon, James G.; Nielsen, Morten Ørregaard

    We calculate numerically the asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter, b, which must be estimated, simple tabulation is not feasible. Partly due to the presence...

  15. Improved rigorous upper bounds for transport due to passive advection described by simple models of bounded systems

    International Nuclear Information System (INIS)

    Kim, Chang-Bae; Krommes, J.A.

    1988-08-01

    The work of Krommes and Smith on rigorous upper bounds for the turbulent transport of a passively advected scalar [Ann. Phys. 177:246 (1987)] is extended in two directions: (1) For their ''reference model,'' improved upper bounds are obtained by utilizing more sophisticated two-time constraints which include the effects of cross-correlations up to fourth order. Numerical solutions of the model stochastic differential equation are also obtained; they show that the new bounds compare quite favorably with the exact results, even at large Reynolds and Kubo numbers. (2) The theory is extended to take account of a finite spatial autocorrelation length L_c. As a reasonably generic example, the problem of particle transport due to statistically specified stochastic magnetic fields in a collisionless turbulent plasma is revisited. A bound is obtained which reduces for small L_c to the quasilinear limit and for large L_c to the strong turbulence limit, and which provides a reasonable and rigorous interpolation for intermediate values of L_c. 18 refs., 6 figs

  16. How Individual Scholars Can Reduce the Rigor-Relevance Gap in Management Research

    OpenAIRE

    Wolf, Joachim; Rosenberg, Timo

    2012-01-01

    This paper discusses a number of avenues management scholars could follow to reduce the existing gap between scientific rigor and practical relevance without relativizing the importance of the first goal dimension. Such changes are necessary because many management studies do not fully exploit the possibilities to increase their practical relevance while maintaining scientific rigor. We argue that this rigor-relevance gap is not only the consequence of the currently prevailing institutional c...

  17. Static Load Test on Instrumented Pile - Field Data and Numerical Simulations

    Science.gov (United States)

    Krasiński, Adam; Wiszniewski, Mateusz

    2017-09-01

    Static load tests on foundation piles are generally carried out in order to determine the load-displacement characteristic of the pile head. For standard (basic) engineering practice this type of test usually provides enough information. However, knowledge of the force distribution along the pile core and its division into friction along the shaft and resistance under the base can be very useful. Such information can be obtained by strain-gage pile instrumentation [1]. Significant investigations have been completed on this technology, proving its utility and correctness [8], [10], [12]. The results of static tests on instrumented piles are not easy to interpret. There are many factors and processes affecting the final outcome. In order to better understand the whole testing process and soil-structure behavior, some investigations and numerical analyses were done. In the paper, real data from a field load test on instrumented piles is discussed and compared with a numerical simulation of such a test in similar conditions. Differences and difficulties in the interpretation of results, with their possible reasons, are discussed. Moreover, the authors used their own analytical solution for a more reliable determination of the force distribution along the pile. The work was presented at the XVII French-Polish Colloquium of Soil and Rock Mechanics, Łódź, 28-30 November 2016.
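The division of pile load into shaft friction and base resistance mentioned above follows directly from the strain-gage readings: the axial force at each instrumented level is N = E·A·ε, the friction shed between two levels is the difference of successive forces, and the force at the lowest gage approximates the base resistance. The stiffness, cross-section area and strain values below are made-up illustrative numbers, and the constant-EA assumption is a simplification of what the paper's own analytical solution refines.

```python
def pile_force_distribution(strains, E=30e9, area=0.28):
    """Axial forces at gage levels (top to bottom, from N = E*A*eps),
    shaft friction shed between consecutive levels, and base resistance.
    Simplified model: constant pile stiffness E*A along the shaft."""
    forces = [E * area * s for s in strains]
    friction = [forces[i] - forces[i + 1] for i in range(len(forces) - 1)]
    return forces, friction, forces[-1]

# Strains decrease with depth as load transfers to the surrounding soil
forces, friction, base = pile_force_distribution(
    [140e-6, 100e-6, 60e-6, 20e-6])
# Equilibrium check: head force = total shaft friction + base resistance
assert abs(forces[0] - (sum(friction) + base)) < 1e-3
```

In a real interpretation the level-to-level differences are further divided by the shaft perimeter and segment length to obtain unit skin friction.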

  18. Onset of rigor mortis is earlier in red muscle than in white muscle.

    Science.gov (United States)

    Kobayashi, M; Takatori, T; Nakajima, M; Sakurada, K; Hatanaka, K; Ikegaya, H; Matsuda, Y; Iwase, H

    2000-01-01

    Rigor mortis is thought to be related to falling ATP levels in muscles postmortem. We measured rigor mortis as tension determined isometrically in three rat leg muscles in liquid paraffin kept at 37 degrees C or 25 degrees C--two red muscles, red gastrocnemius (RG) and soleus (SO) and one white muscle, white gastrocnemius (WG). Onset, half and full rigor mortis occurred earlier in RG and SO than in WG both at 37 degrees C and at 25 degrees C even though RG and WG were portions of the same muscle. This suggests that rigor mortis directly reflects the postmortem intramuscular ATP level, which decreases more rapidly in red muscle than in white muscle after death. Rigor mortis was more retarded at 25 degrees C than at 37 degrees C in each type of muscle.

  19. Photoconductivity of amorphous silicon-rigorous modelling

    International Nuclear Information System (INIS)

    Brada, P.; Schauer, F.

    1991-01-01

    It is our great pleasure to express our gratitude to Prof. Grigorovici, the pioneer of the exciting field of the amorphous state, with our modest contribution to this area. This paper presents an outline of a rigorous modelling program for the steady-state photoconductivity in amorphous silicon and related materials. (Author)

  20. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing

    Science.gov (United States)

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko

    2007-10-01

    The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over given orography (elliptical hill, mountain ridge, system of successive ridges) in a rectangular domain, with emphasis on numerical accuracy and the capability to represent non-hydrostatic effects. Comparison demonstrates good (in strong primary wave generation) to satisfactory (in weak secondary wave reproduction in some cases) consistency of the numerical modelling results with known stationary linear test solutions. Numerical stability of the developed model is investigated with respect to the choice of reference state, modelling the dynamics of a stationary front. The horizontally area-mean reference temperature proves to be the optimal guarantor of stability. The numerical scheme with an explicit residual in the vertical forcing term becomes unstable for cross-frontal temperature differences exceeding 30 K. Stability is restored if the vertical forcing is treated implicitly, which enables the use of time steps comparable with those of the hydrostatic SISL.

  1. Diffraction-based overlay measurement on dedicated mark using rigorous modeling method

    Science.gov (United States)

    Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang

    2012-03-01

    Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors; results show that DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, model-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that a few pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and costs more measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economic and time saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.

  2. Reframing Rigor: A Modern Look at Challenge and Support in Higher Education

    Science.gov (United States)

    Campbell, Corbin M.; Dortch, Deniece; Burt, Brian A.

    2018-01-01

    This chapter describes the limitations of the traditional notions of academic rigor in higher education, and brings forth a new form of rigor that has the potential to support student success and equity.

  3. Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.

    Science.gov (United States)

    Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten

    2018-01-01

    Lack of reproducibility of preclinical studies has been identified as an impediment for translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. A total of 1466 manuscripts were analyzed. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence interval indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0], randomization, 0.91 [0.85, 0.98], and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence

  4. Testing numerical evolution with the shifted gauge wave

    Energy Technology Data Exchange (ETDEWEB)

    Babiuc, Maria C [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Szilagyi, Bela [Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, 14476 Golm (Germany); Winicour, Jeffrey [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States)

    2006-08-21

    Computational methods are essential to provide waveforms from coalescing black holes, which are expected to produce strong signals for the gravitational wave observatories being developed. Although partial simulations of the coalescence have been reported, scientifically useful waveforms have so far not been delivered. The goal of the Apples with Apples (AwA) Alliance is to design, coordinate and document standardized code tests for comparing numerical relativity codes. The first round of AwA tests has now been completed and the results are being analysed. These initial tests are based upon the periodic boundary conditions designed to isolate performance of the main evolution code. Here we describe and carry out an additional test with periodic boundary conditions which deals with an essential feature of the black hole excision problem, namely a non-vanishing shift. The test is a shifted version of the existing AwA gauge wave test. We show how a shift introduces an exponentially growing instability which violates the constraints of a standard harmonic formulation of Einstein's equations. We analyse the Cauchy problem in a harmonic gauge and discuss particular options for suppressing instabilities in the gauge wave tests. We implement these techniques in a finite difference evolution algorithm and present test results. Although our application here is limited to a model problem, the techniques should benefit the simulation of black holes using harmonic evolution codes.

  5. Fire exposed facades: Numerical modelling of the LEPIR2 testing facility

    Directory of Open Access Journals (Sweden)

    Dréan Virginie

    2016-01-01

    Full Text Available The LEPIR2 testing facility is designed to evaluate the fire behaviour of construction solutions implemented on facades according to the experimental evaluation required by the French Technical Specification 249 (IT249) of the safety regulation, which aims to limit the risk of fire spreading to upper levels via the facade. The facility involves a wood crib fire in the lower compartment of a full-scale, two-level structure. Flames exit the compartment through window openings and develop in front of the facade. Computational fluid dynamics simulations were carried out with the FDS code (Fire Dynamics Simulator) for two full-scale experiments performed by the Efectis France laboratory. The first objective of this study is to evaluate the ability of the numerical model to reproduce quantitative results in terms of gas temperatures and heat flux on the tested facade, for further evaluation of the fire performance of an insulation solution. When experimental results are compared with numerical calculations, good agreement is found for all quantities in each test. The proposed models for the wood cribs and the geometry give correct thermal loads and flame shapes near the tested facade.

  6. Numerical Simulation of Partially-Coherent Broadband Optical Imaging Using the FDTD Method

    Science.gov (United States)

    Çapoğlu, İlker R.; White, Craig A.; Rogers, Jeremy D.; Subramanian, Hariharan; Taflove, Allen; Backman, Vadim

    2012-01-01

    Rigorous numerical modeling of optical systems has attracted interest in diverse research areas ranging from biophotonics to photolithography. We report the full-vector electromagnetic numerical simulation of a broadband optical imaging system with partially-coherent and unpolarized illumination. The scattering of light from the sample is calculated using the finite-difference time-domain (FDTD) numerical method. Geometrical optics principles are applied to the scattered light to obtain the intensity distribution at the image plane. Multilayered object spaces are also supported by our algorithm. For the first time, numerical FDTD calculations are directly compared to and shown to agree well with broadband experimental microscopy results. PMID:21540939
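A minimal 1D Yee-scheme update conveys the core of the field solver behind such simulations (the real computation is 3D, broadband, vectorial and partially coherent). In normalized units (c = Δx = 1) an initial Gaussian Ez pulse with zero magnetic field splits into two counter-propagating half-amplitude pulses; grid size, Courant number and pulse width below are illustrative.

```python
import math

def fdtd_1d(n_cells=400, n_steps=200, courant=0.5, sigma=10.0, center=200):
    """Minimal 1D vacuum FDTD on a staggered Yee grid, normalized units.
    ez lives on integer cells, hy on half cells between them."""
    ez = [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n_cells)]
    hy = [0.0] * (n_cells - 1)
    for _ in range(n_steps):
        for i in range(n_cells - 1):        # H update from the curl of E
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n_cells - 1):     # E update from the curl of H
            ez[i] += courant * (hy[i] - hy[i - 1])
    return ez

ez = fdtd_1d()
# After t = n_steps * courant = 100, the two half-amplitude pulses sit near
# cells 100 and 300, and the field at the launch point has returned to ~0.
```

Partially coherent, unpolarized imaging as in the record is then obtained by repeating such coherent solves over illumination angles, wavelengths and polarizations and summing the image intensities incoherently.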

  7. Numerical Evaluation of a Light-Gas Gun Facility for Impact Test

    Directory of Open Access Journals (Sweden)

    C. Rahner

    2014-01-01

    Full Text Available Experimental tests which match the application conditions may be used to properly evaluate materials for specific applications. High-velocity impacts can be simulated using light-gas gun facilities, which come in different types and complexities. In this work, different setups for a one-stage light-gas gun facility have been numerically analyzed in order to evaluate their suitability for testing materials and composites used as armor protection. A maximal barrel length of 6 m and a maximal reservoir pressure of a standard industrial gas bottle (20 MPa) were chosen as limitations. The numerical predictions show that it is not possible to accelerate the projectile directly to the desired velocity with nitrogen, helium, or hydrogen as propellant gas. When using a sabot corresponding to a larger bore diameter, the necessary velocity is achievable with helium and hydrogen gases.
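A first-order estimate of the kind behind such an evaluation can be sketched with a quasi-static interior-ballistics model: the reservoir gas expands isentropically into the barrel and the pressure force accelerates the projectile. This ignores gas inertia, and hence the sound-speed limit that makes light gases necessary in practice (the gas species enters only through γ here), so it overestimates velocity; all parameter values are illustrative, not the paper's.

```python
import math

def muzzle_velocity(p0=20e6, V0=0.05, gamma=5.0 / 3.0, bore_d=0.0127,
                    m=0.005, L=6.0, dt=1e-6):
    """Projectile velocity at the muzzle under quasi-static isentropic
    expansion of the reservoir gas (no friction, no gas inertia)."""
    A = math.pi * bore_d ** 2 / 4.0
    x, v = 0.0, 0.0
    while x < L:
        p = p0 * (V0 / (V0 + A * x)) ** gamma  # isentropic reservoir pressure
        v += (p * A / m) * dt                  # m * dv/dt = p * A
        x += v * dt
    return v
```

With a 12.7 mm bore and a 5 g projectile this upper-bound model yields a muzzle velocity of roughly a couple of km/s; real one-stage guns fall well short of such estimates because the expanding gas column cannot push the projectile much faster than its own sound speed.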

  8. Rigor force responses of permeabilized fibres from fast and slow skeletal muscles of aged rats.

    Science.gov (United States)

    Plant, D R; Lynch, G S

    2001-09-01

    1. Ageing is generally associated with a decline in skeletal muscle mass and strength and a slowing of muscle contraction, factors that impact upon the quality of life for the elderly. The mechanisms underlying this age-related muscle weakness have not been fully resolved. The purpose of the present study was to determine whether the decrease in muscle force as a consequence of age could be attributed partly to a decrease in the number of cross-bridges participating during contraction. 2. Given that the rigor force is proportional to the approximate total number of interacting sites between the actin and myosin filaments, we tested the null hypothesis that the rigor force of permeabilized muscle fibres from young and old rats would not be different. 3. Permeabilized fibres from the extensor digitorum longus (fast-twitch; EDL) and soleus (predominantly slow-twitch) muscles of young (6 months of age) and old (27 months of age) male F344 rats were activated in Ca2+-buffered solutions to determine force-pCa characteristics (where pCa = -log(10)[Ca2+]) and then in solutions lacking ATP and Ca2+ to determine rigor force levels. 4. The rigor forces for EDL and soleus muscle fibres were not different between young and old rats, indicating that the approximate total number of cross-bridges that can be formed between filaments did not decline with age. We conclude that the age-related decrease in force output is more likely attributed to a decrease in the force per cross-bridge and/or decreases in the efficiency of excitation-contraction coupling.

  9. Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.

    Science.gov (United States)

    Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P

    2018-03-03

    Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Numerical methods for metamaterial design

    CERN Document Server

    2013-01-01

    This book describes a relatively new approach for the design of electromagnetic metamaterials.  Numerical optimization routines are combined with electromagnetic simulations to tailor the broadband optical properties of a metamaterial to have predetermined responses at predetermined wavelengths. After a review of both the major efforts within the field of metamaterials and the field of mathematical optimization, chapters covering both gradient-based and derivative-free design methods are considered.  Selected topics including surrogate-based optimization, adaptive mesh search, and genetic algorithms are shown to be effective, gradient-free optimization strategies.  Additionally, new techniques for representing dielectric distributions in two dimensions, including level sets, are demonstrated as effective methods for gradient-based optimization.  Each chapter begins with a rigorous review of the optimization strategy used, and is followed by numerous examples that combine the strategy with either electromag...

  11. Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).

    Science.gov (United States)

    Suzutani, T; Ishibashi, H; Takatori, T

    1978-11-01

    The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.

  12. Friction stir welding of AA6082-T6 sheets: Numerical analysis and experimental tests

    International Nuclear Information System (INIS)

    Buffa, G.; Fratini, L.

    2004-01-01

    3D numerical simulation of the Friction Stir Welding process is developed with the aim of highlighting the process mechanics in terms of metal flux and temperature, strain and strain rate distributions. The numerical results have been validated through a set of experimental tests

  13. Rigor mortis in an unusual position: Forensic considerations.

    Science.gov (United States)

    D'Souza, Deepak H; Harish, S; Rajesh, M; Kiran, J

    2011-07-01

    We report a case in which the dead body was found with rigor mortis in an unusual position. The body was lying on its back with the limbs raised, defying gravity. The direction of the salivary stains on the face also defied gravity. We opined that the scene of occurrence of the crime was unlikely to be the final place where the body was found. The clues suggested a homicidal offence and an attempt to destroy the evidence. The forensic use of 'rigor mortis in an unusual position' lies in furthering the investigation and in the scientific confirmation of two facts: that the scene of death (occurrence) is different from the scene of disposal of the body, and that there was a time gap between the two scenes.

  14. A new look at the statistical assessment of approximate and rigorous methods for the estimation of stabilized formation temperatures in geothermal and petroleum wells

    International Nuclear Information System (INIS)

    Espinoza-Ojeda, O M; Santoyo, E; Andaverde, J

    2011-01-01

    Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates
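
    The approximate methods referred to above typically reduce to a linear regression of bottom-hole temperature against a time function. For the classic Horner method, a minimal sketch (illustrative only; not one of the paper's seven heat transfer models, and the data below are synthetic):

```python
import math

def horner_sft(circulation_time, shut_in_times, bht_values):
    # Horner-plot estimate of the stabilized formation temperature (SFT):
    #   BHT = SFT + m * ln((tc + dt) / dt)
    # SFT is the intercept of a least-squares line as the Horner time
    # ratio -> 1 (i.e., infinite shut-in time).
    x = [math.log((circulation_time + dt) / dt) for dt in shut_in_times]
    y = list(bht_values)
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
    return ybar - slope * xbar  # intercept = SFT estimate

tc = 10.0                       # circulation time [h] (hypothetical)
dts = [5.0, 10.0, 20.0, 40.0]   # shut-in times [h]
bht = [120.0 - 15.0 * math.log((tc + dt) / dt) for dt in dts]  # synthetic BHTs
print(round(horner_sft(tc, dts, bht), 1))  # → 120.0
```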

  15. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles

    Science.gov (United States)

    Choi, Yun-Sang

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of high ultimate pH of chicken breast muscles at post-mortem 24 h. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salting chicken breast muscle (p<0.05). An increase in pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, NaCl concentrations of 3% and 4% showed no great differences in the results of physicochemical and textural properties due to pre-rigor salting effects (p>0.05). Therefore, our study certified the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that the 2% NaCl concentration is minimally required to ensure the definite pre-rigor salting effect on chicken breast muscle. PMID:26761884

  16. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles.

    Science.gov (United States)

    Kim, Hyun-Wook; Hwang, Ko-Eun; Song, Dong-Heon; Kim, Yong-Jae; Ham, Youn-Kyung; Yeo, Eui-Joo; Jeong, Tae-Jun; Choi, Yun-Sang; Kim, Cheon-Jei

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of high ultimate pH of chicken breast muscles at post-mortem 24 h. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salting chicken breast muscle (p<0.05). An increase in pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, NaCl concentrations of 3% and 4% showed no great differences in the results of physicochemical and textural properties due to pre-rigor salting effects (p>0.05). Therefore, our study certified the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that the 2% NaCl concentration is minimally required to ensure the definite pre-rigor salting effect on chicken breast muscle.

  17. Nonlinear reaction-diffusion equations with delay: some theorems, test problems, exact and numerical solutions

    Science.gov (United States)

    Polyanin, A. D.; Sorokin, V. G.

    2017-12-01

    The paper deals with nonlinear reaction-diffusion equations with one or several delays. We formulate theorems that allow constructing exact solutions for some classes of these equations, which depend on several arbitrary functions. Examples of application of these theorems for obtaining new exact solutions in elementary functions are provided. We state basic principles of construction, selection, and use of test problems for nonlinear partial differential equations with delay. Some test problems which can be suitable for estimating accuracy of approximate analytical and numerical methods of solving reaction-diffusion equations with delay are presented. Some examples of numerical solutions of nonlinear test problems with delay are considered.
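
    As a concrete illustration of such a test problem, the sketch below solves a single-delay reaction-diffusion equation u_t = u_xx + a*u(x, t − τ) with an explicit finite-difference scheme and a stored history buffer. This is a toy scheme for illustration, not one of the paper's methods, and all parameter values are hypothetical:

```python
import math

def solve_delay_rd(nx=21, length=1.0, tau=0.1, a=-1.0, t_end=0.5, dt=5e-4):
    # Explicit finite-difference sketch for u_t = u_xx + a*u(x, t - tau)
    # on [0, length] with zero Dirichlet BCs and constant history
    # u(x, t <= 0) = sin(pi*x/length).
    dx = length / (nx - 1)
    assert dt <= 0.5 * dx * dx, "explicit scheme stability limit"
    lag = round(tau / dt)                      # stored past steps for the delay
    u0 = [math.sin(math.pi * i * dx / length) for i in range(nx)]
    history = [u0[:] for _ in range(lag + 1)]  # history[-1] is the current state
    for _ in range(round(t_end / dt)):
        u, u_delay = history[-1], history[0]   # u(., t) and u(., t - tau)
        new = u[:]
        for i in range(1, nx - 1):
            new[i] = u[i] + dt * ((u[i-1] - 2.0*u[i] + u[i+1]) / dx**2 + a * u_delay[i])
        history.append(new)
        history.pop(0)
    return history[-1]

u_final = solve_delay_rd()
print(round(u_final[10], 4))  # midpoint value after t = 0.5: decayed but positive
```

    Since the initial profile is a Laplacian eigenfunction, the exact solution reduces to a scalar delay ODE, which makes problems of this type convenient accuracy benchmarks.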

  18. Dynamic Brazilian Test of Rock Under Intermediate Strain Rate: Pendulum Hammer-Driven SHPB Test and Numerical Simulation

    Science.gov (United States)

    Zhu, W. C.; Niu, L. L.; Li, S. H.; Xu, Z. H.

    2015-09-01

    The tensile strength of rock subjected to dynamic loading underlies many engineering applications such as rock drilling and blasting. Dynamic Brazilian tests of rock specimens were conducted with a split Hopkinson pressure bar (SHPB) driven by a pendulum hammer, in order to determine the indirect tensile strength of rock under intermediate strain rates ranging from 5.2 to 12.9 s-1, achieved by impacting the incident bar with the pendulum hammer at different velocities. The incident wave excited by the pendulum hammer is triangular in shape, featuring a long rise time, which is considered helpful for achieving a constant strain rate in the rock specimen. The dynamic indirect tensile strength of rock increases with strain rate. The numerical simulator RFPA-Dynamics, a well-recognized code for simulating rock failure under dynamic loading, is then validated by reproducing the Brazilian test with the incident stress wave retrieved at the incident bar applied as the boundary condition, and is subsequently employed to study the Brazilian test at higher strain rates. Based on the numerical simulation, the strain-rate dependency of the tensile strength and the failure pattern of the Brazilian disc specimen under intermediate strain rates are clarified, together with the associated failure mechanism. Material heterogeneity is deemed to be one reason for the strain-rate dependency of rock.
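
    The indirect tensile strength in such a test is conventionally computed from the peak diametral load with the standard Brazilian-disc formula σ_t = 2P/(πDt). A minimal sketch (the specimen numbers are hypothetical, not from the paper):

```python
import math

def brazilian_tensile_strength(peak_load_N, diameter_m, thickness_m):
    # Indirect (Brazilian) tensile strength of a diametrally loaded disc:
    #   sigma_t = 2 P / (pi * D * t)
    return 2.0 * peak_load_N / (math.pi * diameter_m * thickness_m)

# Hypothetical 50 mm x 25 mm disc failing at a 15 kN peak load:
sigma_t = brazilian_tensile_strength(15e3, 0.050, 0.025)
print(round(sigma_t / 1e6, 2), "MPa")  # → 7.64 MPa
```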

  19. Scattering of atoms by a stationary sinusoidal hard wall: Rigorous treatment in (n+1) dimensions and comparison with the Rayleigh method

    International Nuclear Information System (INIS)

    Goodman, F.O.

    1977-01-01

    A rigorous treatment of the scattering of atoms by a stationary sinusoidal hard wall in (n+1) dimensions is presented, a previous treatment by Masel, Merrill, and Miller for n=1 being contained as a special case. Numerical comparisons are made with the GR method of Garcia, which incorporates the Rayleigh hypothesis. Advantages and disadvantages of both methods are discussed, and it is concluded that the Rayleigh GR method, if handled properly, will probably work satisfactorily in physically realistic cases

  20. Moving beyond Data Transcription: Rigor as Issue in Representation of Digital Literacies

    Science.gov (United States)

    Hagood, Margaret Carmody; Skinner, Emily Neil

    2015-01-01

    Rigor in qualitative research has been based upon criteria of credibility, dependability, confirmability, and transferability. Drawing upon articles published during our editorship of the "Journal of Adolescent & Adult Literacy," we illustrate how the use of digital data in research study reporting may enhance these areas of rigor,…

  1. Numerical analysis of thermal response tests with a groundwater flow and heat transfer model

    Energy Technology Data Exchange (ETDEWEB)

    Raymond, J.; Therrien, R. [Departement de Geologie et de Genie Ggeologique, Universite Laval, 1065 avenue de la medecine, Quebec (Qc) G1V 0A6 (Canada); Gosselin, L. [Departement de Genie Mecanique, Universite Laval, 1065 avenue de la medecine, Quebec (Qc) G1V 0A6 (Canada); Lefebvre, R. [Institut National de la Recherche Scientifique, Centre Eau Terre Environnement, 490 de la Couronne, Quebec (Qc) G1K 9A9 (Canada)

    2011-01-15

    The Kelvin line-source equation, used to analyze thermal response tests, describes conductive heat transfer in a homogeneous medium with a constant temperature at infinite boundaries. The equation is based on assumptions that are valid for most ground-coupled heat pump environments with the exception of geological settings where there is significant groundwater flow, heterogeneous distribution of subsurface properties, a high geothermal gradient or significant atmospheric temperature variations. To address these specific cases, an alternative method to analyze thermal response tests was developed. The method consists in estimating parameters by reproducing the output temperature signal recorded during a test with a numerical groundwater flow and heat transfer model. The input temperature signal is specified at the entrance of the ground heat exchanger, where flow and heat transfer are computed in 2D planes representing piping and whose contributions are added to the 3D porous medium. Results obtained with this method are compared to those of the line-source model for a test performed under standard conditions. A second test conducted in waste rock at the South Dump of the Doyon Mine, where conditions deviate from the line-source assumptions, is analyzed with the numerical model. The numerical model improves the representation of the physical processes involved during a thermal response test compared to the line-source equation, without a significant increase in computational time. (author)
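
    For comparison, the line-source analysis that the numerical model is benchmarked against reduces, at late times, to a slope fit in ln(t): the mean fluid temperature rises with slope q/(4πλ). A minimal sketch of that conventional analysis (synthetic illustrative values, not data from the Doyon Mine test):

```python
import math

def trt_conductivity(times_s, temps_C, heat_rate_W_per_m):
    # Kelvin line-source late-time approximation: temperature grows linearly
    # in ln(t) with slope q / (4*pi*lambda), so lambda = q / (4*pi*slope).
    # Times should be late enough (t > ~5 r^2 / alpha) for validity.
    x = [math.log(t) for t in times_s]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(temps_C) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, temps_C)) \
            / sum((xi - xbar) ** 2 for xi in x)
    return heat_rate_W_per_m / (4.0 * math.pi * slope)

# Synthetic test generated with lambda = 2.5 W/(m*K) and q = 60 W/m:
t = [3600.0 * h for h in (10, 20, 30, 40, 50)]
temps = [10.0 + (60.0 / (4.0 * math.pi * 2.5)) * math.log(ti) for ti in t]
print(round(trt_conductivity(t, temps, 60.0), 2))  # → 2.5
```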

  2. Numerical simulation of mechanical properties tests of tungsten mud waste geopolymer

    Science.gov (United States)

    Paszek, Natalia; Krystek, Małgorzata

    2018-03-01

    Geopolymers are believed to be a future environmentally friendly alternative to concrete. The low CO2 emission during the production process and the possibility of ecological management of industrial wastes are mentioned as the main advantages of geopolymers. The main drawback, which hinders the application of geopolymers as a building material, is the lack of a theoretical material model. This problem is currently being addressed by a group of scientists from the Silesian University of Technology. A series of laboratory tests is being carried out within the European research project REMINE. The paper presents numerical analyses of tungsten mud waste geopolymer samples performed in the Atena software on the basis of the laboratory tests. Numerical models of bent and compressed samples of different shapes are presented in the paper. The results obtained with the Atena software were compared with results obtained with the Abaqus and Mafem3D software.

  3. Numerical study of wave propagation around an underground cavity: acoustic case

    Science.gov (United States)

    Esterhazy, Sofi; Perugia, Ilaria; Schöberl, Joachim; Bokelmann, Götz

    2015-04-01

    physical equations and the numerical algorithms it is possible for us to investigate the wave field over a large bandwidth of wave numbers. This means we can apply our calculations for a wide range of parameters, while keeping the numerical error explicitly under control. The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base of OSI and contribute to bringing the Treaty into force.

  4. Advanced Dynamics Analytical and Numerical Calculations with MATLAB

    CERN Document Server

    Marghitu, Dan B

    2012-01-01

    Advanced Dynamics: Analytical and Numerical Calculations with MATLAB provides a thorough, rigorous presentation of kinematics and dynamics while using MATLAB as an integrated tool to solve problems. Topics presented are explained thoroughly and directly, allowing fundamental principles to emerge through applications from areas such as multibody systems, robotics, spacecraft and design of complex mechanical devices. This book differs from others in that it uses symbolic MATLAB for both theory and applications. Special attention is given to solutions that are solved analytically and numerically using MATLAB. The illustrations and figures generated with MATLAB reinforce visual learning while an abundance of examples offer additional support. This book also: Provides solutions analytically and numerically using MATLAB Illustrations and graphs generated with MATLAB reinforce visual learning for students as they study Covers modern technical advancements in areas like multibody systems, robotics, spacecraft and des...

  5. Trends in Methodological Rigor in Intervention Research Published in School Psychology Journals

    Science.gov (United States)

    Burns, Matthew K.; Klingbeil, David A.; Ysseldyke, James E.; Petersen-Brown, Shawna

    2012-01-01

    Methodological rigor in intervention research is important for documenting evidence-based practices and has been a recent focus in legislation, including the No Child Left Behind Act. The current study examined the methodological rigor of intervention research in four school psychology journals since the 1960s. Intervention research has increased…

  6. Mathematical and numerical foundations of turbulence models and applications

    CERN Document Server

    Chacón Rebollo, Tomás

    2014-01-01

    With applications to climate, technology, and industry, the modeling and numerical simulation of turbulent flows are rich with history and modern relevance. The complexity of the problems that arise in the study of turbulence requires tools from various scientific disciplines, including mathematics, physics, engineering, and computer science. Authored by two experts in the area with a long history of collaboration, this monograph provides a current, detailed look at several turbulence models from both the theoretical and numerical perspectives. The k-epsilon, large-eddy simulation, and other models are rigorously derived and their performance is analyzed using benchmark simulations for real-world turbulent flows. Mathematical and Numerical Foundations of Turbulence Models and Applications is an ideal reference for students in applied mathematics and engineering, as well as researchers in mathematical and numerical fluid dynamics. It is also a valuable resource for advanced graduate students in fluid dynamics,...

  7. Development of a numerical pump testing framework.

    Science.gov (United States)

    Kaufmann, Tim A S; Gregory, Shaun D; Büsen, Martin R; Tansley, Geoff D; Steinseifer, Ulrich

    2014-09-01

    It has been shown that left ventricular assist devices (LVADs) increase the survival rate in end-stage heart failure patients. However, there is an ongoing demand for an increased quality of life, fewer adverse events, and more physiological devices. These challenges necessitate new approaches during the design process. In this study, computational fluid dynamics (CFD), lumped parameter (LP) modeling, mock circulatory loops (MCLs), and particle image velocimetry (PIV) are combined to develop a numerical Pump Testing Framework (nPTF) capable of analyzing local flow patterns and the systemic response of LVADs. The nPTF was created by connecting a CFD model of the aortic arch, including an LVAD outflow graft to an LP model of the circulatory system. Based on the same geometry, a three-dimensional silicone model was crafted using rapid prototyping and connected to an MCL. PIV studies of this setup were performed to validate the local flow fields (PIV) and the systemic response (MCL) of the nPTF. After validation, different outflow graft positions were compared using the nPTF. Both the numerical and the experimental setup were able to generate physiological responses by adjusting resistances and systemic compliance, with mean aortic pressures of 72.2-132.6 mm Hg for rotational speeds of 2200-3050 rpm. During LVAD support, an average flow to the distal branches (cerebral and subclavian) of 24% was found in the experiments and the nPTF. The flow fields from PIV and CFD were in good agreement. Numerical and experimental tools were combined to develop and validate the nPTF, which can be used to analyze local flow fields and the systemic response of LVADs during the design process. This allows analysis of physiological control parameters at early development stages and may, therefore, help to improve patient outcomes. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
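
    The LP part of such a framework is typically built from Windkessel-type compartments coupling flow to pressure. A two-element Windkessel sketch (a generic illustration, not the nPTF's actual LP model; parameter values are hypothetical):

```python
def windkessel2(flows_ml_s, dt_s, R=1.0, C=1.5, p_init=80.0):
    # Two-element Windkessel lumped-parameter compartment:
    #   C * dP/dt = Q(t) - P/R
    # with R in mmHg*s/mL and C in mL/mmHg, integrated with forward Euler.
    p = p_init
    pressures = []
    for q in flows_ml_s:
        p += dt_s * (q - p / R) / C
        pressures.append(p)
    return pressures

# Constant pump flow of 90 mL/s: pressure relaxes toward Q*R = 90 mmHg
# with time constant R*C = 1.5 s.
trace = windkessel2([90.0] * 5000, 0.01)
print(round(trace[-1], 1))  # → 90.0
```

    In a framework like the nPTF, a compartment of this kind would supply the pressure boundary conditions that close the 3D CFD domain at its outlets.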

  8. Static Load Test on Instrumented Pile – Field Data and Numerical Simulations

    Directory of Open Access Journals (Sweden)

    Krasiński Adam

    2017-09-01

    Full Text Available Static load tests on foundation piles are generally carried out in order to determine the load–displacement characteristic of the pile head. For standard (basic) engineering practice, this type of test usually provides enough information. However, knowledge of the force distribution along the pile core, and of its division into friction along the shaft and resistance under the base, can be very useful. Such information can be obtained by strain gage pile instrumentation [1]. Significant investigations have been completed on this technology, proving its utility and correctness [8], [10], [12]. The results of static tests on instrumented piles are not easy to interpret. There are many factors and processes affecting the final outcome. In order to better understand the whole testing process and the soil-structure behavior, some investigations and numerical analyses were performed. In the paper, real data from a field load test on instrumented piles are discussed and compared with a numerical simulation of such a test under similar conditions. Differences and difficulties in the interpretation of the results, together with their possible reasons, are discussed. Moreover, the authors used their own analytical solution for a more reliable determination of the force distribution along the pile. The work was presented at the XVII French-Polish Colloquium of Soil and Rock Mechanics, Łódź, 28–30 November 2016.

  9. Numerical regulation of a test facility of materials for PWR

    International Nuclear Information System (INIS)

    Zauq, M.H.

    1982-02-01

    The installation aims at testing materials used in nuclear power plants; the tests consist of simulations of a design basis accident (failure of the primary circuit of a PWR-type reactor) for the qualification of these materials. A description of the test installation, of the thermodynamic control, and of the control system is presented. The organisation of the software is then given: description of the sequence chaining monitor, operation, list and function of the programs. The analog information processing is also presented (data transmission). A real-time microcomputer and clock are used for this work. The microprocessor is the Motorola 6800. The microcomputer used was built around the MC 6800; its structure is described. The data acquisition includes an analog data acquisition system and a numerical data acquisition system. Laboratory and on-site tests are finally presented [fr

  10. Evaluation of spacer grid spring characteristics by means of physical tests and numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Schettino, Carlos Frederico Mattos, E-mail: carlosschettino@inb.gov.br [Industrias Nucleares do Brasil (INB), Resende, RJ (Brazil)

    2017-11-01

    Among all fuel assembly components, the spacer grids play an important structural role during the energy generation process, mainly due to their primary functional requirement, that is, to provide fuel rod support. The present work aims to evaluate the spring characteristics of a specific spacer grid design used in a PWR fuel assembly of type 16 x 16. These spring characteristics comprise the load-versus-deflection behavior and the spring rate, which must be correctly established in order to preclude fretting between the spacer grid spring and the fuel rod cladding during operation, as well as to prevent excessive fuel rod buckling. This study includes physical tests and numerical simulation. The tests were performed on an adapted load-cell mechanical device, using a single strap of the spacer grid as the specimen. Three numerical models were prepared using the Finite Element Method, with the support of the commercial code ANSYS. One model was built to validate the simulation against the performed physical test; the others were built with an inserted temperature gradient (Beginning Of Life hot condition) and to evaluate the spacer grid spring characteristics in the End Of Life condition. The results from the physical test and the numerical model showed good agreement, thereby validating the simulation. The numerical models also provide information regarding the design purpose of the spacer grid, such as the behavior of the fuel rod cladding support during operation. These evaluations could therefore be useful for improving the spacer grid design. (author)

  11. Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.

    Science.gov (United States)

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2016-05-17

    To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community.
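
    Extraction steps 2-4 above can be illustrated with a toy regular-expression sketch. This is a drastic simplification for two hard-coded variables; the actual Valx system additionally uses UMLS-derived knowledge, context-based filtering, and unit normalization:

```python
import re

# Toy extractor for "<variable> <operator> <value> <unit>" patterns,
# restricted to two example variables and a few units.
PATTERN = re.compile(
    r"(?P<var>HbA1c|glucose)\s*"
    r"(?P<op><=|>=|<|>|=)\s*"
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|mg/dl|mmol/l)?",
    re.IGNORECASE,
)

def extract_comparisons(text):
    # Return (variable, operator, numeric value, unit) tuples.
    return [(m["var"], m["op"], float(m["value"]), m["unit"])
            for m in PATTERN.finditer(text)]

print(extract_comparisons("Inclusion: HbA1c >= 7.0% and glucose < 200 mg/dL"))
# → [('HbA1c', '>=', 7.0, '%'), ('glucose', '<', 200.0, 'mg/dL')]
```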

  12. Evaluation of spacer grid spring characteristics by means of physical tests and numerical simulation

    International Nuclear Information System (INIS)

    Schettino, Carlos Frederico Mattos

    2017-01-01

    Among all fuel assembly components, the spacer grids play an important structural role during the energy generation process, mainly due to their primary functional requirement, that is, to provide fuel rod support. The present work aims to evaluate the spring characteristics of a specific spacer grid design used in a PWR fuel assembly of type 16 x 16. These spring characteristics comprise the load-versus-deflection behavior and the spring rate, which must be correctly established in order to preclude fretting between the spacer grid spring and the fuel rod cladding during operation, as well as to prevent excessive fuel rod buckling. This study includes physical tests and numerical simulation. The tests were performed on an adapted load-cell mechanical device, using a single strap of the spacer grid as the specimen. Three numerical models were prepared using the Finite Element Method, with the support of the commercial code ANSYS. One model was built to validate the simulation against the performed physical test; the others were built with an inserted temperature gradient (Beginning Of Life hot condition) and to evaluate the spacer grid spring characteristics in the End Of Life condition. The results from the physical test and the numerical model showed good agreement, thereby validating the simulation. The numerical models also provide information regarding the design purpose of the spacer grid, such as the behavior of the fuel rod cladding support during operation. These evaluations could therefore be useful for improving the spacer grid design. (author)

  13. Dosimetric effects of edema in permanent prostate seed implants: a rigorous solution

    International Nuclear Information System (INIS)

    Chen Zhe; Yue Ning; Wang Xiaohong; Roberts, Kenneth B.; Peschel, Richard; Nath, Ravinder

    2000-01-01

    Purpose: To derive a rigorous analytic solution to the dosimetric effects of prostate edema so that its impact on the conventional pre-implant and post-implant dosimetry can be studied for any given radioactive isotope and edema characteristics. Methods and Materials: The edema characteristics observed by Waterman et al (Int. J. Rad. Onc. Biol. Phys, 41:1069-1077; 1998) were used to model the time evolution of the prostate and the seed locations. The total dose to any part of prostate tissue from a seed implant was calculated analytically by parameterizing the dose fall-off from a radioactive seed as a single inverse power function of distance, with proper account of the edema-induced time evolution. The dosimetric impact of prostate edema was determined by comparing the dose calculated with full consideration of prostate edema to that calculated with the conventional dosimetry approach, where the seed locations and the target volume are assumed to be stationary. Results: A rigorous analytic solution for the relative dosimetric effects of prostate edema was obtained. This solution proved explicitly that the relative dosimetric effects of edema, as found in the previous numerical studies by Yue et al. (Int. J. Radiat. Oncol. Biol. Phys. 43, 447-454, 1999), are independent of the size and the shape of the implant target volume and are independent of the number and the locations of the seeds implanted. It also showed that the magnitude of the relative dosimetric effects is independent of the location of the dose evaluation point within the edematous target volume. This implies that the relative dosimetric effects of prostate edema are universal with respect to a given isotope and edema characteristic. A set of master tables for the relative dosimetric effects of edema was obtained for a wide range of edema characteristics for both 125I and 103Pd prostate seed implants. Conclusions: A rigorous analytic solution of the relative dosimetric effects of prostate edema has been obtained.

  14. Cohesive Zone Model Based Numerical Analysis of Steel-Concrete Composite Structure Push-Out Tests

    Directory of Open Access Journals (Sweden)

    J. P. Lin

    2014-01-01

    Full Text Available Push-out tests are widely used to determine the shear bearing capacity and shear stiffness of shear connectors in steel-concrete composite structures. The finite element method is one efficient alternative to push-out testing. This paper focuses on a simulation analysis of the interface between concrete slabs and steel girder flanges as well as the interface between the shear connectors and the surrounding concrete. A cohesive zone model was used to simulate the tangential sliding and normal separation of the interfaces. A zero-thickness cohesive element was then implemented via the user-defined element subroutine UEL in the software ABAQUS, and a multiple-broken-line mode was used to define the constitutive relations of the cohesive zone. A three-dimensional numerical model was established for push-out testing to analyze the load-displacement curves of the push-out test process, the interface relative displacement, and the interface stress distribution. This method was found to accurately calculate the shear capacity and shear stiffness of shear connectors. The numerical results showed that the multiple-broken-line cohesive zone model could describe the nonlinear mechanical behavior of the steel-concrete interface and that a discontinuous deformation numerical simulation could be implemented.
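The "multiple broken line" constitutive relation described in this record is, in essence, a piecewise-linear traction-separation law. A minimal sketch (a hypothetical helper for illustration, not the paper's UEL implementation) looks like:

```python
def traction(delta, vertices):
    """Piecewise-linear ('multiple broken line') traction-separation law.

    vertices: list of (separation, traction) break points in ascending
    separation order; the curve starts at (0, 0) and the traction drops to
    zero (complete decohesion) beyond the last vertex.
    """
    pts = [(0.0, 0.0)] + sorted(vertices)
    if delta >= pts[-1][0]:
        return 0.0  # fully decohered
    for (d0, t0), (d1, t1) in zip(pts, pts[1:]):
        if d0 <= delta <= d1:
            # linear interpolation on the current segment
            return t0 + (t1 - t0) * (delta - d0) / (d1 - d0)
    return 0.0  # negative separation: no cohesive traction in this sketch
```

For example, a bilinear law with a peak traction of 10 (stress units) at a separation of 0.1 and failure at 1.0 is `traction(d, [(0.1, 10.0), (1.0, 0.0)])`; the area enclosed by the curve is the fracture energy dissipated at the interface.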

  15. Drilling induced damage of core samples. Evidences from laboratory testing and numerical modelling

    International Nuclear Information System (INIS)

    Lanaro, Flavio

    2008-01-01

    Extensive sample testing under uniaxial and Brazilian test conditions was carried out for the Shobasama and MIU Research Laboratory Site (Gifu Pref., Japan). The compressive and tensile strength of the samples was observed to be negatively correlated to the in-situ stress components. This correlation was interpreted as stress-release-induced sample damage. Similar stress conditions were then numerically simulated by means of the BEM-DDM code FRACOD 2D in plane-strain conditions. This method makes it possible to explicitly consider the influence of newly initiated or propagating fractures on the stress field and deformation of the core during the drilling process. The models show that, already at moderate stress levels, some fracturing of the core during drilling might occur, leading to reduced laboratory strength of the samples. Sample damage maps were produced independently from the laboratory test results and from the numerical models and show good agreement with each other. (author)

  16. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    Science.gov (United States)

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone meat was significantly (P < 0.05) more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  17. Rigor or Reliability and Validity in Qualitative Research: Perspectives, Strategies, Reconceptualization, and Recommendations.

    Science.gov (United States)

    Cypress, Brigitte S

    Issues are still raised, even now in the 21st century, by the persistent concern with achieving rigor in qualitative research. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. This article presents the concept of rigor in qualitative research using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with the qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature over the years is explored. Recommendations are made for use of the term rigor instead of trustworthiness and for the reconceptualization and renewed use of the concepts of reliability and validity in qualitative research; strategies for ensuring rigor must be built into the qualitative research process rather than evaluated only after the inquiry, and qualitative researchers and students alike must be proactive and take responsibility for ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students to a better conceptual grasp of the complexity of reliability and validity and their ramifications for qualitative inquiry.

  18. Experimental testing and numerical simulation on natural composite for aerospace applications

    Science.gov (United States)

    Kumar, G. Raj; Vijayanandh, R.; Kumar, M. Senthil; Kumar, S. Sathish

    2018-05-01

    Nowadays polymers are commonly used in various applications, which makes it difficult to avoid their usage even though they cause environmental problems. Natural fibers are the best alternative to overcome polymer-based environmental issues. Natural fibers play an important role in developing high-performing, fully biodegradable green composites, which will be a key material for solving environmental problems in the future. This paper deals with the analysis of the properties of banana fiber combined with epoxy resin to create a natural composite with special characteristics for aerospace applications. The objective of this paper is to investigate the failure modes and strength of the natural composite using experimental and numerical methods. Test specimens of the natural composite were fabricated per the relevant ASTM standard and subjected to tensile and compression tests on a Tinius Olsen UTM in order to determine mechanical and physical properties. The reference model was designed in CATIA, and numerical simulation was then carried out in Ansys Workbench 16.2 for the given boundary conditions.

  19. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
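The operator-splitting structure described in this record can be illustrated on a scalar toy problem, du/dt + a du/dx = -k u: an upwind (Godunov-type) update handles the homogeneous transport part, and the source term is integrated exactly in a separate sub-step. This is a schematic analogue of the two-part scheme, not the authors' multifluid code; all names and parameter values are illustrative.

```python
import math

def upwind_advect(u, a, dx, dt):
    """Godunov/upwind update for du/dt + a du/dx = 0 on a periodic grid (a > 0)."""
    c = a * dt / dx  # CFL number; stable for 0 <= c <= 1
    return [ui - c * (ui - um) for ui, um in zip(u, u[-1:] + u[:-1])]

def relax_source(u, k, dt):
    """Exact integration of the (possibly stiff) source term du/dt = -k u."""
    f = math.exp(-k * dt)
    return [ui * f for ui in u]

def split_step(u, a, k, dx, dt):
    """One operator-split step: homogeneous transport first, then the
    source terms, mirroring the two-part structure of the scheme."""
    return relax_source(upwind_advect(u, a, dx, dt), k, dt)
```

On a periodic grid the upwind sub-step conserves the total of u exactly, so after N steps the total decays by exactly exp(-k N dt), which makes the splitting easy to sanity-check.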

  20. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  1. Some rigorous results concerning spectral theory for ideal MHD

    International Nuclear Information System (INIS)

    Laurence, P.

    1986-01-01

    Spectral theory for linear ideal MHD is laid on a firm foundation by defining appropriate function spaces for the operators associated with both the first- and second-order (in time and space) partial differential operators. Thus, it is rigorously established that a self-adjoint extension of F(xi) exists. It is shown that the operator L associated with the first-order formulation satisfies the conditions of the Hille-Yosida theorem. A foundation is laid thereby within which the domains associated with the first- and second-order formulations can be compared. This allows future work in a rigorous setting that will clarify the differences (in the two formulations) between the structure of the generalized eigenspaces corresponding to the marginal point of the spectrum ω = 0

  2. Some rigorous results concerning spectral theory for ideal MHD

    International Nuclear Information System (INIS)

    Laurence, P.

    1985-05-01

    Spectral theory for linear ideal MHD is laid on a firm foundation by defining appropriate function spaces for the operators associated with both the first and second order (in time and space) partial differential operators. Thus, it is rigorously established that a self-adjoint extension of F(xi) exists. It is shown that the operator L associated with the first order formulation satisfies the conditions of the Hille-Yosida theorem. A foundation is laid thereby within which the domains associated with the first and second order formulations can be compared. This allows future work in a rigorous setting that will clarify the differences (in the two formulations) between the structure of the generalized eigenspaces corresponding to the marginal point of the spectrum ω = 0

  3. Using Project Complexity Determinations to Establish Required Levels of Project Rigor

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Thomas D.

    2015-10-01

    This presentation discusses the project complexity determination process that was developed by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office for implementation at the Nevada National Security Site (NNSS). The complexity determination process was developed to address the diversity of NNSS project types, sizes, and complexity; to fill the need for one procedure, with provision for tailoring the level of rigor to the project type, size, and complexity; to provide consistent, repeatable, effective application of project management processes across the enterprise; and to achieve higher levels of efficiency in project delivery. These needs are illustrated by the wide diversity of NNSS projects: Defense Experimentation, Global Security, weapons tests, military training areas, sensor development and testing, training in realistic environments, intelligence community support, environmental restoration/waste management, and disposal of radioactive waste, among others.

  4. Upgrading geometry conceptual understanding and strategic competence through implementing rigorous mathematical thinking (RMT)

    Science.gov (United States)

    Nugraheni, Z.; Budiyono, B.; Slamet, I.

    2018-03-01

    To reach higher-order thinking skills, conceptual understanding and strategic competence need to be mastered, as they are two basic parts of higher-order thinking skills (HOTS). RMT is a unique realization of the cognitive conceptual construction approach based on Feuerstein's theory of Mediated Learning Experience (MLE) and Vygotsky's sociocultural theory. This was a quasi-experimental study comparing an experimental class taught with Rigorous Mathematical Thinking (RMT) as the learning method and a control class taught with Direct Learning (DL) as the conventional learning activity. The study examined whether the two learning models had different effects on the conceptual understanding and strategic competence of junior high school students. The data were analyzed using Multivariate Analysis of Variance (MANOVA), which showed a significant difference between the experimental and control classes when mathematics conceptual understanding and strategic competence were considered jointly (Wilks' Λ = 0.84). Further, independent t-tests showed a significant difference between the two classes on both mathematical conceptual understanding and strategic competence. These results indicate that Rigorous Mathematical Thinking (RMT) had a positive impact on mathematics conceptual understanding and strategic competence.
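For reference, the Wilks' Λ statistic reported in this record is computed from the within-group (E) and between-group (H) scatter matrices as Λ = det(E) / det(H + E); values near 1 mean the groups barely differ, values near 0 mean they differ strongly. A small sketch for the bivariate case (a hypothetical helper, not the study's analysis code):

```python
def scatter(rows, center):
    """2x2 scatter matrix of bivariate observations about a center point."""
    sxx = sum((x - center[0]) ** 2 for x, _ in rows)
    syy = sum((y - center[1]) ** 2 for _, y in rows)
    sxy = sum((x - center[0]) * (y - center[1]) for x, y in rows)
    return [[sxx, sxy], [sxy, syy]]

def wilks_lambda(groups):
    """Wilks' Λ = det(E) / det(H + E) for a one-way MANOVA with two
    dependent variables; groups is a list of lists of (x, y) observations."""
    det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
    allx = [p for g in groups for p in g]
    n = len(allx)
    grand = (sum(x for x, _ in allx) / n, sum(y for _, y in allx) / n)
    E = [[0.0, 0.0], [0.0, 0.0]]  # within-group (error) scatter
    H = [[0.0, 0.0], [0.0, 0.0]]  # between-group (hypothesis) scatter
    for g in groups:
        m = (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
        Sg = scatter(g, m)
        for i in range(2):
            for j in range(2):
                E[i][j] += Sg[i][j]
        d = (m[0] - grand[0], m[1] - grand[1])
        H[0][0] += len(g) * d[0] * d[0]
        H[0][1] += len(g) * d[0] * d[1]
        H[1][0] += len(g) * d[1] * d[0]
        H[1][1] += len(g) * d[1] * d[1]
    T = [[E[i][j] + H[i][j] for j in range(2)] for i in range(2)]
    return det(E) / det(T)
```

Two identical groups give Λ = 1 (H vanishes), while two well-separated groups drive Λ toward 0; the study's Λ = 0.84 sits in between.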

  5. Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium

    International Nuclear Information System (INIS)

    Taylor, S.R.; Kamm, J.R.

    1993-11-01

    The purpose of the meeting was to discuss the state-of-the-art in numerical simulations of nuclear explosion phenomenology with applications to test ban monitoring. We focused on the uniqueness of model fits to data, the measurement and characterization of material response models, advanced modeling techniques, and applications of modeling to monitoring problems. The second goal of the symposium was to establish a dialogue between seismologists and explosion-source code calculators. The meeting was divided into five main sessions: explosion source phenomenology, material response modeling, numerical simulations, the seismic source, and phenomenology from near source to far field. We feel the symposium reached many of its goals. Individual papers submitted at the conference are indexed separately on the data base

  6. Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, S.R.; Kamm, J.R. [eds.

    1993-11-01

    The purpose of the meeting was to discuss the state-of-the-art in numerical simulations of nuclear explosion phenomenology with applications to test ban monitoring. We focused on the uniqueness of model fits to data, the measurement and characterization of material response models, advanced modeling techniques, and applications of modeling to monitoring problems. The second goal of the symposium was to establish a dialogue between seismologists and explosion-source code calculators. The meeting was divided into five main sessions: explosion source phenomenology, material response modeling, numerical simulations, the seismic source, and phenomenology from near source to far field. We feel the symposium reached many of its goals. Individual papers submitted at the conference are indexed separately on the data base.

  7. Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry.

    Science.gov (United States)

    Morse, Janice M

    2015-09-01

    Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s when they replaced terminology for achieving rigor, reliability, validity, and generalizability with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement, persistent observation, and thick, rich description; inter-rater reliability, negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation. © The Author(s) 2015.

  8. Structural and numerical chromosome aberration inducers in liver micronucleus test in rats with partial hepatectomy.

    Science.gov (United States)

    Itoh, Satoru; Hattori, Chiharu; Nagata, Mayumi; Sanbuissho, Atsushi

    2012-08-30

    The liver micronucleus test is an important method to detect pro-mutagens such as active metabolites that do not reach the bone marrow due to their short lifespan. We have already reported that dosing of the test compound after partial hepatectomy (PH) is essential to detect the genotoxicity of numerical chromosome aberration inducers in mice [Mutat. Res. 632 (2007) 89-98]. In naive animals, the proportion of binucleated cells in rats is less than half of that in mice, which suggests a species difference in the response to chromosome aberration inducers. In the present study, we investigated the responses to structural and numerical chromosome aberration inducers in the rat liver micronucleus test. Two structural chromosome aberration inducers (diethylnitrosamine and 1,2-dimethylhydrazine) and two numerical chromosome aberration inducers (colchicine and carbendazim) were used. PH was performed a day before or after the dosing of the test compound in 8-week-old male F344 rats, and hepatocytes were isolated 4 days after the PH. As a result, diethylnitrosamine and 1,2-dimethylhydrazine, the structural chromosome aberration inducers, exhibited a significant increase in the incidence of micronucleated hepatocytes (MNH) when given either before or after PH. Colchicine and carbendazim, the numerical chromosome aberration inducers, did not result in any toxicologically significant increase in MNH frequency when given before PH, while they exhibited MNH induction when given after PH. It is confirmed that dosing after PH is essential in order to detect the genotoxicity of numerical chromosome aberration inducers in rats as well as in mice. Regarding the species difference, a different temporal response to colchicine was identified. Colchicine increased the incidence of MNH 4 days after PH in rats, whereas such induction in mice was observed 8-10 days after PH. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. A Rigorous Investigation on the Ground State of the Penson-Kolb Model

    Science.gov (United States)

    Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi

    2003-05-01

    By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has been previously studied by several groups. Some physicists argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model. However, others did not agree. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we also show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models. The project was partially supported by the Special Funds for Major State Basic Research Projects (G20000365) and the National Natural Science Foundation of China under Grant No. 10174002

  10. Student’s rigorous mathematical thinking based on cognitive style

    Science.gov (United States)

    Fitriyani, H.; Khasanah, U.

    2017-12-01

    The purpose of this research was to determine the rigorous mathematical thinking (RMT) of mathematics education students in solving math problems in terms of reflective and impulsive cognitive styles. The research used a descriptive qualitative approach. The subjects were 4 students, one male and one female for each of the reflective and impulsive cognitive styles. Data collection techniques were a problem-solving test and interviews. Data analysis followed the Miles and Huberman model: data reduction, data presentation, and conclusion drawing. The results showed that the impulsive male subject used all three levels of the cognitive functions required for RMT, namely qualitative thinking, quantitative thinking with precision, and relational thinking, while the other three subjects were only able to use cognitive functions at the qualitative thinking level of RMT. Therefore, the impulsive male subject had a better RMT ability than the other three research subjects.

  11. Numerical analysis of thermal-hydrological conditions in the single heater test at Yucca Mountain

    International Nuclear Information System (INIS)

    Birkholzer, Jens T.; Tsang, Yvonne W.

    1998-01-01

    The Single Heater Test (SHT) is one of two in-situ thermal tests included in the site characterization program for the potential underground nuclear waste repository at Yucca Mountain. The heating phase of the SHT started in August 1996 and was completed in May 1997 after 9 months of heating. The coupled processes in the unsaturated fractured rock mass around the heater were monitored by numerous sensors for thermal, hydrological, mechanical, and chemical data. In addition to passive monitoring, active testing of the rock mass moisture content was performed using geophysical methods and air injection testing. The extensive data set available from this test provides a unique opportunity to improve the understanding of the thermal-hydrological situation in the natural setting of the repository rocks. The present paper focuses on the 3-D numerical simulation of the thermal-hydrological processes in the SHT using TOUGH2. In the comparative analysis, the authors are particularly interested in the accuracy of different fracture-matrix interaction concepts, such as the Effective Continuum (ECM), the Dual Continuum (DKM), and the Multiple Interacting Continua (MINC) methods

  12. Numerical Analysis Of The Resistance To Pullout Test Of Clinched Assemblies Of Thin Metal Sheets

    International Nuclear Information System (INIS)

    Jomaa, Moez; Billardon, Rene

    2007-01-01

    This paper presents the finite element analysis of the resistance of a clinch point to the pullout test, following the numerical analysis of the forming process of the point. The simulations have been validated by comparison with experimental evidence. The influence of various computation and process parameters on the numerical predictions has been evaluated

  13. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing

    OpenAIRE

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko

    2007-01-01

    The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over given orography (elliptical hill, mountain ridge, system of successive ridges) in a rectangular domain with emphasis on the numerical accuracy and non-hydrostatic effect presentation capability. Comparison demonstrates good (in strong primary wave generation) to satisfactory (in weak ...

  14. Rigorous Results for the Distribution of Money on Connected Graphs

    Science.gov (United States)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.

  15. The Relationship between Project-Based Learning and Rigor in STEM-Focused High Schools

    Science.gov (United States)

    Edmunds, Julie; Arshavsky, Nina; Glennie, Elizabeth; Charles, Karen; Rice, Olivia

    2016-01-01

    Project-based learning (PjBL) is an approach often favored in STEM classrooms, yet some studies have shown that teachers struggle to implement it with academic rigor. This paper explores the relationship between PjBL and rigor in the classrooms of ten STEM-oriented high schools. Utilizing three different data sources reflecting three different…

  16. Numerical analysis of hydrogen and methane propagation during testing of combustion engines

    Directory of Open Access Journals (Sweden)

    Dvořák V.

    2007-10-01

    Full Text Available Research on gas-fuelled combustion engines using hydrogen or methane requires suitably equipped test benches that take into account the higher danger of self-ignition accidents. This article deals with numerical calculations of the flow in a laboratory during a simulated leakage of gaseous fuel from the fuel system of a tested engine. The influence of local suction and of roof exhausters on the flow in the laboratory and on the gas propagation is discussed. Results obtained for hydrogen and for methane are compared. Conclusions for the design and operation of suction devices and test benches are deduced from these results.

  17. Aviation Flight Test

    Data.gov (United States)

    Federal Laboratory Consortium — Redstone Test Center provides an expert workforce and technologically advanced test equipment to conduct the rigorous testing necessary for U.S. Army acquisition and...

  18. Development of new testing methods for the numerical load analysis for the drop test of steel sheet containers for the final repository Konrad

    International Nuclear Information System (INIS)

    Protz, C.; Voelzke, H.; Zencker, U.; Hagenow, P.; Gruenewald, H.

    2011-01-01

    The qualification of steel sheet containers as intermediate-level waste containers for the final repository is performed by BAM (Bundesanstalt fuer Materialforschung und -pruefung) according to the BfS (Bundesamt fuer Strahlenschutz) requirements. The testing requirements include stacking pressure tests, lifting tests, drop tests, thermal tests (fire resistance), and tightness tests. Besides verification using model or prototype tests and transferability considerations, numerical safety analyses may be performed alternatively. The authors describe the internal BAM research project ConDrop, aimed at developing extended testing methods for the drop test of steel sheet containers for the final repository Konrad using numerical load analyses. A finite element model was developed using the FE software LS-PrePost 3.0 and ANSYS 12.0, and the FE code LS-DYNA was used for the simulation of the drop test (5 m height). The results were verified with experimental data from instrumented drop tests. The container preserved its integrity after the drop test; plastic deformation occurred at the bottom plate, the side walls, the cask cover, and the lateral uprights.

  19. Study of the quality characteristics in cold-smoked salmon (Salmo salar) originating from pre- or post-rigor raw material.

    Science.gov (United States)

    Birkeland, S; Akse, L

    2010-01-01

    Improved slaughtering procedures in the salmon industry have caused a delayed onset of rigor mortis and, thus, a potential for pre-rigor secondary processing. The aim of this study was to investigate the effect of rigor status at the time of processing on quality traits (color, texture, sensory, and microbiological) in injection-salted, cold-smoked Atlantic salmon (Salmo salar). Injection of pre-rigor fillets caused a significant (P < 0.05) difference compared with post-rigor processed fillets; post-rigor fillets (1477 ± 38 g) had a significantly (P < 0.05) higher fracturability than pre-rigor fillets (1369 ± 71 g). Pre-rigor fillets differed significantly (P < 0.05) from post-rigor fillets (37.8 ± 0.8) and had significantly lower values than post-rigor processed fillets. This study showed that similar quality characteristics can be obtained in cold-smoked products processed either pre- or post-rigor when using suitable injection salting protocols and smoking techniques. © 2010 Institute of Food Technologists®

  20. Experimental evaluation of rigor mortis. VIII. Estimation of time since death by repeated measurements of the intensity of rigor mortis on rats.

    Science.gov (United States)

    Krompecher, T

    1994-10-21

    The development of the intensity of rigor mortis was monitored in nine groups of rats. The measurements were initiated 2, 4, 5, 6, 8, 12, 15, 24, and 48 h post mortem (p.m.) and lasted 5-9 h, which ideally should correspond to the usual procedure after the discovery of a corpse. The experiments were carried out at an ambient temperature of 24 degrees C. Measurements initiated early after death resulted in curves with a rising portion, a plateau, and a descending slope. Delaying the initial measurement translated into shorter rising portions, and curves initiated 8 h p.m. or later comprised a plateau and/or a downward slope only. Three different phases were observed, suggesting simple rules that can help estimate the time since death: (1) if an increase in intensity was found, the initial measurements were conducted not later than 5 h p.m.; (2) if only a decrease in intensity was observed, the initial measurements were conducted not earlier than 7 h p.m.; and (3) at 24 h p.m., the resolution is complete, and no further changes in intensity should occur. Our results clearly demonstrate that repeated measurements of the intensity of rigor mortis allow a more accurate estimation of the time since death of the experimental animals than the single-measurement method used earlier. A critical review of the literature on the estimation of time since death on the basis of objective measurements of the intensity of rigor mortis is also presented.
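
    The three decision rules quoted above lend themselves to a short sketch. The function below is illustrative only (the function name and the intensity series are invented); it applies the abstract's rules to a series of repeated intensity measurements to bound the post-mortem interval.

```python
# Hypothetical sketch of the three decision rules described in the abstract:
# repeated rigor-intensity measurements bound the time since death.
# Thresholds follow the abstract; all names are illustrative only.

def bound_time_since_death(intensities):
    """Apply the abstract's three rules to repeated rigor-intensity
    measurements (earliest first) and return a bound on the
    post-mortem (p.m.) interval as a human-readable string."""
    increased = any(b > a for a, b in zip(intensities, intensities[1:]))
    decreased = any(b < a for a, b in zip(intensities, intensities[1:]))

    if increased:
        # Rule 1: a rising portion means measurements started <= 5 h p.m.
        return "first measurement taken not later than 5 h post mortem"
    if decreased:
        # Rule 2: only a decrease means measurements started >= 7 h p.m.
        return "first measurement taken not earlier than 7 h post mortem"
    # Rule 3: no change is consistent with complete resolution (>= 24 h p.m.)
    return "no change: consistent with complete resolution (>= 24 h post mortem)"

print(bound_time_since_death([2.0, 3.5, 3.6]))  # rising portion -> rule 1
```

    Note that the rules only bound when the *measurements began*, which is why repeated measurement outperforms the single-measurement method criticized in the abstract.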

  1. The MIXED framework: A novel approach to evaluating mixed-methods rigor.

    Science.gov (United States)

    Eckhardt, Ann L; DeVon, Holli A

    2017-10-01

    Evaluation of rigor in mixed-methods (MM) research is a persistent challenge due to the combination of inconsistent philosophical paradigms, the use of multiple research methods which require different skill sets, and the need to combine research at different points in the research process. Researchers have proposed a variety of ways to thoroughly evaluate MM research, but each method fails to provide a framework that is useful for the consumer of research. In contrast, the MIXED framework is meant to bridge the gap between an academic exercise and practical assessment of a published work. The MIXED framework (methods, inference, expertise, evaluation, and design) borrows from previously published frameworks to create a useful tool for the evaluation of a published study. The MIXED framework uses an experimental eight-item scale that allows for comprehensive integrated assessment of MM rigor in published manuscripts. Mixed methods are becoming increasingly prevalent in nursing and healthcare research requiring researchers and consumers to address issues unique to MM such as evaluation of rigor. © 2017 John Wiley & Sons Ltd.

  2. ATP, IMP, and glycogen in cod muscle at onset and during development of rigor mortis depend on the sampling location

    DEFF Research Database (Denmark)

    Cappeln, Gertrud; Jessen, Flemming

    2002-01-01

    Variation in glycogen, ATP, and IMP contents within individual cod muscles was studied in ice-stored fish during the progress of rigor mortis. Rigor index was determined before muscle samples for chemical analyses were taken at 16 different positions on the fish. During development of rigor, the contents of glycogen and ATP decreased differently in relation to rigor index depending on sampling location. Although fish were considered to be in strong rigor according to the rigor index method, parts of the muscle were not in rigor, as high ATP concentrations were found in dorsal and tail muscle.

  3. Disciplining Bioethics: Towards a Standard of Methodological Rigor in Bioethics Research

    Science.gov (United States)

    Adler, Daniel; Shaul, Randi Zlotnik

    2012-01-01

    Contemporary bioethics research is often described as multi- or interdisciplinary. Disciplines are characterized, in part, by their methods. Thus, when bioethics research draws on a variety of methods, it crosses disciplinary boundaries. Yet each discipline has its own standard of rigor—so when multiple disciplinary perspectives are considered, what constitutes rigor? This question has received inadequate attention, as there is considerable disagreement regarding the disciplinary status of bioethics. This disagreement has presented five challenges to bioethics research. Addressing them requires consideration of the main types of cross-disciplinary research, and consideration of proposals aiming to ensure rigor in bioethics research. PMID:22686634

  4. Hartree-Fock-Bogoliubov model: a theoretical and numerical perspective

    International Nuclear Information System (INIS)

    Paul, S.

    2012-01-01

    This work is devoted to the theoretical and numerical study of Hartree-Fock-Bogoliubov (HFB) theory for attractive quantum systems, which is one of the main methods in nuclear physics. We first present the model and its main properties, and then explain how to obtain numerical solutions. We prove some convergence results, in particular for the simple fixed-point algorithm (sometimes called Roothaan). We show that it either converges or oscillates between two states, neither of which is a solution. This generalizes to the HFB case previous results of Cances and Le Bris for the simpler Hartree-Fock model in the repulsive case. Following these authors, we also propose a relaxed constraint algorithm for which convergence is guaranteed. In the last part of the thesis, we illustrate the behavior of these algorithms by some numerical experiments. We first consider a system where the particles only interact through the Newton potential. Our numerical results show that the pairing matrix never vanishes, a fact that has not yet been proved rigorously. We then study a very simplified model for protons and neutrons in a nucleus. (author)
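
    The convergence-or-oscillation behaviour described for the simple fixed-point (Roothaan-like) algorithm, and the benefit of a relaxed iteration, can be illustrated on a deliberately toy one-dimensional map. This is not the HFB equations; the map g and the mixing parameter t below are invented purely for demonstration.

```python
# Toy illustration: a plain fixed-point iteration can oscillate between two
# states, while a relaxed (mixed) iteration converges.  The map g is chosen
# so that |g'(x*)| > 1 at the fixed point x* = 0.6875.

def g(x):
    return 3.2 * x * (1.0 - x)   # fixed point at x* = 1 - 1/3.2 = 0.6875

def plain_iteration(x, n=200):
    # Roothaan-like: feed the output straight back in
    for _ in range(n):
        x = g(x)
    return x

def relaxed_iteration(x, n=200, t=0.5):
    # mix old and new iterates -- the analogue of a relaxed constraint step
    for _ in range(n):
        x = (1.0 - t) * x + t * g(x)
    return x

x_star = 1.0 - 1.0 / 3.2
print(abs(plain_iteration(0.3) - x_star) > 1e-2)    # stuck on a 2-cycle: True
print(abs(relaxed_iteration(0.3) - x_star) < 1e-8)  # converged: True
```

    The plain iteration settles on a period-2 cycle around the fixed point, while mixing halves the effective slope at the fixed point and restores contraction, mirroring the dichotomy proved for Roothaan-type algorithms.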

  5. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    International Nuclear Information System (INIS)

    Havu, V.; Blum, V.; Havu, P.; Scheffler, M.

    2009-01-01

    We consider the problem of developing O(N)-scaling grid-based operations needed in many central steps of electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
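
    As a rough illustration of the top-down idea (not the paper's algorithm), one can recursively bisect the grid points along the axis of largest spatial extent until each batch falls below a target size; all names and sizes below are invented.

```python
# Minimal sketch of top-down grid partitioning into localized batches:
# recursively split a point set along its longest axis.  Illustrative only.

def partition(points, max_batch=64):
    """Split a list of 3D points into spatially localized batches."""
    if len(points) <= max_batch:
        return [points]
    # pick the coordinate axis with the largest spatial extent
    spans = [max(p[d] for p in points) - min(p[d] for p in points)
             for d in range(3)]
    axis = spans.index(max(spans))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return partition(pts[:mid], max_batch) + partition(pts[mid:], max_batch)

# toy example: 1000 points on a regular 10 x 10 x 10 grid
grid = [(x, y, z) for x in range(10) for y in range(10) for z in range(10)]
batches = partition(grid, max_batch=64)
print(len(batches), all(len(b) <= 64 for b in batches))  # prints: 16 True
```

    Each recursion level halves the point count, so the batches stay compact and the cost is O(N log N) for the partitioning itself, cheap compared with bottom-up clustering.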

  6. Numerical modeling of thermal fatigue cracks from the viewpoint of eddy current testing

    International Nuclear Information System (INIS)

    Yusa, Noritaka; Hashizume, Hidetoshi; Virkkunen, Iikka; Kemppainen, Mika

    2012-01-01

    This study discusses a suitable numerical modeling of a thermal fatigue crack from the viewpoint of eddy current testing. Five artificial thermal fatigue cracks, introduced into type 304L austenitic stainless steel plates with a thickness of 25 mm, are prepared; and eddy current inspections are carried out to gather signals using an absolute type pancake probe and a differential type plus point probe. Finite element simulations are then carried out to evaluate a proper numerical model of the thermal fatigue cracks. In the finite element simulations, the thermal fatigue cracks are modeled as a semi-elliptic planar region on the basis of the results of the destructive tests. The width and internal conductivity are evaluated by the simulations. The results of the simulations reveal that the thermal fatigue cracks are regarded as almost nonconductive when the internal conductivity is assumed to be uniform inside. (author)

  7. Precarious Rock Methodology for Seismic Hazard: Physical Testing, Numerical Modeling and Coherence Studies

    Energy Technology Data Exchange (ETDEWEB)

    Anooshehpoor, Rasool; Purvance, Matthew D.; Brune, James N.; Preston, Leiph A.; Anderson, John G.; Smith, Kenneth D.

    2006-09-29

    This report covers the following projects: shake table tests of the precarious rock methodology, field tests of precarious rocks at Yucca Mountain and comparison of the results with PSHA predictions, study of the coherence of the wave field in the ESF, and a limited survey of precarious rocks south of the proposed repository footprint. A series of shake table experiments was carried out at the University of Nevada, Reno Large Scale Structures Laboratory. The bulk of the experiments involved scaling acceleration time histories (uniaxial forcing) from 0.1g to the point where the objects on the shake table overturned a specified number of times. The results of these experiments have been compared with numerical overturning predictions. Numerical predictions for toppling of large objects with simple contact conditions (e.g., I-beams with sharp basal edges) agree well with shake-table results. The numerical model slightly underpredicts the overturning of small rectangular blocks, and overpredicts the overturning PGA for asymmetric granite boulders with complex basal contact conditions. In general the results confirm the approximate predictions of previous studies. Field testing of several rocks at Yucca Mountain has approximately confirmed the preliminary results from previous studies, suggesting that the PSHA predictions are too high, possibly because of uncertainty in the mean of the attenuation relations. Study of the coherence of wavefields in the ESF has provided results which will be very important in the design of the canister distribution, in particular a preliminary estimate of the wavelengths at which the wavefields become incoherent. No evidence was found for extreme focusing by lens-like inhomogeneities. A limited survey for precarious rocks confirmed that they extend south of the repository, and one of these rocks has been field tested.

  8. MASTERS OF ANALYTICAL TRADECRAFT: CERTIFYING THE STANDARDS AND ANALYTIC RIGOR OF INTELLIGENCE PRODUCTS

    Science.gov (United States)

    2016-04-01

    AU/ACSC/2016 AIR COMMAND AND STAFF COLLEGE AIR UNIVERSITY MASTERS OF ANALYTICAL TRADECRAFT: CERTIFYING THE STANDARDS AND ANALYTIC RIGOR OF...establishing unit level certified Masters of Analytic Tradecraft (MAT) analysts to be trained and entrusted to evaluate and rate the standards and...cues) ideally should meet or exceed effective rigor (based on analytical process). To accomplish this, decision makers should not be left to their

  9. Uncertainty Analysis of OC5-DeepCwind Floating Semisubmersible Offshore Wind Test Campaign: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Amy N [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-07-26

    This paper examines how to assess the uncertainty levels for test measurements of the Offshore Code Comparison, Continued, with Correlation (OC5)-DeepCwind floating offshore wind system, examined within the OC5 project. The goal of the OC5 project was to validate the accuracy of ultimate and fatigue load estimates from a numerical model of the floating semisubmersible using data measured during scaled tank testing of the system under wind and wave loading. The examination of uncertainty was done after the test, and it was found that the limited amount of data available did not allow for an acceptable uncertainty assessment. Therefore, this paper instead qualitatively examines the sources of uncertainty associated with this test to start a discussion of how to assess uncertainty for these types of experiments and to summarize what should be done during future testing to acquire the information needed for a proper uncertainty assessment. Foremost, future validation campaigns should initiate numerical modeling before testing to guide the test campaign, which should include a rigorous assessment of uncertainty, and perform validation during testing to ensure that the tests address all of the validation needs.
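
    A rigorous uncertainty assessment of the kind advocated here typically combines statistical (type A) and instrument (type B) contributions in quadrature, GUM style. The following is a minimal, generic sketch, not the OC5 procedure, and every number in it is invented for illustration.

```python
# Generic root-sum-of-squares combination of standard uncertainties
# (ISO GUM style).  All contribution values are invented placeholders.
import math

type_a = 0.8          # statistical (repeatability) standard uncertainty
type_b = [0.5, 0.3]   # e.g. calibration and sensor-placement contributions

# combined standard uncertainty: quadrature sum of all contributions
u_c = math.sqrt(type_a**2 + sum(u**2 for u in type_b))

# expanded uncertainty with coverage factor k = 2 (~95 % for normal errors)
U = 2 * u_c

print(round(u_c, 3), round(U, 3))  # prints: 0.99 1.98
```

    The paper's point is that the type A terms require repeat measurements planned before the campaign; without them, the quadrature sum above cannot be evaluated after the fact.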

  10. Electrocardiogram artifact caused by rigors mimicking narrow complex tachycardia: a case report.

    Science.gov (United States)

    Matthias, Anne Thushara; Indrakumar, Jegarajah

    2014-02-04

    The electrocardiogram (ECG) is useful in the diagnosis of cardiac and non-cardiac conditions. Rigors due to shivering can cause electrocardiogram artifacts mimicking various cardiac rhythm abnormalities. We describe an 80-year-old Sri Lankan man with an abnormal electrocardiogram mimicking narrow complex tachycardia during the immediate post-operative period. Electrocardiogram changes caused by muscle tremor during rigors could mimic a narrow complex tachycardia. Identification of muscle tremor as a cause of electrocardiogram artifact can avoid unnecessary pharmacological and non-pharmacological intervention to prevent arrhythmias.

  11. Effects of Pre and Post-Rigor Marinade Injection on Some Quality Parameters of Longissimus Dorsi Muscles

    Science.gov (United States)

    Fadıloğlu, Eylem Ezgi; Serdaroğlu, Meltem

    2018-01-01

    Abstract This study was conducted to evaluate the effects of pre- and post-rigor marinade injections on some quality parameters of Longissimus dorsi (LD) muscles. Three marinade formulations were prepared: 2% NaCl, 2% NaCl+0.5 M lactic acid, and 2% NaCl+0.5 M sodium lactate. Marinade uptake, pH, free water, cooking loss, drip loss and color properties were analyzed. Injection time had a significant effect on marinade uptake levels. Regardless of marinade formulation, marinade uptake of pre-rigor samples injected with marinade solutions was higher than that of post-rigor samples. Injection of sodium lactate increased the pH values of samples, whereas lactic acid injection decreased pH. Marinade treatment and storage period had a significant effect on cooking loss. At each evaluation period, the interaction between marinade treatment and injection time had a different effect on free water content. Storage period and marinade application had a significant effect on drip loss values, and drip loss in all samples increased during storage. During all storage days, the lowest CIE L* value was found in pre-rigor samples injected with sodium lactate. Lactic acid injection caused color fade in both pre-rigor and post-rigor samples. The interaction between marinade treatment and storage period was statistically significant (p<0.05). At days 0 and 3, the lowest CIE b* values were obtained in pre-rigor samples injected with sodium lactate, and no differences were found among the other samples. At day 6, no significant differences were found in the CIE b* values of any samples. PMID:29805282

  12. Use of the Rigor Mortis Process as a Tool for Better Understanding of Skeletal Muscle Physiology: Effect of the Ante-Mortem Stress on the Progression of Rigor Mortis in Brook Charr (Salvelinus fontinalis).

    Science.gov (United States)

    Diouf, Boucar; Rioux, Pierre

    1999-01-01

    Presents the rigor mortis process in brook charr (Salvelinus fontinalis) as a tool for better understanding skeletal muscle metabolism. Describes an activity that demonstrates how rigor mortis is related to the post-mortem decrease of muscular glycogen and ATP, how glycogen degradation produces lactic acid that lowers muscle pH, and how…

  13. Measurements of the degree of development of rigor mortis as an indicator of stress in slaughtered pigs.

    Science.gov (United States)

    Warriss, P D; Brown, S N; Knowles, T G

    2003-12-13

    The degree of development of rigor mortis in the carcases of slaughter pigs was assessed subjectively on a three-point scale 35 minutes after they were exsanguinated, and related to the levels of cortisol, lactate and creatine kinase in blood collected at exsanguination. Earlier rigor development was associated with higher concentrations of these stress indicators in the blood. This relationship suggests that the mean rigor score, and the frequency distribution of carcases that had or had not entered rigor, could be used as an index of the degree of stress to which the pigs had been subjected.
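
    The proposed index is simple arithmetic: score each carcase on the three-point scale at 35 minutes, then report the mean score and the fraction of carcases that have entered rigor. A toy sketch, with invented scores rather than the study's data:

```python
# Toy batch-level stress index from subjective rigor scores (0-2 scale).
# The score list is invented for illustration, not data from the paper.
scores = [0, 1, 2, 2, 1, 0, 2, 1, 1, 2]

mean_score = sum(scores) / len(scores)                        # mean rigor score
frac_in_rigor = sum(1 for s in scores if s > 0) / len(scores) # entered rigor

print(mean_score, frac_in_rigor)  # prints: 1.2 0.8
```

    Higher values of either quantity would, per the abstract, indicate a batch subjected to greater pre-slaughter stress.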

  14. Numerical and tank test of a pivoted floating device for wave energy

    International Nuclear Information System (INIS)

    Coiro, Domenico P.; Calise, Giuseppe; Bizzarrini, Nadia; Troise, Giancarlo

    2015-01-01

    In this paper a system for extracting energy from waves is presented. The work deals with numerical and experimental tests on a scaled model, performed in the DII towing tank facility. The device is made up of a floating body, which oscillates due to waves, and a linear electromechanical generator. The generator, based on a ball-bearing screw, is linked both to the buoyant body and to a fixed frame, converting the relative movements of its anchor point into electrical power. Numerical analyses of the device have been performed in order to evaluate critical parameters for system optimization, including an analytical study of the system, potential flow simulations, and computational fluid dynamics (CFD) simulations based on the Reynolds Averaged Navier-Stokes (RANS) equations.

  15. Einstein's Theory A Rigorous Introduction for the Mathematically Untrained

    CERN Document Server

    Grøn, Øyvind

    2011-01-01

    This book provides an introduction to the theory of relativity and the mathematics used in its processes. Three elements of the book make it stand apart from previously published books on the theory of relativity. First, the book starts at a lower mathematical level than standard books, with tensor calculus of sufficient maturity to make it possible to give detailed calculations of relativistic predictions of practical experiments. Self-contained introductions are given, for example, to vector calculus, differential calculus and integration. Second, in-between calculations have been included, making it possible for the non-technical reader to follow step-by-step calculations. Third, the conceptual development is gradual and rigorous in order to provide the inexperienced reader with a philosophically satisfying understanding of the theory. Einstein's Theory: A Rigorous Introduction for the Mathematically Untrained aims to provide the reader with a sound conceptual understanding of both the special and general theory of relativity.

  16. Sonoelasticity to monitor mechanical changes during rigor and ageing.

    Science.gov (United States)

    Ayadi, A; Culioli, J; Abouelkaram, S

    2007-06-01

    We propose the use of sonoelasticity as a non-destructive method to monitor changes in the resistance of muscle fibres, unaffected by connective tissue. Vibrations were applied at low frequency to induce oscillations in soft tissues, and an ultrasound transducer was used to detect the motions. The experiments were carried out on the M. biceps femoris muscles of three beef cattle. In addition to the sonoelasticity measurements, the changes in meat during rigor and ageing were followed by measurements of both the mechanical resistance of myofibres and pH. The variations of mechanical resistance and pH were compared to those of the sonoelastic variables (velocity and attenuation) at two frequencies. pH and velocity or attenuation, as well as velocity or attenuation and the stress at 20% deformation, were highly correlated. We conclude that sonoelasticity is a non-destructive method that can be used to monitor mechanical changes in muscle fibres during rigor mortis and ageing.

  17. Electromagnetic scattering problems -Numerical issues and new experimental approaches of validation

    Energy Technology Data Exchange (ETDEWEB)

    Geise, Robert; Neubauer, Bjoern; Zimmer, Georg [University of Braunschweig, Institute for Electromagnetic Compatibility, Schleinitzstrasse 23, 38106 Braunschweig (Germany)

    2015-03-10

    Electromagnetic scattering problems, thus the question of how radiated energy spreads when impinging on an object, are an essential part of wave propagation. Though Maxwell's differential equations, the starting point, are actually quite simple, the integral formulation of an object's boundary conditions, and hence the solution for the unknown induced currents, can in most cases only be obtained numerically. As a timely topic of practical importance, the scattering of rotating wind turbines is discussed, the numerical description of which is still based on rigorous approximations with as yet unspecified accuracy. In this context the issue of validating numerical solutions is addressed, both with reference simulations and in particular with the experimental approach of scaled measurements. For the latter, the idea of an incremental validation is proposed, allowing a step-by-step validation of required new mathematical models in scattering theory.

  18. Understanding the seismic wave propagation inside and around an underground cavity from a 3D numerical survey

    Science.gov (United States)

    Esterhazy, Sofi; Schneider, Felix; Perugia, Ilaria; Bokelmann, Götz

    2017-04-01

    Motivated by the need to detect an underground cavity within the procedure of an On-Site Inspection (OSI) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO), such as might be caused by a nuclear explosion/weapon test, we aim to provide a basic numerical study of the wave propagation around and inside such an underground cavity. One method allowed by the Comprehensive Nuclear-Test-Ban Treaty to investigate the geophysical properties of an underground cavity is referred to as "resonance seismometry" - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely determined by the Treaty, and so far there are only very few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates investigating the problem on a purely numerical level and simulating these events based on recent advances in the numerical modeling of wave propagation problems. Our numerical study includes the full elastic wave field in three dimensions. We consider the effects of an incoming plane wave as well as of a point source located at the surface in the surroundings of the cavity. While the former can be considered a passive source, like a tele-seismic earthquake, the latter represents a man-made explosion or a vibroseis source as used in active seismic techniques. Further, we want to demonstrate the specific characteristics of the scattered wave fields from P-waves and S-waves separately. For our simulations in 3D we use the discontinuous Galerkin spectral element code SPEED developed by MOX (The Laboratory for Modeling and Scientific Computing, Department of Mathematics) and DICA (Department of Civil and Environmental Engineering) at the Politecnico di Milano. The computations are carried out on the Vienna Scientific Cluster (VSC). The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground cavity.

  19. A rigorous proof for the Landauer-Büttiker formula

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Jensen, Arne; Moldoveanu, V.

    Recently, Avron et al. shed new light on the question of quantum transport in mesoscopic samples coupled to particle reservoirs by semi-infinite leads. They rigorously treat the case when the sample undergoes an adiabatic evolution, thus generating a current through the leads, and prove the so-called...

  20. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings

    Science.gov (United States)

    Kline, Joshua C.

    2014-01-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. PMID:25210152
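
    The core idea, testing latency-histogram peaks against what chance predicts while correcting for the number of latencies examined, can be sketched as follows. This is an illustrative simplification, not the authors' SigMax implementation; the bin count, z-threshold, and spike trains below are all invented for demonstration.

```python
# Illustrative significance test for motor-unit synchronization: compare
# each latency bin of a firing cross-interval histogram with a chance
# threshold that includes a multiple-comparison correction.  Not SigMax.
import random

random.seed(1)

def synchronized_latencies(train_a, train_b, bins=21, width=0.001):
    """Return latency bins (in seconds) whose counts exceed chance.

    Counts of firing-time differences are compared with the mean bin
    count; the threshold uses a normal approximation and a Bonferroni
    correction over all bins, so isolated random peaks are not reported."""
    half = bins // 2
    counts = [0] * bins
    for a in train_a:
        for b in train_b:
            k = round((b - a) / width)
            if -half <= k <= half:
                counts[k + half] += 1
    mean = sum(counts) / bins
    z = 2.82  # ~one-sided normal quantile for alpha/bins, alpha=0.05, bins=21
    thresh = mean + z * mean ** 0.5
    return [(i - half) * width for i, c in enumerate(counts) if c > thresh]

# two independent (unsynchronized) trains firing ~10 Hz over 100 s:
a = sorted(random.uniform(0, 100) for _ in range(1000))
b = sorted(random.uniform(0, 100) for _ in range(1000))
peaks = synchronized_latencies(a, b)
print(len(peaks))  # independent trains: almost no bins survive the threshold
```

    A naive per-bin z-test without the correction would, as the abstract argues, flag many spurious latencies in exactly this independent-train case.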

  1. SMD-based numerical stochastic perturbation theory

    Science.gov (United States)

    Dalla Brida, Mattia; Lüscher, Martin

    2017-05-01

    The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.

  2. SMD-based numerical stochastic perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Dalla Brida, Mattia [Universita di Milano-Bicocca, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano-Bicocca (Italy); Luescher, Martin [CERN, Theoretical Physics Department, Geneva (Switzerland); AEC, Institute for Theoretical Physics, University of Bern (Switzerland)

    2017-05-15

    The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schroedinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit. (orig.)

  3. SMD-based numerical stochastic perturbation theory

    International Nuclear Information System (INIS)

    Dalla Brida, Mattia; Luescher, Martin

    2017-01-01

    The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schroedinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit. (orig.)

  4. Numerical and experimental flow analysis in centifluidic systems for rapid allergy screening tests

    Directory of Open Access Journals (Sweden)

    Dethloff Manuel

    2015-09-01

    For the development of the automated processing of a membrane-based rapid allergy test, the flow characteristics in one part of the test, the reagents module, are analysed. This module consists of a multichannel system with several inputs and one output. Return flow from one input channel into another should be avoided. A valveless module with pointed channels at an angle of 12° is analysed with numerical and experimental methods with regard to its flow characteristics.

  5. Characterization of rigor mortis of longissimus dorsi and triceps ...

    African Journals Online (AJOL)

    24 h) of the longissimus dorsi (LD) and triceps brachii (TB) muscles as well as the shear force (meat tenderness) and colour were evaluated, aiming at characterizing the rigor mortis in the meat during industrial processing. Data statistic treatment demonstrated that carcass temperature and pH decreased gradually during ...

  6. Numerical modeling and experimental testing of a wave energy converter: deliverable D4.2

    Energy Technology Data Exchange (ETDEWEB)

    Zurkinden, A.S.; Kramer, M.; Ferri, F.; Kofoed, J.P.

    2013-05-15

    The objective of this document is to summarize the outcome of the research carried out from May 2011 until June 2012, i.e. during the first year of the PhD study. The work has been done in collaboration with the co-authors. The aim of the project was primarily to provide numerical values for comparison with the experimental test results obtained during the same period, which is why Chapter 4 consists exclusively of numerical values. Experimental values and measured time series of wave elevations have been used throughout the report in order to a) validate the numerical model and b) perform stochastic analyses. The latter technique is introduced in order to optimize the control parameters of the power take-off system. (Author)

  7. Mathematical Basis and Test Cases for Colloid-Facilitated Radionuclide Transport Modeling in GDSA-PFLOTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-31

    This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.
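
    A minimal sketch of the kind of 1-D test case described, with invented parameters and not the report's model: a dissolved phase retarded by sorption to immobile surfaces, an unretarded colloid-bound phase, and slow desorption from colloids, advected with an explicit upwind scheme.

```python
# Hedged 1-D illustration of colloid-facilitated transport: the dissolved
# phase is retarded (factor R), the colloid-bound phase moves unretarded,
# and desorption from colloids is slow.  All parameter values are invented.

nx, dx, dt, v = 100, 1.0, 0.5, 1.0   # grid cells, spacing, step, velocity
R = 20.0                             # retardation factor, dissolved phase
k_des = 1e-3                         # slow desorption rate off colloids [1/s]
c = [0.0] * nx                       # dissolved concentration
s = [0.0] * nx                       # colloid-bound concentration
s_in = 1.0                           # colloid-bound source at the inlet

for step in range(150):              # simulate t = 75 s
    c_new, s_new = c[:], s[:]
    for i in range(nx):
        c_up = c[i - 1] if i > 0 else 0.0
        s_up = s[i - 1] if i > 0 else s_in
        adv_c = -v * (c[i] - c_up) / dx / R   # retarded advection
        adv_s = -v * (s[i] - s_up) / dx       # unretarded colloid advection
        xfer = k_des * s[i]                   # slow desorption to solution
        c_new[i] = c[i] + dt * (adv_c + xfer / R)
        s_new[i] = s[i] + dt * (adv_s - xfer)
    c, s = c_new, s_new

# colloid-bound mass has passed x = 60 (front near v*t = 75), while the
# retarded dissolved plume alone would only have reached x ~ v*t/R ~ 4
print(s[60] > 0.5, c[60] < 0.01)
```

    Even this toy setup reproduces the accelerated-transport scenario the test cases target: radionuclide mass arrives far downstream on colloids long before the retarded dissolved plume would.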

  8. Software metrics a rigorous and practical approach

    CERN Document Server

    Fenton, Norman

    2014-01-01

    A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and ProcessesReflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems.New to the Third EditionThis edition contains new material relevant

  9. [A new formula for the measurement of rigor mortis: the determination of the FRR-index (author's transl)].

    Science.gov (United States)

    Forster, B; Ropohl, D; Raule, P

    1977-07-05

    The manual examination of rigor mortis as currently practised, with its often subjective evaluation, frequently produces highly incorrect deductions. It is therefore desirable that such inaccuracies be replaced by objective measurement of rigor mortis at the extremities. To that purpose, a method is described which can also be applied in on-the-spot investigations, and a new formula for the determination of rigor mortis indices (FRR) is introduced.

  10. Reconciling the Rigor-Relevance Dilemma in Intellectual Capital Research

    Science.gov (United States)

    Andriessen, Daniel

    2004-01-01

    This paper raises the issue of research methodology for intellectual capital and other types of management research by focusing on the dilemma of rigour versus relevance. The more traditional explanatory approach to research often leads to rigorous results that are not of much help to solve practical problems. This paper describes an alternative…

  11. Application of a numerical model in the interpretation of a leaky aquifer test

    International Nuclear Information System (INIS)

    Schroth, B.; Narasimhan, T.N.

    1997-01-01

    The potential use of numerical models in aquifer analysis is by no means a new concept; yet relatively few engineers and scientists are taking advantage of this powerful tool that is more convenient to use now than ever before. In this technical note the authors present an example of using a numerical model in an integrated analysis of data from a three-layer leaky aquifer system involving well-bore storage, skin effects, variable discharge, and observation wells in the pumped aquifer and in an unpumped aquifer. The modeling detail may differ for other cases. The intent is to show that interpretation can be achieved with reduced bias by reducing assumptions in regard to system geometry, flow rate, and other details. A multiwell aquifer test was carried out at a site on the western part of the Lawrence Livermore National Laboratory (LLNL), located about 60 kilometers east of San Francisco. The test was conducted to hydraulically characterize one part of the site and thus help develop remediation strategies to alleviate the ground-water contamination
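As a point of reference for what such a numerical model generalizes, the classical Theis solution for drawdown in a single confined aquifer can be evaluated directly; the three-layer leaky system analysed in the note goes beyond it, which is precisely why a numerical model was used. All parameter values below are illustrative assumptions:

```python
import math

def well_function(u, terms=60):
    """Theis well function W(u) via its convergent series (valid for small u)."""
    s = -0.5772156649015329 - math.log(u)   # Euler-Mascheroni constant
    sign, fact = 1.0, 1.0
    for k in range(1, terms + 1):
        fact *= k
        s += sign * u ** k / (k * fact)
        sign = -sign
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(r^2 S / (4 T t)) in a confined aquifer."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Illustrative values: Q in m^3/s, T in m^2/s, S dimensionless, r in m, t in s.
s1 = theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=10.0, t=3600.0)
s2 = theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=10.0, t=7200.0)
```

Drawdown grows with pumping time, as expected; well-bore storage, skin effects, and leakage, all present in the test described above, break this simple analytical picture.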

  12. Numerical simulation supports formation testing planning; Simulacao numerica auxilia planejamento de teste de formacao

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Rogerio Marques; Fonseca, Carlos Eduardo da [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    A well test is an operation that allows the engineer to assess reservoir performance and fluid properties by measuring flow rates and pressures under a range of flowing conditions. In most well tests, a limited amount of fluid is allowed to flow from the formation being tested. The formation is isolated behind cemented casing and perforated at the formation depth or, in open hole, the formation is straddled by a pair of packers that isolate it. During the flow period, the pressure at the formation is monitored over time. Then the formation is closed (or shut in) and the pressure is monitored while the fluid within the formation equilibrates. The analysis of these pressure changes can provide information on the size and shape of the formation as well as its ability to produce fluids. The flow of fluid through the test string heats the string and hence causes its elongation. Several factors affect the rate of heat exchange with the well, such as the characteristics of the fluid, the duration of flow, the flow rate, and the presence of deep water. Predicting the temperature along the well, in its various components, and the resulting effect on the test string is not a trivial task. Some authors, for example, describe a method of calculating the behaviour of production strings that simplifies the problem by assuming a constant temperature along the entire string, which does not occur in practice. This work presents the advantages of using numerical simulation to determine the loads and corresponding displacements of the formation test string. (author)

  13. Numerical compliance testing of human exposure to electromagnetic radiation from smart-watches.

    Science.gov (United States)

    Hong, Seon-Eui; Lee, Ae-Kyoung; Kwon, Jong-Hwa; Pack, Jeong-Ki

    2016-10-07

    In this study, we investigated the electromagnetic dosimetry of smart-watches. At present, the standard for compliance testing of body-mounted and handheld devices specifies the use of a flat phantom to provide conservative estimates of the peak spatial-averaged specific absorption rate (SAR). This means that the SAR estimated using a flat phantom should be higher than the SAR in the exposed part of an anatomical human-body model. To verify this, we numerically calculated the SAR for a flat phantom and compared it with the numerically calculated SAR for four anatomical human-body models of different ages. The numerical analysis was performed using the finite-difference time-domain (FDTD) method. Three antenna types were considered for the smart-watch models: a shorted planar inverted-F antenna (PIFA), a loop antenna, and a monopole antenna. Numerical smart-watch models were implemented for cellular communication and wireless local-area network operation at 835, 1850, and 2450 MHz. The peak spatial-averaged SARs of the smart-watch models were calculated for the flat phantom and the anatomical human-body models in the wrist-worn and next-to-mouth positions. The results show that the flat phantom does not provide a consistently conservative SAR estimate. We concluded that the difference in the SAR results between an anatomical human-body model and a flat phantom can be attributed to the different phantom shapes and tissue structures.
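The peak spatial-averaged SAR quantity discussed above can be sketched as follows: point SAR is sigma*|E|^2/(2*rho), and the reported metric is the maximum of its average over a cube of tissue. This toy version averages over a fixed cube of voxels rather than searching for a 1 g or 10 g mass, and all field and tissue values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.uniform(0.0, 50.0, size=(12, 12, 12))  # peak |E| per voxel, V/m (assumed)
sigma, rho = 1.0, 1000.0                       # conductivity S/m, density kg/m^3 (muscle-like)
sar = sigma * E ** 2 / (2.0 * rho)             # point SAR, W/kg

side = 3   # fixed averaging cube, a stand-in for the 1 g / 10 g mass search
peak_avg_sar = 0.0
nx, ny, nz = sar.shape
for i in range(nx - side + 1):                 # slide the cube over the grid
    for j in range(ny - side + 1):
        for k in range(nz - side + 1):
            avg = sar[i:i + side, j:j + side, k:k + side].mean()
            peak_avg_sar = max(peak_avg_sar, avg)
```

The averaged peak is necessarily no larger than the point maximum; compliance limits are defined on the averaged quantity because it tracks tissue heating better than single-voxel hot spots.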

  14. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    International Nuclear Information System (INIS)

    Zilhao, Miguel; Herdeiro, Carlos; Witek, Helvi; Nerozzi, Andrea; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo

    2010-01-01

    The numerical evolution of Einstein's field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  15. Quality properties of pre- and post-rigor beef muscle after interventions with high frequency ultrasound.

    Science.gov (United States)

    Sikes, Anita L; Mawson, Raymond; Stark, Janet; Warner, Robyn

    2014-11-01

    The delivery of a consistent quality product to the consumer is vitally important for the food industry. The aim of this study was to investigate the effect of high frequency ultrasound applied to pre- and post-rigor beef muscle on metabolism and subsequent quality. High frequency ultrasound (600 kHz at 48 kPa and 65 kPa acoustic pressure) applied to post-rigor beef striploin steaks had no significant effect on the texture (peak force value) of cooked steaks as measured by a Tenderometer. There was no added benefit of ultrasound treatment above that of the normal ageing process after ageing of the steaks for 7 days at 4 °C. Ultrasound treatment of post-rigor beef steaks resulted in a darkening of fresh steaks, but after ageing for 7 days at 4 °C the ultrasound-treated steaks were similar in colour to the aged, untreated steaks. High frequency ultrasound (2 MHz at 48 kPa acoustic pressure) applied to pre-rigor beef neck muscle had no effect on the pH, but the calculated exhaustion factor suggested some effect on metabolism and the actin-myosin interaction. However, the resultant texture of cooked, ultrasound-treated muscle was lower in tenderness than the control sample. After ageing for 3 weeks at 0 °C, the ultrasound-treated samples had the same peak force value as the control. High frequency ultrasound had no significant effect on the colour parameters of pre-rigor beef neck muscle. This proof-of-concept study showed no effect of ultrasound on quality, but it did indicate that the application of high frequency ultrasound to pre-rigor beef muscle shows potential for modifying ATP turnover, and further investigation is warranted. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  16. Rigorous derivation from Landau-de Gennes theory to Ericksen-Leslie theory

    OpenAIRE

    Wang, Wei; Zhang, Pingwen; Zhang, Zhifei

    2013-01-01

    Starting from Beris-Edwards system for the liquid crystal, we present a rigorous derivation of Ericksen-Leslie system with general Ericksen stress and Leslie stress by using the Hilbert expansion method.

  17. Numerical ductile tearing simulation of circumferential cracked pipe tests under dynamic loading conditions

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Hyun Suk; Kim, Ji Soo; Ryu, Ho Wan; Kim, Yun Jae [Dept. of Mechanical Engineering, Korea University, Seoul (Korea, Republic of); Kim, Jin Weon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    This paper presents a numerical method to simulate ductile tearing in cracked components under high strain rates using finite element damage analysis. The strain-rate dependence of the tensile properties and the multiaxial fracture strain is characterized by the model developed by Johnson and Cook. The damage model is then defined on the ductility exhaustion concept, using the strain-rate-dependent multiaxial fracture strain. The proposed model is applied to simulate three previously published cracked-pipe bending tests under two different test speed conditions. Simulated results show overall good agreement with experimental results.
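The strain-rate sensitivity invoked here follows the well-known Johnson-Cook flow stress form, (A + B*eps^n)(1 + C*ln(rate ratio))(1 - T*^m). A minimal sketch, with parameter values that are illustrative assumptions rather than those fitted in the paper:

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=300.0, B=500.0, n=0.3, C=0.02, m=1.0,
                        eps_rate_ref=1.0, T_room=293.0, T_melt=1700.0):
    """Johnson-Cook flow stress in MPa (illustrative material constants)."""
    strain_term = A + B * eps_p ** n                 # strain hardening
    rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate_ref, 1e-12))
    T_star = max((T - T_room) / (T_melt - T_room), 0.0)
    return strain_term * rate_term * (1.0 - T_star ** m)  # thermal softening

quasi_static = johnson_cook_stress(0.1, 1.0, 293.0)
dynamic = johnson_cook_stress(0.1, 1.0e3, 293.0)   # higher rate hardens the material
```

At room temperature and the reference rate, only the hardening term survives; raising the strain rate scales the flow stress up logarithmically, which is the effect the damage model must capture for dynamic pipe tests.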

  18. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    Science.gov (United States)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
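The adaptation idea (scheme coefficients tuned to an estimated problem frequency) can be sketched for a single operator: rescale the classical central second difference so that it is exact on sin(w*x). This is a generic trigonometric-fitting illustration, not the paper's IMEX scheme; w, h, and x are assumed values:

```python
import math

def second_derivative(u, x, h, w=None):
    """Central second difference; if w is given, rescale so sin(w*x) is exact."""
    raw = (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2
    if w is None:
        return raw                       # classical, polynomial-fitted scheme
    gamma = (w * h / (2.0 * math.sin(w * h / 2.0))) ** 2
    return gamma * raw                   # trigonometrically fitted scheme

w, h, x = 5.0, 0.1, 0.3                  # assumed frequency, step, evaluation point
u = lambda s: math.sin(w * s)
exact = -w * w * math.sin(w * x)
err_classical = abs(second_derivative(u, x, h) - exact)
err_fitted = abs(second_derivative(u, x, h, w=w) - exact)
```

The fitted coefficient removes the leading truncation error on the oscillatory basis, which is why an accurate estimate of the unknown frequency parameter, the subject of the paper, pays off directly in accuracy.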

  19. Striation Patterns of Ox Muscle in Rigor Mortis

    Science.gov (United States)

    Locker, Ronald H.

    1959-01-01

    Ox muscle in rigor mortis offers a selection of myofibrils fixed at varying degrees of contraction from sarcomere lengths of 3.7 to 0.7 µ. A study of this material by phase contrast and electron microscopy has revealed four distinct successive patterns of contraction, including besides the familiar relaxed and contracture patterns, two intermediate types (2.4 to 1.9 µ, 1.8 to 1.5 µ) not previously well described. PMID:14417790

  20. Mathematical framework for fast and rigorous track fit for the ZEUS detector

    Energy Technology Data Exchange (ETDEWEB)

    Spiridonov, Alexander

    2008-12-15

    In this note we present a mathematical framework for a rigorous approach to a common track fit for trackers located in the inner region of the ZEUS detector. The approach makes use of the Kalman filter and offers a rigorous treatment of magnetic field inhomogeneity, multiple scattering and energy loss. We describe mathematical details of the implementation of the Kalman filter technique with a reduced amount of computation for a cylindrical drift chamber, barrel and forward silicon strip detectors and a forward straw drift chamber. Options with homogeneous and inhomogeneous field are discussed. The fitting of tracks in one ZEUS event takes about 20 ms on a standard PC. (orig.)
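The Kalman-filter progression at the heart of such a fit can be illustrated with a minimal linear example: fitting a straight-line track (state = position and slope) to noisy hits on successive detector planes. This is only a toy analogue of the ZEUS fit, which additionally treats field inhomogeneity, multiple scattering, and energy loss; all values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
zs = np.arange(10.0)                                    # detector planes along z
hits = 1.0 + 0.5 * zs + rng.normal(0.0, 0.1, zs.size)   # true x0 = 1, slope = 0.5

x = np.array([0.0, 0.0])          # state estimate: [position, slope]
P = np.eye(2) * 1e3               # large initial covariance (uninformative)
R = 0.1 ** 2                      # hit resolution variance
H = np.array([1.0, 0.0])          # each plane measures position only
prev_z = zs[0]
for z, m in zip(zs, hits):
    F = np.array([[1.0, z - prev_z], [0.0, 1.0]])  # straight-line transport
    x = F @ x                                      # predict state at plane z
    P = F @ P @ F.T                                # no process noise: no scattering
    K = P @ H / (H @ P @ H + R)                    # Kalman gain
    x = x + K * (m - H @ x)                        # update with the hit
    P = P - np.outer(K, H @ P)
    prev_z = z
# x[0] is the fitted position at the last plane (~1 + 0.5*9), x[1] the slope.
```

The filter processes hits plane by plane, which is what keeps the computation cheap; the full fit replaces the transport matrix F with propagation through an inhomogeneous magnetic field and inflates P with scattering noise.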

  1. Testing gravitational-wave searches with numerical relativity waveforms: results from the first Numerical INJection Analysis (NINJA) project

    International Nuclear Information System (INIS)

    Aylott, Benjamin; Baker, John G; Camp, Jordan; Centrella, Joan; Boggs, William D; Buonanno, Alessandra; Boyle, Michael; Buchman, Luisa T; Chu, Tony; Brady, Patrick R; Brown, Duncan A; Bruegmann, Bernd; Cadonati, Laura; Campanelli, Manuela; Faber, Joshua; Chatterji, Shourov; Christensen, Nelson; Diener, Peter; Dorband, Nils; Etienne, Zachariah B

    2009-01-01

    The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave data analysis communities. The purpose of NINJA is to study the sensitivity of existing gravitational-wave search algorithms using numerically generated waveforms and to foster closer collaboration between the numerical relativity and data analysis communities. We describe the results of the first NINJA analysis which focused on gravitational waveforms from binary black hole coalescence. Ten numerical relativity groups contributed numerical data which were used to generate a set of gravitational-wave signals. These signals were injected into a simulated data set, designed to mimic the response of the initial LIGO and Virgo gravitational-wave detectors. Nine groups analysed this data using search and parameter-estimation pipelines. Matched filter algorithms, un-modelled-burst searches and Bayesian parameter estimation and model-selection algorithms were applied to the data. We report the efficiency of these search methods in detecting the numerical waveforms and measuring their parameters. We describe preliminary comparisons between the different search methods and suggest improvements for future NINJA analyses.
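Among the search methods listed, matched filtering is the simplest to sketch: correlate the data against time-shifted copies of a template and normalise by the template power. The setup below (white noise, a windowed sinusoid standing in for a waveform, an arbitrary injection offset) is an illustrative assumption, not a NINJA pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
t = np.linspace(0.0, 1.0, n)
template = np.sin(2 * np.pi * 30 * t) * np.hanning(n)   # toy "waveform"
signal = np.roll(template, 1000)                        # injected at offset 1000
data = signal + rng.normal(0.0, 0.5, n)                 # white noise (assumed)

offsets = np.arange(0, n, 50)
snr = np.array([np.dot(data, np.roll(template, k)) for k in offsets])
snr /= np.sqrt(np.dot(template, template))              # normalise by template power
best = int(offsets[np.argmax(snr)])                     # recovered injection offset
```

Real searches whiten the data with the detector noise spectrum first and scan banks of templates; the principle of injecting known waveforms and checking recovery is exactly what the NINJA analysis exercised.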

  2. Development status of the experimental and numerical load analysis of CASTOR® package units under drop test conditions

    International Nuclear Information System (INIS)

    Voelzer, Walter; Schaefer, Marc; Rumanus, Erkan; Liedtke, Ralph; Brehmer, Frank

    2012-01-01

    The mechanical integrity of CASTOR® package units in a 9-m drop test under accident conditions has to be demonstrated according to the requirements of the IAEA, among others. To reduce the loads, the containers are equipped with shock absorbers on the bottom and top sides. The determination of loads under drop test conditions can be performed with experimental or numerical methods. Due to the complexity of the load state and the need to verify results, both methods are usually used for the demonstration of integrity. The numerical codes have to model the short-term dynamic behavior of the whole container for different drop orientations and temperatures, and local stress states have to be quantifiable for assessment. One of the problems is modeling the material behavior of the wood used in the shock absorbers. The energetic calculation approach used so far will be replaced by a dynamic approach, and the numerical models will have to be verified by experimental drop tests.

  3. Learning from Science and Sport - How we, Safety, "Engage with Rigor"

    Science.gov (United States)

    Herd, A.

    2012-01-01

    As the world of spaceflight safety is relatively small and potentially inward-looking, we need to be aware of the "outside world". We should then try to remind ourselves to be open to the possibility that data, knowledge or experience from outside of the spaceflight community may provide some constructive alternate perspectives. This paper will assess aspects from two seemingly tangential fields, science and sport, and align these with the world of safety. In doing so, some useful insights will be given into the challenges we face, and solutions relevant to our everyday work of safety engineering may emerge. Sport, particularly a contact sport such as rugby union, requires direct interaction between members of two opposing teams: professional, accurately timed and positioned interaction for a desired outcome. These interactions, whilst an essential part of the game, are not without their constraints. The rugby scrum constrains the formation and engagement of the two teams; the controlled engagement provides for an interaction between the two teams in a safe manner. These constraints arise from the reality that an incorrect engagement could cause serious injury to members of either team. In academia, scientific rigor is applied to assure that the arguments provided and the conclusions drawn in academic papers presented for publication are valid, legitimate and credible. The scientific need for rigor may be expressed in the example of achieving a statistically relevant sample size, n, in order to assure the validity of the analysis of the data pool. A failure to apply rigor could place the entire study at risk of failing to have the respective paper published. This paper will consider the merits of these two different aspects, scientific rigor and sports engagement, and offer a reflective look at how this may provide a "modus operandi" for safety engineers at any level, whether at their desks (creating or reviewing safety assessments) or in a

  4. Differential rigor development in red and white muscle revealed by simultaneous measurement of tension and stiffness.

    Science.gov (United States)

    Kobayashi, Masahiko; Takemori, Shigeru; Yamaguchi, Maki

    2004-02-10

    Based on the molecular mechanism of rigor mortis, we have proposed that stiffness (elastic modulus evaluated with tension response against minute length perturbations) can be a suitable index of post-mortem rigidity in skeletal muscle. To trace the developmental process of rigor mortis, we measured stiffness and tension in both red and white rat skeletal muscle kept in liquid paraffin at 37 and 25 degrees C. White muscle (in which type IIB fibres predominate) developed stiffness and tension significantly more slowly than red muscle, except for soleus red muscle at 25 degrees C, which showed disproportionately slow rigor development. In each of the examined muscles, stiffness and tension developed more slowly at 25 degrees C than at 37 degrees C. In each specimen, tension always reached its maximum level earlier than stiffness, and then decreased more rapidly and markedly than stiffness. These phenomena may account for the sequential progress of rigor mortis in human cadavers.

  5. Evaluation of physical dimension changes as nondestructive measurements for monitoring rigor mortis development in broiler muscles.

    Science.gov (United States)

    Cavitt, L C; Sams, A R

    2003-07-01

    Studies were conducted to develop a non-destructive method for monitoring the rate of rigor mortis development in poultry and to evaluate the effectiveness of electrical stimulation (ES). In the first study, 36 male broilers in each of two trials were processed at 7 wk of age. After being bled, half of the birds received electrical stimulation (400 to 450 V, 400 to 450 mA, for seven pulses of 2 s on and 1 s off), and the other half were designated as controls. At 0.25 and 1.5 h postmortem (PM), carcasses were evaluated for the angles of the shoulder, elbow, and wing tip and the distance between the elbows. Breast fillets were harvested at 1.5 h PM (after chilling) from all carcasses. Fillet samples were excised and frozen for later measurement of pH and R-value, and the remainder of each fillet was held on ice until 24 h postmortem. Shear value and pH means were significantly lower, and R-value means significantly higher, with ES, indicating acceleration of rigor mortis by ES. The physical dimensions of the shoulder and elbow changed significantly with rigor mortis development and with ES. These results indicate that physical measurements of the wings may be useful as a nondestructive indicator of rigor development and for monitoring the effectiveness of ES. In the second study, 60 male broilers in each of two trials were processed at 7 wk of age. At 0.25, 1.5, 3.0, and 6.0 h PM, carcasses were evaluated for the distance between the elbows. At each time point, breast fillets were harvested from each carcass. Fillet samples were excised and frozen for later measurement of pH and sarcomere length, whereas the remainder of each fillet was held on ice until 24 h PM. Shear value and pH means changed significantly with rigor mortis development. Elbow distance decreased significantly with rigor development and was significantly correlated with rigor mortis development in broiler carcasses.

  6. A Framework for Rigorously Identifying Research Gaps in Qualitative Literature Reviews

    DEFF Research Database (Denmark)

    Müller-Bloch, Christoph; Kranz, Johann

    2015-01-01

    Identifying research gaps is a fundamental goal of literature reviewing. While it is widely acknowledged that literature reviews should identify research gaps, there are no methodological guidelines for how to identify research gaps in qualitative literature reviews while ensuring rigor and replicability. Our study addresses this gap and proposes a framework that should help scholars in this endeavor without stifling creativity. To develop the framework we thoroughly analyze the state-of-the-art procedure for identifying research gaps in 40 recent literature reviews using a grounded theory approach. Based on the data, we subsequently derive a framework for identifying research gaps in qualitative literature reviews and demonstrate its application with an example. Our results provide a modus operandi for identifying research gaps, thus enabling scholars to conduct literature reviews more rigorously.

  7. Rigorous Analysis of a Randomised Number Field Sieve

    OpenAIRE

    Lee, Jonathan; Venkatesan, Ramarathnam

    2018-01-01

    Factorisation of integers $n$ is of number-theoretic and cryptographic significance. The Number Field Sieve (NFS), introduced circa 1990, is still the state-of-the-art algorithm, but no rigorous proof that it halts or generates relationships is known. We propose and analyse an explicitly randomised variant. For each $n$, we show that these randomised variants of the NFS and Coppersmith's multiple polynomial sieve find congruences of squares in expected times matching the best-known heuristic e...
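The congruence of squares at the heart of the NFS, x^2 ≡ y^2 (mod n) with x ≢ ±y, yields a nontrivial factor via gcd(x - y, n). A naive Fermat-style search illustrates the target object only, not the sieve itself; n below is a small illustrative example:

```python
import math

def congruence_of_squares(n):
    """Search upward from ceil(sqrt(n)) for a with a^2 - n a perfect square."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:         # a^2 ≡ b^2 (mod n)
            return a, b
        a += 1

n = 8051                        # = 83 * 97 (illustrative composite)
x, y = congruence_of_squares(n)
factor = math.gcd(x - y, n)     # nontrivial factor when x != +/- y (mod n)
```

The NFS finds such congruences vastly faster by combining many smooth relations collected over a number field; this brute-force search is exponential and only demonstrates the final step.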

  8. pd Scattering Using a Rigorous Coulomb Treatment: Reliability of the Renormalization Method for Screened-Coulomb Potentials

    International Nuclear Information System (INIS)

    Hiratsuka, Y.; Oryu, S.; Gojuki, S.

    2011-01-01

    The reliability of the screened-Coulomb renormalization method, proposed in an elegant way by Alt, Sandhas, Zankel and Ziegelmann (ASZZ), is discussed on the basis of 'two-potential theory' for the three-body AGS equations with the Coulomb potential. In order to obtain ASZZ's formula, we define the on-shell Moller function and calculate it by using the Haeringen criterion, i.e. 'the half-shell Coulomb amplitude is zero'. By these two steps, we can finally obtain the ASZZ formula for a small Coulomb phase shift. Furthermore, the reliability of the Haeringen criterion is thoroughly checked by a numerically rigorous calculation for the Coulomb LS-type equation. We find that the Haeringen criterion can be satisfied only in the higher energy region. We conclude that the ASZZ method can be verified in the case that the on-shell approximation to the Moller function is reasonable and the Haeringen criterion is reliable. (author)

  9. Mechanical interaction between historical brick and repair mortar: experimental and numerical tests

    International Nuclear Information System (INIS)

    Bocca, P; Grazzini, A; Masera, D; Alberto, A; Valente, S

    2011-01-01

    An innovative laboratory procedure, developed at the Non Destructive Testing Laboratory of the Politecnico di Torino as a preliminary design stage for the pre-qualification of repair mortars applied to historical masonry buildings, is described. The tested repair mortars are suitable for new dehumidified plaster intended to stop the effects of rising damp by capillary action on historical masonry walls. Long-term plaster delamination occurs frequently as a consequence of incompatible mechanical characteristics of the mortar. Preventing this phenomenon is the main way to increase the durability of repair work. In this direction, it is useful to analyse, through the cohesive crack model, the evolutionary phenomenon of plaster delamination. The parameters used in the numerical simulation of the experimental tests are able to characterize the mechanical behaviour of the interface. It is therefore possible to predict delamination in problems with different boundary conditions.

  10. Study designs for identifying risk compensation behavior among users of biomedical HIV prevention technologies: balancing methodological rigor and research ethics.

    Science.gov (United States)

    Underhill, Kristen

    2013-10-01

    The growing evidence base for biomedical HIV prevention interventions - such as oral pre-exposure prophylaxis, microbicides, male circumcision, treatment as prevention, and eventually prevention vaccines - has given rise to concerns about the ways in which users of these biomedical products may adjust their HIV risk behaviors based on the perception that they are prevented from infection. Known as risk compensation, this behavioral adjustment draws on the theory of "risk homeostasis," which has previously been applied to phenomena as diverse as Lyme disease vaccination, insurance mandates, and automobile safety. Little rigorous evidence exists to answer risk compensation concerns in the biomedical HIV prevention literature, in part because the field has not systematically evaluated the study designs available for testing these behaviors. The goals of this Commentary are to explain the origins of risk compensation behavior in risk homeostasis theory, to reframe risk compensation as a testable response to the perception of reduced risk, and to assess the methodological rigor and ethical justification of study designs aiming to isolate risk compensation responses. Although the most rigorous methodological designs for assessing risk compensation behavior may be unavailable due to ethical flaws, several strategies can help investigators identify potential risk compensation behavior during Phase II, Phase III, and Phase IV testing of new technologies. Where concerns arise regarding risk compensation behavior, empirical evidence about the incidence, types, and extent of these behavioral changes can illuminate opportunities to better support the users of new HIV prevention strategies. This Commentary concludes by suggesting a new way to conceptualize risk compensation behavior in the HIV prevention context. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    Science.gov (United States)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods and concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing the evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnoses, and lastly for simulating a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow properties of numerical schemes to be verified, while others (trajectory of a barotropic vortex, interaction of a current with topography) bring to light numerical issues that remain undetected in realistic configurations. When complexity in the simulated dynamics grows (internal waves, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to get a measure of the model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting

  12. Experimental and Numerical Evaluation of Rock Dynamic Test with Split-Hopkinson Pressure Bar

    Directory of Open Access Journals (Sweden)

    Kang Peng

    2017-01-01

    Full Text Available The feasibility of measuring rock dynamic properties with the split-Hopkinson pressure bar (SHPB) was evaluated experimentally and numerically with ANSYS/LS-DYNA. The effects of different diameters, loading rates and propagation distances on the wave dispersion in the input bar of the SHPB under rectangular and half-sine wave loadings were analyzed. The results show that, compared with rectangular wave loading, the dispersion effects of input-bar diameter, loading rate and propagation distance are negligible under half-sine waveform loading. Moreover, the degrees of stress uniformity under rectangular and half-sine input wave loadings are compared in SHPB tests, and the time required for stress uniformity is calculated under the different loadings mentioned above. It is confirmed that stress uniformity can be achieved more easily using half-sine pulse loading than rectangular pulse loading, which is a significant advantage in the dynamic testing of rock-like materials. Finally, the Holmquist-Johnson-Cook constitutive model is introduced to simulate the failure mechanism and the failure and fragmentation characteristics of rock under different strain rates. The numerical results agree with those obtained from the experiments, which confirms the effectiveness of the model and the method.
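
    The stress-uniformity times discussed above are set by how many times the loading wave must traverse the specimen. A minimal back-of-envelope sketch follows; the material properties and the three-to-four-transit rule of thumb are illustrative assumptions, not values from the paper:

```python
import math

def wave_speed(E, rho):
    """One-dimensional elastic bar-wave speed, c = sqrt(E / rho)."""
    return math.sqrt(E / rho)

def equilibrium_time(length, E, rho, n_transits=4):
    """Rough time for axial stress uniformity in an SHPB specimen,
    assuming it needs roughly n_transits wave passages (a common
    rule of thumb, not a result from the paper)."""
    return n_transits * length / wave_speed(E, rho)

# Hypothetical rock-like properties:
E = 50e9      # Young's modulus, Pa
rho = 2600.0  # density, kg/m^3
L = 0.05      # specimen length, m

c = wave_speed(E, rho)
t_eq = equilibrium_time(L, E, rho)
print(f"wave speed ~ {c:.0f} m/s, stress uniformity after ~ {t_eq * 1e6:.0f} us")
```

    Longer specimens or slower wave speeds stretch this time, which is one reason pulse shaping (such as the half-sine loading above) matters for rock-like materials.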

  13. A rigorous pole representation of multilevel cross sections and its practical applications

    International Nuclear Information System (INIS)

    Hwang, R.N.

    1987-01-01

    In this article, a rigorous method for representing multilevel cross sections and its practical applications are described. It is a generalization of the rationale suggested by de Saussure and Perez for the s-wave resonances. A computer code, WHOPPER, has been developed to convert the Reich-Moore parameters into pole and residue parameters in momentum space. Sample calculations have been carried out to illustrate that the proposed method preserves the rigor of the Reich-Moore cross sections exactly. An analytical method has been developed to evaluate the pertinent Doppler-broadened line shape functions. A discussion is presented on how to minimize the number of pole parameters so that existing reactor codes can be best utilized.

  14. Developing a Numerical Ability Test for Students of Education in Jordan: An Application of Item Response Theory

    Science.gov (United States)

    Abed, Eman Rasmi; Al-Absi, Mohammad Mustafa; Abu shindi, Yousef Abdelqader

    2016-01-01

    The purpose of the present study is to develop a test to measure the numerical ability of students of education. The sample of the study consisted of (504) students from 8 universities in Jordan. The final draft of the test contains 45 items distributed among 5 dimensions. The results revealed acceptable psychometric properties of the test;…

  15. Numerical Approach for Goaf-Side Entry Layout and Yield Pillar Design in Fractured Ground Conditions

    Science.gov (United States)

    Jiang, Lishuai; Zhang, Peipeng; Chen, Lianjun; Hao, Zhen; Sainoki, Atsushi; Mitri, Hani S.; Wang, Qingbiao

    2017-11-01

    Entry driven along the goaf side (EDG), in which the entry of the next longwall panel is developed along the goaf side and isolated from the goaf by a small-width yield pillar, has been widely employed in China over the past several decades. The width of such a yield pillar has a crucial effect on the EDG layout in terms of ground control, isolation effect and resource recovery rate. Based on a case study, this paper presents an approach for evaluating, designing and optimizing EDG and yield pillars by considering the results of numerical simulations together with field practice. To rigorously analyze the ground stability, the numerical study begins with the simulation of goaf-side stress and ground conditions. Four global models with identical conditions, except for the width of the yield pillar, are built, and the effect of pillar width on ground stability is investigated by comparing aspects of stress distribution, failure propagation, and displacement evolution during the entire service life of the entry. Based on the simulation results, the isolation effect of the pillar acquired from field practice is also considered. The suggested optimal yield pillar design is validated using a field test in the same mine. Thus, the presented numerical approach provides references and can be utilized for the evaluation, design and optimization of EDG and yield pillars under similar geological and geotechnical circumstances.

  16. Effect of pre-rigor stretch and various constant temperatures on the rate of post-mortem pH fall, rigor mortis and some quality traits of excised porcine biceps femoris muscle strips.

    Science.gov (United States)

    Vada-Kovács, M

    1996-01-01

    Porcine biceps femoris strips of 10 cm original length were stretched by 50% and fixed within 1 hr post mortem, then subjected to temperatures of 4, 15 or 36 °C until they attained their ultimate pH. Unrestrained control muscle strips, which were left to shorten freely, were similarly treated. Post-mortem metabolism (pH, R-value) and shortening were recorded; thereafter, ultimate meat quality traits (pH, lightness, extraction and swelling of myofibrils) were determined. The rate of pH fall at 36 °C, as well as ATP breakdown at 36 and 4 °C, were significantly reduced by pre-rigor stretch. The relationship between R-value and pH indicated cold shortening at 4 °C. Myofibrils isolated from pre-rigor stretched muscle strips kept at 36 °C showed the most severe reduction of hydration capacity, while paleness remained below extreme values. However, pre-rigor stretched myofibrils, when stored at 4 °C, proved to be superior to shortened ones in their extractability and swelling.

  17. Experimental evaluation of rigor mortis IX. The influence of the breaking (mechanical solution) on the development of rigor mortis.

    Science.gov (United States)

    Krompecher, Thomas; Gilles, André; Brandt-Casadevall, Conception; Mangin, Patrice

    2008-04-07

    Objective measurements were carried out to study the possible re-establishment of rigor mortis in rats after "breaking" (mechanical solution). Our experiments showed that cadaveric rigidity can re-establish after breaking; that a significant rigidity can reappear if the breaking occurs before the process is complete; and that rigidity is considerably weaker after the breaking. The time course of the intensity does not change in comparison to the controls: the re-establishment begins immediately after the breaking, maximal values are reached at the same time as in the controls, and the course of the resolution is the same as in the controls.

  18. Local and accumulated truncation errors in a class of perturbative numerical methods

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Corciovei, A.

    1980-01-01

    The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series is truncated. In the present paper, rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)
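
    The dependence of the accumulated truncation error on the step size h can be observed empirically with a much simpler scheme. The sketch below (a plain three-point recurrence, not the SF-PNM(K) method itself) integrates a Schroedinger-like equation u'' = u with a known exact solution and checks how the end-point error shrinks when h is halved:

```python
import math

def integrate(f, u0, u1, h, nsteps):
    """Three-point central-difference recurrence for u'' = f(x) u,
    starting from u(0) = u0 and u(h) = u1; second-order accurate."""
    u_prev, u = u0, u1
    x = h
    for _ in range(nsteps - 1):
        u_prev, u = u, 2.0 * u - u_prev + h * h * f(x) * u
        x += h
    return u

def global_error(h):
    """Accumulated error at x = 1 for u'' = u, u(0) = 1, u'(0) = 1,
    whose exact solution is u(x) = exp(x)."""
    n = round(1.0 / h)
    approx = integrate(lambda x: 1.0, 1.0, math.exp(h), h, n)
    return abs(approx - math.e)

ratio = global_error(0.01) / global_error(0.005)
print(f"error ratio on halving h: {ratio:.2f}")  # ~4 for a second-order scheme
```

    Higher-order members of such a family halve the error by 2 to the power of the order; the paper's contribution is to make this kind of empirical observation rigorous for SF-PNM(K).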

  19. Experimental and Numerical Simulation of Unbalance Response in Vertical Test Rig with Tilting-Pad Bearings

    Directory of Open Access Journals (Sweden)

    Mattias Nässelqvist

    2014-01-01

    Full Text Available In vertically oriented machines with journal bearings, there are no predefined static radial loads such as the dead weight of a horizontal rotor. Most commercial software is designed to calculate rotordynamic and bearing properties for machines with a horizontally oriented rotor; that is, the bearing properties are calculated at a static eccentricity. For tilting-pad bearings there are no analytical expressions for the bearing parameters, which depend on eccentricity and load angle. The objective of this paper is to present a simplified method to perform numerical simulations of vertical rotors including bearing parameters. Instead of recalculating the bearing parameters in each time step, polynomials are used to represent the bearing parameters for the present eccentricities and load angles. Numerical results are compared with results from tests performed in a test rig. The test rig consists of two guide bearings and a midspan rotor; the guide bearings are 4-pad tilting-pad bearings. Shaft displacement and strains in the bearing bracket are measured to determine the test rig's properties. The comparison between measurements and simulated results shows small deviations in absolute displacement and load levels, which can be expected due to difficulties in calculating exact bearing parameters.
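
    The idea of replacing repeated bearing-coefficient calculations with polynomial look-ups inside the time-stepping loop can be sketched as follows. This is a pure least-squares fit on made-up stiffness samples; the real coefficients would come from a bearing code and would also depend on load angle:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]],
         [S[1], S[2], S[3]],
         [S[2], S[3], S[4]]]
    b = T[:]
    # Gaussian elimination without pivoting (fine for this well-scaled demo).
    for i in range(3):
        for j in range(i + 1, 3):
            m = A[j][i] / A[i][i]
            for k in range(i, 3):
                A[j][k] -= m * A[i][k]
            b[j] -= m * b[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (b[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs  # [a, b, c]

def eval_poly(coeffs, x):
    """Cheap evaluation inside a time-stepping loop (Horner form)."""
    a, b, c = coeffs
    return a + x * (b + x * c)

# Hypothetical guide-bearing stiffness (N/m) sampled at eccentricity ratios:
ecc = [0.0, 0.2, 0.4, 0.6, 0.8]
stiff = [1.0e8 + 2.0e8 * e + 5.0e8 * e * e for e in ecc]  # synthetic data
p = fit_quadratic(ecc, stiff)
print(f"stiffness at eccentricity 0.5: {eval_poly(p, 0.5):.3e} N/m")
```

    Evaluating a fitted polynomial costs a few multiplications per step, versus a full bearing-film solution each time the eccentricity changes.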

  20. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2014-01-01

    Full Text Available This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. Type curves were then developed based on this model, and parameter sensitivity is analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from the low-permeability layer to the high-permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the analysis of the numerical solution based on flow mechanisms with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR).

  1. Effects of post mortem temperature on rigor tension, shortening and ...

    African Journals Online (AJOL)

    Fully developed rigor mortis in muscle is characterised by maximum loss of extensibility. The course of post mortem changes in ostrich muscle was studied by following isometric tension, shortening and change in pH during the first 24 h post mortem within muscle strips from the muscularis gastrocnemius, pars interna at ...

  2. Numerical modeling of the Near Surface Test Facility No. 1 and No. 2 heater tests

    International Nuclear Information System (INIS)

    Hocking, G.; Williams, J.; Boonlualohr, P.; Mathews, I.; Mustoe, G.

    1981-01-01

    Thermomechanical predictive calculations have been undertaken for two full-scale heater tests, No. 1 and No. 2, at the Near Surface Test Facility (NSTF) at Hanford, Washington. Numerical predictions were made of the basaltic rock response involving temperatures, displacements, strains and stresses due to energizing the electrical heaters. The basalt rock mass was modeled as an isotropic thermal material but with temperature-dependent thermal conductivity, specific heat and thermal expansion. The fractured nature of the basalt necessitated that it be modeled as a cross-anisotropic medium with a bi-linear locking stress-strain relationship. The cross-anisotropic idealization was selected after characterization studies indicated that a vertical columnar structure persisted throughout the test area and no major through-going discontinuities were present. The deformational properties were determined from fracture frequency and orientation, joint deformational data, Goodman Jack results and two rock mass classification schemes. Similar deformational moduli were determined from these techniques, except for the Goodman Jack results. The finite element technique was utilized for both the non-linear thermal and mechanical computations. An incremental stiffness method with residual force correction was employed to solve the non-linear problem by piecewise linearization. Two- and three-dimensional thermomechanical scoping calculations were made to assess the significance of various parameters and the errors associated with geometrical idealizations. Both heater tests were modeled in two-dimensional axisymmetric geometry with water assumed to be absent. Instrument response was predicted for all of the thermocouples, extensometers, USBM borehole deformation and IRAD gages for the entire duration of both tests.

  3. Pre-rigor temperature and the relationship between lamb tenderisation, free water production, bound water and dry matter.

    Science.gov (United States)

    Devine, Carrick; Wells, Robyn; Lowe, Tim; Waller, John

    2014-01-01

    The M. longissimus muscles from lambs electrically stimulated at 15 min post-mortem were removed after grading, wrapped in polythene film and held at 4 (n=6), 7 (n=6), 15 (n=6, n=8) and 35 °C (n=6) until rigor mortis, then aged at 15 °C for 0, 4, 24 and 72 h post-rigor. Centrifuged free water increased exponentially, while bound water, dry matter and shear force decreased exponentially over time. Decreases in shear force and increases in free water were closely related (r² = 0.52) and were unaffected by pre-rigor temperature. © 2013.
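
    The exponential time courses reported above can be fitted with a simple log-linear regression. A minimal sketch on synthetic numbers follows; the model y = A·exp(-k·t) and all values are illustrative, not the lamb data:

```python
import math

def fit_exponential(ts, ys):
    """Fit y = A * exp(-k * t) by ordinary least squares on ln(y)."""
    n = len(ts)
    ln_ys = [math.log(y) for y in ys]
    t_mean = sum(ts) / n
    l_mean = sum(ln_ys) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, ln_ys))
             / sum((t - t_mean) ** 2 for t in ts))
    intercept = l_mean - slope * t_mean
    return math.exp(intercept), -slope  # amplitude A, decay rate k

# Synthetic shear-force values (N) at the ageing times used in the study (h):
times = [0.0, 4.0, 24.0, 72.0]
force = [80.0 * math.exp(-0.03 * t) for t in times]
A, k = fit_exponential(times, force)
print(f"A ~ {A:.1f} N, k ~ {k:.3f} per hour")
```

    On real measurements the decay would sit on a non-zero asymptote and carry scatter, so a nonlinear least-squares routine would be more appropriate than the log-linear shortcut.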

  4. Numerical modelling as a cost-reduction tool for probability of detection of bolt hole eddy current testing

    Science.gov (United States)

    Mandache, C.; Khan, M.; Fahr, A.; Yanishevsky, M.

    2011-03-01

    Probability of detection (PoD) studies are broadly used to determine the reliability of specific nondestructive inspection procedures, as well as to provide data for damage tolerance life estimations and calculation of inspection intervals for critical components. They require inspections on a large set of samples, a fact that makes these statistical assessments time- and cost-consuming. Physics-based numerical simulations of nondestructive testing inspections could be used as a cost-effective alternative to empirical investigations. They realistically predict the inspection outputs as functions of the input characteristics related to the test piece, transducer and instrument settings, which are subsequently used to partially substitute and/or complement inspection data in PoD analysis. This work focuses on the numerical modelling aspects of eddy current testing for the bolt hole inspections of wing box structures typical of the Lockheed Martin C-130 Hercules and P-3 Orion aircraft, found in the air force inventory of many countries. Boundary element-based numerical modelling software was employed to predict the eddy current signal responses when varying inspection parameters related to probe characteristics, crack geometry and test piece properties. Two demonstrator exercises were used for eddy current signal prediction when lowering the driver probe frequency and changing the material's electrical conductivity, followed by subsequent discussions and examination of the implications on using simulated data in the PoD analysis. Despite some simplifying assumptions, the modelled eddy current signals were found to provide similar results to the actual inspections. It is concluded that physics-based numerical simulations have the potential to partially substitute or complement inspection data required for PoD studies, reducing the cost, time, effort and resources necessary for a full empirical PoD assessment.

  5. Double phosphorylation of the myosin regulatory light chain during rigor mortis of bovine Longissimus muscle.

    Science.gov (United States)

    Muroya, Susumu; Ohnishi-Kameyama, Mayumi; Oe, Mika; Nakajima, Ikuyo; Shibata, Masahiro; Chikuni, Koichi

    2007-05-16

    To investigate changes in myosin light chains (MyLCs) during postmortem aging of the bovine longissimus muscle, we performed two-dimensional gel electrophoresis followed by identification with matrix-assisted laser desorption ionization time-of-flight mass spectrometry. The results of fluorescent differential gel electrophoresis showed that two spots of the myosin regulatory light chain (MyLC2) at pI values of 4.6 and 4.7 shifted toward those at pI values of 4.5 and 4.6, respectively, by 24 h postmortem when rigor mortis was completed. Meanwhile, the MyLC1 and MyLC3 spots did not change during the 14 days postmortem. Phosphoprotein-specific staining of the gels demonstrated that the MyLC2 proteins at pI values of 4.5 and 4.6 were phosphorylated. Furthermore, possible N-terminal region peptides containing one and two phosphoserine residues were detected in each mass spectrum of the MyLC2 spots at pI values of 4.5 and 4.6, respectively. These results demonstrated that MyLC2 became doubly phosphorylated during rigor formation of the bovine longissimus, suggesting involvement of the MyLC2 phosphorylation in the progress of beef rigor mortis. Bovine; myosin regulatory light chain (RLC, MyLC2); phosphorylation; rigor mortis; skeletal muscle.

  6. A Rigorous Methodology for Analyzing and Designing Plug-Ins

    DEFF Research Database (Denmark)

    Fasie, Marieta V.; Haxthausen, Anne Elisabeth; Kiniry, Joseph

    2013-01-01

    This paper addresses these problems by describing a rigorous methodology for analyzing and designing plug-ins. The methodology is grounded in the Extended Business Object Notation (EBON) and covers informal analysis and design of features, GUI, actions, and scenarios; formal architecture design, including behavioral semantics; and validation. The methodology is illustrated via a case study whose focus is an Eclipse environment for the RAISE formal method's tool suite.

  7. Representation of Numerical and Non-Numerical Order in Children

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2012-01-01

    The representation of numerical and non-numerical ordered sequences was investigated in children from preschool to grade 3. The child's conception of how sequence items map onto a spatial scale was tested using the Number-to-Position task (Siegler & Opfer, 2003) and new variants of the task designed to probe the representation of the alphabet…

  8. Lateral control required for satisfactory flying qualities based on flight tests of numerous airplanes

    Science.gov (United States)

    Gilruth, R R; Turner, W N

    1941-01-01

    Report presents the results of an analysis made of the aileron control characteristics of numerous airplanes tested in flight by the National Advisory Committee for Aeronautics. By the use of previously developed theory, the observed values of pb/2v for the various wing-aileron arrangements were examined to determine the effective section characteristics of the various aileron types.

  9. Analysis of pumping tests of partially penetrating wells in an unconfined aquifer using inverse numerical optimization

    Science.gov (United States)

    Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.

    1999-08-01

    Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gammalogs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good accordance with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.
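
    Stripped to its core, the inverse procedure is nonlinear least squares against a forward flow model. A toy sketch follows, with the classical Theis solution standing in for the finite-element model; the parameter values, well geometry and the single-parameter grid search are all illustrative simplifications of the paper's regression approach:

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=30):
    """Theis well function W(u) from its convergent series (good for small u)."""
    w = -EULER_GAMMA - math.log(u)
    for n in range(1, terms + 1):
        w += (-1) ** (n + 1) * u ** n / (n * math.factorial(n))
    return w

def drawdown(t, T, S, Q=0.01, r=10.0):
    """Theis drawdown s(r, t) for pumping rate Q at observation radius r."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

def fit_transmissivity(ts, obs, S=1e-4):
    """Crude 1-D grid search for transmissivity T; a stand-in for the
    paper's non-linear regression, which also estimates storage,
    anisotropy and specific yield."""
    best_T, best_err = None, float("inf")
    T = 1e-4
    while T < 1e-2:
        err = sum((drawdown(t, T, S) - s) ** 2 for t, s in zip(ts, obs))
        if err < best_err:
            best_T, best_err = T, err
        T *= 1.05
    return best_T

# Synthetic drawdowns generated with T = 1e-3 m^2/s, S = 1e-4:
times = [60.0, 300.0, 1800.0, 7200.0]  # seconds after pumping starts
obs = [drawdown(t, 1e-3, 1e-4) for t in times]
print(f"recovered T ~ {fit_transmissivity(times, obs):.2e} m^2/s")
```

    The finite-element plus regression approach in the paper generalizes this idea to layered models, partial penetration and multiple observation points, and also yields parameter standard deviations.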

  10. The Influence of Pre-University Students' Mathematics Test Anxiety and Numerical Anxiety on Mathematics Achievement

    Science.gov (United States)

    Seng, Ernest Lim Kok

    2015-01-01

    This study examines the relationship of mathematics test anxiety and numerical anxiety to students' mathematics achievement. 140 pre-university students who studied at one of the institutes of higher learning were investigated. Gender issues pertaining to mathematics anxieties were addressed, besides investigating the magnitude of…

  11. The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.

    Science.gov (United States)

    Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko

    2013-11-01

    Rigor mortis is an important phenomenon for estimating the postmortem interval in forensic medicine, and it is affected by temperature. We measured the stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0 °C). At 37, 25 and 10 °C, the progression of stiffness was slower in cooler conditions. At 5 and 0 °C, the muscle stiffness increased immediately after the muscles were soaked in the cooled liquid paraffin, and the muscles then gradually became rigid without going through a relaxed state. This phenomenon suggests that care is needed when estimating the postmortem interval in cold seasons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Experimental Results and Numerical Simulation of the Target RCS using Gaussian Beam Summation Method

    Directory of Open Access Journals (Sweden)

    Ghanmi Helmi

    2018-05-01

    Full Text Available This paper presents a numerical and experimental study of the Radar Cross Section (RCS) of radar targets using the Gaussian Beam Summation (GBS) method. The GBS method has several advantages over the ray method, mainly concerning the caustic problem. To evaluate the performance of the chosen method, we began the analysis of the RCS using Gaussian Beam Summation (GBS) and Gaussian Beam Launching (GBL), the asymptotic models Physical Optics (PO) and the Geometrical Theory of Diffraction (GTD), and the rigorous Method of Moments (MoM). We then showed the experimental validation of the numerical results using measurements executed in the anechoic chamber of Lab-STICC at ENSTA Bretagne. The numerical and experimental results for the RCS are studied and given as functions of various parameters: polarization type, target size, number of Gaussian beams and Gaussian beam width.

  13. A rigorous treatment of uncertainty quantification for Silicon damage metrics

    International Nuclear Information System (INIS)

    Griffin, P.

    2016-01-01

    This report summarizes the contributions made by Sandia National Laboratories in support of the International Atomic Energy Agency (IAEA) Nuclear Data Section (NDS) Technical Meeting (TM) on Nuclear Reaction Data and Uncertainties for Radiation Damage. This work focused on a rigorous treatment of the uncertainties affecting the characterization of the displacement damage seen in silicon semiconductors. (author)

  14. Paper 3: Content and Rigor of Algebra Credit Recovery Courses

    Science.gov (United States)

    Walters, Kirk; Stachel, Suzanne

    2014-01-01

    This paper describes the content, organization and rigor of the face-to-face (f2f) and online summer algebra courses that were delivered in summers 2011 and 2012. Examining the content of both types of courses is important because research suggests that algebra courses with certain features may be better than others at promoting success for struggling students.…

  15. Numerical examination of temperature control in helium-cooled high flux test module of IFMIF

    International Nuclear Information System (INIS)

    Ebara, Shinji; Yokomine, Takehiko; Shimizu, Akihiko

    2007-01-01

    For long-term irradiation in the International Fusion Materials Irradiation Facility (IFMIF), test specimens need to be kept at a constant temperature to avoid changes in their irradiation characteristics. Constant temperature control is one of the most challenging issues for the IFMIF test facilities. We have proposed a new concept for a test module which is capable of precisely measuring temperature and keeping the temperature uniform with enhanced cooling performance. For a system built according to the new design, the cooling performance and the temperature distributions of the specimens were examined numerically under diverse conditions. Transient behaviors corresponding to the prescribed temperature control mode were also simulated. It was confirmed that the thermal characteristics of the new design satisfy the severe requirements of IFMIF.

  16. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    Science.gov (United States)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  17. CRIEPI and SKB cooperation report. No.11. Numerical analysis for long-term pumping test at crystalline terrain

    International Nuclear Information System (INIS)

    Tanaka, Yasuharu

    2012-01-01

    A numerical analysis code for groundwater flow in rock mass, FEGM, which was developed by CRIEPI, was applied to the analysis of a long-term pumping test conducted at Olkiluoto Island in Finland. The groundwater level, the groundwater pressure and the groundwater inflow into the observation boreholes measured under the natural condition or during the pumping test could be reproduced in numerical simulations with FEGM. From this, the effectiveness of the numerical code was confirmed. And it was found that the site-scale groundwater flow is dependent on large-scale fracture zones in crystalline terrane like Olkiluoto Island. In this study, the calculated values of the groundwater inflow into the observation boreholes as well as the groundwater level and the groundwater pressure were compared to the measured ones. As a result, the reliability of the analytical model was improved. In addition, the travel times of groundwater particles from different depths at the same point to the model boundaries were compared. And the advantage of constructing waste disposal facilities at deep underground was confirmed from the viewpoint of the travel time of groundwater. (author)

  18. Re-establishment of rigor mortis: evidence for a considerably longer post-mortem time span.

    Science.gov (United States)

    Crostack, Chiara; Sehner, Susanne; Raupach, Tobias; Anders, Sven

    2017-07-01

    Re-establishment of rigor mortis following mechanical loosening is used as part of the complex method for the forensic estimation of the time since death in human bodies and has formerly been reported to occur up to 8-12 h post-mortem (hpm). We recently described our observation of the phenomenon in up to 19 hpm in cases with in-hospital death. Due to the case selection (preceding illness, immobilisation), transfer of these results to forensic cases might be limited. We therefore examined 67 out-of-hospital cases of sudden death with known time points of death. Re-establishment of rigor mortis was positive in 52.2% of cases and was observed up to 20 hpm. In contrast to the current doctrine that a recurrence of rigor mortis is always of a lesser degree than its first manifestation in a given patient, muscular rigidity at re-establishment equalled or even exceeded the degree observed before dissolving in 21 joints. Furthermore, this is the first study to describe that the phenomenon appears to be independent of body or ambient temperature.

  19. Advances in variational and hemivariational inequalities theory, numerical analysis, and applications

    CERN Document Server

    Migórski, Stanisław; Sofonea, Mircea

    2015-01-01

    Highlighting recent advances in variational and hemivariational inequalities with an emphasis on theory, numerical analysis and applications, this volume serves as an indispensable resource to graduate students and researchers interested in the latest results from recognized scholars in this relatively young and rapidly-growing field. Particularly, readers will find that the volume’s results and analysis present valuable insights into the fields of pure and applied mathematics, as well as civil, aeronautical, and mechanical engineering. Researchers and students will find new results on well posedness to stationary and evolutionary inequalities and their rigorous proofs. In addition to results on modeling and abstract problems, the book contains new results on the numerical methods for variational and hemivariational inequalities. Finally, the applications presented illustrate the use of these results in the study of miscellaneous mathematical models which describe the contact between deformable bodies and a...

  20. Rigorous upper bounds for transport due to passive advection by inhomogeneous turbulence

    International Nuclear Information System (INIS)

    Krommes, J.A.; Smith, R.A.

    1987-05-01

    A variational procedure, due originally to Howard and explored by Busse and others for self-consistent turbulence problems, is employed to determine rigorous upper bounds for the advection of a passive scalar through an inhomogeneous turbulent slab with arbitrary generalized Reynolds number R and Kubo number K. In the basic version of the method, the steady-state energy balance is used as a constraint; the resulting bound, though rigorous, is independent of K. A pedagogical reference model (one dimension, K = ∞) is described in detail; the bound compares favorably with the exact solution. The direct-interaction approximation is also worked out for this model; it is somewhat more accurate than the bound, but requires considerably more labor to solve. For the basic bound, a general formalism is presented for several dimensions, finite correlation length, and reasonably general boundary conditions. Part of the general method, in which a Green's function technique is employed, applies to self-consistent as well as to passive problems, and thereby generalizes previous results in the fluid literature. The formalism is extended for the first time to include time-dependent constraints, and a bound is deduced which explicitly depends on K and has the correct physical scalings in all regimes of R and K. Two applications from the theory of turbulent plasmas are described: flux in velocity space, and test particle transport in stochastic magnetic fields. For the velocity space problem the simplest bound reproduces Dupree's original scaling for the strong turbulence diffusion coefficient. For the case of stochastic magnetic fields, the scaling of the bounds is described for the magnetic diffusion coefficient as well as for the particle diffusion coefficient in the so-called collisionless, fluid, and double-streaming regimes.

  1. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    Science.gov (United States)

    Wittig, Alexander N.

    The well established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by
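
    The verified fixed-point argument can be illustrated with a much simpler tool than Taylor Models: plain interval arithmetic plus the Banach contraction principle. The sketch below is a minimal stand-in, assuming a toy `Interval` class with a fixed outward padding in place of true directed rounding (a production implementation such as the one in COSY INFINITY would round outward rigorously):

```python
import math

class Interval:
    """Closed interval [lo, hi]; a small outward padding stands in for
    rigorous directed rounding."""
    PAD = 1e-12

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo - self.PAD, hi + self.PAD

    def __contains__(self, other):
        return self.lo <= other.lo and other.hi <= self.hi

def icos(x):
    # cos is monotonically decreasing on [0, pi], so the image of an
    # interval is obtained from its endpoints in reversed order
    assert 0 <= x.lo and x.hi <= math.pi
    return Interval(math.cos(x.hi), math.cos(x.lo))

def verify_unique_fixed_point(domain):
    """Banach argument for f = cos: f(D) contained in D and sup|f'| < 1 on D
    together imply existence of a unique fixed point in D."""
    image = icos(domain)
    # |f'| = |sin| is monotone on [0, pi/2], so endpoint values bound it
    contracts = max(abs(math.sin(domain.lo)), abs(math.sin(domain.hi))) < 1.0
    return image in domain and contracts

D = Interval(0.6, 0.9)
print(verify_unique_fixed_point(D))  # True: cos has a unique fixed point in [0.6, 0.9]
```

    The same containment-plus-contraction test, with Taylor Model enclosures in place of bare intervals, is what makes high-order verified fixed-point finding tractable.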

  2. A Draft Conceptual Framework of Relevant Theories to Inform Future Rigorous Research on Student Service-Learning Outcomes

    Science.gov (United States)

    Whitley, Meredith A.

    2014-01-01

    While the quality and quantity of research on service-learning has increased considerably over the past 20 years, researchers as well as governmental and funding agencies have called for more rigor in service-learning research. One key variable in improving rigor is using relevant existing theories to improve the research. The purpose of this…

  3. Building an Evidence Base to Inform Interventions for Pregnant and Parenting Adolescents: A Call for Rigorous Evaluation

    Science.gov (United States)

    Burrus, Barri B.; Scott, Alicia Richmond

    2012-01-01

    Adolescent parents and their children are at increased risk for adverse short- and long-term health and social outcomes. Effective interventions are needed to support these young families. We studied the evidence base and found a dearth of rigorously evaluated programs. Strategies from successful interventions are needed to inform both intervention design and policies affecting these adolescents. The lack of rigorous evaluations may be attributable to inadequate emphasis on and sufficient funding for evaluation, as well as to challenges encountered by program evaluators working with this population. More rigorous program evaluations are urgently needed to provide scientifically sound guidance for programming and policy decisions. Evaluation lessons learned have implications for other vulnerable populations. PMID:22897541

  4. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", which involves reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
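
    The adaptive strategy described here, solving first at small uncertainty and using each converged solution to seed the next, larger-uncertainty solve, is structurally classic natural-parameter continuation. A minimal scalar sketch on a hypothetical stiff model problem (not the PC systems of the report):

```python
import math

def newton(f, df, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        d = df(x)
        if d == 0.0 or not math.isfinite(x):
            raise RuntimeError("Newton diverged")
        step = f(x) / d
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton failed to converge")

def continuation(f, df, x0, s_values):
    """Natural-parameter continuation: each converged solution seeds the next solve."""
    x, path = x0, []
    for s in s_values:
        x = newton(lambda t: f(t, s), lambda t: df(t, s), x)
        path.append((s, x))
    return path

# Model problem: the root of atan(5(x - s)) sits at x = s, but the function
# is nearly flat far from the root, so a cold Newton start overshoots badly.
f  = lambda x, s: math.atan(5.0 * (x - s))
df = lambda x, s: 5.0 / (1.0 + 25.0 * (x - s) * (x - s))

path = continuation(f, df, 0.0, [0.2 * k for k in range(1, 16)])
print(abs(path[-1][1] - 3.0) < 1e-9)   # True: tracked the root out to s = 3
```

    A cold Newton start at s = 3 from x = 0 diverges on this problem, while the small parameter steps keep every warm start inside Newton's convergence basin.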

  5. Present status on numerical algorithms and benchmark tests for point kinetics and quasi-static approximate kinetics

    International Nuclear Information System (INIS)

    Ise, Takeharu

    1976-12-01

    Review studies have been made of numerical algorithms and benchmark tests for point kinetics and quasistatic approximate kinetics computer codes, in order to benchmark space-dependent neutron kinetics codes efficiently. Point kinetics methods have now been improved so that they can be applied directly to factorization procedures. Methods based on Pade rational functions give numerically stable solutions, and matrix-splitting methods are of interest because they are applicable to direct integration methods. The improved quasistatic (IQ) approximation is the best and most practical method; it is shown numerically that the IQ method has high stability and precision, with a computation time about one tenth of that of the direct method. The IQ method is applicable to thermal reactors as well as fast reactors, and is especially suited to fast reactors, for which many time steps are necessary. Two-dimensional diffusion kinetics codes are the most practicable, though a three-dimensional diffusion kinetics code and a two-dimensional transport kinetics code also exist. In developing a space-dependent kinetics code, in any case, it is desirable to improve the method so as to attain a high computing speed in solving the static diffusion and transport equations. (auth.)
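
    As a minimal illustration of why A-stable implicit integration underlies numerically stable point-kinetics solvers, the sketch below integrates one-delayed-group point kinetics with backward Euler. The Pade-based and IQ methods reviewed above are more sophisticated; the kinetic parameters here are merely illustrative:

```python
def point_kinetics_be(rho, beta=0.0065, Lam=1e-4, lam=0.08, dt=0.01, steps=1000):
    """One-delayed-group point kinetics, dn/dt = ((rho-beta)/Lam) n + lam C and
    dC/dt = (beta/Lam) n - lam C, integrated with backward Euler (A-stable)."""
    n = 1.0
    C = beta * n / (Lam * lam)            # precursor equilibrium at initial power
    # Each step solves (I - dt*A) [n', C']^T = [n, C]^T by Cramer's rule
    m11 = 1.0 - dt * (rho - beta) / Lam
    m12 = -dt * lam
    m21 = -dt * beta / Lam
    m22 = 1.0 + dt * lam
    det = m11 * m22 - m12 * m21
    for _ in range(steps):
        n, C = (n * m22 - C * m12) / det, (m11 * C - m21 * n) / det
    return n

print(round(point_kinetics_be(0.0), 6))    # 1.0: critical reactor stays at equilibrium
print(point_kinetics_be(0.001) > 1.0)      # True: positive reactivity raises power
```

    The implicit step remains stable at dt far beyond the prompt time scale Lam, which is exactly the stiffness obstacle that explicit schemes face.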

  6. Numerical analysis of the thermal and fluid flow phenomena of the fluidity test

    Directory of Open Access Journals (Sweden)

    L. Sowa

    2010-01-01

    Full Text Available In the paper, two mathematical and numerical models of metal alloy solidification in the cylindrical channel of a fluidity test, which take into account the process of filling the mould cavity with molten metal, have been proposed. Velocity and pressure fields were obtained by solving the momentum equations and the continuity equation, while the thermal fields were obtained by solving the heat conduction equation containing the convection term. Next, a numerical analysis of the solidification process of the metal alloy in the cylindrical mould channel has been made. The models take into account the interdependence of the thermal and dynamical phenomena. Coupling of the heat transfer and fluid flow phenomena has been taken into consideration through the changes of the fluidity function and the thermophysical parameters of the alloy with respect to temperature. The influence of the velocity, the pressure and the pouring temperature of the metal on the solid phase growth kinetics was estimated. The problem has been solved by the finite element method.
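
    The conduction core of such a thermal model can be sketched with a minimal 1D explicit finite-difference analogue. The paper itself uses the finite element method with convection and solidification terms; this sketch shows only pure conduction and its explicit stability limit dt <= dx^2/(2*alpha):

```python
import math

def heat_explicit(alpha=1.0, L=1.0, nx=51, t_end=0.05):
    """FTCS scheme for u_t = alpha * u_xx with fixed u = 0 at both ends."""
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / alpha            # respects the stability limit dt <= dx^2/(2 alpha)
    u = [math.sin(math.pi * i * dx / L) for i in range(nx)]
    t = 0.0
    while t < t_end:
        un = u[:]
        for i in range(1, nx - 1):
            u[i] = un[i] + alpha * dt / dx ** 2 * (un[i + 1] - 2 * un[i] + un[i - 1])
        t += dt
    return u, t

u, t = heat_explicit()
mid = u[25]                               # x = 0.5; analytic decay: exp(-pi^2 * alpha * t)
print(abs(mid - math.exp(-math.pi ** 2 * t)) < 1e-3)   # True
```

    The sine initial profile decays at the analytically known rate, giving a one-line accuracy check of the scheme.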

  7. Numerical model of the nanoindentation test based on the digital material representation of the Ti/TiN multilayers

    Directory of Open Access Journals (Sweden)

    Perzyński Konrad

    2015-06-01

    Full Text Available The developed numerical model of a local nanoindentation test, based on the digital material representation (DMR concept, has been presented within the paper. First, an efficient algorithm describing the pulsed laser deposition (PLD process was proposed to realistically recreate the specific morphology of a nanolayered material in an explicit manner. The nanolayered Ti/TiN composite was selected for the investigation. Details of the developed cellular automata model of the PLD process were presented and discussed. Then, the Ti/TiN DMR was incorporated into the finite element software and numerical model of the nanoindentation test was established. Finally, examples of obtained results presenting capabilities of the proposed approach were highlighted.
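
    A cellular-automata deposition model can be sketched in a few lines. The toy below is not the paper's PLD algorithm: it simply drops particles into random columns, switching material each "pulse" so that a layered Ti/TiN-like digital morphology builds up:

```python
import random

def deposit(width=20, pulses=4, particles_per_pulse=60, seed=1):
    """Toy deposition CA: each pulse deposits one material (0 = Ti, 1 = TiN);
    particles fall in random columns and stack on the current surface."""
    random.seed(seed)
    height = [0] * width                   # current surface height per column
    grid = {}                              # (col, row) -> material id
    for pulse in range(pulses):
        material = pulse % 2
        for _ in range(particles_per_pulse):
            col = random.randrange(width)
            grid[(col, height[col])] = material
            height[col] += 1
    return grid, height

grid, height = deposit()
print(sum(height))                         # 240: total particles deposited
```

    The resulting `grid` is exactly the kind of explicit digital material representation that can be mapped cell-by-cell onto a finite element mesh.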

  8. Feedback for relatedness and competence : Can feedback in blended learning contribute to optimal rigor, basic needs, and motivation?

    NARCIS (Netherlands)

    Bombaerts, G.; Nickel, P.J.

    2017-01-01

    We inquire how peer and tutor feedback influences students' optimal rigor, basic needs and motivation. We analyze questionnaires from two courses in two subsequent years. We conclude that feedback in blended learning can contribute to rigor and basic needs, but it is not clear from our data what

  9. Three dimensional numeric quench simulation of Super-FRS dipole test coil for FAIR project

    International Nuclear Information System (INIS)

    Wu Wei; Ma Lizhen; He Yuan; Yuan Ping

    2013-01-01

    The prototype of superferric dipoles for the Super-FRS of the Facility for Antiproton and Ion Research (FAIR) project was designed, fabricated, and tested in China. To investigate the performance of the superconducting coil, a so-called test coil was fabricated and tested in advance. A 3D model based on ANSYS and OPERA 3D was developed in parallel, not only to check whether the design matches the numerical simulation, but also to study the quench phenomena in more detail. The model simplifies the epoxy-impregnated coil into an anisotropic continuum medium. The simulation combines ANSYS solver routines for nonlinear transient thermal analysis, OPERA 3D for magnetic field evaluation, and the ANSYS script language for calculations of Joule heat and of the differential equations of the protection circuits. The time evolution of temperature, voltage and current decay, and quench propagation during the quench process was analyzed and illustrated. Finally, the test results of the test coil were presented and compared with the results of the simulation. (authors)

  10. Reciprocity relations in transmission electron microscopy: A rigorous derivation.

    Science.gov (United States)

    Krause, Florian F; Rosenauer, Andreas

    2017-01-01

    A concise derivation of the principle of reciprocity applied to realistic transmission electron microscopy setups is presented, making use of the multislice formalism. The equivalence of images acquired in conventional and scanning mode is thereby rigorously shown. The conditions for the applicability of the derived reciprocity relations are discussed. Furthermore, the positions of apertures in relation to the corresponding lenses are considered, a subject which has scarcely been addressed in previous publications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Statistics for mathematicians a rigorous first course

    CERN Document Server

    Panaretos, Victor M

    2016-01-01

    This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.

  12. Atlantic salmon skin and fillet color changes effected by perimortem handling stress, rigor mortis, and ice storage.

    Science.gov (United States)

    Erikson, U; Misimi, E

    2008-03-01

    The changes in skin and fillet color of anesthetized and exhausted Atlantic salmon were determined immediately after killing, during rigor mortis, and after ice storage for 7 d. Skin color (CIE L*, a*, b*, and related values) was determined by a Minolta Chroma Meter. Roche SalmoFan Lineal and Roche Color Card values were determined by a computer vision method and a sensory panel. Before color assessment, the stress levels of the 2 fish groups were characterized in terms of white muscle parameters (pH, rigor mortis, and core temperature). The results showed that perimortem handling stress initially significantly affected several color parameters of skin and fillets. Significant transient fillet color changes also occurred in the prerigor phase and during the development of rigor mortis. Our results suggested that fillet color was affected by postmortem glycolysis (pH drop, particularly in anesthetized fillets), then by onset and development of rigor mortis. The color change patterns during storage were different for the 2 groups of fish. The computer vision method was considered suitable for automated (online) quality control and grading of salmonid fillets according to color.
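
    Instrument readings like these CIE L*, a*, b* triples are conventionally compared with the CIE76 colour-difference metric ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²). A minimal sketch (the numeric triples below are hypothetical, not data from the study):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# e.g. comparing two hypothetical fillet readings:
print(delta_e_ab((46.0, 18.0, 14.0), (43.0, 14.0, 14.0)))  # 5.0
```

    A ΔE*ab of a few units is commonly treated as a visually noticeable colour change, which is why single-number differences of this kind are convenient for automated grading.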

  13. Numerical Analysis of AHSS Fracture in a Stretch-bending Test

    Science.gov (United States)

    Luo, Meng; Chen, Xiaoming; Shi, Ming F.; Shih, Hua-Chu

    2010-06-01

    Advanced High Strength Steels (AHSS) are increasingly used in the automotive industry due to their superior strength and substantial weight reduction advantage. However, their limited ductility gives rise to numerous manufacturing issues. One of them is the so-called 'shear fracture' often observed on tight radii during stamping processes. Since traditional approaches, such as the Forming Limit Diagram (FLD), are unable to predict this type of fracture, efforts have been made to develop failure criteria that can predict shear fractures. In this paper, a recently developed Modified Mohr-Coulomb (MMC) ductile fracture criterion [1] is adopted to analyze the failure behavior of a Dual Phase (DP) steel sheet during stretch bending operations. The plasticity and ductile fracture of the present sheet are fully characterized by the Hill'48 orthotropic model and the MMC fracture model respectively. Finite Element models with three different element types (3D, shell and plane strain) were built for a Stretch Forming Simulator (SFS) test and numerical simulations with four different R/t ratios (die radius normalized by sheet thickness) were performed. It has been shown that the 3D and shell element models can accurately predict the failure location/mode, the upper die load-displacement responses as well as the wall stress and wrap angle at the onset of fracture for all R/t ratios. Furthermore, a series of parametric studies were conducted on the 3D element model, and the effects of tension level (clamping distance) and tooling friction on the failure modes/locations were investigated.
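
    Fracture criteria of this family are typically applied through a path-dependent damage accumulation rule: plastic strain increments divided by the triaxiality-dependent fracture strain are summed along the loading path, and fracture is predicted when the total reaches one. The sketch below uses a hypothetical exponential fracture locus, not the calibrated MMC surface of the paper:

```python
import math

def damage(strain_path, eps_f):
    """Linear damage accumulation D = sum(d_eps / eps_f(eta)); fracture at D >= 1.
    strain_path is a list of (plastic strain increment, triaxiality) pairs."""
    return sum(d_eps / eps_f(eta) for d_eps, eta in strain_path)

# Hypothetical exponential fracture locus (illustrative constants only):
eps_f = lambda eta: 1.2 * math.exp(-1.5 * eta)

# Constant triaxiality eta ~ 0.577 (plane-strain bending), 60 increments of 0.01:
D = damage([(0.01, 0.577)] * 60, eps_f)
print(D > 1.0)   # True: the criterion predicts fracture on this path
```

    Because the locus decays with triaxiality, the same accumulated strain that is safe in shear-dominated states can exceed the fracture threshold in bending, which is the qualitative behaviour behind shear-fracture predictions on tight radii.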

  14. Biomedical text mining for research rigor and integrity: tasks, challenges, directions.

    Science.gov (United States)

    Kilicoglu, Halil

    2017-06-13

    An estimated quarter of a trillion US dollars is invested in the biomedical research enterprise annually. There is growing alarm that a significant portion of this investment is wasted because of problems in reproducibility of research findings and in the rigor and integrity of research conduct and reporting. Recent years have seen a flurry of activities focusing on standardization and guideline development to enhance the reproducibility and rigor of biomedical research. Research activity is primarily communicated via textual artifacts, ranging from grant applications to journal publications. These artifacts can be both the source and the manifestation of practices leading to research waste. For example, an article may describe a poorly designed experiment, or the authors may reach conclusions not supported by the evidence presented. In this article, we pose the question of whether biomedical text mining techniques can assist the stakeholders in the biomedical research enterprise in doing their part toward enhancing research integrity and rigor. In particular, we identify four key areas in which text mining techniques can make a significant contribution: plagiarism/fraud detection, ensuring adherence to reporting guidelines, managing information overload and accurate citation/enhanced bibliometrics. We review the existing methods and tools for specific tasks, if they exist, or discuss relevant research that can provide guidance for future work. With the exponential increase in biomedical research output and the ability of text mining approaches to perform automatic tasks at large scale, we propose that such approaches can support tools that promote responsible research practices, providing significant benefits for the biomedical research enterprise. Published by Oxford University Press 2017. This work is written by a US Government employee and is in the public domain in the US.
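
    Of the four areas listed, plagiarism detection is the simplest to sketch: candidate duplicated passages can be flagged by word n-gram overlap. A minimal Jaccard-similarity version (illustrative sentences, not a production detector):

```python
def ngrams(text, n=3):
    """Set of word n-grams of a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Word n-gram Jaccard overlap; high values flag candidate duplicated passages."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

s1 = "the experiment was repeated three times with fresh reagents"
s2 = "the experiment was repeated three times with new reagents"
print(round(jaccard(s1, s2), 2))   # 0.56
```

    Real plagiarism-detection systems add normalization, hashing for scale, and cross-document retrieval, but the overlap score above is the core signal.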

  15. Numerical simulation of impact bend tests on araldite B and steel specimens

    International Nuclear Information System (INIS)

    Stoeckl, H.; Boehme, W.

    1983-09-01

    As a preliminary stage in the numerical simulation of impact bend tests on elastic-plastic sample materials some simpler experiments were calculated for this report, some of which occured without crack propagation, others with linear elastic crack propagation. These calculations were performed with an own program based on the method of finite differences and also with the finite element program ADINA. In the numerical models plane stress was assumed. Crack propagation was governed by a relation between crack velocity and stress intensity factor. As load input the measured hammer load was used in some cases, mass and initial velocity of the hammer in others. The sample looses contact to the anvils and to the hammer for some time, which had to be considered in model building. The stiffening of the model in the contact region caused by the discretization had to be compensated by springs inserted between the sample and the anvils. The simulation reproduces the experimentally observed behaviour of the sample quite well. Furthermore, additional information can be extracted from the experiment, e.g. concerning the partition of the impact energy. (orig.) [de

  16. Numerical simulations of wave propagation in long bars with application to Kolsky bar testing

    Energy Technology Data Exchange (ETDEWEB)

    Corona, Edmundo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-11-01

    Material testing using the Kolsky bar, or split Hopkinson bar, technique has proven instrumental in conducting measurements of material behavior at strain rates on the order of 10³ s⁻¹. Test design and data reduction, however, remain empirical endeavors based on the experimentalist's experience. Issues such as wave propagation across discontinuities, the effect of the deformation of the bar surfaces in contact with the specimen, the effect of geometric features in tensile specimens (dog-bone shape), wave dispersion in the bars and other particulars are generally treated using simplified models. The work presented here was conducted in Q3 and Q4 of FY14. The objective was to demonstrate the feasibility of numerical simulations of Kolsky bar tests, which was done successfully.
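
    The standard one-wave Kolsky-bar data reduction that such simulations are validated against is compact: the reflected wave gives the specimen strain rate, eps_rate = -2 c0 eps_r / Ls, and the transmitted wave gives the stress, sigma = E (A_bar / A_s) eps_t. A sketch with illustrative bar and specimen parameters (not those of the report):

```python
def kolsky_reduce(eps_r, eps_t, E=200e9, c0=5000.0, Ls=0.005,
                  A_bar=7.85e-5, A_s=2.0e-5, dt=1e-7):
    """One-wave Kolsky-bar reduction: reflected strain history -> specimen
    strain rate, transmitted strain history -> specimen stress."""
    rates = [-2.0 * c0 * er / Ls for er in eps_r]
    stress = [E * (A_bar / A_s) * et for et in eps_t]
    # Trapezoidal integration of the strain rate gives the strain history
    strain, s = [0.0], 0.0
    for k in range(1, len(rates)):
        s += 0.5 * (rates[k - 1] + rates[k]) * dt
        strain.append(s)
    return strain, stress, rates

# A constant reflected strain of -5e-4 in this geometry yields 10^3 s^-1,
# the strain-rate regime quoted above:
strain, stress, rates = kolsky_reduce([-5e-4] * 101, [1e-4] * 101)
print(round(rates[0]))   # 1000
```

    Simulations of the kind described here let one check where these one-wave relations break down, e.g. under wave dispersion or specimen-bar interface indentation.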

  17. Rigorous results on measuring the quark charge below color threshold

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1979-01-01

    Rigorous theorems are presented showing that contributions from a color nonsinglet component of the current to matrix elements of a second order electromagnetic transition are suppressed by factors inversely proportional to the energy of the color threshold. Parton models which obtain matrix elements proportional to the color average of the square of the quark charge are shown to neglect terms of the same order of magnitude as terms kept. (author)

  18. Tridimensional numerical modelling of an eddy current non destructive testing process

    International Nuclear Information System (INIS)

    Bonnin, O.; Chavant, C.; Giordano, P.

    1993-01-01

    This paper presents the numerical modelling of a new eddy current inspection process. The originality of the process, developed jointly by IFREMER and the CEA, lies in the mode of inducing the currents in the component to be tested. The TRIFOU eddy current calculation code is used for the modelling, which is in 3D. It is shown that a crack in the component inspected will cause localized disturbance of the currents induced. If we then focus on this disturbance, assuming the electrical behaviour of the materials to be linear, the resulting problem can be set for a limited geometrical area, leading to an appreciable saving in machine time. It is also shown that the computed and experimental results are quantitatively similar. (authors). 2 figs., 6 refs
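
    A basic quantity governing any eddy-current inspection is the skin depth, delta = sqrt(2 / (mu sigma omega)), which sets how deep the induced currents, and hence the sensitivity to cracks, reach into the component. A quick check for copper:

```python
import math

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Eddy-current skin depth delta = sqrt(2 / (mu * sigma * omega)):
    induced currents decay as exp(-z / delta) into the conductor."""
    mu0 = 4e-7 * math.pi
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * mu0 * sigma * omega))

# Copper (sigma ~ 5.8e7 S/m) probed at 100 kHz:
print(round(skin_depth(1e5, 5.8e7) * 1e3, 3), "mm")   # 0.209 mm
```

    Full 3D codes such as TRIFOU resolve the crack-induced perturbation of these currents, but the skin depth still dictates the useful excitation frequency range.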

  19. Rigor mortis development in turkey breast muscle and the effect of electrical stunning.

    Science.gov (United States)

    Alvarado, C Z; Sams, A R

    2000-11-01

    Rigor mortis development in turkey breast muscle and the effect of electrical stunning on this process are not well characterized. Some electrical stunning procedures have been known to inhibit postmortem (PM) biochemical reactions, thereby delaying the onset of rigor mortis in broilers. Therefore, this study was designed to characterize rigor mortis development in stunned and unstunned turkeys. A total of 154 turkey toms in two trials were conventionally processed at 20 to 22 wk of age. Turkeys were either stunned with a pulsed direct current (500 Hz, 50% duty cycle) at 35 mA (40 V) in a saline bath for 12 seconds or left unstunned as controls. At 15 min and 1, 2, 4, 8, 12, and 24 h PM, pectoralis samples were collected to determine pH, R-value, L* value, sarcomere length, and shear value. In Trial 1, the samples obtained for pH, R-value, and sarcomere length were divided into surface and interior samples. There were no significant differences between the surface and interior samples among any parameters measured. Muscle pH significantly decreased over time in stunned and unstunned birds through 2 h PM. The R-values increased to 8 h PM in unstunned birds and 24 h PM in stunned birds. The L* values increased over time, with no significant differences after 1 h PM for the controls and 2 h PM for the stunned birds. Sarcomere length increased through 2 h PM in the controls and 12 h PM in the stunned fillets. Cooked meat shear values decreased through the 1 h PM deboning time in the control fillets and 2 h PM in the stunned fillets. These results suggest that stunning delayed the development of rigor mortis through 2 h PM, but had no significant effect on the measured parameters at later time points, and that deboning turkey breasts at 2 h PM or later will not significantly impair meat tenderness.

  20. Towards standard testbeds for numerical relativity

    International Nuclear Information System (INIS)

    Alcubierre, Miguel; Allen, Gabrielle; Bona, Carles; Fiske, David; Goodale, Tom; Guzman, F Siddhartha; Hawke, Ian; Hawley, Scott H; Husa, Sascha; Koppitz, Michael; Lechner, Christiane; Pollney, Denis; Rideout, David; Salgado, Marcelo; Schnetter, Erik; Seidel, Edward; Shinkai, Hisa-aki; Shoemaker, Deirdre; Szilagyi, Bela; Takahashi, Ryoji; Winicour, Jeff

    2004-01-01

    In recent years, many different numerical evolution schemes for Einstein's equations have been proposed to address stability and accuracy problems that have plagued the numerical relativity community for decades. Some of these approaches have been tested on different spacetimes, and conclusions have been drawn based on these tests. However, differences in results originate from many sources, including not only formulations of the equations, but also gauges, boundary conditions, numerical methods and so on. We propose to build up a suite of standardized testbeds for comparing approaches to the numerical evolution of Einstein's equations that are designed to both probe their strengths and weaknesses and to separate out different effects, and their causes, seen in the results. We discuss general design principles of suitable testbeds, and we present an initial round of simple tests with periodic boundary conditions. This is a pivotal first step towards building a suite of testbeds to serve the numerical relativists and researchers from related fields who wish to assess the capabilities of numerical relativity codes. We present some examples of how these tests can be quite effective in revealing various limitations of different approaches, and illustrating their differences. The tests are presently limited to vacuum spacetimes, can be run on modest computational resources and can be used with many different approaches used in the relativity community
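
    A toy version of such a standardized testbed is easy to state: evolve a known solution on a periodic grid at two resolutions and measure the convergence factor. The sketch below does this for 1D upwind advection, far simpler than Einstein's equations, but following the same test pattern of periodic boundaries plus an error norm against an exact solution:

```python
import math

def advect(nx, t_end=1.0, cfl=0.5):
    """Upwind scheme for u_t + u_x = 0 on a periodic grid; returns the max-norm
    error against the exact translated profile."""
    dx = 1.0 / nx
    dt = cfl * dx
    steps = round(t_end / dt)
    u = [math.sin(2 * math.pi * i * dx) for i in range(nx)]
    for _ in range(steps):
        # periodic wrap: index i-1 = -1 refers to the last cell
        u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(nx)]
    exact = [math.sin(2 * math.pi * (i * dx - t_end)) for i in range(nx)]
    return max(abs(a - b) for a, b in zip(u, exact))

e1, e2 = advect(50), advect(100)
print(e1 / e2)   # roughly 2: first-order convergence, as expected for upwind
```

    Running the same test on two codes, or two formulations, immediately exposes differences in accuracy and convergence order, which is the point of a shared testbed suite.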

  2. Rigorous Integration of Non-Linear Ordinary Differential Equations in Chebyshev Basis

    Czech Academy of Sciences Publication Activity Database

    Dzetkulič, Tomáš

    2015-01-01

    Roč. 69, č. 1 (2015), s. 183-205 ISSN 1017-1398 R&D Projects: GA MŠk OC10048; GA ČR GD201/09/H057 Institutional research plan: CEZ:AV0Z10300504 Keywords : Initial value problem * Rigorous integration * Taylor model * Chebyshev basis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.366, year: 2015

  3. Rigorous quantum limits on monitoring free masses and harmonic oscillators

    Science.gov (United States)

    Roy, S. M.

    2018-03-01

    There are heuristic arguments proposing that the accuracy of monitoring the position of a free mass m is limited by the standard quantum limit (SQL): σ²(X(t)) ≥ σ²(X(0)) + (t²/m²)σ²(P(0)) ≥ ℏt/m, where σ²(X(t)) and σ²(P(t)) denote the variances of the Heisenberg-representation position and momentum operators. Yuen [Phys. Rev. Lett. 51, 719 (1983), 10.1103/PhysRevLett.51.719] discovered that there are contractive states for which this result is incorrect. Here I prove universally valid rigorous quantum limits (RQL), viz. rigorous upper and lower bounds on σ²(X(t)) in terms of σ²(X(0)) and σ²(P(0)), given by Eq. (12) for a free mass and by Eq. (36) for an oscillator. I also obtain the maximally contractive and maximally expanding states which saturate the RQL, and use the contractive states to set up an Ozawa-type measurement theory with accuracies respecting the RQL but beating the standard quantum limit. The contractive states for oscillators improve on the Schrödinger coherent states of constant variance and may be useful for gravitational wave detection and optical communication.
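
    The variance algebra behind these bounds follows from the Heisenberg-picture evolution X(t) = X(0) + P(0)t/m, which gives σ²(X(t)) = σ²(X(0)) + (2t/m)cov(X,P) + (t²/m²)σ²(P(0)). The sketch below evaluates this for an uncorrelated minimum-uncertainty state, which saturates the SQL, and for a contractive state with negative X-P correlation, which beats it; the choice m = t = 1 is purely illustrative:

```python
hbar = 1.054571817e-34   # J s

def var_x(t, m, var_x0, var_p0, cov0=0.0):
    """Position variance of a free mass under X(t) = X(0) + P(0) t / m."""
    return var_x0 + (2.0 * t / m) * cov0 + (t / m) ** 2 * var_p0

m, t = 1.0, 1.0
# Uncorrelated minimum-uncertainty state tuned to minimize spreading at time t:
sx2 = hbar * t / (2.0 * m)
sp2 = (hbar / 2.0) ** 2 / sx2
print(round(var_x(t, m, sx2, sp2) / (hbar * t / m), 12))   # 1.0: the SQL, saturated
# Contractive state: cov0 < 0, with sx2 * sp2 >= (hbar/2)^2 + cov0^2 still satisfied
print(var_x(t, m, sx2, 5.0 * hbar / 8.0, cov0=-hbar / 4.0) < hbar * t / m)   # True
```

    The cross term (2t/m)cov(X,P), absent from the heuristic SQL argument, is exactly what Yuen's contractive states exploit; the chosen correlated state respects the Schrödinger-Robertson inequality.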

  4. A Neurological Enigma: The Inborn Numerical Competence of Humans and Animals

    Science.gov (United States)

    Gross, Hans J.

    2012-03-01

    "Subitizing" refers to our ability to recognize and memorize object numbers precisely under conditions where counting is impossible. It is an inborn, archaic process named after the Latin "subito" = suddenly, immediately, indicating that the objects in question are presented to test persons only for a fraction of a second in order to prevent counting. Sequential counting, by contrast, is an outstanding cultural achievement of mankind: counting "1, 2, 3, 4, 5, 6, 7, 8 ..." without limit. Unlike inborn subitizing, counting has to be trained, beginning in early childhood with the help of our fingers. We have known for 140 years that humans can subitize only up to 4 objects correctly and that mistakes occur from 5 objects on. Similar results have been obtained for a number of non-human vertebrates, from salamanders to pigeons and dolphins. Surprisingly, we have detected this inborn numerical competence for the first time in an invertebrate, the honeybee, which recognizes and memorizes 3 to 4 objects under rigorous test conditions. This shared ability of humans and honeybees to subitize up to 4 objects correctly, together with the remarkable but rare ability of persons with savant syndrome to subitize more than a hundred objects precisely, raises a number of intriguing questions concerning the evolution and significance of this biological enigma.

  5. Numerical forecast test on local wind fields at Qinshan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Chen Xiaoqiu

    2005-01-01

    A non-hydrostatic, fully compressible atmospheric dynamics model is applied in a numerical forecast test of local wind fields at the Qinshan nuclear power plant, and the prognostic wind-field data are compared with observations. The results show that wind speed is forecast better than wind direction. On the whole, the prognostic wind fields are consistent with the meteorological observations: in the first six hours, 54% of wind speeds are within a factor of 1.5 of the observed values, and about 61% of wind-direction deviations are within 1.5 azimuth sectors (≤33.75 degrees). (authors)
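
    The two verification scores quoted above (fraction of speeds within a factor of 1.5, fraction of direction errors within 33.75°) can be sketched as follows; the function names and sample data are our own, not from the study.

    ```python
    import numpy as np

    def frac_within_factor(pred, obs, factor=1.5):
        """Fraction of predicted wind speeds within a multiplicative factor of obs."""
        ratio = np.asarray(pred, float) / np.asarray(obs, float)
        return float(np.mean((ratio >= 1.0 / factor) & (ratio <= factor)))

    def frac_within_azimuth(pred_deg, obs_deg, max_dev=33.75):
        """Fraction of direction forecasts within max_dev degrees (circular distance)."""
        d = np.abs(np.asarray(pred_deg, float) - np.asarray(obs_deg, float)) % 360.0
        d = np.minimum(d, 360.0 - d)          # shortest way around the circle
        return float(np.mean(d <= max_dev))

    # Hypothetical paired forecasts/observations, not data from the study:
    speed_score = frac_within_factor([2.0, 3.0, 8.0, 1.0], [2.5, 2.0, 4.0, 1.2])
    dir_score = frac_within_azimuth([10.0, 350.0, 200.0, 90.0], [40.0, 20.0, 180.0, 200.0])
    ```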

  6. Experimental and Numerical Evaluation of Direct Tension Test for Cylindrical Concrete Specimens

    Directory of Open Access Journals (Sweden)

    Jung J. Kim

    2014-01-01

    Full Text Available Concrete cracking strength can be defined as the tensile strength of concrete subjected to pure tension stress. However, as it is difficult to apply a direct tension load to concrete specimens, concrete cracking is usually quantified by the modulus of rupture for flexural members. In this study, a new direct tension test setup for cylindrical specimens (101.6 mm in diameter and 203.2 mm in height), similar to those used in compression tests, is developed. Double steel plates are used to obtain uniform stress distributions. Finite element analysis of the proposed test setup is conducted. The uniformity of the stress distribution along the cylindrical specimen is examined and compared with that of a rectangular cross section. A fuzzy image pattern recognition method is used to assess stress uniformity along the specimen. Moreover, the probability of cracking at different locations along the specimen is evaluated using probabilistic finite element analysis. The experimental and numerical results for the cracking location showed that the effect of gravity on fresh concrete during setting time might affect the distribution of concrete cracking strength along the height of structural elements.

  7. Schwarzschild tests of the Wahlquist-Estabrook-Buchman-Bardeen tetrad formulation for numerical relativity

    International Nuclear Information System (INIS)

    Buchman, L.T.; Bardeen, J.M.

    2005-01-01

    A first-order symmetric hyperbolic tetrad formulation of the Einstein equations developed by Estabrook and Wahlquist and put into a form suitable for numerical relativity by Buchman and Bardeen (the WEBB formulation) is adapted to explicit spherical symmetry and tested for accuracy and stability in the evolution of spherically symmetric black holes (the Schwarzschild geometry). The lapse and shift, which specify the evolution of the coordinates relative to the tetrad congruence, are reset at frequent time intervals to keep the constant-time hypersurfaces nearly orthogonal to the tetrad congruence and the spatial coordinates satisfying a minimal-rate-of-strain condition. By arranging through initial conditions that the constant-time hypersurfaces are asymptotically hyperbolic, we simplify the boundary value problem and improve the stability of the evolution. Results are obtained for both tetrad gauges ('Nester' and 'Lorentz') of the WEBB formalism using finite difference numerical methods. We are able to obtain stable unconstrained evolution with the Nester gauge for certain initial conditions, but not with the Lorentz gauge.

  8. A rigorous proof of the Landau-Peierls formula and much more

    DEFF Research Database (Denmark)

    Briet, Philippe; Cornean, Horia; Savoie, Baptiste

    2012-01-01

    We present a rigorous mathematical treatment of the zero-field orbital magnetic susceptibility of a non-interacting Bloch electron gas, at fixed temperature and density, for both metals and semiconductors/insulators. In particular, we obtain the Landau-Peierls formula in the low temperature and density limit, as conjectured by Kjeldaas and Kohn (Phys Rev 105:806–813, 1957).

  9. Unmet Need: Improving mHealth Evaluation Rigor to Build the Evidence Base.

    Science.gov (United States)

    Mookherji, Sangeeta; Mehl, Garrett; Kaonga, Nadi; Mechael, Patricia

    2015-01-01

    mHealth, the use of mobile technologies for health, is a growing element of health system activity globally, but evaluation of those activities remains scant and is an important knowledge gap for advancing mHealth. In 2010, the World Health Organization and Columbia University implemented a small-scale survey to generate preliminary data on evaluation activities used by mHealth initiatives. The authors describe self-reported data from 69 projects in 29 countries. The majority (74%) reported some sort of evaluation activity, primarily nonexperimental in design (62%). The authors developed a 6-point scale of evaluation rigor comprising information on use of comparison groups, sample size calculation, data collection timing, and randomization. The mean score was low (2.4); about half (47%) were conducting evaluations meeting a minimum threshold (4+) of rigor, indicating use of a comparison group, while less than 20% had randomized the mHealth intervention. The authors were unable to assess whether the rigor score was appropriate for the type of mHealth activity being evaluated. What was clear was that although most data came from mHealth pilot projects aiming for scale-up, few had designed evaluations that would support crucial decisions on whether to scale up and how. Whether the mHealth activity is a strategy to improve health or a tool for achieving intermediate outcomes that should lead to better health, mHealth evaluations must be improved to generate robust evidence for cost-effectiveness assessment and to allow accurate identification of the contribution of mHealth initiatives to health systems strengthening and their impact on actual health outcomes.

  10. Estimation of the time since death--reconsidering the re-establishment of rigor mortis.

    Science.gov (United States)

    Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter

    2013-01-01

    In forensic medicine, the data background for the phenomenon of re-establishment of rigor mortis after mechanical loosening is poorly defined. The phenomenon, which is used in establishing time since death in forensic casework, is thought to occur up to 8 h post-mortem, and the method is widely described in textbooks on forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased individuals at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. Therefore, the maximum time span for the re-establishment of rigor mortis appears to be 2.5-fold longer than previously thought. These findings have major impact on the estimation of time since death in forensic casework.

  11. Implementation of rigorous renormalization group method for ground space and low-energy states of local Hamiltonians

    Science.gov (United States)

    Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.

    2017-12-01

    The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.
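
    As a toy illustration of the tree-style idea (building a global ground-space approximation out of low-energy subspaces of sub-blocks), the sketch below, which is emphatically not the RRG algorithm itself, keeps the lowest eigenvectors of two half-chain Hamiltonians and projects the full transverse-field Ising Hamiltonian onto their tensor product; all sizes and couplings are our own choices.

    ```python
    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])

    def op_at(op, site, n):
        """Embed a single-site operator at position `site` in an n-spin chain."""
        out = np.array([[1.0]])
        for i in range(n):
            out = np.kron(out, op if i == site else I2)
        return out

    def tfi_hamiltonian(n, g=1.2):
        """Open-chain transverse-field Ising: H = -sum Z_i Z_{i+1} - g sum X_i."""
        h = np.zeros((2**n, 2**n))
        for i in range(n - 1):
            h -= op_at(Z, i, n) @ op_at(Z, i + 1, n)
        for i in range(n):
            h -= g * op_at(X, i, n)
        return h

    n, keep = 8, 8
    h_full = tfi_hamiltonian(n)
    h_block = tfi_hamiltonian(n // 2)          # inter-block coupling dropped here
    _, u = np.linalg.eigh(h_block)
    basis = np.kron(u[:, :keep], u[:, :keep])  # product of low-energy block states
    h_eff = basis.T @ h_full @ basis           # project full H (incl. coupling)
    e_tree = float(np.linalg.eigvalsh(h_eff)[0])
    e_exact = float(np.linalg.eigvalsh(h_full)[0])
    ```

    Because the projected problem lives in a subspace, `e_tree` is a variational upper bound on `e_exact`; the gap between the two measures what the truncation threw away.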

  12. The basic approach to age-structured population dynamics models, methods and numerics

    CERN Document Server

    Iannelli, Mimmo

    2017-01-01

    This book provides an introduction to age-structured population modeling which emphasises the connection between mathematical theory and underlying biological assumptions. Through the rigorous development of the linear theory and the nonlinear theory alongside numerics, the authors explore classical equations that describe the dynamics of certain ecological systems. Modeling aspects are discussed to show how relevant problems in the fields of demography, ecology, and epidemiology can be formulated and treated within the theory. In particular, the book presents extensions of age-structured modelling to the spread of diseases and epidemics while also addressing the issue of regularity of solutions, the asymptotic behaviour of solutions, and numerical approximation. With sections on transmission models, non-autonomous models and global dynamics, this book fills a gap in the literature on theoretical population dynamics. The Basic Approach to Age-Structured Population Dynamics will appeal to graduate students an...
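
    A minimal numerical sketch of the kind of model the book treats: the linear Lotka-McKendrick age-structured equation with an upwind discretization and a birth-rate (renewal) boundary condition. All coefficients below are illustrative assumptions, not taken from the book.

    ```python
    import numpy as np

    def trapz(f, h):
        """Trapezoid rule on a uniform grid."""
        return h * (f.sum() - 0.5 * (f[0] + f[-1]))

    def step(n, da, dt, mu, beta):
        """One upwind step of the Lotka-McKendrick equation n_t + n_a = -mu(a) n,
        with renewal boundary condition n(0, t) = integral of beta(a) n(a, t) da."""
        new = np.empty_like(n)
        new[1:] = n[1:] - dt / da * (n[1:] - n[:-1]) - dt * mu[1:] * n[1:]
        new[0] = trapz(beta * n, da)          # births enter at age zero
        return new

    a_max, m = 10.0, 200
    da = a_max / m
    dt = 0.5 * da                             # CFL condition for the upwind scheme
    a = np.linspace(0.0, a_max, m + 1)
    mu = 0.1 + 0.02 * a                       # mortality rising with age (assumed)
    beta = np.where((a > 2.0) & (a < 6.0), 0.4, 0.0)  # fertile window (assumed)

    n = np.exp(-a)                            # initial age profile
    pop0 = trapz(n, da)
    for _ in range(400):                      # integrate to t = 10
        n = step(n, da, dt, mu, beta)
    pop_t = trapz(n, da)
    ```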

  13. College Readiness in California: A Look at Rigorous High School Course-Taking

    Science.gov (United States)

    Gao, Niu

    2016-01-01

    Recognizing the educational and economic benefits of a college degree, education policymakers at the federal, state, and local levels have made college preparation a priority. There are many ways to measure college readiness, but one key component is rigorous high school coursework. California has not yet adopted a statewide college readiness…

  14. Rigor in Qualitative Supply Chain Management Research

    DEFF Research Database (Denmark)

    Goffin, Keith; Raja, Jawwad; Claes, Björn

    2012-01-01

    Purpose – The purpose of this paper is to share the authors' experiences of using the repertory grid technique in two supply chain management studies. The paper aims to demonstrate how the two studies provided insights into how qualitative techniques such as the repertory grid can be made more rigorous than in the past, and how results can be generated that are inaccessible using quantitative methods. Design/methodology/approach – This paper presents two studies undertaken using the repertory grid technique to illustrate its application in supply chain management research. Findings – The paper ... reliability, and theoretical saturation. Originality/value – It is the authors' contention that the addition of the repertory grid technique to the toolset of methods used by logistics and supply chain management researchers can only enhance insights and the building of robust theories. Qualitative studies ...

  15. How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.

    Science.gov (United States)

    Gray, Kurt

    2017-09-01

    Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests psychological tension between them: The more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org ). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.

  16. Symbolic Number Comparison Is Not Processed by the Analog Number System: Different Symbolic and Non-symbolic Numerical Distance and Size Effects

    Directory of Open Access Journals (Sweden)

    Attila Krajcsi

    2018-02-01

    Full Text Available Highlights: We test whether symbolic number comparison is handled by an analog noisy system. The analog system model shows systematic biases in describing symbolic number comparison. This suggests that symbolic and non-symbolic numbers are processed by different systems. Dominant numerical cognition models suppose that both symbolic and non-symbolic numbers are processed by the Analog Number System (ANS), working according to Weber's law. It was proposed that in a number comparison task the numerical distance and size effects reflect ratio-based performance, which is the sign of ANS activation. However, an increasing number of findings and alternative models propose that symbolic and non-symbolic numbers might be processed by different representations. Importantly, alternative explanations may offer predictions similar to the ANS prediction; therefore, former evidence usually utilizing only the goodness of fit of the ANS prediction is not sufficient to support the ANS account. To test the ANS model more rigorously, a more extensive test is offered here. Several properties of the ANS predictions for the error rates, reaction times, and diffusion model drift rates were systematically analyzed in both non-symbolic dot comparison and symbolic Indo-Arabic comparison tasks. It was consistently found that while the ANS model's prediction is relatively good for the non-symbolic dot comparison, its prediction is poorer and systematically biased for the symbolic Indo-Arabic comparison. We conclude that only non-symbolic comparison is supported by the ANS, and symbolic number comparisons are processed by other representations.
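
    The ratio-based (Weber's-law) prediction described above can be sketched with a scalar-variability model: each magnitude is represented with Gaussian noise proportional to its size, so error rates depend on the ratio of the two numbers, producing both the distance and the size effect. The Weber fraction `w` below is an assumed illustrative value.

    ```python
    import math

    def ans_error_rate(a, b, w=0.15):
        """Error probability when comparing a > b under scalar variability:
        each magnitude n is represented as a Gaussian N(n, (w*n)^2), and the
        response errs when the noisy difference a - b comes out negative."""
        z = (a - b) / (w * math.hypot(a, b))
        return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

    dist_close = ans_error_rate(6, 5)   # distance 1
    dist_far = ans_error_rate(9, 5)     # distance 4: fewer errors (distance effect)
    size_small = ans_error_rate(3, 2)   # distance 1, small magnitudes
    size_large = ans_error_rate(9, 8)   # distance 1, large magnitudes (size effect)
    ```

    Checking symbolic data against these two monotone patterns is exactly the kind of systematic test the abstract describes.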

  17. Evaluation of dynamic characteristics of hard rock based on numerical simulations of in situ rock tests

    International Nuclear Information System (INIS)

    Yamagami, Yuya; Ikusada, Koji; Jiang, Yujing

    2009-01-01

    In situ rock tests were conducted on hard conglomerate rock in which steeply dipping discontinuities are dominant. In this study, in order to confirm the validity of the test results and test conditions, and to elucidate the deformation behaviour and the mechanism of shear strength of the rock mass, numerical simulations of the in situ rock tests were performed using the distinct element method. As a result, it was clarified that the behaviour of the rock mass strongly depends on both the geometrical distribution of the discontinuities and their mechanical properties. The series of evaluation processes presented in this study should contribute to improving the reliability of dynamic characteristic evaluation of rock masses. (author)

  18. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
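
    The branch-and-bound idea described above (rigorous lower bounds used to discard regions that cannot contain the global minimum) can be sketched in one dimension with naive interval arithmetic; the objective, tolerance, and interval rules are our own illustrative choices, not the Differential Algebraic machinery of the paper.

    ```python
    def f(x):
        """Multimodal test objective (our own choice)."""
        return x**4 - 4.0 * x**2 + x

    def square_interval(lo, hi):
        """Interval extension of x^2, handling intervals that straddle zero."""
        if lo <= 0.0 <= hi:
            return 0.0, max(lo * lo, hi * hi)
        return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

    def f_lower(lo, hi):
        """Rigorous lower bound for f on [lo, hi] by interval arithmetic."""
        s_lo, s_hi = square_interval(lo, hi)      # range of x^2
        q_lo, _ = square_interval(s_lo, s_hi)     # range of x^4 (nonnegative input)
        return q_lo - 4.0 * s_hi + lo

    def branch_and_bound(lo, hi, tol=1e-4):
        """Discard boxes whose rigorous lower bound exceeds the incumbent."""
        best = f(0.5 * (lo + hi))                 # incumbent upper bound
        boxes = [(lo, hi)]
        while boxes:
            a, b = boxes.pop()
            if f_lower(a, b) > best:              # cannot contain the global minimum
                continue
            best = min(best, f(0.5 * (a + b)))    # tighten incumbent at midpoint
            if b - a > tol:
                mid = 0.5 * (a + b)
                boxes += [(a, mid), (mid, b)]
        return best

    minimum = branch_and_bound(-3.0, 3.0)         # global minimum near x = -1.473
    ```

    Because the lower bounds are rigorous, no box containing the true minimizer is ever discarded, so the search is exhaustive rather than dependent on a lucky starting point, which is the contrast with local optimization drawn in the abstract.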

  19. Volume Holograms in Photopolymers: Comparison between Analytical and Rigorous Theories

    Directory of Open Access Journals (Sweden)

    Augusto Beléndez

    2012-08-01

    Full Text Available There is no doubt that the concept of volume holography has led to an incredibly great amount of scientific research and technological applications. One of these applications is the use of volume holograms as optical memories, and in particular, the use of a photosensitive medium like a photopolymeric material to record information in all its volume. In this work we analyze the applicability of Kogelnik’s Coupled Wave theory to the study of volume holograms recorded in photopolymers. Some of the theoretical models in the literature describing the mechanism of hologram formation in photopolymer materials use Kogelnik’s theory to analyze the gratings recorded in photopolymeric materials. If Kogelnik’s theory cannot be applied, it is necessary to use a more general Coupled Wave theory (CW) or the Rigorous Coupled Wave theory (RCW). The RCW does not incorporate any approximation and thus, since it is rigorous, permits judging the accuracy of the approximations included in Kogelnik’s and CW theories. In this article, a comparison between the predictions of the three theories for phase transmission diffraction gratings is carried out. We have demonstrated the agreement between the predictions of CW and RCW and the validity of Kogelnik’s theory only for gratings with spatial frequencies higher than 500 lines/mm, for the usual values of the refractive index modulation obtained in photopolymers.

  20. Volume Holograms in Photopolymers: Comparison between Analytical and Rigorous Theories

    Science.gov (United States)

    Gallego, Sergi; Neipp, Cristian; Estepa, Luis A.; Ortuño, Manuel; Márquez, Andrés; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto

    2012-01-01

    There is no doubt that the concept of volume holography has led to an incredibly great amount of scientific research and technological applications. One of these applications is the use of volume holograms as optical memories, and in particular, the use of a photosensitive medium like a photopolymeric material to record information in all its volume. In this work we analyze the applicability of Kogelnik’s Coupled Wave theory to the study of volume holograms recorded in photopolymers. Some of the theoretical models in the literature describing the mechanism of hologram formation in photopolymer materials use Kogelnik’s theory to analyze the gratings recorded in photopolymeric materials. If Kogelnik’s theory cannot be applied, it is necessary to use a more general Coupled Wave theory (CW) or the Rigorous Coupled Wave theory (RCW). The RCW does not incorporate any approximation and thus, since it is rigorous, permits judging the accuracy of the approximations included in Kogelnik’s and CW theories. In this article, a comparison between the predictions of the three theories for phase transmission diffraction gratings is carried out. We have demonstrated the agreement between the predictions of CW and RCW and the validity of Kogelnik’s theory only for gratings with spatial frequencies higher than 500 lines/mm, for the usual values of the refractive index modulation obtained in photopolymers.
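
    Kogelnik's closed-form efficiency for a lossless transmission phase grating at Bragg incidence is simple enough to sketch directly; the material parameters below are illustrative, not taken from the article.

    ```python
    import math

    def kogelnik_efficiency(delta_n, thickness, wavelength, cos_theta):
        """Kogelnik first-order diffraction efficiency of a lossless transmission
        phase grating at Bragg incidence: eta = sin^2(pi*dn*d / (lambda*cos(theta)))."""
        nu = math.pi * delta_n * thickness / (wavelength * cos_theta)
        return math.sin(nu) ** 2

    # Photopolymer-like illustrative numbers: dn = 3e-3, d = 50 um, HeNe wavelength
    eta = kogelnik_efficiency(delta_n=3e-3, thickness=50e-6,
                              wavelength=633e-9, cos_theta=0.94)
    ```

    The comparison in the article amounts to asking for which grating parameters this closed form agrees with the exact (RCW) solution; at low spatial frequencies the approximation behind it breaks down.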

  1. The rigorous bound on the transmission probability for massless scalar field of non-negative-angular-momentum mode emitted from a Myers-Perry black hole

    Energy Technology Data Exchange (ETDEWEB)

    Ngampitipan, Tritos, E-mail: tritos.ngampitipan@gmail.com [Faculty of Science, Chandrakasem Rajabhat University, Ratchadaphisek Road, Chatuchak, Bangkok 10900 (Thailand); Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Boonserm, Petarpa, E-mail: petarpa.boonserm@gmail.com [Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Chatrabhuti, Auttakit, E-mail: dma3ac2@gmail.com [Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Visser, Matt, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics, and Operations Research, Victoria University of Wellington, PO Box 600, Wellington 6140 (New Zealand)

    2016-06-02

    Hawking radiation is evidence for the existence of black holes. What an observer can measure through Hawking radiation is the transmission probability. Miniature black holes, it has been suggested, could be generated in the laboratory; such black holes would most commonly be Myers-Perry black holes. In this paper, we derive rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from such a Myers-Perry black hole in six, seven, and eight dimensions. The results show that for low energy, the rigorous bounds increase with the energy of the emitted particles, while for high energy, the bounds decrease with increasing energy. When the black holes spin faster, the rigorous bounds decrease. As for dimension dependence, the rigorous bounds also decrease as the number of extra dimensions increases. Furthermore, in comparison with the approximate transmission probability, the rigorous bound proves to be useful.

  2. The rigorous bound on the transmission probability for massless scalar field of non-negative-angular-momentum mode emitted from a Myers-Perry black hole

    International Nuclear Information System (INIS)

    Ngampitipan, Tritos; Boonserm, Petarpa; Chatrabhuti, Auttakit; Visser, Matt

    2016-01-01

    Hawking radiation is evidence for the existence of black holes. What an observer can measure through Hawking radiation is the transmission probability. Miniature black holes, it has been suggested, could be generated in the laboratory; such black holes would most commonly be Myers-Perry black holes. In this paper, we derive rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from such a Myers-Perry black hole in six, seven, and eight dimensions. The results show that for low energy, the rigorous bounds increase with the energy of the emitted particles, while for high energy, the bounds decrease with increasing energy. When the black holes spin faster, the rigorous bounds decrease. As for dimension dependence, the rigorous bounds also decrease as the number of extra dimensions increases. Furthermore, in comparison with the approximate transmission probability, the rigorous bound proves to be useful.
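
    The kind of rigorous transmission bound discussed here can be illustrated in its simplest one-dimensional form. The sketch below uses a Boonserm-Visser-style bound with a constant trial function for a flat-space square barrier (units ℏ = 2m = 1); the barrier parameters are our own, and this is a toy, not the Myers-Perry greybody computation.

    ```python
    import math

    def exact_transmission(E, V0, L):
        """Exact transmission through a square barrier for E < V0 (hbar = 2m = 1)."""
        kappa = math.sqrt(V0 - E)
        return 1.0 / (1.0 + V0**2 * math.sinh(kappa * L) ** 2 / (4.0 * E * (V0 - E)))

    def transmission_lower_bound(E, V0, L):
        """Boonserm-Visser-style rigorous bound with constant trial function h = k:
        T >= sech^2( (1/(2k)) * integral |V(x)| dx ) = sech^2( V0*L / (2*sqrt(E)) )."""
        return 1.0 / math.cosh(V0 * L / (2.0 * math.sqrt(E))) ** 2

    E, V0, L = 1.0, 3.0, 1.0
    t_exact = exact_transmission(E, V0, L)        # ~0.192
    t_bound = transmission_lower_bound(E, V0, L)  # ~0.181, sits below the exact value
    ```

    At E = V0/2 the bound is saturated (it equals the exact square-barrier transmission), which is one way such bounds "prove to be useful" as sanity checks on approximate greybody factors.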

  3. Finger-Based Numerical Skills Link Fine Motor Skills to Numerical Development in Preschoolers.

    Science.gov (United States)

    Suggate, Sebastian; Stoeger, Heidrun; Fischer, Ursula

    2017-12-01

    Previous studies investigating the association between fine-motor skills (FMS) and mathematical skills have lacked specificity. In this study, we test whether an FMS link to numerical skills is due to the involvement of finger representations in early mathematics. We gave 81 pre-schoolers (mean age of 4 years, 9 months) a set of FMS measures and numerical tasks with and without a specific finger focus. Additionally, we used receptive vocabulary and chronological age as control measures. FMS linked more closely to finger-based than to nonfinger-based numerical skills even after accounting for the control variables. Moreover, the relationship between FMS and numerical skill was entirely mediated by finger-based numerical skills. We concluded that FMS are closely related to early numerical skill development through finger-based numerical counting that aids the acquisition of mathematical mental representations.

  4. Some comments on rigorous quantum field path integrals in the analytical regularization scheme

    Energy Technology Data Exchange (ETDEWEB)

    Botelho, Luiz C.L. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Dept. de Matematica Aplicada]. E-mail: botelho.luiz@superig.com.br

    2008-07-01

    Through the systematic use of the Minlos theorem on the support of cylindrical measures on R^∞, we produce several mathematically rigorous path integrals in interacting euclidean quantum fields with Gaussian free measures defined by generalized powers of the Laplacian operator. (author)

  5. Some comments on rigorous quantum field path integrals in the analytical regularization scheme

    International Nuclear Information System (INIS)

    Botelho, Luiz C.L.

    2008-01-01

    Through the systematic use of the Minlos theorem on the support of cylindrical measures on R ∞ , we produce several mathematically rigorous path integrals in interacting euclidean quantum fields with Gaussian free measures defined by generalized powers of the Laplacian operator. (author)

  6. A plea for rigorous conceptual analysis as central method in transnational law design

    NARCIS (Netherlands)

    Rijgersberg, R.; van der Kaaij, H.

    2013-01-01

    Although shared problems are generally easily identified in transnational law design, it is considerably more difficult to design frameworks that transcend the peculiarities of local law in a univocal fashion. The following exposition is a plea for giving more prominence to rigorous conceptual analysis.

  7. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady-state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
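
    The "single-step reverse particle tracking" idea (a semi-Lagrangian step: trace each grid node's characteristic backwards and interpolate the concentration there) can be sketched for 1D advection; the grid, Courant number, and linear interpolation are our own simplifications.

    ```python
    import numpy as np

    def semi_lagrangian_step(c, courant):
        """Single-step reverse particle tracking for c_t + u c_x = 0 (periodic grid).
        Each node's characteristic is traced back by u*dt (= courant cells) and the
        concentration at the departure point is linearly interpolated."""
        n = c.size
        x_dep = (np.arange(n) - courant) % n        # departure points, grid units
        i0 = np.floor(x_dep).astype(int) % n
        frac = x_dep - np.floor(x_dep)
        return (1.0 - frac) * c[i0] + frac * c[(i0 + 1) % n]

    n_cells, u = 200, 1.0
    dx = 1.0 / n_cells
    x = np.arange(n_cells) * dx
    c = np.exp(-((x - 0.25) ** 2) / (2.0 * 0.03**2))  # Gaussian pulse at x = 0.25
    courant = 1.7                     # Courant number > 1 is fine: semi-Lagrangian
    c0_sum = c.sum()
    for _ in range(50):               # total displacement 50 * 1.7 cells = 0.425
        c = semi_lagrangian_step(c, courant)
    peak = float(x[np.argmax(c)])     # expected near 0.25 + 0.425 = 0.675
    ```

    The unconditional stability for Courant numbers above 1 is exactly the property the abstract claims for the method's advective part.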

  8. Numerical Tests of the Cosmic Censorship Conjecture via Event-Horizon Finding

    Science.gov (United States)

    Okounkova, Maria; Ott, Christian; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    We present the current state of our research on the possibility of naked singularity formation in gravitational collapse, numerically testing both the cosmic censorship conjecture and the hoop conjecture. The former posits that all singularities lie behind an event horizon, while the latter conjectures that this is true if collapse occurs from an initial configuration with all circumferences C ≤ 4πM. We reconsider the classical Shapiro & Teukolsky (1991) prolate spheroid naked singularity scenario. Using the exponentially error-convergent Spectral Einstein Code (SpEC), we simulate the collapse of collisionless matter and probe for apparent horizons. We propose a new method to probe for the existence of an event horizon by following characteristics from regions near the singularity, using methods commonly employed in Cauchy characteristic extraction. This research was partially supported by NSF under Award No. PHY-1404569.

  9. Rigorous analysis of image force barrier lowering in bounded geometries: application to semiconducting nanowires

    International Nuclear Information System (INIS)

    Calahorra, Yonatan; Mendels, Dan; Epstein, Ariel

    2014-01-01

    Bounded geometries introduce a fundamental problem in calculating the image force barrier lowering of metal-wrapped semiconductor systems. In bounded geometries, the derivation of the barrier lowering requires calculating the reference energy of the system, when the charge is at the geometry center. In the following, we formulate and rigorously solve this problem; this allows combining the image force electrostatic potential with the band diagram of the bounded geometry. The suggested approach is applied to spheres as well as cylinders. Furthermore, although the expressions governing cylindrical systems are complex and can only be evaluated numerically, we present analytical approximations for the solution, which allow easy implementation in calculated band diagrams. The results are further used to calculate the image force barrier lowering of metal-wrapped cylindrical nanowires; calculations show that although the image force potential is stronger than that of planar systems, taking the complete band-structure into account results in a weaker effect of barrier lowering. Moreover, when considering small diameter nanowires, we find that the electrostatic effects of the image force exceed the barrier region, and influence the electronic properties of the nanowire core. This study is of interest to the nanowire community, and in particular for the analysis of nanowire I−V measurements where wrapped or omega-shaped metallic contacts are used. (paper)

  10. Turing patterns in parabolic systems of conservation laws and numerically observed stability of periodic waves

    Science.gov (United States)

    Barker, Blake; Jung, Soyeun; Zumbrun, Kevin

    2018-03-01

    Turing patterns on unbounded domains have been widely studied in systems of reaction-diffusion equations. However, up to now, they have not been studied for systems of conservation laws. Here, we (i) derive conditions for Turing instability in conservation laws and (ii) use these conditions to find families of periodic solutions bifurcating from uniform states, numerically continuing these families into the large-amplitude regime. For the examples studied, numerical stability analysis suggests that stable periodic waves can emerge either from supercritical Turing bifurcations or, via secondary bifurcation as amplitude is increased, from subcritical Turing bifurcations. This answers in the affirmative a question of Oh and Zumbrun as to whether stable periodic solutions of conservation laws can occur. Determination of a full small-amplitude stability diagram - specifically, determination of rigorous Eckhaus-type stability conditions - remains an interesting open problem.
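
    For the classical reaction-diffusion setting that the abstract contrasts with conservation laws, the Turing-instability test can be sketched numerically: a uniform state that is stable without diffusion becomes unstable when some wavenumber acquires an eigenvalue with positive real part. The Jacobian and diffusivities below are illustrative, not taken from the paper.

```python
import numpy as np

# Dispersion relation for u_t = diag(D) * Laplacian(u) + f(u), linearized
# about a uniform steady state with reaction Jacobian J: mode k grows
# if J - k^2 * diag(D) has an eigenvalue with positive real part.
def max_growth_rate(J, D, ks):
    return max(np.linalg.eigvals(J - k**2 * np.diag(D)).real.max()
               for k in ks)

J = np.array([[1.0, -2.0],
              [2.0, -3.0]])      # stable without diffusion: tr<0, det>0
D = np.array([1.0, 20.0])        # fast-diffusing inhibitor
ks = np.linspace(0.0, 3.0, 301)
turing_unstable = max_growth_rate(J, D, ks) > 0.0   # True for these values
```

    With equal diffusivities the same Jacobian stays stable at every wavenumber, which is the classical requirement of differential diffusion for Turing instability.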

  11. Numerical investigation of tube hydroforming of TWT using Corner Fill Test

    Science.gov (United States)

    Zribi, Temim; Khalfallah, Ali

    2018-05-01

    Tube hydroforming presents a very good alternative to conventional forming processes for obtaining good quality mechanical parts used in several industrial fields, such as the automotive and aerospace sectors. Research in the field of tube hydroforming is aimed at improving the formability, stiffness and weight reduction of parts manufactured using this process. In recent years, a new hydroforming method has appeared; it consists of forming parts made from welded tubes of different thicknesses. This technique, which contributes to the weight reduction of hydroformed tubes, is a good alternative to conventional tube hydroforming, making it possible to build rigid and light structures at reduced cost. The weight reduction can be improved further by using dissimilar tailor welded tubes (TWT). This paper is a first attempt to analyze by numerical simulation the behavior of TWT hydroformed in square cross-section dies, commonly called the Corner Fill Test. The tubes considered are composed of two materials assembled by butt welding. The analysis focuses on the effect of loading paths on the formability of the structure by determining the change in thickness in several sections of the part. A comparison is made between the results obtained by hydroforming butt-welded tubes of dissimilar materials and those obtained using a single-material tube. Numerical calculations show that the bi-material welded tube has better thinning resistance and a more even thickness distribution in the circumferential direction than the single-material tube.

  12. Supersymmetry and the Parisi-Sourlas dimensional reduction: A rigorous proof

    International Nuclear Information System (INIS)

    Klein, A.; Landau, L.J.; Perez, J.F.

    1984-01-01

    Functional integrals that are formally related to the average correlation functions of a classical field theory in the presence of random external sources are given a rigorous meaning. Their dimensional reduction to the Schwinger functions of the corresponding quantum field theory in two fewer dimensions is proven. This is done by reexpressing those functional integrals as expectations of a supersymmetric field theory. The Parisi-Sourlas dimensional reduction of a supersymmetric field theory to a usual quantum field theory in two fewer dimensions is proven. (orig.)

  13. Numerical simulation of time-dependent deformations under hygral and thermal transient conditions

    International Nuclear Information System (INIS)

    Roelfstra, P.E.

    1987-01-01

    Some basic concepts of numerical simulation of the formation of the microstructure of HCP are outlined. The aim is to replace arbitrary terms like aging by more realistic terms like bond density in the xerogel and bonds between hydrating particles of HCP. Actual state parameters such as temperature, humidity and degree of hydration can be determined under transient hygral and thermal conditions by solving numerically a series of appropriate coupled differential equations with given boundary conditions. Shrinkage of a composite structure without crack formation, based on calculated moisture distributions, has been determined with numerical concrete codes. The influence of crack formation, tensile strain-hardening and softening on the total deformation of a quasi-homogeneous drying material has been studied by means of a model based on the FEM. The difference between shrinkage without crack formation and shrinkage with crack formation can be quantified. Drying shrinkage and creep of concrete cannot be separated. The total deformation depends on the superimposed stress fields. Transient hygral deformation can be realistically predicted if the concept of point properties is applied rigorously. Transient thermal deformation has to be dealt with in the same way. (orig./HP)

  14. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Science.gov (United States)

    Hoľko, Michal; Stacho, Jakub

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. The two types of software differ in the way they create numerical models, model the pile-soil interface, and in the constitutive material models they use. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software uses advanced material models as well as the modelling of the impact of groundwater or overconsolidation. The load-settlement curve calculated using Plaxis is equal to the results of a static load test with a more than 95 % degree of accuracy. In comparison, the load-settlement curve calculated using Ansys yields only an approximate estimate, but the software allows for the common modelling of large structure systems together with a foundation system.

  15. Efficiency versus speed in quantum heat engines: Rigorous constraint from Lieb-Robinson bound

    Science.gov (United States)

    Shiraishi, Naoto; Tajima, Hiroyasu

    2017-08-01

    The long-standing open problem of whether a heat engine with finite power can achieve the Carnot efficiency is investigated. We rigorously prove a general trade-off inequality on thermodynamic efficiency and time interval of a cyclic process with quantum heat engines. As a first step, employing the Lieb-Robinson bound we establish an inequality on the change in a local observable caused by an operation far from the support of the local observable. This inequality provides a rigorous characterization of the following intuitive picture: most of the energy emitted from the engine to the cold bath remains near the engine when the cyclic process is finished. Using this description, we prove an upper bound on efficiency with the aid of quantum information geometry. Our result generally excludes the possibility of a process with finite speed at the Carnot efficiency in quantum heat engines. In particular, the obtained constraint covers engines evolving with non-Markovian dynamics, which almost all previous studies on this topic fail to address.

  16. Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II

    Energy Technology Data Exchange (ETDEWEB)

    George J. Koperna Jr.; Vello A. Kuuskraa; David E. Riestenberg; Aiysha Sultana; Tyler Van Leeuwen

    2009-06-01

    This report serves as the final technical report and user's manual for the 'Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II' SBIR project. Advanced Resources International has developed a screening tool by which users can technically screen, assess the storage capacity and quantify the costs of CO2 storage in four types of CO2 storage reservoirs. These include CO2-enhanced oil recovery reservoirs, depleted oil and gas fields (non-enhanced oil recovery candidates), deep coal seams that are amenable to CO2-enhanced methane recovery, and saline reservoirs. The screening function assesses whether the reservoir could likely serve as a safe, long-term CO2 storage reservoir. The storage capacity assessment uses rigorous reservoir simulation models to determine the timing, ultimate storage capacity, and potential for enhanced hydrocarbon recovery. Finally, the economic assessment function determines both the field-level and pipeline (transportation) costs for CO2 sequestration in a given reservoir. The screening tool was peer reviewed at an Electric Power Research Institute (EPRI) technical meeting in March 2009. A number of useful observations and recommendations emerged from the workshop on the costs of CO2 transport and storage that could be readily incorporated into a commercial version of the screening tool in a Phase III SBIR.

  17. Post mortem rigor development in the Egyptian goose (Alopochen aegyptiacus) breast muscle (pectoralis): factors which may affect the tenderness.

    Science.gov (United States)

    Geldenhuys, Greta; Muller, Nina; Frylinck, Lorinda; Hoffman, Louwrens C

    2016-01-15

    Baseline research on the toughness of Egyptian goose meat is required. This study therefore investigates the post mortem pH and temperature decline (15 min-4 h 15 min post mortem) in the pectoralis muscle (breast portion) of this gamebird species. It also explores the enzyme activity of the Ca(2+)-dependent protease (calpain system) and the lysosomal cathepsins during the rigor mortis period. No differences were found for any of the variables between genders. The pH decline in the pectoralis muscle occurs quite rapidly (c = -0.806; ultimate pH ∼ 5.86) compared with other species and it is speculated that the high rigor temperature (>20 °C) may contribute to the increased toughness. No calpain I was found in Egyptian goose meat and the µ/m-calpain activity remained constant during the rigor period, while a decrease in calpastatin activity was observed. The cathepsin B, B & L and H activity increased over the rigor period. Further research into the connective tissue content and myofibrillar breakdown during aging is required in order to know if the proteolytic enzymes do in actual fact contribute to tenderisation. © 2015 Society of Chemical Industry.

  18. Prediction of the Individual Wave Overtopping Volumes of a Wave Energy Converter using Experimental Testing and First Numerical Model Results

    DEFF Research Database (Denmark)

    Victor, L.; Troch, P.; Kofoed, Jens Peter

    2009-01-01

    For overtopping wave energy converters (WECs) a more efficient energy conversion can be achieved when the volumes of water, wave by wave, that enter their reservoir are known and can be predicted. A numerical tool is being developed using a commercial CFD-solver to study and optimize...... nearshore 2D structure. First numerical model results are given for a specific test with regular waves, and are compared with the corresponding experimental results in this paper.

  19. RIGOROUS PHOTOGRAMMETRIC PROCESSING OF CHANG'E-1 AND CHANG'E-2 STEREO IMAGERY FOR LUNAR TOPOGRAPHIC MAPPING

    OpenAIRE

    K. Di; Y. Liu; B. Liu; M. Peng

    2012-01-01

    Chang'E-1(CE-1) and Chang'E-2(CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D c...

  20. A simple and rational numerical method of two-phase flow with volume-junction model. 2. The numerical method for general condition of two-phase flow in non-equilibrium states

    International Nuclear Information System (INIS)

    Okazaki, Motoaki

    1997-11-01

    In the previous report, the usefulness of a new numerical method to achieve a rigorous numerical calculation using a simple explicit method with the volume-junction model was presented with the verification calculation for the depressurization of a saturated two-phase mixture. In this report, on the basis of the solution method above, a numerical method for general conditions of two-phase flow in non-equilibrium states is presented. In general conditions of two-phase flow, the combinations of saturated and non-saturated conditions of each phase are considered in each flow of volume and junction. Numerical evaluation programs are separately prepared for each combination of flow conditions. Several numerical calculations of various kinds of non-equilibrium two-phase flow are made to examine the validity of the numerical method. Calculated results showed that the thermodynamic states obtained in different solution schemes were consistent with each other. In the first scheme, the states are determined by using the steam table as a function of pressure and specific enthalpy which are obtained as the solutions of simultaneous equations. In the second scheme, density and specific enthalpy of each phase are directly calculated by using conservation equations of mass and enthalpy of each phase, respectively. Further, no accumulation of error in mass and energy was found. As for the specific enthalpy, two cases of using energy equations for the volume are examined. The first case uses the total energy conservation equation and the second case uses the type of the first law of thermodynamics. The results of both cases agreed well. (author)

  1. Large-scale thermal convection of viscous fluids in a faulted system: 3D test case for numerical codes

    Science.gov (United States)

    Magri, Fabien; Cacace, Mauro; Fischer, Thomas; Kolditz, Olaf; Wang, Wenqing; Watanabe, Norihiro

    2017-04-01

    In contrast to simple homogeneous 1D and 2D systems, no appropriate analytical solutions exist to test onset of thermal convection against numerical models of complex 3D systems that account for variable fluid density and viscosity as well as permeability heterogeneity (e.g. presence of faults). Owing to the importance of thermal convection for the transport of energy and minerals, the development of a benchmark test for density/viscosity driven flow is crucial to ensure that the applied numerical models accurately simulate the physical processes at hand. The presented study proposes a 3D test case for the simulation of thermal convection in a faulted system that accounts for temperature dependent fluid density and viscosity. The linear stability analysis recently developed by Malkovsky and Magri (2016) is used to estimate the critical Rayleigh number above which thermal convection of viscous fluids is triggered. The numerical simulations are carried out using the finite element technique. OpenGeoSys (Kolditz et al., 2012) and Moose (Gaston et al., 2009) results are compared to those obtained using the commercial software FEFLOW (Diersch, 2014) to test the ability of widely applied codes in matching both the critical Rayleigh number and the dynamical features of convective processes. The methodology and Rayleigh expressions given in this study can be applied to any numerical model that deals with 3D geothermal processes in faulted basins, as for example the Tiberias Basin (Magri et al., 2016). References Kolditz, O., Bauer, S., Bilke, L., Böttcher, N., Delfs, J. O., Fischer, T., U. J. Görke, T. Kalbacher, G. Kosakowski, McDermott, C. I., Park, C. H., Radu, F., Rink, K., Shao, H., Shao, H.B., Sun, F., Sun, Y., Sun, A., Singh, K., Taron, J., Walther, M., Wang,W., Watanabe, N., Wu, Y., Xie, M., Xu, W., Zehner, B., 2012. OpenGeoSys: an open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Environmental
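
    The onset criterion that such benchmarks test against can be illustrated with the classical homogeneous case: for a horizontal porous layer heated from below (the Horton-Rogers-Lapwood problem), convection starts when the porous-medium Rayleigh number exceeds 4π². The sketch below uses this constant-property criterion with illustrative parameter values only; the paper's analysis additionally handles temperature-dependent density and viscosity and faulted heterogeneity.

```python
import math

# Porous-medium Rayleigh number Ra = rho0*g*beta*dT*k*H / (mu*kappa)
# with permeability k (m^2), layer thickness H (m) and temperature
# difference dT (K); remaining fluid properties are illustrative
# water-like values, not taken from the paper.
def rayleigh(k, H, dT, rho0=1000.0, g=9.81, beta=2.07e-4,
             mu=1.0e-3, kappa=1.0e-6):
    return rho0 * g * beta * dT * k * H / (mu * kappa)

Ra_c = 4.0 * math.pi**2                      # critical value, about 39.5
Ra = rayleigh(k=1.0e-13, H=3000.0, dT=90.0)  # about 54.8 for these values
convects = Ra > Ra_c                         # convective onset predicted
```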

  2. A numerical test of the collective coordinate method

    International Nuclear Information System (INIS)

    Dobrowolski, T.; Tatrocki, P.

    2008-01-01

    The purpose of this Letter is to compare the dynamics of the kink interacting with the imperfection which follows from the collective coordinate method with the numerical results obtained on the basis of the field theoretical model. We showed that for weakly interacting kinks the collective coordinate method works similarly well for low and extremely large speeds.

  3. 75 FR 29732 - Career and Technical Education Program-Promoting Rigorous Career and Technical Education Programs...

    Science.gov (United States)

    2010-05-27

    ... rigorous knowledge and skills in English- language arts and mathematics that employers and colleges expect... specialists and to access the student outcome data needed to meet annual evaluation and reporting requirements...

  4. Rigorous Line-Based Transformation Model Using the Generalized Point Strategy for the Rectification of High Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Kun Hu

    2016-09-01

    Full Text Available High precision geometric rectification of High Resolution Satellite Imagery (HRSI is the basis of digital mapping and Three-Dimensional (3D modeling. Taking advantage of line features as basic geometric control conditions instead of control points, the Line-Based Transformation Model (LBTM provides a practical and efficient way of image rectification. It accurately builds the mathematical relationship between image space and the corresponding object space, while dramatically reducing the workload of ground control and feature recognition. Based on generalization and the analysis of existing LBTMs, a novel rigorous LBTM is proposed in this paper, which can further eliminate the geometric deformation caused by sensor inclination and terrain variation. This improved nonlinear LBTM is constructed based on a generalized point strategy and resolved by least squares overall adjustment. Geo-positioning accuracy experiments with IKONOS, GeoEye-1 and ZiYuan-3 satellite imagery are performed to compare the rigorous LBTM with other relevant line-based and point-based transformation models. Both theoretic analysis and experimental results demonstrate that the rigorous LBTM is more accurate and reliable without adding extra ground control. The geo-positioning accuracy of satellite imagery rectified by the rigorous LBTM can reach about one pixel with eight control lines and can be further improved by optimizing the horizontal and vertical distribution of control lines.

  5. Numerical analysis targets

    International Nuclear Information System (INIS)

    Sollogoub, Pierre

    2001-01-01

    Numerical analyses are needed in different steps of the overall design process. Complex models or non-linear reactor core behaviour are important for qualification and/or comparison of results obtained. Adequate models and test should be defined. Fuel assembly, fuel row, and the complete core should be tested for seismic effects causing LOCA and flow-induced vibrations (FIV)

  6. Development of an in-plane biaxial test for forming limit curve (FLC) characterization of metallic sheets

    International Nuclear Information System (INIS)

    Zidane, I; Guines, D; Léotoing, L; Ragneau, E

    2010-01-01

    The main objective of this work is to propose a new experimental device able to give for a single specimen a good prediction of rheological parameters and formability under static and dynamic conditions (for intermediate strain rates). In this paper, we focus on the characterization of sheet metal forming. The proposed device is a servo-hydraulic testing machine provided with four independent dynamic actuators allowing biaxial tensile tests on cruciform specimens. The formability is evaluated thanks to the classical forming limit diagram (FLD), and one of the difficulties of this study was the design of a dedicated specimen for which the necking phenomenon appears in its central zone. If necking is located in the central zone of the specimen, then the speed ratio between the two axes controls the strain path in this zone and a whole forming limit curve can be covered. Such a specimen is proposed through a numerical and experimental validation procedure. A rigorous procedure for the detection of numerical and experimental forming strains is also presented. Finally, an experimental forming limit curve is determined and validated for an aluminium alloy dedicated to the sheet forming processes (AA5086)

  7. Numerical simulations of tests of masonry walls made from ceramic blocks using a detailed finite element model

    Directory of Open Access Journals (Sweden)

    V. Salajka

    2017-01-01

    Full Text Available This article deals with an analysis of the behaviour of brick ceramic walls. The behaviour of the walls was analysed experimentally in order to obtain their bearing capacity under static loading and their seismic resistance. Simultaneously, numerical simulations of the experiments were carried out in order to obtain additional information on the behaviour of masonry walls made of ceramic blocks. The results of the geometrically and materially nonlinear computations were compared to the results of the performed tests.

  8. Numerical-experimental investigation of load paths in DP800 dual phase steel during Nakajima test

    Science.gov (United States)

    Bergs, Thomas; Nick, Matthias; Feuerhack, Andreas; Trauth, Daniel; Klocke, Fritz

    2018-05-01

    Fuel efficiency requirements demand lightweight construction of vehicle body parts. The usage of advanced high strength steels permits a reduction of sheet thickness while still maintaining the overall strength required for crash safety. However, damage, internal defects (voids, inclusions, micro fractures), microstructural defects (varying grain size distribution, precipitates on grain boundaries, anisotropy) and surface defects (micro fractures, grooves) act as a concentration point for stress and consequently as an initiation point for failure both during deep drawing and in service. Considering damage evolution in the design of car body deep drawing processes allows for a further reduction in material usage and therefore body weight. Preliminary research has shown that a modification of load paths in forming processes can help mitigate the effects of damage on the material. This paper investigates the load paths in Nakajima tests of a DP800 dual phase steel to research damage in deep drawing processes. Investigation is done via a finite element model using experimentally validated material data for a DP800 dual phase steel. Numerical simulation allows for the investigation of load paths with respect to stress states, strain rates and temperature evolution, which cannot be easily observed in physical experiments. Stress triaxiality and the Lode parameter are used to describe the stress states. Their evolution during the Nakajima tests serves as an indicator for damage evolution. The large variety of sheet metal forming specific load paths in Nakajima tests allows a comprehensive evaluation of damage for deep drawing. The results of the numerical simulation conducted in this project and further physical experiments will later be used to calibrate a damage model for simulation of deep drawing processes.

  9. Numerical simulation of flood barriers

    Science.gov (United States)

    Srb, Pavel; Petrů, Michal; Kulhavý, Petr

    This paper deals with the testing and numerical simulation of flood barriers. The Czech Republic has been hit by several very devastating floods in past years. These floods caused several dozen casualties, and property damage reached billions of Euros. The development of flood control measures is very important, especially for reducing the number of casualties and the amount of property damage. The aim of flood control measures is the detention of water outside populated areas and the drainage of water from populated areas as soon as possible. For a new flood barrier design it is very important to know its behaviour in case of a real flood. During the development of the barrier several standardized tests have to be carried out. Based on the results from these tests, a numerical simulation was compiled using Abaqus software and some analyses were carried out. Based on these numerical simulations it will be possible to predict the behaviour of barriers and thus improve their design.

  10. Rigorous patient-prosthesis matching of Perimount Magna aortic bioprosthesis.

    Science.gov (United States)

    Nakamura, Hiromasa; Yamaguchi, Hiroki; Takagaki, Masami; Kadowaki, Tasuku; Nakao, Tatsuya; Amano, Atsushi

    2015-03-01

    Severe patient-prosthesis mismatch, defined as effective orifice area index ≤0.65 cm(2) m(-2), has demonstrated poor long-term survival after aortic valve replacement. Reported rates of severe mismatch involving the Perimount Magna aortic bioprosthesis range from 4% to 20% in patients with a small annulus. Between June 2008 and August 2011, 251 patients (mean age 70.5 ± 10.2 years; mean body surface area 1.55 ± 0.19 m(2)) underwent aortic valve replacement with a Perimount Magna bioprosthesis, with or without concomitant procedures. We performed our procedure with rigorous patient-prosthesis matching to implant a valve appropriately sized to each patient, and carried out annular enlargement when a 19-mm valve did not fit. The bioprosthetic performance was evaluated by transthoracic echocardiography predischarge and at 1 and 2 years after surgery. Overall hospital mortality was 1.6%. Only 5 (2.0%) patients required annular enlargement. The mean follow-up period was 19.1 ± 10.7 months with a 98.4% completion rate. Predischarge data showed a mean effective orifice area index of 1.21 ± 0.20 cm(2) m(-2). Moderate mismatch, defined as effective orifice area index ≤0.85 cm(2) m(-2), developed in 4 (1.6%) patients. None developed severe mismatch. Data at 1 and 2 years showed only two cases of moderate mismatch; neither was severe. Rigorous patient-prosthesis matching maximized the performance of the Perimount Magna, and no severe mismatch resulted in this Japanese population of aortic valve replacement patients. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
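
    The mismatch definitions quoted above reduce to a one-line calculation: the effective orifice area index (EOAi) is the effective orifice area divided by body surface area, with severe mismatch at EOAi <= 0.65 cm2 m-2 and moderate at EOAi <= 0.85 cm2 m-2. The sketch below applies these thresholds; the example EOA value is hypothetical, not from the study.

```python
# EOAi = effective orifice area (cm^2) / body surface area (m^2);
# the grading thresholds are those quoted in the abstract.
def eoai(eoa_cm2, bsa_m2):
    return eoa_cm2 / bsa_m2

def mismatch_grade(index):
    if index <= 0.65:
        return "severe"
    if index <= 0.85:
        return "moderate"
    return "none"

# Hypothetical patient: EOA 1.3 cm^2 at the cohort's mean BSA of 1.55 m^2.
grade = mismatch_grade(eoai(1.3, 1.55))   # 1.3/1.55 ~ 0.84 -> "moderate"
```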

  11. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
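
    As a point of reference for the comparison above, the multinomial logit model assigns choice probabilities as a softmax of option utilities. This is a minimal sketch of that baseline only (MDFT itself is a sequential sampling process, not shown here); the utility values are illustrative.

```python
import numpy as np

# Multinomial logit: P(choose i) = exp(u_i) / sum_j exp(u_j).
def logit_choice_probs(utilities):
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())        # shift by the max for numerical stability
    return e / e.sum()

p = logit_choice_probs([1.0, 2.0, 0.5])   # highest utility -> highest P
```

    Unlike MDFT, this evaluation is independent of the other options' positions in attribute space, which is why context effects such as those observed in Study 2 can distinguish the models.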

  12. Criteria for the reliability of numerical approximations to the solution of fluid flow problems

    International Nuclear Information System (INIS)

    Foias, C.

    1986-01-01

    The numerical approximation of the solutions of fluid flow models is a difficult problem in many cases of energy research. In all numerical methods implementable on digital computers, a basic question is whether the number N of elements (Galerkin modes, finite-difference cells, finite-elements, etc.) is sufficient to describe the long time behavior of the exact solutions. It was shown using several approaches that some of the estimates of N based on physical intuition are rigorously valid under very general conditions and follow directly from the mathematical theory of the Navier-Stokes equations. Among the mathematical approaches to these estimates, the most promising (which can be and was already applied to many other dissipative partial differential systems) consists in giving upper estimates to the fractal dimension of the attractor associated to one (or all) solution(s) of the respective partial differential equations. 56 refs

  13. Reactor numerical simulation and hydraulic test research

    International Nuclear Information System (INIS)

    Yang, L. S.

    2009-01-01

    In recent years, improvements in computer hardware have advanced the numerical simulation of the flow field in the reactor. In our laboratory, we usually use the commercial software Pro/E or UG. After the topology geometry is completed, ICEM-CFD is used to generate the mesh for computation. Exact geometrical similarity is maintained between the main flow paths of the model and the prototype, with the exception of the core simulation design of the fuel assemblies. The drive line system is composed of the drive mechanism, guide bush assembly, fuel assembly and control rod assembly, and is fitted with the rod level indicator and drive mechanism power device.

  14. Bringing scientific rigor to community-developed programs in Hong Kong

    Directory of Open Access Journals (Sweden)

    Fabrizio Cecilia S

    2012-12-01

    Background: This paper describes efforts to generate evidence for community-developed programs to enhance family relationships in the Chinese culture of Hong Kong, within the framework of community-based participatory research (CBPR). Methods: The CBPR framework was applied to help maximize the development of the intervention and the public health impact of the studies, while enhancing the capabilities of the social service sector partners. Results: Four academic-community research teams explored the process of designing and implementing randomized controlled trials in the community. In addition to the expected cultural barriers between teams of academics and community practitioners, with their different outlooks, concerns and languages, the team navigated issues in utilizing the principles of CBPR unique to this Chinese culture. Eventually the team developed tools for adaptation, such as an emphasis on building the relationship while respecting role delineation, and an iterative process of defining the non-negotiable parameters of research design while maintaining scientific rigor. Lessons learned include the risk of underemphasizing the size of the operational and skills shift between usual agency practices and research studies, the importance of minimizing non-negotiable parameters in implementing rigorous research designs in the community, and the need to view community capacity enhancement as a long-term process. Conclusions: The four pilot studies under the FAMILY Project demonstrated that nuanced design adaptations, such as wait-list controls and shorter assessments, better served the needs of the community and led to the successful development and rigorous evaluation of a series of preventive, family-oriented interventions in the Chinese culture of Hong Kong.

  15. Constitutional and legal development and case law of the principle of subsidiary rigor

    Directory of Open Access Journals (Sweden)

    Germán Eduardo Cifuentes Sandoval

    2013-09-01

    In Colombia, state administration of the environment is carried out through the National Environmental System (SINA). SINA is made up of state entities that coexist under a mixed scheme of centralization and decentralization. SINA's decentralization expresses itself at the administrative and territorial levels, and the entities that function under this structure are expected to act in a coordinated way in order to reach the objectives set out in the national environmental policy. To achieve coordinated environmental administration among the entities that make up SINA, Colombian environmental legislation includes three basic principles: 1. the principle of regional harmony ("armonía regional"); 2. the principle of normative gradation ("gradación normativa"); 3. the principle of subsidiary rigor ("rigor subsidiario"). These principles belong to Article 63 of Law 99 of 1993, and although, for the first two, it is possible to find equivalents in other norms of the Colombian legal system, this is not so for subsidiary rigor, because its elements are unique to environmental law and are not similar to those that make up the principle of subsidiarity found in Article 288 of the Political Constitution. Subsidiary rigor gives decentralized entities a special power to make the current environmental legislation stricter in defense of the local ecological patrimony. It is an administrative power, grounded in the autonomy that comes with decentralization, that allows them to take the place of the regulatory power of the legislature, on the condition that the new regulation be more demanding than that issued at the central level

  16. Bringing scientific rigor to community-developed programs in Hong Kong.

    Science.gov (United States)

    Fabrizio, Cecilia S; Hirschmann, Malia R; Lam, Tai Hing; Cheung, Teresa; Pang, Irene; Chan, Sophia; Stewart, Sunita M

    2012-12-31

    This paper describes efforts to generate evidence for community-developed programs to enhance family relationships in the Chinese culture of Hong Kong, within the framework of community-based participatory research (CBPR). The CBPR framework was applied to help maximize the development of the intervention and the public health impact of the studies, while enhancing the capabilities of the social service sector partners. Four academic-community research teams explored the process of designing and implementing randomized controlled trials in the community. In addition to the expected cultural barriers between teams of academics and community practitioners, with their different outlooks, concerns and languages, the team navigated issues in utilizing the principles of CBPR unique to this Chinese culture. Eventually the team developed tools for adaptation, such as an emphasis on building the relationship while respecting role delineation, and an iterative process of defining the non-negotiable parameters of research design while maintaining scientific rigor. Lessons learned include the risk of underemphasizing the size of the operational and skills shift between usual agency practices and research studies, the importance of minimizing non-negotiable parameters in implementing rigorous research designs in the community, and the need to view community capacity enhancement as a long-term process. The four pilot studies under the FAMILY Project demonstrated that nuanced design adaptations, such as wait-list controls and shorter assessments, better served the needs of the community and led to the successful development and rigorous evaluation of a series of preventive, family-oriented interventions in the Chinese culture of Hong Kong.

  17. Numerical Model to Quantify the Influence of the Cellulosic Substrate on the Ignition Propensity Tests

    Directory of Open Access Journals (Sweden)

    Guindos Pablo

    2016-07-01

    A numerical model based on the finite element method has been constructed to simulate the ignition propensity (IP) tests. The objective of this mathematical model was to quantify the influence of different characteristics of the cellulosic substrate on the results of the IP-tests. The creation and validation of the model included the following steps: (i) formulation of the model based on experimental thermodynamic characteristics of the cellulosic substrate; (ii) calibration of the model according to cone calorimeter tests; (iii) validation of the model through mass loss and temperature profiling during IP-testing. Once the model was validated, the influence of each isolated parameter of the cellulosic substrate was quantified via a parametric study. The results revealed that the substrate heat capacity, the cigarette temperature and the pyrolysis activation energy are the most influential parameters for the thermodynamic response of the substrates, while other parameters such as the heat of the pyrolysis reaction and the density and roughness of the substrate showed little influence. The results also indicated that the thermodynamic mechanisms involved in the pyrolysis and combustion of the cellulosic substrate are complex and show low repeatability, which might impair the reliability of the IP-tests.
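The reported sensitivity to the pyrolysis activation energy can be illustrated with a first-order Arrhenius mass-loss model (all parameter values below are hypothetical, not those of the paper):

```python
import math

def residual_mass(T, A=1.0e10, E=1.5e5, R=8.314, t_end=10.0, dt=1.0e-3):
    """Explicit-Euler integration of first-order Arrhenius mass loss
    dm/dt = -A * exp(-E / (R * T)) * m at a constant temperature T [K].
    Parameter values are hypothetical, chosen only to illustrate the strong
    sensitivity to the activation energy E noted in the abstract."""
    rate = A * math.exp(-E / (R * T))
    m, t = 1.0, 0.0
    while t < t_end:
        m -= rate * m * dt       # explicit Euler step (rate * dt << 1 here)
        t += dt
    return m

m_hot = residual_mass(800.0)     # roughly smoldering-coal temperatures
m_cool = residual_mass(600.0)    # 200 K cooler: almost no mass loss
```

Because the temperature enters through an exponential, a modest change in T or E shifts the reaction rate by orders of magnitude, which is why these parameters dominate the parametric study.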

  18. Numerical Transducer Modeling

    DEFF Research Database (Denmark)

    Henriquez, Vicente Cutanda

    This thesis describes the development of a numerical model of the propagation of sound waves in fluids with viscous and thermal losses, with application to the simulation of acoustic transducers, in particular condenser microphones for measurement. The theoretical basis is presented, numerical...... manipulations are developed to satisfy the more complicated boundary conditions, and a model of a condenser microphone with a coupled membrane is developed. The model is tested against measurements of ¼ inch condenser microphones and analytical calculations. A detailed discussion of the results is given....

  19. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    International Nuclear Information System (INIS)

    Meyers, M.D.; Huang, C.-K.; Zeng, Y.; Yi, S.A.; Albright, B.J.

    2015-01-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models

  20. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    Science.gov (United States)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.
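For orientation, the 1-D vacuum limit of the Yee/leapfrog field solve obeys the textbook numerical dispersion relation sin(wdt/2)/(c dt) = sin(k dx/2)/dx, which is dispersionless at the CFL limit c dt = dx; the full 3-D PIC relation derived in the paper adds particle response and alias terms on top of this. A sketch:

```python
import math

def numerical_omega(k, dx, dt, c=1.0):
    """Numerical frequency of the 1-D vacuum Yee/leapfrog scheme, from the
    textbook relation sin(w*dt/2)/(c*dt) = sin(k*dx/2)/dx. (The paper's full
    3-D PIC dispersion relation also includes particle and alias terms.)"""
    s = c * dt / dx * math.sin(k * dx / 2.0)
    return (2.0 / dt) * math.asin(s)

dx = 0.1
k = 2.0 * math.pi / (20.0 * dx)                 # a wave resolved by 20 cells
w_magic = numerical_omega(k, dx, dt=dx)         # CFL = 1: exact, w = c*k
w_coarse = numerical_omega(k, dx, dt=0.5 * dx)  # CFL = 0.5: phase velocity < c
```

The subluminal numerical phase velocity at CFL < 1 is one ingredient of the instabilities discussed above: it lets numerical light modes resonate with a relativistic drifting beam and with alias modes.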

  1. PORFLO - a continuum model for fluid flow, heat transfer, and mass transport in porous media. Model theory, numerical methods, and computational tests

    International Nuclear Information System (INIS)

    Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.

    1985-09-01

    Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report, along with a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principles of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or they can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparing simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs
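The kind of verification described, comparing a numerical solution against an analytic one, can be illustrated for the simplest parabolic case, 1-D heat conduction with a step boundary condition (a generic sketch, not the PORFLO formulation):

```python
import math

def heat_step_error(nx=200, L=1.0, alpha=1.0, t_end=0.01):
    """Explicit FTCS solution of u_t = alpha * u_xx on [0, L] with u(0,t) = 1
    and u(x,0) = 0, compared against the semi-infinite analytic solution
    u = erfc(x / (2*sqrt(alpha*t))). Returns the maximum pointwise error."""
    dx = L / nx
    dt = 0.25 * dx * dx / alpha          # well below the 0.5 stability limit
    u = [0.0] * (nx + 1)
    u[0] = 1.0
    t = 0.0
    while t < t_end:
        u = ([1.0]
             + [u[i] + alpha * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, nx)]
             + [0.0])
        t += dt
    exact = [math.erfc(i * dx / (2.0 * math.sqrt(alpha * t)))
             for i in range(nx + 1)]
    return max(abs(a - b) for a, b in zip(u, exact))

err = heat_step_error()
```

At this early time the diffusion front has not reached the far boundary, so the finite domain approximates the semi-infinite analytic problem well and the residual error reflects only the discretization.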

  2. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of infinite medium hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.

  3. Post-test comparison of thermal-hydrologic measurements and numerical predictions for the in situ single heater test, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Ballard, S.; Francis, N.D.; Sobolik, S.R.; Finley, R.E.

    1998-01-01

    The Single Heater Test (SHT) is a sixteen-month-long heating and cooling experiment begun in August 1996, located underground within the unsaturated zone near the potential geologic repository at Yucca Mountain, Nevada. During the 9-month heating phase of the test, roughly 15 m³ of rock were raised to temperatures exceeding 100 °C. In this paper, temperatures measured in sealed boreholes surrounding the heater are compared to temperatures predicted by 3D thermal-hydrologic calculations performed with a finite difference code. Three separate model runs using different values of bulk rock permeability (4 microdarcy to 5.2 darcy) yielded significantly different predicted temperatures and temperature distributions. All the models differ from the data, suggesting that to accurately model the thermal-hydrologic behavior of the SHT, the Equivalent Continuum Model (ECM), the conceptual basis for dealing with the fractured porous medium in the numerical predictions, should be discarded in favor of more sophisticated approaches

  4. Numerical modeling of Thermal Response Tests in Energy Piles

    Science.gov (United States)

    Franco, A.; Toledo, M.; Moffat, R.; Herrera, P. A.

    2013-05-01

    conductivity of the soil is the most determinant parameter that affects the estimated thermal conductivity. For example, we observed differences of up to 50% from the expected value at the end of 100 hours of simulation for values of thermal conductivity of the soil in the range of 1 to 6 W/mK. Additionally, we observed that the results of the synthetic TRT depend upon several other parameters, such as the boundary conditions used to model the interaction of the top face of the pile with the surrounding media. For example, simulations with a constant-temperature boundary condition tended to overestimate the total thermal conductivity of the whole system. This analysis demonstrates that numerical modeling is a useful tool for modeling energy pile systems and for interpreting and designing tests to evaluate their performance. Furthermore, it reveals that the results of thermal response tests interpreted with analytical models must be evaluated with care when assessing the potential of low-enthalpy systems, because their results depend upon a variety of factors which are neglected in the analytical models.
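The analytical interpretation the authors caution about is typically the infinite line source model, in which the late-time temperature rises linearly in ln(t) and the slope yields the thermal conductivity; a sketch with synthetic data (all parameter values hypothetical):

```python
import math

def line_source_temperature(t, q=50.0, k=2.0, a=1.0e-6, r=0.05, T0=10.0):
    """Late-time infinite line source: T(t) ~ T0 + q/(4*pi*k) *
    (ln(4*a*t/r^2) - gamma), with q in W/m, k in W/(m K), a the ground
    thermal diffusivity and r the borehole radius. Values are hypothetical."""
    gamma = 0.5772156649015329           # Euler-Mascheroni constant
    return T0 + q / (4.0 * math.pi * k) * (math.log(4.0 * a * t / r ** 2) - gamma)

def estimate_conductivity(times, temps, q):
    """Recover k from the least-squares slope of T against ln(t), using
    slope = q / (4*pi*k)."""
    x = [math.log(t) for t in times]
    n = len(x)
    mx, my = sum(x) / n, sum(temps) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, temps))
             / sum((xi - mx) ** 2 for xi in x))
    return q / (4.0 * math.pi * slope)

# Synthetic TRT: hours 10..99 of heating, then fit the ln(t) slope.
times = [3600.0 * hr for hr in range(10, 100)]
temps = [line_source_temperature(t) for t in times]
k_est = estimate_conductivity(times, temps, q=50.0)
```

On synthetic line-source data the fit recovers k exactly; the point of the record above is that real energy-pile data violate the line-source assumptions (finite pile, top boundary, grout), so the same fit can be badly biased.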

  5. The influence of low temperature, type of muscle and electrical stimulation on the course of rigor mortis, ageing and tenderness of beef muscles.

    Science.gov (United States)

    Olsson, U; Hertzman, C; Tornberg, E

    1994-01-01

    The course of rigor mortis, ageing and tenderness have been evaluated for two beef muscles, M. semimembranosus (SM) and M. longissimus dorsi (LD), when entering rigor at constant temperatures in the cold-shortening region (1, 4, 7 and 10°C). The influence of electrical stimulation (ES) was also examined. Post-mortem changes were registered by shortening and isometric tension and by following the decline of pH, ATP and creatine phosphate. The effect of ageing on tenderness was recorded by measuring shear-force (2, 8 and 15 days post mortem) and the sensory properties were assessed 15 days post mortem. It was found that shortening increased with decreasing temperature, resulting in decreased tenderness. Tenderness for LD, but not for SM, was improved by ES at 1 and 4°C, whereas ES did not give rise to any decrease in the degree of shortening during rigor mortis development. This suggests that ES influences tenderization more than it prevents cold-shortening. The samples with a pre-rigor mortis temperature of 1°C could not be tenderized, when stored up to 15 days, whereas this was the case for the muscles entering rigor mortis at the other higher temperatures. The results show that under the conditions used in this study, the course of rigor mortis is more important for the ultimate tenderness than the course of ageing. Copyright © 1994. Published by Elsevier Ltd.

  6. Rigor index, fillet yield and proximate composition of cultured striped catfish (Pangasianodon hypophthalmus) for its suitability in processing industries in Bangladesh

    Directory of Open Access Journals (Sweden)

    Salma Noor-E Islami

    2014-12-01

    Rigor-index in market-size striped catfish (Pangasianodon hypophthalmus), locally called Thai-Pangas, was determined to assess fillet yield for production of value-added products. In whole fish, rigor started within 1 hr after death under both iced and room-temperature conditions, while the rigor-index reached a maximum of 72.23% within 8 hr at room temperature and 85.5% within 5 hr under iced conditions, and was fully relaxed after 22 hr under both storage conditions. Post-mortem muscle pH decreased to 6.8 after 2 hr and 6.2 after 8 hr, then increased sharply to 6.9 after 9 hr. There was a positive correlation between rigor progress and pH shift in fish fillets. Hand filleting was done post-rigor, and the fillet yield experiment showed that 50.4±2.1% fillet, 8.0±0.2% viscera, 8.0±1.3% skin and 32.0±3.2% carcass could be obtained from Thai-Pangas. Proximate composition analysis of four regions of Thai-Pangas, viz. the head region, middle region, tail region and viscera, revealed moisture 78.36%, 81.14%, 81.45% and 57.33%; protein 15.83%, 15.97%, 16.14% and 17.20%; lipid 4.61%, 1.82%, 1.32% and 24.31%; and ash 1.09%, 0.96%, 0.95% and 0.86%, respectively, indicating the suitability of Thai-Pangas for the production of value-added products such as fish fillets.

  7. Tests of a numerical algorithm for the linear instability study of flows on a sphere

    Energy Technology Data Exchange (ETDEWEB)

    Perez Garcia, Ismael; Skiba, Yuri N [Univerisidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico)

    2001-04-01

    A numerical algorithm for the normal mode instability of a steady nondivergent flow on a rotating sphere is developed. The accuracy of the algorithm is tested with zonal solutions of the nonlinear barotropic vorticity equation (Legendre polynomials, zonal Rossby-Haurwitz waves and monopole modons).

  8. Influence of transport stress and slaughter method on rigor mortis of tambaqui (Colossoma macropomum)

    Directory of Open Access Journals (Sweden)

    Joana Maia Mendes

    2015-06-01

    This study evaluated the influence of pre-slaughter stress and of the slaughter method on the rigor mortis of tambaqui during storage in ice. Physiological stress responses of tambaqui were studied during the pre-slaughter period, which was divided into four stages: harvest, transport, recovery for 24 h, and recovery for 48 h. At the end of each stage, fish were sampled to characterize pre-slaughter stress through analyses of plasma glucose, lactate and ammonia, and were then slaughtered by hypothermia or by asphyxia with carbon dioxide for the study of rigor mortis. The physiological stress state of the fish was most acute right after transport, implying a faster onset of rigor mortis: 60 minutes for tambaqui slaughtered by hypothermia and 120 minutes for tambaqui slaughtered by carbon dioxide asphyxia. Fish slaughtered right after harvest from the ponds showed an intermediate stress state, with no difference in the time of rigor mortis onset between slaughter methods (135 minutes). Fish that were allowed to recover from transport stress under simulated industry conditions showed a later onset of rigor mortis: 225 minutes (with 24 h of recovery) and 255 minutes (with 48 h of recovery), likewise with no difference between the slaughter methods tested. The resolution of rigor mortis was fastest, 12 days, in fish slaughtered after transport. In fish slaughtered right after harvest, resolution occurred at 16 days and, in fish slaughtered after recovery, at 20 days for 24 h of recovery from pre-slaughter stress and 24 days for 48 h of recovery, with no influence of the slaughter method on rigor mortis resolution. Thus, it is desirable that the slaughter of tambaqui destined for industry be done after a period of recovery from stress, with a view to increasing its

  9. Numerical investigation for combustion characteristics of vacuum residue (VR) in a test furnace

    International Nuclear Information System (INIS)

    Sreedhara, S.; Huh, Kang Y.; Park, Hoyoung

    2007-01-01

    The current worldwide energy crisis has made the search for alternative fuels inevitable. In this paper, the combustion characteristics of vacuum residue (VR) are investigated numerically against experimental data in typical operating conditions of a furnace. The heat release reaction is modeled as sequential steps of devolatilization, simplified gas-phase reaction and char oxidation, as for pulverized coal. Thermal and fuel NO are predicted by the conditional moment closure (CMC) method for estimation of elementary reaction rates. It turns out that the Sauter mean diameter (SMD) of the VR droplets is a crucial parameter for better combustion efficiency and lower NO. Reasonable agreement is achieved for spatial distributions of major species, temperature and NO for all test cases with different fuel and steam flow rates

  10. Numerical study of thermal test of a cask of transportation for radioactive material; Estudo numérico do ensaio térmico de um casco de transporte para material radioativo

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Tiago A.S.; Santos, André A.C. dos; Vidal, Guilherme A.M.; Silva Junior, Geraldo E., E-mail: tiago.vieira.eng@gmail.com, E-mail: gvidal.ufmg@gmail.com, E-mail: aacs@cdtn.br, E-mail: geraldo.esilva@yahoo.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    In this study, numerical simulations of a transport cask for radioactive material were performed, and the numerical results were compared with experimental results of tests carried out on two different occasions. A mesh study was also made of the previously designed geometry of the same cask, in order to evaluate its impact on the stability of the numerical results for this type of problem. The comparison of the numerical and experimental results made it possible to evaluate the need to plan and carry out a new test in order to validate the CFD codes used in the numerical simulations.

  11. Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications

    Directory of Open Access Journals (Sweden)

    Vassilis Gikas

    2016-08-01

    With the rapid growth in smartphone technologies and improvement in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations), based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests, realized by comparing smartphone-obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open-sky conditions. Moreover, it appears that the iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for the HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of
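The trueness/precision distinction used here (systematic offset from ground truth versus dispersion about the mean, in the ISO 5725 sense) can be computed directly from a set of position errors; a sketch with invented values:

```python
import math

def trueness_and_precision(errors):
    """Split accuracy into trueness (mean offset from ground truth, i.e. the
    systematic error) and precision (sample standard deviation about that
    mean, i.e. the random error)."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical cross-track position errors [m] from one receiver on the same
# route, in open-sky and obscured segments (values invented for illustration).
open_sky = [0.4, 0.6, 0.5, 0.3, 0.7, 0.5, 0.4, 0.6]
obscured = [1.1, 0.8, 1.4, 0.9, 1.3, 1.0, 1.2, 0.7]
t_open, p_open = trueness_and_precision(open_sky)
t_obs, p_obs = trueness_and_precision(obscured)
```

Keeping the two components separate is what lets a study attribute degradation in obscured environments to bias, dispersion, or both, rather than to a single lumped "accuracy" figure.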

  12. Numerical modelling of fire propagation: principles and applications at Electricite de France

    International Nuclear Information System (INIS)

    Rongere, F.X.; Gibault, J.

    1994-05-01

    Electricite de France, wishing to limit the accidental unavailability of its nuclear plants and to ensure their safety rigorously, takes particular care to reduce the risk of fire. In this context, the Heat Transfer and Aerodynamics Branch of the Research and Development Division has been in charge of designing numerical tools to simulate fire propagation in buildings since 1985. Its program is organized along three axes: the development of the MAGIC software program, the characterization of the combustibles present in power plants, and the development of methods for using the computer codes in the design of plants. This paper gives an overview of the activity in progress in these research fields. It also illustrates the applications, performed and anticipated at Electricite de France, of numerical simulation in fire safety design. We close with a discussion of the limitations of these tools and of the factors governing their wider use, one of which is the combination of the MAGIC software with the FIVE method. (authors). 15 refs., 10 figs., 2 tabs

  13. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    Science.gov (United States)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) → (−x, −y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
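The bounds above are on infinite-time averages, so a finite-time numerical average along a chaotic trajectory should stay consistent with them; a consistency check (not a proof) at the standard parameters:

```python
def lorenz_rhs(s, sigma=10.0, r=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system at the standard parameters."""
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - beta * z)

def rk4_step(s, h):
    """One fixed-step fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = lorenz_rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = lorenz_rhs(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Finite-time averages of z^3 and x*y^3 along one chaotic trajectory.
s, h = (1.0, 1.0, 1.0), 0.005
for _ in range(2000):            # discard the transient
    s = rk4_step(s, h)
n = 200_000                      # 1000 time units of averaging
mean_z3 = mean_xy3 = 0.0
for _ in range(n):
    s = rk4_step(s, h)
    mean_z3 += s[2] ** 3 / n
    mean_xy3 += s[0] * s[1] ** 3 / n
```

The finite-time estimates respect the two sharp bounds quoted above: the average of z^3 stays below (r-1)^3 = 19683 and the average of xy^3 comes out nonnegative.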

  14. Born approximation to a perturbative numerical method for the solution of the Schrodinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-05-01

    A perturbative numerical (PN) method is given for the solution of a regular one-dimensional Cauchy problem arising from the Schroedinger equation. The present method uses a step-function approximation for the potential. Global forward and backward PN algorithms, free of scaling difficulties, are derived within first-order perturbation theory (Born approximation). A rigorous analysis of the local truncation errors is performed, showing that the order of accuracy of the method is four. In between the mesh points, the global formula for the wavefunction is accurate within O(h^4), while that for the first-order derivative is accurate within O(h^3). (author)
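The step-function approximation of the potential that the PN method starts from also underlies the classic transfer-matrix treatment of piecewise-constant potentials; a sketch of that simpler relative (transfer matrices rather than the paper's perturbative algorithm):

```python
import cmath

def transmission(E, V_steps, dx, m=1.0, hbar=1.0):
    """Plane-wave transmission through a potential sampled as constant steps
    of width dx, via 2x2 interface transfer matrices. Returns (T, R); flux
    conservation gives T + R = 1 for a real potential. Illustrative sketch
    only: the PN method of the record above is perturbative, not this."""
    def k_of(V):
        return cmath.sqrt(2.0 * m * complex(E - V)) / hbar
    ks = [k_of(0.0)] + [k_of(V) for V in V_steps] + [k_of(0.0)]
    M = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]   # maps left (A, B) to right (A, B)
    x = 0.0
    for k1, k2 in zip(ks, ks[1:]):
        e1p, e1m = cmath.exp(1j * k1 * x), cmath.exp(-1j * k1 * x)
        e2p, e2m = cmath.exp(1j * k2 * x), cmath.exp(-1j * k2 * x)
        # Continuity of the wavefunction and its derivative at the interface.
        I = [[(1 + k1 / k2) / 2 * e1p / e2p, (1 - k1 / k2) / 2 * e1m / e2p],
             [(1 - k1 / k2) / 2 * e1p / e2m, (1 + k1 / k2) / 2 * e1m / e2m]]
        M = [[I[0][0] * M[0][0] + I[0][1] * M[1][0],
              I[0][0] * M[0][1] + I[0][1] * M[1][1]],
             [I[1][0] * M[0][0] + I[1][1] * M[1][0],
              I[1][0] * M[0][1] + I[1][1] * M[1][1]]]
        x += dx
    B0 = -M[1][0] / M[1][1]              # no wave incoming from the right
    AN = M[0][0] + M[0][1] * B0
    return abs(AN) ** 2, abs(B0) ** 2    # same k on both sides: no flux factor

# Square barrier of height 2 and width 1 (10 equal steps), incident energy 1.
T, R = transmission(E=1.0, V_steps=[2.0] * 10, dx=0.1)
```

Because ten equal steps reproduce the square barrier exactly, the result matches the textbook tunneling formula (T ≈ 0.21 for these values), and the step-wise construction carries over unchanged to smoothly varying potentials sampled on a fine mesh.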

  15. Managing the Testing Process Practical Tools and Techniques for Managing Hardware and Software Testing

    CERN Document Server

    Black, Rex

    2011-01-01

    New edition of one of the most influential books on managing software and hardware testing. In this new edition of his top-selling book, Rex Black walks you through the steps necessary to manage rigorous testing programs for hardware and software. The preeminent expert in his field, Mr. Black draws upon years of experience as president of both the International and American Software Testing Qualifications Boards to offer this extensive resource of all the standards, methods, and tools you'll need. The book covers core testing concepts and thoroughly examines the best test management practices.

  16. A Rigorous Treatment of Energy Extraction from a Rotating Black Hole

    Science.gov (United States)

    Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.

    2009-05-01

    The Cauchy problem is considered for the scalar wave equation in the Kerr geometry. We prove that by choosing a suitable wave packet as initial data, one can extract energy from the black hole, thereby putting superradiance, the wave analogue of the Penrose process, into a rigorous mathematical framework. We quantify the maximal energy gain. We also compute the infinitesimal change of mass and angular momentum of the black hole, in agreement with Christodoulou’s result for the Penrose process. The main mathematical tool is our previously derived integral representation of the wave propagator.

  17. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Zaidman, A.; Van Rompaey, B.; Van Deursen, A.; Demeyer, S.

    2010-01-01

    Many software production processes advocate rigorous development testing alongside functional code writing, which implies that both test code and production code should co-evolve. To gain insight in the nature of this co-evolution, this paper proposes three views (realized by a tool called TeMo)

  18. Seizing the Future: How Ohio's Career-Technical Education Programs Fuse Academic Rigor and Real-World Experiences to Prepare Students for College and Careers

    Science.gov (United States)

    Guarino, Heidi; Yoder, Shaun

    2015-01-01

    "Seizing the Future: How Ohio's Career and Technical Education Programs Fuse Academic Rigor and Real-World Experiences to Prepare Students for College and Work," demonstrates Ohio's progress in developing strong policies for career and technical education (CTE) programs to promote rigor, including college- and career-ready graduation…

  19. Effect of muscle restraint on sheep meat tenderness with rigor mortis at 18°C.

    Science.gov (United States)

    Devine, Carrick E; Payne, Steven R; Wells, Robyn W

    2002-02-01

    The effect on shear force of skeletal restraint and of removing muscles from lamb m. longissimus thoracis et lumborum (LT) immediately after slaughter and electrical stimulation was studied at a rigor temperature of 18°C (n=15). The temperature of 18°C was achieved by chilling electrically stimulated sheep carcasses in air at 12°C, with an air flow of 1-1.5 m/s. In other groups, the muscle was removed at 2.5 h post-mortem and either wrapped or left non-wrapped before being placed back on the carcass to follow the carcass cooling regime. Following rigor mortis, the meat was aged for 0, 16, 40 and 65 h at 15°C and frozen. For the non-stimulated samples, the meat was aged for 0, 12, 36 and 60 h before being frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values were obtained from a 1 × 1 cm cross-section. Commencement of ageing was considered to take place at rigor mortis, and this was taken as zero-aged meat. There were no significant differences in the rate of tenderisation or initial shear force among treatments. The cook loss of 23% was similar for all wrapped and non-wrapped conditions, and the values decreased slightly with longer ageing durations. Wrapping was shown to mimic meat left intact on the carcass, as it prevented significant pre-rigor shortening. Such techniques allow muscles to be removed and placed in a controlled temperature environment to enable precise studies of ageing processes.

  20. EarthLabs Modules: Engaging Students In Extended, Rigorous Investigations Of The Ocean, Climate and Weather

    Science.gov (United States)

    Manley, J.; Chegwidden, D.; Mote, A. S.; Ledley, T. S.; Lynds, S. E.; Haddad, N.; Ellins, K.

    2016-02-01

    EarthLabs, envisioned as a national model for high school Earth or Environmental Science lab courses, is adaptable for both undergraduate and middle school students. The collection includes ten online modules that combine to feature a global view of our planet as a dynamic, interconnected system, engaging learners in extended investigations. EarthLabs supports state and national guidelines, including the NGSS, for science content. Four modules directly guide students to discover vital aspects of the oceans, while five other modules incorporate ocean sciences in order to complete an understanding of Earth's climate system. By interacting with scientific research data, satellite imagery, numerical data, computer visualizations, experiments, and video tutorials, students gain a broad perspective on the key role oceans play in the fishing industry, droughts, coral reefs, hurricanes, the carbon cycle, and life on land and in the seas, as well as in driving our changing climate. Students explore Earth system processes and build quantitative skills that enable them to objectively evaluate scientific findings for themselves as they move through ordered sequences that guide the learning. As a robust collection, EarthLabs modules engage students in extended, rigorous investigations allowing a deeper understanding of the ocean, climate and weather. This presentation provides an overview of the ten curriculum modules that comprise the EarthLabs collection developed by TERC and found at http://serc.carleton.edu/earthlabs/index.html. Evaluation data on the effectiveness and use in secondary education classrooms will be summarized.

  1. Application of numerical analysis techniques to eddy current testing for steam generator tubes

    International Nuclear Information System (INIS)

    Morimoto, Kazuo; Satake, Koji; Araki, Yasui; Morimura, Koichi; Tanaka, Michio; Shimizu, Naoya; Iwahashi, Yoichi

    1994-01-01

    This paper describes the application of numerical analysis to eddy current testing (ECT) for steam generator tubes. An axisymmetric and three-dimensional sinusoidal steady-state eddy current analysis code was developed. The code is formulated by finite element method-boundary element method coupling techniques, in order not to regenerate the mesh data in the tube domain at every movement of the probe. Calculations were carried out under various conditions, including various probe types, defect orientations and so on. Comparison with experimental data showed that it is feasible to apply this code to actual use. Furthermore, we have developed a total eddy current analysis system which consists of the ECT calculation code, an automatic mesh generator for analysis, a database and display software for calculated results. ((orig.))

  2. ASTM Committee C28: International Standards for Properties and Performance of Advanced Ceramics-Three Decades of High-Quality, Technically-Rigorous Normalization

    Science.gov (United States)

    Jenkins, Michael G.; Salem, Jonathan A.

    2016-01-01

    Physical and mechanical properties and performance of advanced ceramics and glasses are difficult to measure correctly without the proper techniques. For over three decades, ASTM Committee C28 on Advanced Ceramics has developed high-quality, technically rigorous, full-consensus standards (e.g., test methods, practices, guides, terminology) to measure properties and performance of monolithic and composite ceramics; in some cases these may be applied to glasses. These standards contain testing particulars for many mechanical, physical and thermal properties and performance characteristics of these materials. As a result, these standards are used to generate accurate, reliable, repeatable and complete data. Within Committee C28, users, producers, researchers, designers, academicians, etc. have written, continually updated, and validated through round-robin test programs some 50 standards since the Committee's founding in 1986. This paper provides a detailed retrospective of the 30 years of ASTM Committee C28, including a graphical pictogram listing of C28 standards, along with examples of the tangible benefits of standards for advanced ceramics to demonstrate their practical applications.

  4. Rigorous time slicing approach to Feynman path integrals

    CERN Document Server

    Fujiwara, Daisuke

    2017-01-01

    This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation at least in the short term if the potential is differentiable sufficiently many times and its derivatives of order equal to or higher than two are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved by a method different from that of Birkhoff. A bound of the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But equivalence is not fully proved mathematically, because, compared with Schrödinger's method, there is still much to be done concerning rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...

  5. Biclustering via optimal re-ordering of data matrices in systems biology: rigorous methods and comparative studies

    Directory of Open Access Journals (Sweden)

    Feng Xiao-Jiang

    2008-10-01

    Background: The analysis of large-scale data sets via clustering techniques is utilized in a number of applications. Biclustering in particular has emerged as an important problem in the analysis of gene expression data, since genes may respond jointly only over a subset of conditions. Biclustering algorithms also have important applications in sample classification where, for instance, tissue samples can be classified as cancerous or normal. Many methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition, and thus result in suboptimal clusters.
    Results: In this article, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. Cluster boundaries in one dimension are used to partition and re-order the other dimension of the corresponding submatrices to generate biclusters. The performance of OREO is tested on (a) metabolite concentration data, (b) an image reconstruction matrix, (c) synthetic data with implanted biclusters, and gene expression data for (d) colon cancer, (e) breast cancer, and (f) yeast segregants, to validate the ability of the proposed method and compare it to existing biclustering and clustering methods.
    Conclusion: We demonstrate that this rigorous global optimization method for biclustering produces clusters with more insightful groupings of similar entities, such as genes or metabolites sharing common functions, than other clustering and biclustering algorithms and can reconstruct underlying fundamental patterns in the data for several distinct sets of data matrices arising
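    The re-ordering objective can be made concrete with a toy example. OREO minimizes the dissimilarity between adjacent rows globally (via network-flow or traveling-salesman formulations); the greedy nearest-neighbour pass below is only a cheap, hypothetical stand-in that shows the objective being reduced on a small matrix of interleaved "expression patterns".

```python
# Toy illustration of the row re-ordering idea behind OREO: permute the rows
# of a data matrix so that similar rows become adjacent. OREO solves this
# globally; the greedy pass here merely demonstrates the objective.
def row_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def adjacent_cost(M):
    # Sum of dissimilarities between consecutive rows (the quantity to minimize).
    return sum(row_dist(M[i], M[i + 1]) for i in range(len(M) - 1))

def greedy_reorder(M):
    rows = list(M)
    order = [rows.pop(0)]
    while rows:
        nxt = min(rows, key=lambda rw: row_dist(order[-1], rw))
        rows.remove(nxt)
        order.append(nxt)
    return order

# Two interleaved row patterns; re-ordering should group them together.
M = [[1, 1, 0], [9, 8, 9], [1, 2, 0], [8, 9, 9], [0, 1, 1], [9, 9, 8]]
R = greedy_reorder(M)
print(adjacent_cost(M), adjacent_cost(R))
```

    On this matrix the adjacent-row cost drops sharply after re-ordering; cluster boundaries would then be drawn where consecutive rows remain dissimilar.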

  6. Quantum nature of the big bang: An analytical and numerical investigation

    International Nuclear Information System (INIS)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-01-01

    Analytical and numerical methods are developed to analyze the quantum nature of the big bang in the setting of loop quantum cosmology. They enable one to explore the effects of quantum geometry both on the gravitational and matter sectors and significantly extend the known results on the resolution of the big bang singularity. Specifically, the following results are established for the homogeneous isotropic model with a massless scalar field: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the 'emergent time' idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime. Our constructions also provide a conceptual framework and technical tools which can be used in more general models. In this sense, they provide foundations for analyzing physical issues associated with the Planck regime of loop quantum cosmology as a whole

  7. A geometric framework for evaluating rare variant tests of association.

    Science.gov (United States)

    Liu, Keli; Fast, Shannon; Zawistowski, Matthew; Tintle, Nathan L

    2013-05-01

    The wave of next-generation sequencing data has arrived. However, many questions still remain about how to best analyze sequence data, particularly the contribution of rare genetic variants to human disease. Numerous statistical methods have been proposed to aggregate association signals across multiple rare variant sites in an effort to increase statistical power; however, the precise relation between the tests is often not well understood. We present a geometric representation for rare variant data in which rare allele counts in case and control samples are treated as vectors in Euclidean space. The geometric framework facilitates a rigorous classification of existing rare variant tests into two broad categories: tests for a difference in the lengths of the case and control vectors, and joint tests for a difference in either the lengths or angles of the two vectors. We demonstrate that genetic architecture of a trait, including the number and frequency of risk alleles, directly relates to the behavior of the length and joint tests. Hence, the geometric framework allows prediction of which tests will perform best under different disease models. Furthermore, the structure of the geometric framework immediately suggests additional classes and types of rare variant tests. We consider two general classes of tests which show robustness to noncausal and protective variants. The geometric framework introduces a novel and unique method to assess current rare variant methodology and provides guidelines for both applied and theoretical researchers. © 2013 Wiley Periodicals, Inc.
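    The geometric representation can be sketched in a few lines. This is an illustration of the vector picture only, not the paper's test statistics, and the allele counts below are hypothetical: per-site rare-allele counts in cases and controls become vectors in Euclidean space, and tests react to differences in vector lengths, angles, or both.

```python
import math

# Rare-allele counts at each variant site, viewed as vectors: a "length" test
# compares only ||cases|| with ||controls||; a "joint" test also reacts to the
# angle between the two vectors.
def length(v):
    return math.sqrt(sum(x * x for x in v))

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(dot / (length(u) * length(v)))

cases    = [4, 0, 3, 5, 0, 2]   # hypothetical counts at six variant sites
controls = [1, 1, 1, 1, 1, 1]

print(length(cases), length(controls))          # length signal
print(math.degrees(angle(cases, controls)))     # angle signal
```

    Here the case vector is both longer than and rotated away from the control vector, so both classes of tests would register a signal; a burden of uniformly elevated counts would change the length but not the angle.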

  8. Modular Electric Propulsion Test Bed Aircraft, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — An all electric aircraft test bed is proposed to provide a dedicated development environment for the rigorous study and advancement of electrically powered aircraft....

  9. Numerical simulation of effect of laser nonuniformity in interior interface

    International Nuclear Information System (INIS)

    Yu Xiaojin; Wu Junfeng; Ye Wenhua

    2007-01-01

    Using the LARED-S code and referring to the NIF direct-drive DT ignition target, the effect of laser nonuniformity on the interior interface in direct-drive spherical implosions with high convergence ratio was numerically studied. The two-dimensional results show that implosions with high convergence ratio are sensitive to the nonuniformity of the driving laser, and the growth of hydrodynamic instability on the interior interface destroys the symmetric drive and noticeably reduces the volume of the central hot spot. Taking as the limit that the perturbation amplitude equals 1/3 of the radius of the central hot spot, the simulations also give requirements on laser uniformity, in a simple physical model, of between 2.5% and 0.25% for mode numbers below 12; modes 8-10 have the most stringent requirement, about 0.25%. (authors)

  10. Numerical Limit Analysis of Precast Concrete Structures

    DEFF Research Database (Denmark)

    Herfelt, Morten Andersen

    Precast concrete elements are widely used in the construction industry as they provide a number of advantages over conventional in-situ cast concrete structures. Joints cast on the construction site are needed to connect the precast elements, which poses several challenges. Moreover, the current practice is to design the joints as the weakest part of the structure, which makes analysis of the ultimate limit state behaviour by general-purpose software difficult and inaccurate. Manual methods of analysis based on limit analysis have been used for several decades and provide an assessment of the ultimate limit state behaviour. This thesis introduces a framework based on finite element limit analysis, a numerical method based on the same extremum principles as the manual limit analysis. The framework allows for efficient analysis and design in a rigorous manner by use of mathematical optimisation; the resulting problems are solved efficiently using state-of-the-art solvers. It is concluded that the framework and developed joint models have the potential to enable efficient design of precast concrete structures in the near future.

  11. Rigorous modelling of light's intensity angular-profile in Abbe refractometers with absorbing homogeneous fluids

    International Nuclear Information System (INIS)

    García-Valenzuela, A; Contreras-Tello, H; Márquez-Islas, R; Sánchez-Pérez, C

    2013-01-01

    We derive an optical model for the light intensity distribution around the critical angle in a standard Abbe refractometer when used on absorbing homogeneous fluids. The model is developed using rigorous electromagnetic optics. The obtained formula is very simple and can be used suitably in the analysis and design of optical sensors relying on Abbe-type refractometry.

  12. Modular Electric Propulsion Test Bed Aircraft, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid electric aircraft simulation system and test bed is proposed to provide a dedicated development environment for the rigorous study and advancement of hybrid...

  13. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units such as compressor impellers, stages and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
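    The principle of working from measured suction and discharge states can be shown with a deliberately simplified ideal-gas version (the paper's method handles real-gas thermodynamics; the state values below are made-up illustrative numbers): the polytropic exponent follows from the endpoint temperatures and pressures, and the efficiency compares it with the isentropic exponent.

```python
import math

# Ideal-gas endpoint-based polytropic efficiency (illustrative sketch only):
# (n-1)/n = ln(T2/T1) / ln(p2/p1), and for compression
# eta_p = [(kappa-1)/kappa] / [(n-1)/n], with kappa = cp/cv.
def polytropic_efficiency(T1, p1, T2, p2, kappa):
    """Temperatures in kelvin; pressures in any consistent unit."""
    exponent_ratio = math.log(T2 / T1) / math.log(p2 / p1)   # (n-1)/n
    return ((kappa - 1.0) / kappa) / exponent_ratio

# Hypothetical air compression from 1 bar, 288.15 K to 4 bar, 460 K.
# The isentropic discharge temperature would be 288.15 * 4**(0.4/1.4) ≈ 428 K,
# so a hotter measured discharge implies eta_p < 1.
eta = polytropic_efficiency(288.15, 1.0e5, 460.0, 4.0e5, 1.4)
print(eta)
```

    For real gases the log-ratio shortcut above is no longer valid, which is precisely why a rigorous path-integral definition of the polytropic work, as discussed in the abstract, is needed.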

  14. Quality of nuchal translucency measurements correlates with broader aspects of program rigor and culture of excellence.

    Science.gov (United States)

    Evans, Mark I; Krantz, David A; Hallahan, Terrence; Sherwin, John; Britt, David W

    2013-01-01

    To determine if nuchal translucency (NT) quality correlates with the extent to which clinics vary in rigor and quality control. We correlated NT performance quality (bias and precision) of 246,000 patients with two alternative measures of clinic culture: the percentage of cases for whom nasal bone (NB) measurements were performed and the percentage of requisitions correctly filled in for race-ethnicity and weight. When requisition errors occurred in more than 5% of cases (33% of programs), the NT curve was lowered to 0.93 MoM; in programs performing NB measurements in more than 90% of cases, the MoM was 0.99. Program-level quality exists independently of individual variation in NT quality, and two divergent indices of program rigor are associated with NT quality. Quality control must be program wide, and to effect continued improvement in the quality of NT results across time, the cultures of clinics must become a target for intervention. Copyright © 2013 S. Karger AG, Basel.

  15. Numerical comparison of improved methods of testing in contingency tables with small frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Sugiura, Nariaki; Otake, Masanori

    1968-11-14

    The significance levels of various tests for a general c x k contingency table are usually given by large-sample theory, but these are not accurate for tables with small frequencies. In this paper, a numerical evaluation was made to determine how good the approximation of the significance level is for various improved tests that have been developed by Nass, Yoshimura, Gart, etc. for c x k contingency tables with small frequencies in some cells. For this purpose we compared the significance levels of the various approximate methods (i) with those of the one-sided tail defined in terms of exact probabilities for given marginals in the 2 x 2 table; and (ii) with those of exact probabilities accumulated in the order of magnitude of the χ² statistic or likelihood ratio (LR) statistic in the 2 x 3 table, as mentioned by Yates. In the 2 x 2 table it is well known that Yates' correction gives satisfactory results for small cell frequencies, and other methods not considered here can be used if attention is restricted to 2 x 2 or 2 x k tables. But we are mainly interested in comparing methods that are applicable to a general c x k table. It appears that such a comparison of the various improved methods on the same examples has not been made explicitly, even though these tests are frequently used in biological and medical research. 9 references, 6 figures, 6 tables.
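    The kind of comparison described above is easy to reproduce for a single 2 x 2 table (a self-contained sketch; the table entries are invented for illustration): compute the exact one-sided tail probability from the hypergeometric distribution, then compare the χ² statistic with and without Yates' continuity correction.

```python
from math import comb

# Exact one-sided tail for a 2x2 table with fixed margins (Fisher's test):
# probability of seeing a count >= a in the top-left cell.
def fisher_one_sided(a, b, c, d):
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    return sum(comb(r1, k) * comb(r2, c1 - k)
               for k in range(a, min(r1, c1) + 1)) / comb(n, c1)

# Chi-square statistic for a 2x2 table, optionally with Yates' correction,
# which shrinks |ad - bc| by n/2 to better match the exact tail at small n.
def chi2(a, b, c, d, yates=False):
    n = a + b + c + d
    num = abs(a * d - b * c) - (n / 2 if yates else 0)
    return n * max(num, 0) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

a, b, c, d = 7, 2, 3, 8          # a deliberately small-frequency table
p = fisher_one_sided(a, b, c, d)
print(p)                          # exact tail probability
print(chi2(a, b, c, d), chi2(a, b, c, d, yates=True))
```

    The corrected statistic is smaller than the uncorrected one, i.e. more conservative, which is the behaviour the paper's numerical comparison quantifies across the improved c x k tests.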

  16. Comparison of rigorous modelling of different structure profiles on photomasks for quantitative linewidth measurements by means of UV- or DUV-optical microscopy

    Science.gov (United States)

    Ehret, Gerd; Bodermann, Bernd; Woehler, Martin

    2007-06-01

    Optical microscopy is an important instrument for the dimensional characterisation or calibration of micro- and nanostructures, e.g. chrome structures on photomasks. In comparison to scanning electron microscopy (possible contamination of the sample) and atomic force microscopy (slow, risk of damage), optical microscopy is a fast and non-destructive metrology method. Precise quantitative determination of the linewidth from the microscope image is, however, only possible with knowledge of the geometry of the structures and its consideration in the optical modelling. We compared two different rigorous model approaches, the Rigorous Coupled Wave Analysis (RCWA) and the Finite Element Method (FEM), for modelling structures with different edge angles, linewidths, line-to-space ratios and polarisations. The RCWA method can represent inclined edge profiles only by a staircase approximation, which increases its modelling errors. Even today's sophisticated rigorous methods still show problems with TM polarisation; therefore both rigorous methods are compared in terms of their convergence for TE and TM polarisation. Beyond that, the influence of typical illumination wavelengths (365 nm, 248 nm and 193 nm) on the microscope images and their contribution to the measurement uncertainty budget is discussed.

  17. From everyday communicative figurations to rigorous audience news repertoires

    DEFF Research Database (Denmark)

    Kobbernagel, Christian; Schrøder, Kim Christian

    2016-01-01

    In the last couple of decades there has been an unprecedented explosion of news media platforms and formats, as a succession of digital and social media have joined the ranks of legacy media. We live in a ‘hybrid media system’ (Chadwick, 2013), in which people build their cross-media news repertoires from the ensemble of old and new media available. This article presents an innovative mixed-method approach with considerable explanatory power for exploring patterns of news media consumption. This approach tailors Q-methodology in the direction of a qualitative study of news consumption, in which a card-sorting exercise serves to translate the participants’ news media preferences into a form that enables the researcher to undertake a rigorous factor-analytical construction of their news consumption repertoires. This interpretive, factor-analytical procedure, which results in the building

  18. Numerical and physical testing of upscaling techniques for constitutive properties

    International Nuclear Information System (INIS)

    McKenna, S.A.; Tidwell, V.C.

    1995-01-01

    This paper evaluates upscaling techniques for hydraulic conductivity measurements based on their accuracy and their practicality for implementation in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from the different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow-model grid-block scale. Initial results indicate that analytical techniques provide a practical means of upscaling constitutive properties from the point measurement scale to the flow-model grid-block scale; however, no single analytic technique proves to be adequate for all situations. Numerical techniques are also accurate, but they are time-intensive and their accuracy depends on knowledge of the local flow regime at every grid-block.
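    The simplest analytical upscaling rules can be sketched directly (a generic illustration, not the paper's specific techniques; the point-scale values below are hypothetical): the effective conductivity of a grid block is bracketed by the harmonic mean (flow in series, across layers) and the arithmetic mean (flow in parallel, along layers), with the geometric mean a common heuristic in between.

```python
import math

# Classical Wiener-type bracketing of an upscaled (effective) hydraulic
# conductivity: harmonic mean <= effective value <= arithmetic mean, with the
# geometric mean as a frequently used intermediate estimate.
def arithmetic(ks):
    return sum(ks) / len(ks)

def harmonic(ks):
    return len(ks) / sum(1.0 / k for k in ks)

def geometric(ks):
    return math.exp(sum(math.log(k) for k in ks) / len(ks))

# Hypothetical point-scale conductivity measurements within one grid block (m/s).
k_points = [1e-5, 3e-5, 2e-6, 8e-5, 5e-6]
kh, kg, ka = harmonic(k_points), geometric(k_points), arithmetic(k_points)
print(kh, kg, ka)
```

    The spread between the harmonic and arithmetic means shows why, as the abstract notes, no single analytic rule suffices: the right choice depends on the local flow regime relative to the heterogeneity.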

  19. 42 CFR 493.901 - Approval of proficiency testing programs.

    Science.gov (United States)

    2010-10-01

    ...) Distribute the samples, using rigorous quality control to assure that samples mimic actual patient specimens... gynecologic cytology and on individual laboratory performance on testing events, cumulative reports and scores...

  20. Rigorous lower bound on the dynamic critical exponent of some multilevel Swendsen-Wang algorithms

    International Nuclear Information System (INIS)

    Li, X.; Sokal, A.D.

    1991-01-01

    We prove the rigorous lower bound z_exp ≥ α/ν for the dynamic critical exponent of a broad class of multilevel (or ''multigrid'') variants of the Swendsen-Wang algorithm. This proves that such algorithms do suffer from critical slowing down. We conjecture that such algorithms in fact lie in the same dynamic universality class as the standard Swendsen-Wang algorithm.

  1. Beyond the RCT: Integrating Rigor and Relevance to Evaluate the Outcomes of Domestic Violence Programs

    Science.gov (United States)

    Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M.

    2018-01-01

    Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…

  2. Formulations by surface integral equations for numerical simulation of non-destructive testing by eddy currents

    International Nuclear Information System (INIS)

    Vigneron, Audrey

    2015-01-01

    The thesis addresses the numerical simulation of non-destructive testing (NDT) using eddy currents, and more precisely the computation of the electromagnetic fields induced by a transmitter sensor in a healthy part. This calculation is the first step of the modeling of a complete control process in the CIVA software platform developed at CEA LIST. Currently, models integrated in CIVA are restricted to canonical (modal computation) or axially-symmetric geometries. The need for more diverse and complex configurations requires the introduction of new numerical modeling tools. In practice the sensor may be composed of elements with different shapes and physical properties. The inspected parts are conductive and may contain dielectric or magnetic elements. Due to the cohabitation of different materials in one configuration, different regimes (static, quasi-static or dynamic) may coexist. Under the assumption of linear, isotropic and piecewise-homogeneous material properties, the surface integral equation (SIE) approach allows one to reduce a volume-based problem to an equivalent surface-based problem. However, the usual SIE formulations for Maxwell's problem generally suffer from numerical noise in asymptotic situations, and especially at low frequencies. The objective of this study is to determine a version that is stable for the range of physical parameters typical of eddy-current NDT applications. In this context, a block-iterative scheme based on a physical decomposition is proposed for the computation of primary fields. This scheme is accurate and well-conditioned. An asymptotic study of the integral Maxwell problem at low frequencies is also performed, establishing the eddy-current integral problem as an asymptotic case of the corresponding Maxwell problem. (author) [fr]

  3. "Snow White" Coating Protects SpaceX Dragon's Trunk Against Rigors of Space

    Science.gov (United States)

    McMahan, Tracy

    2013-01-01

He described it as "snow white." But NASA astronaut Don Pettit was not referring to the popular children's fairy tale. Rather, he was talking about the white coating of the Space Exploration Technologies Corp. (SpaceX) Dragon spacecraft that reflected the International Space Station's light. As it approached the station for the first time in May 2012, the Dragon's trunk might have been described as the "fairest of them all" for its pristine coating, allowing Pettit to see clearly enough to maneuver the robotic arm to grab the Dragon for a successful nighttime berthing. This protective thermal control coating, developed by Alion Science and Technology Corp., based in McLean, Va., made its bright appearance again with the March 1 launch of SpaceX's second commercial resupply mission. Named Z-93C55, the coating was applied to the cargo portion of the Dragon to protect it from the rigors of space. "For decades, Alion has produced coatings to protect against the rigors of space," said Michael Kenny, senior chemist with Alion. "As space missions evolved, there was a growing need to dissipate electrical charges that build up on the exteriors of spacecraft, or there could be damage to the spacecraft's electronics. Alion's research led us to develop materials that would meet this goal while also providing thermal controls. The outcome of this research was Alion's proprietary Z-93C55 coating."

  4. Numerical simulation of carbon dioxide effects in geothermal reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Moya, S.L.; Iglesias, E.R. [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1995-03-01

We developed and coded a new equation of state (EOS) for water-carbon dioxide mixtures and coupled it to the TOUGH numerical simulator. This EOS is valid up to 350 °C and 500 bar. Unlike previous thermodynamical models, it rigorously considers the non-ideal behavior of both components in the gaseous mixture and formally includes the effect of the compressibility of the liquid phase. We refer to the coupling of this EOS with TOUGH as TOUGH-DIOX. To complement this enhancement of TOUGH, we added indexed output files for easy selection and interpretation of results. We validated TOUGH-DIOX against published results. Furthermore, we used TOUGH-DIOX to explore and compare mass and energy inflow performance relationships of geothermal wells with and without carbon dioxide (CO2). Our results include the effects of a broad range of fluid and formation properties, initial conditions and histories of reservoir production. This work contributes generalized dimensionless inflow performance relationships appropriate for geothermal use.

  5. Study design elements for rigorous quasi-experimental comparative effectiveness research.

    Science.gov (United States)

    Maciejewski, Matthew L; Curtis, Lesley H; Dowd, Bryan

    2013-03-01

    Quasi-experiments are likely to be the workhorse study design used to generate evidence about the comparative effectiveness of alternative treatments, because of their feasibility, timeliness, affordability and external validity compared with randomized trials. In this review, we outline potential sources of discordance in results between quasi-experiments and experiments, review study design choices that can improve the internal validity of quasi-experiments, and outline innovative data linkage strategies that may be particularly useful in quasi-experimental comparative effectiveness research. There is an urgent need to resolve the debate about the evidentiary value of quasi-experiments since equal consideration of rigorous quasi-experiments will broaden the base of evidence that can be brought to bear in clinical decision-making and governmental policy-making.

  6. Effects of well-boat transportation on the muscle pH and onset of rigor mortis in Atlantic salmon.

    Science.gov (United States)

    Gatica, M C; Monti, G; Gallo, C; Knowles, T G; Warriss, P D

    2008-07-26

During the transport of salmon (Salmo salar) in a well-boat, 10 fish were sampled at each of six stages: in cages after crowding at the farm (stage 1), in the well-boat after loading (stage 2), in the well-boat after eight hours of transport and before unloading (stage 3), in the resting cages immediately after unloading (stage 4), after 24 hours of resting in cages (stage 5), and in the processing plant after pumping from the resting cages (stage 6). The water in the well-boat was at ambient temperature with recirculation to the sea. At each stage the fish were stunned percussively and bled by gill cutting. Immediately after death, and then every three hours for 18 hours, the muscle pH and rigor index of the fish were measured. At successive stages the initial muscle pH of the fish decreased, except for a slight gain at stage 5, after they had been rested for 24 hours. The lowest initial muscle pH was observed at stage 6. The fishes' rigor index showed that rigor developed more quickly at each successive stage, except for a slight decrease in rate at stage 5, attributable to the recovery of muscle reserves.

  7. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Directory of Open Access Journals (Sweden)

    Sophie Marchal

Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Because of dogs' superior olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The reliability and reproducibility of the method depend largely on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scent presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to accept these results as official forensic evidence when dogs are trained appropriately.

  8. Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames

    Science.gov (United States)

    Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.

    2017-12-01

Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next generation of celestial reference frames, which are currently determined by VLBI only.

  9. Dynamics of harmonically-confined systems: Some rigorous results

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhigang, E-mail: zwu@physics.queensu.ca; Zaremba, Eugene, E-mail: zaremba@sparky.phy.queensu.ca

    2014-03-15

    In this paper we consider the dynamics of harmonically-confined atomic gases. We present various general results which are independent of particle statistics, interatomic interactions and dimensionality. Of particular interest is the response of the system to external perturbations which can be either static or dynamic in nature. We prove an extended Harmonic Potential Theorem which is useful in determining the damping of the centre of mass motion when the system is prepared initially in a highly nonequilibrium state. We also study the response of the gas to a dynamic external potential whose position is made to oscillate sinusoidally in a given direction. We show in this case that either the energy absorption rate or the centre of mass dynamics can serve as a probe of the optical conductivity of the system. -- Highlights: •We derive various rigorous results on the dynamics of harmonically-confined atomic gases. •We derive an extension of the Harmonic Potential Theorem. •We demonstrate the link between the energy absorption rate in a harmonically-confined system and the optical conductivity.

  10. Rigorous theory of molecular orientational nonlinear optics

    International Nuclear Information System (INIS)

    Kwak, Chong Hoon; Kim, Gun Yeup

    2015-01-01

Classical statistical mechanics of the molecular optics theory proposed by Buckingham [A. D. Buckingham and J. A. Pople, Proc. Phys. Soc. A 68, 905 (1955)] has been extended to describe field-induced molecular orientational polarization effects in nonlinear optics. In this paper, we present the generalized molecular orientational nonlinear optical processes (MONLO) through the calculation of the classical orientational averaging using the Boltzmann-type time-averaged orientational interaction energy in a randomly oriented molecular system under the influence of applied electric fields. The focal points of the calculation are (1) the derivation of rigorous tensorial components of the effective molecular hyperpolarizabilities and (2) the molecular orientational polarizations and the electronic polarizations, including the well-known third-order dc polarization, the dc electric field induced Kerr effect (dc Kerr effect), the optical Kerr effect (OKE), dc electric field induced second harmonic generation (EFISH), degenerate four-wave mixing (DFWM) and third harmonic generation (THG). We also present some new predictive MONLO processes. For second-order MONLO, second-order optical rectification (SOR), the Pockels effect and difference frequency generation (DFG) are described in terms of the anisotropic coefficients of the first hyperpolarizability. For third-order MONLO, third-order optical rectification (TOR), dc electric field induced difference frequency generation (EFIDFG) and pump-probe transmission are presented.

  11. RIGOROUS GEOREFERENCING OF ALSAT-2A PANCHROMATIC AND MULTISPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    I. Boukerch

    2013-04-01

Full Text Available The exploitation of the full geometric capabilities of High-Resolution Satellite Imagery (HRSI) requires the development of an appropriate sensor orientation model. Several authors have studied this problem; generally there are two categories of geometric models: physical and empirical. Based on the analysis of the metadata provided with ALSAT-2A, a rigorous pushbroom camera model can be developed. This model has been successfully applied to many very high resolution imagery systems. The relation between image and ground coordinates, expressed through time-dependent collinearity equations involving several coordinate systems, has been tested. The interior orientation parameters must be integrated in the model; they can be estimated from the viewing angles corresponding to the pointing directions of any detector, values derived from cubic polynomials provided in the metadata. The developed model integrates all the necessary elements with 33 unknowns. Approximate values of the 33 unknown parameters may be derived from the information contained in the metadata files provided with the imagery technical specifications, or simply fixed to zero; the condition equation is then linearized and solved using SVD in a least-squares sense in order to correct the initial values, using a suitable number of well-distributed GCPs. Using ALSAT-2A images over the town of Toulouse in the south-west of France, three experiments were carried out. The first concerns 2D accuracy analysis using several sets of parameters. The second concerns GCP number and distribution. The third concerns georeferencing the multispectral image by applying the model calculated from the panchromatic image.
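The adjustment step described above (linearize the condition equations, then solve by SVD in a least-squares sense over well-distributed GCPs) can be sketched with a deliberately simplified stand-in model. The 2D affine mapping and the function name below are illustrative assumptions, not the paper's 33-parameter pushbroom model:

```python
import numpy as np

# Illustrative only: a 2D affine mapping stands in for the linearized
# 33-parameter pushbroom model; the solve step (SVD-based least squares
# over well-distributed GCPs) is the same in spirit.
def fit_affine(image_xy, ground_xy):
    """Least-squares affine fit ground = [x, y, 1] @ C via SVD (lstsq)."""
    x = np.asarray(image_xy, float)
    G = np.asarray(ground_xy, float)
    M = np.column_stack([x, np.ones(len(x))])     # design matrix [x y 1]
    coef, *_ = np.linalg.lstsq(M, G, rcond=None)  # SVD-based solve
    return coef                                   # shape (3, 2)
```

With synthetic GCPs generated from a known affine transform, the fit recovers the transform, which is a quick sanity check before moving to real control points.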

  12. Community historians and the dilemma of rigor vs relevance : A comment on Danziger and van Rappard

    NARCIS (Netherlands)

    Dehue, Trudy

    1998-01-01

    Since the transition from finalism to contextualism, the history of science seems to be caught up in a basic dilemma. Many historians fear that with the new contextualist standards of rigorous historiography, historical research can no longer be relevant to working scientists themselves. The present

  13. Numerical simulations of granular dynamics: I. Hard-sphere discrete element method and tests

    Science.gov (United States)

    Richardson, Derek C.; Walsh, Kevin J.; Murdoch, Naomi; Michel, Patrick

    2011-03-01

    We present a new particle-based (discrete element) numerical method for the simulation of granular dynamics, with application to motions of particles on small solar system body and planetary surfaces. The method employs the parallel N-body tree code pkdgrav to search for collisions and compute particle trajectories. Collisions are treated as instantaneous point-contact events between rigid spheres. Particle confinement is achieved by combining arbitrary combinations of four provided wall primitives, namely infinite plane, finite disk, infinite cylinder, and finite cylinder, and degenerate cases of these. Various wall movements, including translation, oscillation, and rotation, are supported. We provide full derivations of collision prediction and resolution equations for all geometries and motions. Several tests of the method are described, including a model granular “atmosphere” that achieves correct energy equipartition, and a series of tumbler simulations that show the expected transition from tumbling to centrifuging as a function of rotation rate.
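The instantaneous point-contact treatment rests on predicting when two rigid spheres touch; a minimal sketch of that prediction step (solving the quadratic for the time at which the separation equals the sum of the radii) might look as follows. The function name and interface are assumptions for illustration, not pkdgrav's API:

```python
import math

def time_to_contact(r, v, R):
    """Earliest time t >= 0 at which |r + v*t| = R, or None if no contact.

    r: relative position (x, y, z); v: relative velocity; R: sum of radii.
    Solves the quadratic (v.v) t^2 + 2 (r.v) t + (r.r - R^2) = 0.
    """
    a = sum(vi * vi for vi in v)
    b = 2.0 * sum(ri * vi for ri, vi in zip(r, v))
    c = sum(ri * ri for ri in r) - R * R
    if a == 0.0:                # no relative motion, never collide
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0 or b >= 0.0:  # paths never cross, or moving apart
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # earlier root = first touch
    return t if t >= 0.0 else None
```

For a head-on approach with separation 10, closing speed 2, and contact radius 2, contact occurs after 8 units of travel, i.e. at t = 4.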

  14. Numerical test for single concrete armour layer on breakwaters

    OpenAIRE

    Anastasaki, E; Latham, J-P; Xiang, J

    2016-01-01

    The ability of concrete armour units for breakwaters to interlock and form an integral single layer is important for withstanding severe wave conditions. In reality, displacements take place under wave loading, whether they are small and insignificant or large and representing serious structural damage. In this work, a code that combines finite- and discrete-element methods which can simulate motion and interaction among units was used to conduct a numerical investigation. Various concrete ar...

  15. An integrated numerical protection system (SPIN)

    International Nuclear Information System (INIS)

    Savornin, J.L.; Bouchet, J.M.; Furet, J.L.; Jover, P.; Sala, A.

    1978-01-01

    Developments in technology have now made it possible to perform more sophisticated protection functions which follow more closely the physical phenomena to be monitored. For this reason the Commissariat a l'energie atomique, Merlin-Gerin, Cerci and Framatome have embarked on the joint development of an Integrated Numerical Protection System (SPIN) which will fulfil this objective and will improve the safety and availability of power stations. The system described involves the use of programmed numerical techniques and a structure based on multiprocessors. The architecture has a redundancy of four. Throughout the development of the project the validity of the studies was confirmed by experiments. A first numerical model of a protection function was tested in the laboratory and is now in operation in a power station. A set of models was then introduced for checking the main components of the equipment finally chosen prior to building and testing a prototype. (author)

  16. Numerical Analysis of Through Transmission Pulsed Eddy Current Testing and Effects of Pulse Width Variation

    International Nuclear Information System (INIS)

    Shin, Young Kil; Choi, Dong Myung

    2007-01-01

By using numerical analysis methods, through-transmission type pulsed eddy current (PEC) testing is modeled and PEC signal responses due to varying material conductivity, permeability, thickness, lift-off and pulse width are investigated. Results show that the peak amplitude of the PEC signal is reduced and the time to reach the peak amplitude is increased as the material conductivity, permeability, and specimen thickness increase. They also indicate that the pulse width needs to be shorter when evaluating the material conductivity and the plate thickness using the peak amplitude; when the pulse width is long, the peak time is found to be more useful. Other results related to lift-off variation are reported as well.

  17. Numerical modeling and experimental validation of thermoplastic composites induction welding

    Science.gov (United States)

    Palmieri, Barbara; Nele, Luigi; Galise, Francesco

    2018-05-01

In this work, a numerical simulation and an experimental test of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic polyamide 66 (PA66) with carbon fiber fabric was used. Using dedicated software (JMAG-Designer), the influence of the fundamental process parameters such as temperature, current and holding time was investigated. In order to validate the results of the simulations, and therefore the numerical model used, experimental tests were carried out, and the temperature values measured during the tests with an optical pyrometer were compared with those provided by the numerical simulation. The mechanical properties of the welded joints were evaluated by single-lap shear tests.

  18. Numerical methods

    CERN Document Server

    Dahlquist, Germund

    1974-01-01

"Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served." - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society. A practical text that strikes a fine balance between students' requirements for theoretical treatment and the needs of practitioners, with the best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). The text includes many worked examples, problems, and an extensive bibliography.

  19. Robustness of numerical TIG welding simulation of 3D structures in stainless steel 316L

    International Nuclear Information System (INIS)

    El-Ahmar, W.

    2007-04-01

The numerical simulation of welding is considered one of those mechanical problems that exhibit a high level of nonlinearity and require good knowledge in various scientific fields. 'Robustness analysis' is a suitable tool to control the quality and guarantee the reliability of numerical welding results. The robustness of a numerical simulation of welding is related to the sensitivity of its results to the modelling assumptions and input parameters. A simulation is said to be robust if the result it produces is not very sensitive to uncertainties in the input data. The term 'robust' was coined in statistics by G.E.P. Box in 1953. Various definitions of greater or lesser mathematical rigor are possible for the term, but in general, referring to a statistical estimator, it means 'insensitive to small deviations from the idealized assumptions for which the estimator is optimized'. In order to evaluate the robustness of numerical welding simulation, sensitivity analyses on thermomechanical models and parameters have been conducted. In a first step, we seek a reference solution which gives the best agreement with the thermal and mechanical experimental results. The second step consists in determining, through numerical simulations, which parameters have the largest influence on the residual stresses induced by the welding process. The residual stresses were predicted using the finite element method, performed with Code-Aster of EDF and SYSWELD of ESI-GROUP. A robustness analysis can prove to be heavy and expensive, making it seem an unjustifiable route. However, only with the development of such analysis tools can predictive methods become a useful tool for industry. (author)

  20. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    Science.gov (United States)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability makes such models computationally expensive, so there is a need for strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal
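The stability issue, and the effect of penalizing a portion of the future surface load, can be caricatured in zero dimensions with a single decaying topography amplitude. This toy update only illustrates why an implicit penalty permits time steps larger than the relaxation time; it is not the paper's finite element algorithm, and all parameter values are arbitrary:

```python
# Topography of amplitude h relaxes as dh/dt = -h/tau, with tau the viscous
# relaxation time. An explicit update blows up once dt > 2*tau; folding a
# theta-weighted portion of the future load into the update (theta-method)
# keeps large steps stable.
def step_explicit(h, dt, tau):
    return h - dt * h / tau

def step_penalized(h, dt, tau, theta=1.0):
    # theta*dt/tau plays the role of the implicit penalization term
    return h * (1.0 - (1.0 - theta) * dt / tau) / (1.0 + theta * dt / tau)

h_exp = h_pen = 1000.0
for _ in range(50):
    h_exp = step_explicit(h_exp, dt=5.0, tau=1.0)   # dt >> tau: oscillates and grows
    h_pen = step_penalized(h_pen, dt=5.0, tau=1.0)  # decays monotonically
```

After 50 steps the explicit amplitude has exploded while the penalized one has decayed toward zero, mirroring the "large time step" stability the abstract describes.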

  1. Experimental and numerical study of a modified ASTM C633 adhesion test for strongly-bonded coatings

    Energy Technology Data Exchange (ETDEWEB)

    Bernardie, Raphaëlle; Berkouch, Reda; Valette, Stéphane; Absi, Joseph; Lefort, Pierre [University of Limoges, Limoges Cedex (France)

    2017-07-15

When coatings are strongly bonded to their substrates it is often difficult to measure adhesion values. The proposed method, which we suggest naming the “silver print test”, consists in covering the central part of the samples with a thin layer of silver paint before coating. The process used for testing this new method was air plasma spraying (APS), and the materials used were alumina coatings on C35 steel substrates previously pre-oxidized in CO2. The silver-painted area was composed of small grains that did not oxidize but sintered significantly during the APS process. The silver layer reduced the surface over which the coating was bonded to the substrate, which allowed its debonding using the classical adhesion test ASTM C633-13, whereas the direct use of this test (without silver painting) led to ruptures inside the glue used in the test. The numerical modelling, based on the finite element method with the ABAQUS software, provided results in good agreement with the experimental measurements. This concordance validated the method and gave access to adhesion values when the experimental test ASTM C633-13 failed because of ruptures in the glue. After standardization, the “silver print test” might be used for other kinds of deposition methods, such as PVD, CVD and PECVD.

  2. Rigor mortis development at elevated temperatures induces pale exudative turkey meat characteristics.

    Science.gov (United States)

    McKee, S R; Sams, A R

    1998-01-01

Development of rigor mortis at elevated post-mortem temperatures may contribute to turkey meat characteristics that are similar to those found in pale, soft, exudative pork. To evaluate this effect, 36 Nicholas tom turkeys were processed at 19 wk of age and placed in water at 40, 20, and 0 C immediately after evisceration. Pectoralis muscle samples were taken at 15 min, 30 min, 1 h, 2 h, and 4 h post-mortem and analyzed for R-value (an indirect measure of adenosine triphosphate), glycogen, pH, color, and sarcomere length. At 4 h, the remaining intact Pectoralis muscle was harvested, aged on ice for 23 h, and analyzed for drip loss, cook loss, shear values, and sarcomere length. By 15 min post-mortem, the 40 C treatment had higher R-values, which persisted through 4 h. By 1 h, the 40 C treatment pH and glycogen levels were lower than those of the 0 C treatment; however, they did not differ from those of the 20 C treatment. Increased L* values indicated that color became more pale by 2 h post-mortem in the 40 C treatment when compared to the 20 and 0 C treatments. Drip loss, cook loss, and shear value were increased, whereas sarcomere lengths were decreased, as a result of the 40 C treatment. These findings suggested that elevated post-mortem temperatures during processing accelerated rigor mortis and caused biochemical changes in the muscle that produced pale, exudative meat characteristics in turkey.

  3. The contributions of numerical acuity and non-numerical stimulus features to the development of the number sense and symbolic math achievement.

    Science.gov (United States)

    Starr, Ariel; DeWind, Nicholas K; Brannon, Elizabeth M

    2017-11-01

Numerical acuity, frequently measured by a Weber fraction derived from nonsymbolic numerical comparison judgments, has been shown to be predictive of mathematical ability. However, recent findings suggest that stimulus controls in these tasks are often insufficiently implemented, and the proposal has been made that alternative visual features or inhibitory control capacities may actually explain this relation. Here, we use a novel mathematical algorithm to parse the relative influence of numerosity from other visual features in nonsymbolic numerical discrimination and to examine the strength of the relations between each of these variables, including inhibitory control, and mathematical ability. We examined these questions developmentally by testing 4-year-old children, 6-year-old children, and adults with a nonsymbolic numerical comparison task, a symbolic math assessment, and a test of inhibitory control. We found that the influence of non-numerical features decreased significantly over development but that numerosity was a primary determinant of decision making at all ages. In addition, numerical acuity was a stronger predictor of math achievement than either non-numerical bias or inhibitory control in children. These results suggest that the ability to selectively attend to number contributes to the maturation of the number sense and that numerical acuity, independent of inhibitory control, contributes to math achievement in early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
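For concreteness, a common model linking the Weber fraction w to comparison accuracy (Gaussian internal estimates with scalar variability) can be written down directly. This is a generic approximate-number-system model offered as background, not necessarily the exact formulation used in the study:

```python
import math

# Standard ANS comparison model: internal estimates of n1 and n2 are
# Gaussian with standard deviation w*n, so
#   P(correct) = 1 - 0.5 * erfc(|n1 - n2| / (sqrt(2) * w * sqrt(n1^2 + n2^2))).
def p_correct(n1, n2, w):
    return 1.0 - 0.5 * math.erfc(
        abs(n1 - n2) / (math.sqrt(2.0) * w * math.hypot(n1, n2)))
```

Sharper acuity (smaller w) predicts higher accuracy at the same numerical ratio, and accuracy falls to chance (0.5) as the two numerosities become equal, which is why w can be estimated by fitting this curve to comparison data.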

  4. A new numerical scheme for non uniform homogenized problems: Application to the non linear Reynolds compressible equation

    Directory of Open Access Journals (Sweden)

    Buscaglia Gustavo C.

    2001-01-01

Full Text Available A new numerical approach is proposed to alleviate the computational cost of solving non-linear non-uniform homogenized problems. The article details the application of the proposed approach to lubrication problems with roughness effects. The method is based on a two-parameter Taylor expansion of the implicit dependence of the homogenized coefficients on the average pressure and on the local value of the air gap thickness. A fourth-order Taylor expansion provides an approximation that is accurate enough to be used in the global problem solution instead of the exact dependence, without introducing significant errors. In this way, when solving the global problem, the solution of local problems is simply replaced by the evaluation of a polynomial. Moreover, the method leads naturally to Newton-Raphson nonlinear iterations, which further reduce the cost. The overall efficiency of the numerical methodology makes it feasible to apply rigorous homogenization techniques in the analysis of compressible fluid contact considering roughness effects. Previous work made use of a heuristic averaging technique. Numerical comparison proves that homogenization-based methods are superior when the roughness is strongly anisotropic and not aligned with the flow direction.
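The core trick, replacing each local-problem solve by the evaluation of a precomputed two-parameter, fourth-order Taylor polynomial, can be sketched as follows. The coefficient function used here is an arbitrary smooth example, not the homogenized Reynolds coefficients:

```python
import math

# Surrogate idea: tabulate the Taylor coefficients of an expensive
# homogenized coefficient a(p, h) about (p0, h0) once, then evaluate the
# polynomial instead of solving a local problem at every global iteration.
def taylor2_eval(coeffs, p, h, p0, h0):
    """coeffs[i][j] = (1/(i! j!)) * d^(i+j)a/dp^i dh^j at (p0, h0), i+j <= 4."""
    dp, dh = p - p0, h - h0
    return sum(c * dp**i * dh**j
               for i, row in enumerate(coeffs)
               for j, c in enumerate(row))

# Illustrative coefficient function a(p, h) = exp(p) * (1 + h): its exact
# Taylor coefficients about (0, 0) are 1/i! for j in {0, 1}, zero otherwise.
coeffs = [[(1.0 / math.factorial(i)) if j <= 1 else 0.0
           for j in range(5 - i)] for i in range(5)]
approx = taylor2_eval(coeffs, 0.1, 0.05, 0.0, 0.0)
exact = math.exp(0.1) * 1.05
```

Near the expansion point the fourth-order polynomial matches the exact value to within roundoff-level error, which is the regime in which the paper uses the surrogate inside the global Newton-Raphson loop.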

  5. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    Science.gov (United States)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  6. Evaluation of the base/subgrade soil under repeated loading : phase I--laboratory testing and numerical modeling of geogrid reinforced bases in flexible pavement.

    Science.gov (United States)

    2009-10-01

This report documents the results of a study that was conducted to characterize the behavior of geogrid-reinforced base course materials. The research was conducted through experimental testing and numerical modeling programs. The experimental...

  7. A fast numerical test of multivariate polynomial positiveness with applications

    Czech Academy of Sciences Publication Activity Database

    Augusta, Petr; Augustová, Petra

    2018-01-01

    Roč. 54, č. 2 (2018), s. 289-303 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : stability * multidimensional systems * positive polynomials * fast Fourier transforms * numerical algorithm Subject RIV: BC - Control Systems Theory OBOR OECD: Automation and control systems Impact factor: 0.379, year: 2016 https://www.kybernetika.cz/content/2018/2/289/paper.pdf
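No abstract is included with this record, but the keywords (positive polynomials, fast Fourier transforms) suggest a sampling-based test. The sketch below is an assumed reading of such an approach, evaluating a bivariate polynomial on the unit bi-circle via a 2-D FFT and checking the sign of the samples; it should not be taken as the paper's actual algorithm:

```python
import numpy as np

# Assumed sketch: evaluate p(z1, z2) = sum_{k,l} c[k, l] z1^k z2^l on an
# N x N grid of the unit bi-circle with one 2-D FFT and return the minimum
# sampled real part. Positivity of the samples is a necessary condition for
# positivity on the bi-circle; a margin plus grid refinement is needed to
# make the test conclusive.
def min_on_bicircle(c, N=64):
    C = np.zeros((N, N), dtype=complex)
    k, l = c.shape
    C[:k, :l] = c
    samples = np.fft.fft2(C)   # p at z = exp(-2*pi*i*m/N) grid points
    return samples.real.min()

# Example: c = [[3, 1], [1, 0]] encodes p(z1, z2) = 3 + z2 + z1, whose real
# part on the bi-circle is 3 + cos(t1) + cos(t2), with minimum 1.
```

A single FFT replaces N^2 separate polynomial evaluations, which is what makes this kind of test "fast" in practice.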

  8. A new method for deriving rigorous results on ππ scattering

    International Nuclear Information System (INIS)

    Caprini, I.; Dita, P.

    1979-06-01

We develop a new approach to the problem of constraining the ππ scattering amplitudes by means of the axiomatically proved properties of unitarity, analyticity and crossing symmetry. The method is based on the solution of an extremal problem on a convex set of analytic functions and provides a global description of the domain of values taken by any finite number of partial waves at an arbitrary set of unphysical energies, compatible with unitarity, the bounds at complex energies derived from generalized dispersion relations and the crossing integral relations. From this domain we obtain new absolute bounds for the amplitudes as well as rigorous correlations between the values of various partial waves. (author)

  9. Models and numerical methods for time- and energy-dependent particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Olbrant, Edgar

    2012-04-13

Particles passing through a medium can be described by the Boltzmann transport equation. Therein, all physical interactions of particles with matter are given by cross sections. We compare different analytical models of cross sections for photons, electrons and protons to state-of-the-art databases. The large dimensionality of the transport equation and its integro-differential form make it analytically difficult and computationally costly to solve. In this work, we focus on the following approximative models to the linear Boltzmann equation: (i) the time-dependent simplified P{sub N} (SP{sub N}) equations, (ii) the M{sub 1} model derived from entropy-based closures and (iii) a new perturbed M{sub 1} model derived from a perturbative entropy closure. In particular, an asymptotic analysis for SP{sub N} equations is presented and confirmed by numerical computations in 2D. Moreover, we design an explicit Runge-Kutta discontinuous Galerkin (RKDG) method for the M{sub 1} model of radiative transfer in slab geometry and construct a scheme ensuring the realizability of the moment variables. Among other things, M{sub 1} numerical results are compared with an analytical solution in a Riemann problem and the Marshak wave problem is considered. Additionally, we rigorously derive a new hierarchy of kinetic moment models in the context of grey photon transport in one spatial dimension. For the perturbed M{sub 1} model, we present numerical results known as the two beam instability or the analytical benchmark due to Su and Olson and compare them to the standard M{sub 1} as well as transport solutions.
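The M{sub 1} closure mentioned above is commonly realized through an Eddington factor interpolating between the diffusion and free-streaming limits. A brief sketch of one standard algebraic form of this closure (a common choice in the literature, not necessarily the exact variant used in the thesis):

```python
import numpy as np

def eddington_factor(f):
    # Standard M1 Eddington factor chi(f), where f is the normalized flux:
    # interpolates between the diffusion limit chi = 1/3 at f = 0 and the
    # free-streaming limit chi = 1 at |f| = 1.
    f = np.asarray(f, dtype=float)
    return (3 + 4 * f**2) / (5 + 2 * np.sqrt(4 - 3 * f**2))
```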

  10. Numerical approximation of a binary fluid-surfactant phase field model of two-phase incompressible flow

    KAUST Repository

    Zhu, Guangpu

    2018-04-17

In this paper, we consider the numerical approximation of a binary fluid-surfactant phase field model of two-phase incompressible flow. The nonlinearly coupled model consists of two Cahn-Hilliard type equations and incompressible Navier-Stokes equations. Using the Invariant Energy Quadratization (IEQ) approach, the governing system is transformed into an equivalent form, which allows the nonlinear potentials to be treated efficiently and semi-explicitly. We construct first- and second-order time marching schemes for the transformed governing system, which are extremely efficient and easy to implement. At each time step, the schemes involve solving a sequence of linear elliptic equations, and the computations of the phase variables, velocity and pressure are fully decoupled. We further establish a rigorous proof of unconditional energy stability for the semi-implicit schemes. Numerical results in both two and three dimensions demonstrate that the proposed schemes are accurate, efficient and unconditionally energy stable. Using our schemes, we investigate the effect of surfactants on droplet deformation and collision under a shear flow: increasing the surfactant concentration enhances droplet deformation and inhibits droplet coalescence.
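The IEQ mechanics described above (auxiliary variable, semi-explicit treatment of the nonlinearity, one linear solve per step, unconditional decay of a modified energy) can be illustrated on a much simpler gradient flow. The sketch below applies a first-order IEQ scheme to a 1-D periodic Allen-Cahn equation; it illustrates the technique only and is not the paper's coupled fluid-surfactant system:

```python
import numpy as np

def ieq_allen_cahn(n=64, dt=0.1, steps=40, eps=0.5):
    # 1-D periodic Allen-Cahn phi_t = eps^2 phi_xx - U*phi with the IEQ
    # auxiliary variable U = phi^2 - 1: the nonlinearity is linearized
    # around the previous step, so each step is a single linear solve.
    h = 2 * np.pi / n
    phi = 0.2 * np.cos(np.arange(n) * h)
    U = phi**2 - 1
    I = np.eye(n)
    Lap = (np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1) - 2 * I) / h**2
    energies = []
    for _ in range(steps):
        A = I / dt - eps**2 * Lap + np.diag(2 * phi**2)
        rhs = phi / dt + (2 * phi**2 - U) * phi
        phi_new = np.linalg.solve(A, rhs)
        U = U + 2 * phi * (phi_new - phi)   # discrete update of U = phi^2 - 1
        phi = phi_new
        grad = (np.roll(phi, -1) - phi) / h
        # modified energy: 0.5*eps^2*|grad phi|^2 + 0.25*U^2
        energies.append(h * np.sum(0.5 * eps**2 * grad**2 + 0.25 * U**2))
    return np.array(energies)

energies = ieq_allen_cahn()
```

The modified energy decreases at every step regardless of the time step size, which is the discrete analogue of the unconditional energy stability proved in the paper.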

  11. Numerical approximation of a binary fluid-surfactant phase field model of two-phase incompressible flow

    KAUST Repository

    Zhu, Guangpu; Kou, Jisheng; Sun, Shuyu; Yao, Jun; Li, Aifen

    2018-01-01

In this paper, we consider the numerical approximation of a binary fluid-surfactant phase field model of two-phase incompressible flow. The nonlinearly coupled model consists of two Cahn-Hilliard type equations and incompressible Navier-Stokes equations. Using the Invariant Energy Quadratization (IEQ) approach, the governing system is transformed into an equivalent form, which allows the nonlinear potentials to be treated efficiently and semi-explicitly. We construct first- and second-order time marching schemes for the transformed governing system, which are extremely efficient and easy to implement. At each time step, the schemes involve solving a sequence of linear elliptic equations, and the computations of the phase variables, velocity and pressure are fully decoupled. We further establish a rigorous proof of unconditional energy stability for the semi-implicit schemes. Numerical results in both two and three dimensions demonstrate that the proposed schemes are accurate, efficient and unconditionally energy stable. Using our schemes, we investigate the effect of surfactants on droplet deformation and collision under a shear flow: increasing the surfactant concentration enhances droplet deformation and inhibits droplet coalescence.

  12. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, treated empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or neglecting covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
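The rigorous propagation advocated above, using the full VCM rather than variances alone, reduces to the linear transformation Sigma_y = J Sigma J^T. A toy example with invented numbers shows how strongly correlations can change the propagated error:

```python
import numpy as np

# Toy propagation of a coefficient covariance to a derived scalar y = J x:
# full propagation J Sigma J^T versus keeping the variances only.
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])           # strongly correlated "coefficients"
J = np.array([[1.0, 1.0]])              # linear functional (e.g., a sum)

var_full = (J @ Sigma @ J.T)[0, 0]      # 1 + 1 + 2*0.8 = 3.6
var_diag = (J @ np.diag(np.diag(Sigma)) @ J.T)[0, 0]   # 2.0, covariances dropped
```

With positive correlations, neglecting the off-diagonal terms here underestimates the propagated variance by almost a factor of two, which is the kind of effect the study quantifies for MDT errors.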

  13. Numerical simulation investigation on centrifugal compressor performance of turbocharger

    International Nuclear Information System (INIS)

    Li, Jie; Yin, Yuting; Li, Shuqi; Zhang, Jizhong

    2013-01-01

In this paper, the mathematical model of the flow field in the centrifugal compressor of a turbocharger was studied. Based on the theory of computational fluid dynamics (CFD), performance curves and parameter distributions of the compressor were obtained from 3-D numerical simulation using CFX. Meanwhile, the influences of grid number and distribution on compressor performance were investigated, and the numerical calculation method was analyzed and validated against test data. The results show that increasing the grid number has little influence on compressor performance once the single-passage grid number is above 300,000. The results also show that the calculated mass flow rate at the compressor choke condition is in good agreement with test results, and that the maximum difference in diffuser exit pressure between simulation and experiment decreases to 3.5% under the assumption of 6 kPa additional total pressure loss at the compressor inlet. The numerical simulation method in this paper can be used to predict compressor performance; the difference in total pressure ratio between calculation and test is less than 7%, and the total-to-total efficiency is also in good agreement with the test data.

  14. Numerical simulation investigation on centrifugal compressor performance of turbocharger

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jie [China Iron and Steel Research Institute Group, Beijing (China); Yin, Yuting [China North Engine Research Institute, Datong (China); Li, Shuqi; Zhang, Jizhong [Science and Technology Diesel Engine Turbocharging Laboratory, Datong (China)

    2013-06-15

In this paper, the mathematical model of the flow field in the centrifugal compressor of a turbocharger was studied. Based on the theory of computational fluid dynamics (CFD), performance curves and parameter distributions of the compressor were obtained from 3-D numerical simulation using CFX. Meanwhile, the influences of grid number and distribution on compressor performance were investigated, and the numerical calculation method was analyzed and validated against test data. The results show that increasing the grid number has little influence on compressor performance once the single-passage grid number is above 300,000. The results also show that the calculated mass flow rate at the compressor choke condition is in good agreement with test results, and that the maximum difference in diffuser exit pressure between simulation and experiment decreases to 3.5% under the assumption of 6 kPa additional total pressure loss at the compressor inlet. The numerical simulation method in this paper can be used to predict compressor performance; the difference in total pressure ratio between calculation and test is less than 7%, and the total-to-total efficiency is also in good agreement with the test data.

  15. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    Science.gov (United States)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we propose two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.
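Space intersection from conjugate image points, as used above, reduces to finding the point closest to two (generally skew) rays. A minimal least-squares sketch with synthetic geometry (illustrative values, not CE-1/CE-2 camera parameters):

```python
import numpy as np

def space_intersection(c1, d1, c2, d2):
    # Least-squares intersection of two (possibly skew) rays c + t*d:
    # returns the midpoint of the common perpendicular segment.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    # Normal equations for the parameters t1, t2 minimizing
    # |c1 + t1*d1 - (c2 + t2*d2)|^2
    A = np.array([[1.0, -(d1 @ d2)],
                  [d1 @ d2, -1.0]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Synthetic check: build rays through a known ground point and recover it
P = np.array([1.0, 2.0, 10.0])
c1 = np.zeros(3)
c2 = np.array([5.0, 0.0, 0.0])
est = space_intersection(c1, P - c1, c2, P - c2)
```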

  16. On the interpretation of double-packer tests in heterogeneous porous media: Numerical simulations using the stochastic continuum analogue

    International Nuclear Information System (INIS)

    Follin, S.

    1992-12-01

Flow in fractured crystalline (hard) rocks is of interest in Sweden for assessing the post-closure radiological safety of a deep repository for high-level nuclear waste. For simulation of flow and mass transport in the far field, different porous media concepts are often used, whereas discrete fracture/channel network concepts are often used for near-field simulations. Due to lack of data, it is generally necessary to resort to single-hole double-packer test data for the far-field simulations, i.e., test data on a small scale are regularized in order to fit a comparatively coarser numerical discretization, which is governed by various computational constraints. In the present study the Monte Carlo method is used to investigate the relationship between the interpreted transmissivity value and the corresponding radius of influence in conjunction with single-hole double-packer tests in heterogeneous formations. The numerical flow domain is treated as a two-dimensional heterogeneous porous medium with a spatially varying diffusivity on a 3 m scale. The Monte Carlo simulations demonstrate the sensitivity to the correlation range of a spatially varying diffusivity field. In contradiction to what is tacitly assumed in stochastic subsurface hydrology, the results show that the lateral support scale (e.g., the radius of influence) of transmissivity measurements in heterogeneous porous media is a random variable, which is affected by both the hydraulic and statistical characteristics. If these results are general, the traditional methods for scaling up, which assume a constant lateral scale of support and a multi-normal distribution, may lead to an underestimation of the persistence and connectivity of transmissive zones, particularly in highly heterogeneous porous media.
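The effect studied above can be caricatured with a toy Monte Carlo: for steady radial flow through concentric rings of random transmissivity, the value a double-packer test "sees" is a log-spacing-weighted harmonic mean of the ring values. The ring model and all parameters are illustrative assumptions, far simpler than the 2-D stochastic continuum of the report:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpreted_T(sigma_lnT, n_rings=20, r_w=0.1, R=100.0):
    # Steady radial flow through concentric rings of random transmissivity:
    # the equivalent (interpreted) T is the harmonic mean weighted by
    # ln(r_{i+1}/r_i). With geometric ring spacing the weights are equal.
    r = np.geomspace(r_w, R, n_rings + 1)
    w = np.log(r[1:] / r[:-1])
    T = np.exp(rng.normal(0.0, sigma_lnT, n_rings))  # lognormal, median 1
    return w.sum() / np.sum(w / T)

# Distribution of interpreted T over many random fields (sigma_lnT = 1)
samples = np.array([interpreted_T(1.0) for _ in range(500)])
```

In the homogeneous limit the interpreted value equals the true transmissivity; with heterogeneity, the interpreted value becomes a random variable, mirroring the report's point that the support scale and interpreted T are themselves random.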

  17. Test and Numerical Analysis for Penetration Residual Velocity of Bullet Considering Failure Strain Uncertainty of Composite Plates

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Myungseok; Lee, Minhyung [Sejong Univ., Sejong (Korea, Republic of)

    2016-03-15

The ballistic performance data of composite materials is scattered due to material inhomogeneity. In this paper, the uncertainty in residual velocity is obtained experimentally, and a method of predicting it is established numerically for the high-speed impact of a bullet into laminated composites. First, the failure strain distribution was obtained by conducting a tensile test using 10 specimens. Next, a ballistic impact test was carried out for the impact of a fragment-simulating projectile (FSP) bullet into 4-ply ([0/90]s) and 8-ply ([0/90/0/90]s) glass fiber reinforced plastic (GFRP) plates. Eighteen shots were made at the same impact velocity and the residual velocities were obtained. Finally, simulations were conducted to predict the residual velocities by using the failure strain distributions obtained from the tensile test. For these simulations, two impact velocities were chosen: 411.7 m/s (4-ply) and 592.5 m/s (8-ply). The simulation results show that the predicted residual velocities are in close agreement with the test results. Additionally, modeling the composite plate with layered solid elements requires less calculation time than modeling with solid elements.
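A toy Monte Carlo can illustrate how scatter in failure strain propagates to residual velocity. It assumes a Lambert-Jonas-type relation v_r = sqrt(v_i^2 - v_bl^2) and a ballistic limit scaling with the square root of failure strain; both are illustrative assumptions, not the finite element model of the paper, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values only (not the measured data of the paper)
v_i = 592.5                      # impact velocity, m/s
v_bl_nom = 500.0                 # assumed nominal ballistic limit, m/s
eps_mean, eps_cv = 0.025, 0.08   # failure strain: mean, coefficient of variation

# Sample failure strains, map them to a ballistic limit via an assumed
# energy-based sqrt scaling, then to residual velocity (Lambert-Jonas, p=2)
eps = np.clip(rng.normal(eps_mean, eps_cv * eps_mean, 10_000), 1e-6, None)
v_bl = v_bl_nom * np.sqrt(eps / eps_mean)
v_r = np.sqrt(np.clip(v_i**2 - v_bl**2, 0.0, None))
```

Note how a modest 8% scatter in failure strain produces a noticeably wider relative spread in residual velocity when the impact velocity is close to the ballistic limit, which is qualitatively the regime the paper investigates.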

  18. Numerical and experimental evaluation of masonry prisms by finite element method

    Directory of Open Access Journals (Sweden)

    C. F.R. SANTOS

    Full Text Available Abstract This work developed experimental tests and numerical models able to represent the mechanical behavior of prisms made of ordinary and high strength concrete blocks. Experimental tests of prisms were performed and a detailed micro-modeling strategy was adopted for numerical analysis. In this modeling technique, each material (block and mortar was represented by its own mechanical properties. The validation of numerical models was based on experimental results. It was found that the obtained numerical values of compressive strength and modulus of elasticity differ by 5% from the experimentally observed values. Moreover, mechanisms responsible for the rupture of the prisms were evaluated and compared to the behaviors observed in the tests and those described in the literature. Through experimental results it is possible to conclude that the numerical models have been able to represent both the mechanical properties and the mechanisms responsible for failure.

  19. A 3D numerical study of LO2/GH2 supercritical combustion in the ONERA-Mascotte Test-rig configuration

    Science.gov (United States)

    Benmansour, Abdelkrim; Liazid, Abdelkrim; Logerais, Pierre-Olivier; Durastanti, Jean-Félix

    2016-02-01

Cryogenic propellants LOx/H2 are used at very high pressure in rocket engine combustion. The description of the combustion process in such applications is very complex, essentially due to the supercritical regime, in which the ideal gas law becomes invalid. To capture the average characteristics of this combustion process, numerical computations are performed using a model based on a one-phase multi-component approach. Such work requires fluid properties and a correct definition of the mixture behavior, generally described by cubic equations of state with appropriate thermodynamic relations validated against NIST data. In this study we consider an alternative way to capture real-gas effects by testing the volume-weighted mixing law combined with component transport properties fitted directly to the NIST library data, including the supercritical regime. The numerical simulations are carried out using a 3D RANS approach with two turbulence models, the standard k-epsilon model and the realizable k-epsilon model. The combustion model is also associated with two chemical reaction mechanisms: a one-step generic chemical reaction and a two-step chemical reaction. The obtained results, such as temperature profiles, recirculation zones, visible flame lengths and distributions of OH species, are discussed.
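A cubic equation of state of the kind referred to above can be sketched with the Peng-Robinson form: solve the cubic in the compressibility factor Z and obtain density for supercritical O2. The critical constants are textbook values, and the result is only as accurate as the EOS itself, which is precisely why such models are validated against NIST data:

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def pr_density(T, p, Tc, pc, omega, M):
    # Peng-Robinson EOS: build the cubic in Z, solve it, and return the
    # density corresponding to the largest real (gas-like) root.
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    A = a * alpha * p / (R * T)**2
    B = b * p / (R * T)
    coeffs = [1.0, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    Z = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-10)
    return p * M / (Z * R * T)

# O2 above its critical point (Tc = 154.58 K, pc = 5.043 MPa, omega ~ 0.0222)
rho = pr_density(300.0, 10e6, 154.58, 5.043e6, 0.0222, 0.0319988)
```

At these conditions the ideal gas law gives about 128 kg/m3, while the attractive term of the cubic EOS pushes the density noticeably higher, a small taste of the real-gas corrections that dominate near the critical point.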

  20. Rigorous force field optimization principles based on statistical distance minimization

    Energy Technology Data Exchange (ETDEWEB)

    Vlcek, Lukas, E-mail: vlcekl1@ornl.gov [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States); Joint Institute for Computational Sciences, University of Tennessee, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6173 (United States); Chialvo, Ariel A. [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States)

    2015-10-14

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. We exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.
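The notion of a statistical distance between a model and its target can be illustrated with the Bhattacharyya angle between discrete distributions, one standard realization of the concept; the toy "model" below has a single parameter fitted by grid search. This illustrates the distance itself, not the authors' force field optimization machinery:

```python
import numpy as np

def statistical_distance(p, q):
    # Bhattacharyya angle between two discrete probability distributions:
    # zero iff p == q, maximal for non-overlapping support.
    return np.arccos(np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0))

bins = np.arange(-5, 6)

def model(mu):
    # One-parameter family: discretized Gaussian histogram centered at mu
    w = np.exp(-0.5 * (bins - mu)**2)
    return w / w.sum()

target = model(1.3)   # stand-in for a coarse-grained "experimental" histogram
mus = np.linspace(-3, 3, 601)
best = mus[np.argmin([statistical_distance(model(m), target) for m in mus])]
```

Minimizing this distance recovers the parameter that makes the model statistically indistinguishable from the target, the basic mechanism the paper develops into a rigorous optimization principle.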

  1. Recent results of seismic isolation study in CRIEPI: Numerical activities

    International Nuclear Information System (INIS)

    Shiojiri, Hiroo; Ishida, Katsuhiko; Yabana, Shurichi; Hirata, Kazuta

    1992-01-01

Development of detailed numerical models of a bearing and the related isolation system is necessary for establishing a rational design of the bearing and the system. The developed numerical models should be validated with regard to the physical parameters and the basic assumptions by comparing experimental results with numerical ones. The numerical work being conducted in CRIEPI consists of the following items: (1) simple modeling of the behavior of the bearings capable of reproducing bearing test results, and validation of the bearing models by comparing numerical results with shaking table test results; (2) detailed three-dimensional modeling of single bearings with finite-element codes, and experimental validation of the models; (3) simple and detailed three-dimensional modeling of isolated buildings and experimental validation.

  2. Pile-Reinforcement Behavior of Cohesive Soil Slopes: Numerical Modeling and Centrifuge Testing

    Directory of Open Access Journals (Sweden)

    Liping Wang

    2013-01-01

Full Text Available Centrifuge model tests were conducted on pile-reinforced and unreinforced cohesive soil slopes to investigate the fundamental behavior and reinforcement mechanism. A finite element analysis model was established and confirmed to be effective in capturing the primary behavior of pile-reinforced slopes by comparing its predictions with experimental results. Thus, a comprehensive understanding of the stress-deformation response was obtained by combining the numerical and physical simulations. The response of the pile-reinforced slope was shown to be significantly affected by pile spacing, pile location, the restriction style of the pile end, and the inclination of the slope. The piles have a significant effect on the behavior of the reinforced slope, and the influenced area is described using a continuous surface, denoted the W-surface. The reinforcement mechanism is described using two basic concepts, the compression effect and the shear effect, referring respectively to the piles increasing the compression strain and decreasing the shear strain of the slope in comparison with the unreinforced slope. The pile-soil interaction induces a significant compression effect in the inner zone near the piles; this effect is transferred to the upper part of the slope, with the shear effect becoming prominent in preventing the sliding that would occur in an unreinforced slope.

  3. Numerical shaping of the ultrasonic wavelet

    International Nuclear Information System (INIS)

    Bonis, M.

    1991-01-01

Improving the performance and quality of ultrasonic testing requires numerical control of the shape of the driving signal applied to the piezoelectric transducer. This allows precise shaping of the ultrasonic field wavelet and corrections for the physical defects of the transducer, which are mainly due to the damper or the lens. It also does away with the need for accurate electrical matching. It then becomes feasible to characterize, a priori, the ultrasonic wavelet by means of temporal and/or spectral specifications and, subsequently, to use an adaptive algorithm to calculate the corresponding driving wavelet. Moreover, the versatility resulting from the numerical control of this wavelet allows it to be changed in real time during a test.
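One simple way to realize such numerical shaping, sketched here as an assumption rather than the paper's adaptive algorithm, is a Tikhonov-regularized inverse filter: given a (toy) transducer impulse response, compute the drive whose convolution with it reproduces a specified wavelet:

```python
import numpy as np

def driving_signal(desired, h, eps=1e-6):
    # Frequency-domain inverse filter with Tikhonov regularization:
    # find a drive x such that (h * x) approximates the desired wavelet.
    n = len(desired)
    H = np.fft.fft(h, n)
    D = np.fft.fft(desired)
    X = D * np.conj(H) / (np.abs(H)**2 + eps)
    return np.real(np.fft.ifft(X))

# Toy desired wavelet (Gabor pulse) and toy transducer impulse response
t = np.arange(64)
desired = np.exp(-0.5 * ((t - 32) / 4.0)**2) * np.cos(0.8 * (t - 32))
h = np.zeros(64)
h[0], h[1] = 1.0, 0.5           # well-conditioned illustrative response

x = driving_signal(desired, h)
out = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular conv.
```

Because the toy response has no spectral nulls, a tiny regularization suffices; a real transducer's band-limited response is exactly where an adaptive, constrained algorithm like the one described above earns its keep.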

  4. Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D

    Science.gov (United States)

    Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas

    2017-11-01

One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n)-degenerate ground spaces and an n^O(log n) algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states of frustration-free Hamiltonians the running time is Õ(nM(n)), where M(n) is the time required to multiply two n × n matrices.
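For context, the baseline such algorithms improve on is exponential-cost exact diagonalization. The sketch below computes ground energies of a small transverse-field Ising chain by brute force (purely illustrative; this is not the paper's RG algorithm, whose point is to avoid this 2^n scaling):

```python
import numpy as np

def tfi_ground_energy(n, h):
    # Ground energy of the open-chain transverse-field Ising Hamiltonian
    # H = -sum_i Z_i Z_{i+1} - h * sum_i X_i by dense diagonalization.
    # Cost is exponential in n: the 2^n x 2^n matrix is built explicitly.
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])

    def site_op(op, i):
        # Tensor the single-site operator `op` into position i of n sites
        out = np.array([[1.0]])
        for j in range(n):
            out = np.kron(out, op if j == i else I)
        return out

    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= site_op(Z, i) @ site_op(Z, i + 1)
    for i in range(n):
        H -= h * site_op(X, i)
    return np.linalg.eigvalsh(H)[0]
```

At zero field the ground energy is exactly -(n-1) (all spins aligned), a handy sanity check; turning on the field lowers the energy further. Already at n around 20 this approach is hopeless, which is what makes polynomial time guarantees like those above remarkable.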

  5. The Researchers' View of Scientific Rigor-Survey on the Conduct and Reporting of In Vivo Research.

    Science.gov (United States)

    Reichlin, Thomas S; Vogt, Lucile; Würbel, Hanno

    2016-01-01

    Reproducibility in animal research is alarmingly low, and a lack of scientific rigor has been proposed as a major cause. Systematic reviews found low reporting rates of measures against risks of bias (e.g., randomization, blinding), and a correlation between low reporting rates and overstated treatment effects. Reporting rates of measures against bias are thus used as a proxy measure for scientific rigor, and reporting guidelines (e.g., ARRIVE) have become a major weapon in the fight against risks of bias in animal research. Surprisingly, animal scientists have never been asked about their use of measures against risks of bias and how they report these in publications. Whether poor reporting reflects poor use of such measures, and whether reporting guidelines may effectively reduce risks of bias has therefore remained elusive. To address these questions, we asked in vivo researchers about their use and reporting of measures against risks of bias and examined how self-reports relate to reporting rates obtained through systematic reviews. An online survey was sent out to all registered in vivo researchers in Switzerland (N = 1891) and was complemented by personal interviews with five representative in vivo researchers to facilitate interpretation of the survey results. Return rate was 28% (N = 530), of which 302 participants (16%) returned fully completed questionnaires that were used for further analysis. According to the researchers' self-report, they use measures against risks of bias to a much greater extent than suggested by reporting rates obtained through systematic reviews. However, the researchers' self-reports are likely biased to some extent. Thus, although they claimed to be reporting measures against risks of bias at much lower rates than they claimed to be using these measures, the self-reported reporting rates were considerably higher than reporting rates found by systematic reviews. Furthermore, participants performed rather poorly when asked to

  6. Smoothing of Transport Plans with Fixed Marginals and Rigorous Semiclassical Limit of the Hohenberg-Kohn Functional

    Science.gov (United States)

    Cotar, Codina; Friesecke, Gero; Klüppelberg, Claudia

    2018-06-01

We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained-search wavefunction tends weakly, in the sense of probability measures, to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N = 2 (Cotar et al. in Commun Pure Appl Math 66:548-599, 2013). The correct limit problem has been derived in the physics literature by Seidl (Phys Rev A 60:4387-4395, 1999) and Seidl, Gori-Giorgi and Savin (Phys Rev A 75:042511 1-12, 2007); in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density, an infinite-dimensional quadratic expression in the wavefunction, by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.

  7. Muscle pH, rigor mortis and blood variables in Atlantic salmon transported in two types of well-boat.

    Science.gov (United States)

    Gatica, M C; Monti, G E; Knowles, T G; Gallo, C B

    2010-01-09

    Two systems for transporting live salmon (Salmo salar) were compared in terms of their effects on blood variables, muscle pH and rigor index: an 'open system' well-boat with recirculated sea water at 13.5 degrees C and a stocking density of 107 kg/m3 during an eight-hour journey, and a 'closed system' well-boat with water chilled from 16.7 to 2.1 degrees C and a stocking density of 243.7 kg/m3 during a seven-hour journey. Groups of 10 fish were sampled at each of four stages: in cages at the farm, in the well-boat after loading, in the well-boat after the journey and before unloading, and in the processing plant after they were pumped from the resting cages. At each sampling, the fish were stunned and bled by gill cutting. Blood samples were taken to measure lactate, osmolality, chloride, sodium, cortisol and glucose, and their muscle pH and rigor index were measured at death and three hours later. In the open system well-boat, the initial muscle pH of the fish decreased at each successive stage, and at the final stage they had a significantly lower initial muscle pH and more rapid onset of rigor than the fish transported on the closed system well-boat. At the final stage all the blood variables except glucose were significantly affected in the fish transported on both types of well-boat.

  8. Seismic behavior of breakwaters on complex ground by numerical tests: Liquefaction and post liquefaction ground settlements

    Science.gov (United States)

    Gu, Linlin; Zhang, Feng; Bao, Xiaohua; Shi, Zhenming; Ye, Guanlin; Ling, Xianzhang

    2018-04-01

A large number of breakwaters have been constructed along coasts to protect humans and infrastructures from tsunamis. There is a risk that the foundation soils of these structures may liquefy, or partially liquefy, during the earthquake preceding a tsunami, which would greatly reduce the structures' capacity to resist the tsunami. It is necessary to consider not only the soil's liquefaction behavior due to earthquake motions but also its post-liquefaction behavior, because the latter will affect the breakwater's capacity to resist an incoming tsunami. In this study, numerical tests based on a sophisticated constitutive model and a soil-water coupled finite element method are used to predict the mechanical behavior of breakwaters and the surrounding soils. Two real breakwaters subjected to two different seismic excitations are examined through numerical simulation. The simulation results show that earthquakes affect not only the immediate behavior of breakwaters and the surrounding soils but also their long-term settlements due to post-earthquake consolidation. A soil profile with thick clayey layers beneath liquefied soil is more vulnerable to tsunami than a soil profile with only sandy layers. Therefore, quantitatively evaluating the seismic behavior of breakwaters and surrounding soils is important for the design of breakwater structures to resist tsunamis.

  9. Code-experiment comparison on wall condensation tests in the presence of non-condensable gases-Numerical calculations for containment studies

    Energy Technology Data Exchange (ETDEWEB)

    Malet, J., E-mail: jeanne.malet@irsn.fr [Institut de Radioprotection et de Surete Nucleaire (IRSN), PSN-RES, SCA, BP 68, 91192 Gif-sur-Yvette (France); Porcheron, E.; Dumay, F.; Vendel, J. [Institut de Radioprotection et de Surete Nucleaire (IRSN), PSN-RES, SCA, BP 68, 91192 Gif-sur-Yvette (France)

    2012-12-15

    Highlights: • Steam condensation on walls has been investigated in the TOSQAN vessel. • Experiments on 7 different tests have been performed. • Different steam injections and wall temperatures are used. • Simulations are performed in 2D using the TONUS code. • Code-experiment comparisons at many different locations show a good agreement. - Abstract: During the course of a severe Pressurized Water Reactor accident, pressurization of the containment occurs and hydrogen can be produced by the reactor core oxidation and distributed in the containment according to convection flows and wall condensation. Filmwise wall condensation in the presence of non-condensable gases has attracted much interest, and extensive studies have been performed in the past. Some empirical correlations have shown their limits when extrapolated to different thermal-hydraulic conditions and to different geometries/scales. The French Institute for Radiological Protection and Nuclear Safety (IRSN) has developed a numerical tool and an experimental facility in order to investigate free convection flows in the presence of condensation. The objective of this paper is to present numerical results obtained on different wall condensation tests in the 7 m³ volume vessel (TOSQAN facility) and to compare them with the experimental ones. Seven tests are considered here, and code-experiment comparison is performed at many different locations, giving extensive insight into the code assessment for air-steam mixture flows involving wall condensation in the presence of non-condensable gases.

  10. Simulation of Wave Overtopping of Maritime Structures in a Numerical Wave Flume

    Directory of Open Access Journals (Sweden)

    Tiago C. A. Oliveira

    2012-01-01

    Full Text Available A numerical wave flume based on the particle finite element method (PFEM is applied to simulate wave overtopping for impermeable maritime structures. An assessment of the performance and robustness of the numerical wave flume is carried out for two different cases comparing numerical results with experimental data. In the first case, a well-defined benchmark test of a simple low-crested structure overtopped by regular nonbreaking waves is presented, tested in the lab, and simulated in the numerical wave flume. In the second case, state-of-the-art physical experiments of a trapezoidal structure placed on a sloping beach overtopped by regular breaking waves are simulated in the numerical wave flume. For both cases, main overtopping events are well detected by the numerical wave flume. However, nonlinear processes controlling the tests proposed, such as nonlinear wave generation, energy losses along the wave propagation track, wave reflection, and overtopping events, are reproduced with more accuracy in the first case. Results indicate that a numerical wave flume based on the PFEM can be applied as an efficient tool to supplement physical models, semiempirical formulations, and other numerical techniques to deal with overtopping of maritime structures.

  11. The MINERVA Software Development Process

    Science.gov (United States)

    Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.

    2017-01-01

    This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
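    The geo-containment check described above reduces, at its core, to a point-in-polygon test. As an illustration only, here is a textbook ray-casting sketch; it is not NASA's formally verified MINERVA algorithm, and the names and region are hypothetical:

```python
def inside_polygon(point, polygon):
    """Return True if `point` (x, y) lies inside `polygon`, a list of
    (x, y) vertices, using the even-odd ray-casting rule."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

region = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]  # hypothetical region
print(inside_polygon((1.0, 1.0), region))  # True
print(inside_polygon((5.0, 1.0), region))  # False
```

A verified implementation would additionally have to handle boundary cases (points exactly on an edge or vertex) with explicit, formally proven rules.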

  12. Numerical Application of a Stick-Slip Control and Experimental Analysis using a Test Rig

    Directory of Open Access Journals (Sweden)

    Pereira Leonardo D.

    2018-01-01

    Full Text Available Part of the process of exploration and development of an oil field consists of the drilling operations for oil and gas wells. Particularly for deep water and ultra deep water wells, the operation requires the control of a very flexible structure subjected to complex boundary conditions, such as the nonlinear interactions between the drill bit and the rock formation and between the drill string and the borehole wall. Within this complexity, the stick-slip phenomenon is a major component of torsional vibration, and it can excite both axial and lateral vibrations. Accordingly, the main goal of this paper is to address the torsional vibration problem on a numerical model of a test rig using a real-time conventional controller. The system contains two discs to which dry friction torques are applied. The dynamical behaviour was analysed with and without control strategies.
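    The stick-slip phenomenon central to this record can be illustrated with a minimal numerical sketch. The model below is a single-disc simplification with assumed parameter values, not the paper's two-disc rig or its controller: a disc of inertia J driven through a torsional spring K by a drive turning at constant speed, with Coulomb dry friction alternating between sticking and slipping:

```python
import math

# Single-disc torsional stick-slip sketch (illustrative parameters).
J, K, OMEGA = 1.0, 5.0, 1.0          # inertia, spring stiffness, drive speed
T_STATIC, T_KINETIC = 4.0, 3.0       # static / kinetic friction torques
DT, STEPS, V_EPS = 1e-3, 20000, 0.02 # time step, steps, stick velocity band

theta, omega = 0.0, 0.0              # disc angle and angular velocity
phases = []
for n in range(STEPS):
    spring = K * (OMEGA * n * DT - theta)           # torque from drive twist
    if abs(omega) < V_EPS and abs(spring) <= T_STATIC:
        omega = 0.0                                  # stick: friction holds disc
        phases.append("stick")
    else:
        sign = math.copysign(1.0, omega if abs(omega) >= V_EPS else spring)
        omega += (spring - T_KINETIC * sign) / J * DT  # slip: kinetic friction
        phases.append("slip")
    theta += omega * DT

print("stick fraction:", phases.count("stick") / STEPS)
```

The disc sticks while the spring torque stays below the static friction limit, then slips once it is exceeded; the alternation is the torsional limit cycle a controller would aim to suppress.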

  13. The rigorous stochastic matrix multiplication scheme for the calculations of reduced equilibrium density matrices of open multilevel quantum systems

    International Nuclear Information System (INIS)

    Chen, Xin

    2014-01-01

    Understanding the roles of the temporal and spatial structures of quantum functional noise in open multilevel quantum molecular systems attracts considerable theoretical interest. I want to establish a rigorous and general framework for functional quantum noises from the constructive and computational perspectives, i.e., how to generate the random trajectories to reproduce the kernel and path ordering of the influence functional with effective Monte Carlo methods for arbitrary spectral densities. This construction approach aims to unify the existing stochastic models to rigorously describe the temporal and spatial structure of Gaussian quantum noises. In this paper, I review the Euclidean imaginary time influence functional and propose the stochastic matrix multiplication scheme to calculate reduced equilibrium density matrices (REDM). In addition, I review and discuss the Feynman-Vernon influence functional according to the Gaussian quadratic integral, particularly its imaginary part, which is critical to the rigorous description of the quantum detailed balance. As a result, I establish the conditions under which the influence functional can be interpreted as the average of an exponential functional operator over real-valued Gaussian processes for open multilevel quantum systems. I also show the difference between the local and nonlocal phonons within this framework. With the stochastic matrix multiplication scheme, I compare the normalized REDM with the Boltzmann equilibrium distribution for open multilevel quantum systems.

  14. RIGOROUS PHOTOGRAMMETRIC PROCESSING OF CHANG'E-1 AND CHANG'E-2 STEREO IMAGERY FOR LUNAR TOPOGRAPHIC MAPPING

    Directory of Open Access Journals (Sweden)

    K. Di

    2012-07-01

    Full Text Available Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: (1) refining the EOPs by correcting the attitude angle bias, and (2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method (1), and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method (2). Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  15. Intentional and automatic processing of numerical information in mathematical anxiety: testing the influence of emotional priming.

    Science.gov (United States)

    Ashkenazi, Sarit

    2018-02-05

    Current theoretical approaches suggest that mathematical anxiety (MA) manifests itself as a weakness in quantity manipulations. This study is the first to examine automatic versus intentional processing of numerical information using the numerical Stroop paradigm in participants with high MA. To manipulate anxiety levels, we combined the numerical Stroop task with an affective priming paradigm. We took a group of college students with high MA and compared their performance to a group of participants with low MA. Under low anxiety conditions (neutral priming), participants with high MA showed relatively intact number processing abilities. However, under high anxiety conditions (mathematical priming), participants with high MA showed (1) higher processing of the non-numerical irrelevant information, which aligns with the theoretical view regarding deficits in selective attention in anxiety and (2) an abnormal numerical distance effect. These results demonstrate that abnormal, basic numerical processing in MA is context related.

  16. Numerical and experimental investigations on cavitation erosion

    Science.gov (United States)

    Fortes Patella, R.; Archer, A.; Flageul, C.

    2012-11-01

    A method is proposed to predict cavitation damage from cavitating flow simulations. For this purpose, a numerical process coupling cavitating flow simulations and erosion models was developed and applied to a two-dimensional (2D) hydrofoil tested at TUD (Darmstadt University of Technology, Germany) [1] and to a NACA 65012 hydrofoil tested at LMH-EPFL (Lausanne Polytechnic School) [2]. Cavitation erosion tests (pitting tests) were carried out, and 3D laser profilometry was used to analyze the surfaces damaged by cavitation [3]. The method allows evaluation of the pit characteristics, in particular the volume damage rates. The paper describes the developed erosion model and the technique of cavitation damage measurement, and presents some comparisons between experimental results and numerical damage predictions. The extent of cavitation erosion was correctly estimated for both hydrofoil geometries. The simulated qualitative influence of flow velocity, sigma value and gas content on cavitation damage agreed well with experimental observations.

  17. Experimental/numerical acoustic correlation of helicopter unsteady MANOEUVRES

    NARCIS (Netherlands)

    Gennaretti, Massimo; Bernardini, Giovanni; Hartjes, S.; Scandroglio, Alessandro; Riviello, Luca; Paolone, Enrico

    2016-01-01

    This paper presents one of the main objectives of WP1 of the Clean Sky GRC5 MANOEUVRES project, which consists of the correlation of ground noise data measured during flight tests with numerical predictions obtained by a numerical process aimed at the analysis of the acoustic field emitted by

  18. Rigorous derivation of porous-media phase-field equations

    Science.gov (United States)

    Schmuck, Markus; Kalliadasis, Serafim

    2017-11-01

    The evolution of interfaces in Complex heterogeneous Multiphase Systems (CheMSs) plays a fundamental role in a wide range of scientific fields, such as thermodynamic modelling of phase transitions and materials science, and as a computational tool for interfacial flow studies and material design. Here, we focus on phase-field equations in CheMSs such as porous media. To the best of our knowledge, we present the first rigorous derivation of error estimates for fourth order, upscaled, and nonlinear evolution equations. For CheMSs with heterogeneity ε, we obtain the convergence rate ε^(1/4), which governs the error between the solution of the new upscaled formulation and the solution of the microscopic phase-field problem. This error behaviour has recently been validated computationally. Due to the wide range of application of phase-field equations, we expect this upscaled formulation to allow for new modelling, analytic, and computational perspectives for interfacial transport and phase transformations in CheMSs. This work was supported by EPSRC, UK, through Grant Nos. EP/H034587/1, EP/L027186/1, EP/L025159/1, EP/L020564/1, EP/K008595/1, and EP/P011713/1 and from ERC via Advanced Grant No. 247031.

  19. Behavioral modeling of SRIM tables for numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Martinie, S., E-mail: sebastien.martinie@cea.fr; Saad-Saoud, T.; Moindjie, S.; Munteanu, D.; Autran, J.L., E-mail: jean-luc.autran@univ-amu.fr

    2014-03-01

    Highlights: • Behavioral modeling of SRIM data is performed on the basis of power polynomial fitting functions. • Fast and continuous numerical functions are proposed for the stopping power and projected range. • Functions have been successfully tested for a wide variety of ions and targets. • Typical accuracies below one percent have been obtained in the range 1 keV–1 GeV. - Abstract: This work describes a simple way to implement SRIM stopping power and range tabulated data in the form of fast and continuous numerical functions for intensive simulation. We provide here the methodology of this behavioral modeling as well as the details of the implementation and some numerical examples for ions in a silicon target. Developed functions have been successfully tested and used for the simulation of soft errors in microelectronics circuits.
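    The general approach described in the highlights, replacing a tabulated stopping-power curve with a fast, continuous power-polynomial fit, can be sketched as follows. The table values below are synthetic placeholders, not actual SRIM output, and the fit degree is an arbitrary choice:

```python
import numpy as np

# Synthetic stopping-power table (NOT SRIM data), energies spanning 1 keV-1 GeV.
energy_keV = np.array([1.0, 10.0, 100.0, 1e3, 1e4, 1e5, 1e6])
stopping   = np.array([0.8, 2.5, 7.0, 15.0, 9.0, 3.0, 1.2])   # arbitrary units

# Fit a power polynomial in log-log space, so the model stays smooth and
# positive over many decades of energy.
coeffs = np.polyfit(np.log10(energy_keV), np.log10(stopping), deg=5)

def stopping_model(e_keV):
    """Continuous, fast stand-in for the table lookup."""
    return 10.0 ** np.polyval(coeffs, np.log10(e_keV))

rel_err = np.abs(stopping_model(energy_keV) - stopping) / stopping
print("max relative error at the table points:", rel_err.max())
```

Once fitted, evaluating the polynomial is a handful of multiply-adds, which is what makes this attractive for intensive Monte Carlo simulation compared with repeated table interpolation.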

  20. Behavioral modeling of SRIM tables for numerical simulation

    International Nuclear Information System (INIS)

    Martinie, S.; Saad-Saoud, T.; Moindjie, S.; Munteanu, D.; Autran, J.L.

    2014-01-01

    Highlights: • Behavioral modeling of SRIM data is performed on the basis of power polynomial fitting functions. • Fast and continuous numerical functions are proposed for the stopping power and projected range. • Functions have been successfully tested for a wide variety of ions and targets. • Typical accuracies below one percent have been obtained in the range 1 keV–1 GeV. - Abstract: This work describes a simple way to implement SRIM stopping power and range tabulated data in the form of fast and continuous numerical functions for intensive simulation. We provide here the methodology of this behavioral modeling as well as the details of the implementation and some numerical examples for ions in a silicon target. Developed functions have been successfully tested and used for the simulation of soft errors in microelectronics circuits.

  1. An extended continuum model considering optimal velocity change with memory and numerical tests

    Science.gov (United States)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed that takes into account optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation considering optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow efficiently. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the drawbacks of relying on historical information, thereby improving traffic flow stability and reducing cars' energy consumption.
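    The optimal-velocity (OV) idea underlying the continuum model can be illustrated with a discrete car-following sketch. This is our own toy discretisation with assumed parameters and a crude memory term, not the paper's extended continuum model: each car relaxes toward an optimal velocity V of its headway, plus a correction based on how V has changed over the last few steps:

```python
import math

N, L = 20, 40.0                 # cars on a ring road of length L (assumed)
A, LAM, M = 1.0, 0.3, 10        # sensitivity, memory gain, memory steps
DT, STEPS = 0.05, 2000

def V(dx):                      # a standard OV function of the headway dx
    return math.tanh(dx - 2.0) + math.tanh(2.0)

x = [i * L / N for i in range(N)]
x[0] += 0.5                     # small perturbation of one car's position
v = [V(L / N)] * N              # start at the uniform-flow speed
hist = []                       # headway history for the memory term

for _ in range(STEPS):
    dx = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    hist.append(dx)
    old = hist[-M - 1] if len(hist) > M else dx
    acc = [A * (V(dx[i]) - v[i]) + LAM * (V(dx[i]) - V(old[i]))
           for i in range(N)]
    v = [max(0.0, v[i] + acc[i] * DT) for i in range(N)]
    x = [(x[i] + v[i] * DT) % L for i in range(N)]

print("mean speed after the run:", sum(v) / N)
```

Whether the initial perturbation grows into a jam or is damped out depends on the sensitivity and memory parameters, which is the stability question the record's linear analysis answers for the continuum version.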

  2. Numerical investigation of sixth order Boussinesq equation

    Science.gov (United States)

    Kolkovska, N.; Vucheva, V.

    2017-10-01

    We propose a family of conservative finite difference schemes for the Boussinesq equation with sixth order dispersion terms. The schemes are of second order of approximation. The method is conditionally stable with a mild restriction τ = O(h) on the step sizes. Numerical tests are performed for quadratic and cubic nonlinearities. The numerical experiments show second order of convergence of the discrete solution to the exact one.
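    The second order of convergence claimed for the scheme can be checked empirically by halving the step and estimating the observed order p = log2(e_h / e_{h/2}). The sketch below applies this generic check to a simple second-order central difference, not to the Boussinesq scheme itself:

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = math.cos(1.0)                        # d/dx sin(x) at x = 1
e_h   = abs(central_diff(math.sin, 1.0, 1e-2) - exact)
e_h2  = abs(central_diff(math.sin, 1.0, 5e-3) - exact)
order = math.log2(e_h / e_h2)                # observed convergence order
print(f"observed order = {order:.2f}")
```

For a genuinely second-order method the error ratio approaches 4 as the step is halved, so the estimated order tends to 2; the same two-grid test applied to the discrete Boussinesq solution would verify the scheme's stated accuracy.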

  3. Trends in software testing

    CERN Document Server

    Mohanty, J; Balakrishnan, Arunkumar

    2017-01-01

    This book is focused on the advancements in the field of software testing and the innovative practices that the industry is adopting. Considering the widely varied nature of software testing, the book addresses contemporary aspects that are important for both academia and industry. There are dedicated chapters on seamless high-efficiency frameworks, automation on regression testing, software by search, and system evolution management. There are a host of mathematical models that are promising for software quality improvement by model-based testing. There are three chapters addressing this concern. Students and researchers in particular will find these chapters useful for their mathematical strength and rigor. Other topics covered include uncertainty in testing, software security testing, testing as a service, test technical debt (or test debt), disruption caused by digital advancement (social media, cloud computing, mobile application and data analytics), and challenges and benefits of outsourcing. The book w...

  4. Development of Pelton turbine using numerical simulation

    International Nuclear Information System (INIS)

    Patel, K; Patel, B; Yadav, M; Foggia, T

    2010-01-01

    This paper describes recent research and development activities in the field of Pelton turbine design. The flow inside a Pelton turbine is highly complex due to its multiphase (mixture of air and water) and free-surface nature. Numerical calculation is useful to understand the flow physics as well as the effect of geometry on the flow. The optimized design is obtained using an in-house optimization loop. Either single phase or two phase unsteady numerical calculations can be performed. Numerical results are used to visualize the flow pattern in the water passage and to predict the performance of the Pelton turbine at full load as well as at part load. Model tests are conducted to determine the performance of the turbine, and they show good agreement with the numerically predicted performance.

  5. Development of Pelton turbine using numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Patel, K; Patel, B; Yadav, M [Hydraulic Engineer, ALSTOM Hydro R and D India Ltd., GIDC Maneja, Vadodara - 390 013, Gujarat (India); Foggia, T, E-mail: patel@power.alstom.co [Hydraulic Engineer, Alstom Hydro France, Etablissement de Grenoble, 82, avenue Leon Blum BP 75, 38041 Grenoble Cedex (France)

    2010-08-15

    This paper describes recent research and development activities in the field of Pelton turbine design. The flow inside a Pelton turbine is highly complex due to its multiphase (mixture of air and water) and free-surface nature. Numerical calculation is useful to understand the flow physics as well as the effect of geometry on the flow. The optimized design is obtained using an in-house optimization loop. Either single phase or two phase unsteady numerical calculations can be performed. Numerical results are used to visualize the flow pattern in the water passage and to predict the performance of the Pelton turbine at full load as well as at part load. Model tests are conducted to determine the performance of the turbine, and they show good agreement with the numerically predicted performance.

  6. Development of Pelton turbine using numerical simulation

    Science.gov (United States)

    Patel, K.; Patel, B.; Yadav, M.; Foggia, T.

    2010-08-01

    This paper describes recent research and development activities in the field of Pelton turbine design. The flow inside a Pelton turbine is highly complex due to its multiphase (mixture of air and water) and free-surface nature. Numerical calculation is useful to understand the flow physics as well as the effect of geometry on the flow. The optimized design is obtained using an in-house optimization loop. Either single phase or two phase unsteady numerical calculations can be performed. Numerical results are used to visualize the flow pattern in the water passage and to predict the performance of the Pelton turbine at full load as well as at part load. Model tests are conducted to determine the performance of the turbine, and they show good agreement with the numerically predicted performance.

  7. Numerical study of glare spot phase Doppler anemometry

    Science.gov (United States)

    Hespel, C.; Ren, K. F.; Gréhan, G.; Onofri, F.

    2008-03-01

    Phase Doppler anemometry (PDA) has been developed to measure simultaneously the velocity and the size of droplets. When the concentration of particles is high, tightly focused beams must be used, as in the dual burst PDA. The latter permits access to the refractive index of the particle, but the effect of the wave front curvature of the incident beams becomes evident. In this paper, we introduce a glare spot phase Doppler anemometry which uses two large beams. The images of the particle formed by the reflected and refracted light, known as glare spots, are separated in space. When a particle passes through the probe volume, the two parts in a signal obtained by a detector in the forward direction are then separated in time. If two detectors are used, the phase differences and the intensity ratios between the two signals, as well as the distance between the reflected and refracted spots, can be obtained. These measured values provide information about the particle diameter and its refractive index, as well as its two velocity components. This paper is devoted to the numerical study of such a configuration with two theoretical models: geometrical optics and a rigorous electromagnetic solution.

  8. Numerical studies of fermionic field theories at large-N

    International Nuclear Information System (INIS)

    Dickens, T.A.

    1987-01-01

    A description of an algorithm, which may be used to study large-N theories with or without fermions, is presented. As an initial test of the method, the spectrum of continuum QCD in 1 + 1 dimensions is determined and compared to previously obtained results. Exact solutions of 1 + 1 dimensional lattice versions of the free fermion theory, the Gross-Neveu model, and QCD are obtained. Comparison of these exact results with results from the numerical algorithm is used to test the algorithm and, more importantly, to determine the errors incurred from the approximations used in the numerical technique. Numerical studies of the above three lattice theories in higher dimensions are also presented. The results are again compared to exact solutions for free fermions and the Gross-Neveu model; perturbation theory is used to derive expansions with which the numerical results for QCD may be compared. The numerical algorithm may also be used to study the euclidean formulation of lattice gauge theories. Results for 1 + 1 dimensional euclidean lattice QCD are compared to the exact solution of this model.

  9. Standards and Methodological Rigor in Pulmonary Arterial Hypertension Preclinical and Translational Research.

    Science.gov (United States)

    Provencher, Steeve; Archer, Stephen L; Ramirez, F Daniel; Hibbert, Benjamin; Paulin, Roxane; Boucherat, Olivier; Lacasse, Yves; Bonnet, Sébastien

    2018-03-30

    Despite advances in our understanding of the pathophysiology and the management of pulmonary arterial hypertension (PAH), significant therapeutic gaps remain for this devastating disease. Yet, few innovative therapies beyond the traditional pathways of endothelial dysfunction have reached clinical trial phases in PAH. Although there are inherent limitations of the currently available models of PAH, the leaky pipeline of innovative therapies relates, in part, to flawed preclinical research methodology, including lack of rigour in trial design, incomplete invasive hemodynamic assessment, and lack of careful translational studies that replicate randomized controlled trials in humans with attention to adverse effects and benefits. Rigorous methodology should include the use of prespecified eligibility criteria, sample sizes that permit valid statistical analysis, randomization, blinded assessment of standardized outcomes, and transparent reporting of results. Better design and implementation of preclinical studies can minimize inherent flaws in the models of PAH, reduce the risk of bias, and enhance external validity and our ability to distinguish truly promising therapies from many false-positive or overstated leads. Ideally, preclinical studies should use advanced imaging, study several preclinical pulmonary hypertension models, or correlate rodent and human findings and consider the fate of the right ventricle, which is the major determinant of prognosis in human PAH. Although these principles are widely endorsed, empirical evidence suggests that such rigor is often lacking in pulmonary hypertension preclinical research. The present article discusses the pitfalls in the design of preclinical pulmonary hypertension trials and discusses opportunities to create preclinical trials with improved predictive value in guiding early-phase drug development in patients with PAH, which will need support not only from researchers, peer reviewers, and editors but also from

  10. Use of Silicon Carbide as Beam Intercepting Device Material: Tests, Issues and Numerical Simulations

    CERN Document Server

    Delonca, M; Gil Costa, M; Vacca, A

    2014-01-01

    Silicon Carbide (SiC) stands as one of the most promising ceramic materials with respect to its thermal shock resistance and mechanical strength. It has hence been considered as a candidate material for the development of higher performance beam intercepting devices at CERN. Its brazing with a metal counterpart has been tested and characterized by means of microstructural and ultrasound techniques. Despite the positive results, its use has to be evaluated with care, due to the strong evidence in the literature of large and permanent volumetric expansion, called swelling, under the effect of neutron and ion irradiation. This may cause premature and sudden failure, and can be mitigated to some extent by operating at high temperature. For this reason, limited information is available for irradiation below 100°C, which is the typical temperature of interest for beam intercepting devices like dumps or collimators. This paper describes the brazing campaign carried out at CERN, the results, and the theoretical and numeric...

  11. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron (LEP) collider near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  12. Case studies in the numerical solution of oscillatory integrals

    International Nuclear Information System (INIS)

    Adam, G.

    1992-06-01

    The numerical solution of 53,249 test integrals belonging to nine parametric classes was attempted by two computer codes: EAQWOM (Adam and Nobile, IMA Journ. Numer. Anal. (1991) 11, 271-296) and D01ANF (Mark 13, 1988) from the NAG library software. For the considered test integrals, EAQWOM was found to be superior to D01ANF in terms of robustness, reliability, and informative user diagnostics in case of failure. (author). 9 refs, 3 tabs
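    The difficulty such codes address can be seen with a naive rule: a general-purpose quadrature formula must resolve every oscillation of the integrand. The sketch below is an illustration of that cost, not the EAQWOM or NAG algorithms themselves; it applies composite Simpson's rule to the integral of cos(OMEGA*x) over [0, 1], whose exact value is sin(OMEGA)/OMEGA:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

OMEGA = 200.0                                # ~32 oscillations on [0, 1]
exact = math.sin(OMEGA) / OMEGA
errors = []
for n in (40, 400, 4000):                    # coarse to fine grids
    approx = simpson(lambda x: math.cos(OMEGA * x), 0.0, 1.0, n)
    errors.append(abs(approx - exact))
    print(f"n = {n:5d}  error = {errors[-1]:.2e}")
```

The error only collapses once the grid places many points per period; specialised oscillatory quadrature methods avoid this cost by building the oscillation into the rule's weights.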

  13. Is Collaborative, Community-Engaged Scholarship More Rigorous than Traditional Scholarship? On Advocacy, Bias, and Social Science Research

    Science.gov (United States)

    Warren, Mark R.; Calderón, José; Kupscznk, Luke Aubry; Squires, Gregory; Su, Celina

    2018-01-01

    Contrary to the charge that advocacy-oriented research cannot meet social science research standards because it is inherently biased, the authors of this article argue that collaborative, community-engaged scholarship (CCES) must meet high standards of rigor if it is to be useful to support equity-oriented, social justice agendas. In fact, they…

  14. A "new" approach to the quantitative statistical dynamics of plasma turbulence: The optimum theory of rigorous bounds on steady-state transport

    International Nuclear Information System (INIS)

    Krommes, J.A.; Kim, Chang-Bae

    1990-06-01

    The fundamental problem in the theory of turbulent transport is to find the flux Γ of a quantity such as heat. Methods based on statistical closures are mired in conceptual controversies and practical difficulties. However, it is possible to bound Γ by employing constraints derived rigorously from the equations of motion. Brief reviews of the general theory and its application to passive advection are given. Then, a detailed application is made to anomalous resistivity generated by self-consistent turbulence in a reversed-field pinch. A nonlinear variational principle for an upper bound on the turbulent electromotive force for fixed current is formulated from the magnetohydrodynamic equations in cylindrical geometry. Numerical solution of a case constrained solely by energy balance leads to a reasonable bound and nonlinear eigenfunctions that share intriguing features with experimental data: the dominant mode numbers appear to be correct, and field reversal is predicted at reasonable values of the pinch parameter. Although open questions remain, upon considering all bounding calculations to date one can conclude, remarkably, that global energy balance constrains transport sufficiently so that bounds derived therefrom are not unreasonable and that bounding calculations are feasible even for involved practical problems. The potential of the method has hardly been tapped; it provides a fertile area for future research. 29 refs

  16. Guidelines for conducting rigorous health care psychosocial cross-cultural/language qualitative research.

    Science.gov (United States)

    Arriaza, Pablo; Nedjat-Haiem, Frances; Lee, Hee Yun; Martin, Shadi S

    2015-01-01

    The purpose of this article is to synthesize and chronicle the authors' experiences as four bilingual and bicultural researchers, each experienced in conducting cross-cultural/cross-language qualitative research. Through narrative descriptions of experiences with Latinos, Iranians, and Hmong refugees, the authors discuss their rewards, challenges, and methods of enhancing rigor, trustworthiness, and transparency when conducting cross-cultural/cross-language research. The authors discuss and explore how to effectively manage cross-cultural qualitative data, how to effectively use interpreters and translators, how to identify best methods of transcribing data, and the role of creating strong community relationships. The authors provide guidelines for health care professionals to consider when engaging in cross-cultural qualitative research.

  17. Testing the effects of basic numerical implementations of water migration on models of subduction dynamics

    Science.gov (United States)

    Quinquis, M. E. T.; Buiter, S. J. H.

    2014-06-01

    Subduction of oceanic lithosphere brings water into the Earth's upper mantle. Previous numerical studies have shown how slab dehydration and mantle hydration can impact the dynamics of a subduction system by allowing a more vigorous mantle flow and promoting localisation of deformation in the lithosphere and mantle. The depths at which dehydration reactions occur in the hydrated portions of the slab are well constrained in these models by thermodynamic calculations. However, computational models use different numerical schemes to simulate the migration of free water. We aim to show the influence of the numerical scheme of free water migration on the dynamics of the upper mantle and more specifically the mantle wedge. We investigate the following three simple migration schemes with a finite-element model: (1) element-wise vertical migration of free water, occurring independently of the flow of the solid phase; (2) an imposed vertical free water velocity; and (3) a Darcy velocity, where the free water velocity is a function of the pressure gradient caused by the difference in density between water and the surrounding rocks. In addition, the flow of the solid material field also moves the free water in the imposed vertical velocity and Darcy schemes. We first test the influence of the water migration scheme using a simple model that simulates the sinking of a cold, hydrated cylinder into a dry, warm mantle. We find that the free water migration scheme has only a limited impact on the water distribution after 1 Myr in these models. We next investigate slab dehydration and mantle hydration with a thermomechanical subduction model that includes brittle behaviour and viscous water-dependent creep flow laws. Our models demonstrate that the bound water distribution is not greatly influenced by the water migration scheme whereas the free water distribution is.
We find that a bound water-dependent creep flow law results in a broader area of hydration in the mantle wedge, which
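As an illustration of migration scheme (1) described above, here is a minimal sketch, in Python rather than the authors' finite-element code, of element-wise vertical free-water migration on a one-dimensional column of cells. The cell layout and the "all water moves one cell up per step" rule are simplifying assumptions for illustration only.

```python
def migrate_up(column):
    """Scheme (1) sketch: element-wise vertical migration of free water.

    column[0] is the shallowest cell; each call moves the free water in
    every cell one cell upward, independently of the solid flow, and
    water already in the top cell stays there. Total water is conserved.
    """
    new = [0.0] * len(column)
    for i, water in enumerate(column):
        new[max(i - 1, 0)] += water
    return new

# Free water released at depth rises cell by cell toward the wedge.
state = [0.0, 0.0, 3.0]    # water content per cell, top to bottom
state = migrate_up(state)  # -> [0.0, 3.0, 0.0]
```

Repeated application moves the water parcel to the surface cell, where it accumulates; a Darcy-style scheme (3) would instead compute a per-cell velocity from the local pressure gradient.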

  18. Testing the effects of the numerical implementation of water migration on models of subduction dynamics

    Science.gov (United States)

    Quinquis, M. E. T.; Buiter, S. J. H.

    2013-10-01

    Subduction of oceanic lithosphere brings water into Earth's upper mantle. Previous numerical studies have shown how slab dehydration and mantle hydration can impact the dynamics of a subduction system by allowing a more vigorous mantle flow and promoting localisation of deformation in the lithosphere and mantle. The depths at which dehydration reactions occur in the hydrated portions of the slab are well constrained in these models by thermodynamic calculations. However, the mechanism by which free water migrates in the mantle is incompletely known. Therefore, models use different numerical schemes to model the migration of free water. We aim to show the influence of the numerical scheme of free water migration on the dynamics of the upper mantle and more specifically the mantle wedge. We investigate the following three migration schemes with a finite-element model: (1) element-wise vertical migration of free water, occurring independently of the material flow; (2) an imposed vertical free water velocity; and (3) a Darcy velocity, where the free water velocity is calculated as a function of the pressure gradient arising from the density difference between water and the surrounding rocks. In addition, the material flow field also moves the free water in the imposed vertical velocity and Darcy schemes. We first test the influence of the water migration scheme using a simple Stokes flow model that simulates the sinking of a cold hydrated cylinder into a hot dry mantle. We find that the free water migration scheme has only a limited impact on the water distribution after 1 Myr in these models. We next investigate slab dehydration and mantle hydration with a thermomechanical subduction model that includes brittle behaviour and viscous water-dependent creep flow laws. Our models show how the bound water distribution is not greatly influenced by the water migration scheme whereas the free water distribution is. We find that a water-dependent creep flow law results in a broader area of hydration in the mantle

  19. Reconsideration of the sequence of rigor mortis through postmortem changes in adenosine nucleotides and lactic acid in different rat muscles.

    Science.gov (United States)

    Kobayashi, M; Takatori, T; Iwadate, K; Nakajima, M

    1996-10-25

    We examined the changes in adenosine triphosphate (ATP), lactic acid, adenosine diphosphate (ADP) and adenosine monophosphate (AMP) in five different rat muscles after death. Rigor mortis has been thought to proceed at the same rate in all muscles after death, and hence to be completed sooner in small muscles than in large ones. In this study we found that the rate of decrease in ATP was significantly different in each muscle. The greatest drop in ATP was observed in the masseter muscle. These findings contradict the conventional theory of rigor mortis. Similarly, the rates of change in ADP and lactic acid, which are thought to be related to the consumption or production of ATP, were different in each muscle. However, the rate of change of AMP was the same in each muscle.

  20. Development of analytical and numerical models for the assessment and interpretation of hydrogeological field tests

    Energy Technology Data Exchange (ETDEWEB)

    Mironenko, V.A.; Rumynin, V.G.; Konosavsky, P.K. [St. Petersburg Mining Inst. (Russian Federation); Pozdniakov, S.P.; Shestakov, V.M. [Moscow State Univ. (Russian Federation); Roshal, A.A. [Geosoft-Eastlink, Moscow (Russian Federation)

    1994-07-01

    Mathematical models of the flow and tracer tests in fractured aquifers are being developed for the further study of radioactive waste migration in ground water at the Lake Area, which is associated with one of the waste disposal sites in Russia. The choice of testing methods, tracer types (chemical or thermal) and the appropriate models are determined by the nature of the ongoing ground-water pollution processes and the hydrogeological features of the site under consideration. Special importance is attached to the increased density of wastes as well as to the possible redistribution of solutes both in the liquid phase and in the absorbed state (largely, on fracture surfaces). This allows for studying physical-and-chemical (hydrogeochemical) interaction parameters which are hard to obtain (considering the fractured structure of the rock mass) in the laboratory. Moreover, a theoretical substantiation is being given to the field methods of studying the properties of a fractured stratum aimed at the further construction of the drainage system or the subsurface flow barrier (cutoff wall), as well as the monitoring system that will evaluate the reliability of these ground-water protection measures. The proposed mathematical models are based on a tight combination of analytical and numerical methods, the former being preferred in solving the principal (2D axisymmetrical) class of the problems. The choice of appropriate problems is based on the close feedback with subsequent field tests in the Lake Area. 63 refs.

  2. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. In the frame of these Site Investigations, Site Descriptive Models are produced. These models are the result of the interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in terms of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints the test conditions were set up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model.
By separating the spatial variability into two parts, one

  3. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

    Full Text Available Currently there exists no mechanically consistent “numerically exact” implementation of static and dynamic Coulomb friction for general soft particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a “numerically exact” computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.
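For context, the Cundall-Strack model referenced above tracks an incremental tangential contact spring and caps its force at the Coulomb limit. Below is a minimal one-contact, one-step sketch; the parameter names and the explicit update are illustrative and are not taken from the paper.

```python
import math

def cundall_strack_tangential(f_n, v_t, dt, k_t, mu, spring=0.0):
    """One explicit update of a Cundall-Strack tangential spring.

    f_n: normal force magnitude; v_t: relative tangential velocity;
    k_t: tangential stiffness; mu: friction coefficient;
    spring: accumulated spring elongation from previous steps.
    Returns (tangential force, updated spring elongation).
    """
    spring += v_t * dt               # stretch the contact spring
    f_t = -k_t * spring              # trial (sticking) friction force
    f_max = mu * abs(f_n)            # Coulomb limit
    if abs(f_t) > f_max:             # sliding: cap force, rescale spring
        f_t = math.copysign(f_max, f_t)
        spring = -f_t / k_t
    return f_t, spring
```

Small relative displacements leave the contact sticking (force proportional to the spring stretch); once the trial force exceeds mu times the normal force, the contact slides at the capped Coulomb force.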

  4. Testing Hubbert

    International Nuclear Information System (INIS)

    Brandt, Adam R.

    2007-01-01

    The Hubbert theory of oil depletion, which states that oil production in large regions follows a bell-shaped curve over time, has been cited as a method to predict the future of global oil production. However, the assumptions of the Hubbert method have never been rigorously tested with a large, publicly available data set. In this paper, three assumptions of the modern Hubbert theory are tested using data from 139 oil producing regions. These regions are sub-national (United States state-level, United States regional-level), national, and multi-national (subcontinental and continental) in scale. We test the assumption that oil production follows a bell-shaped curve by generating best-fitting curves for each region using six models and comparing the quality of fit across models. We also test the assumptions that production over time in a region tends to be symmetric, and that production is more bell-shaped in larger regions than in smaller regions
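The bell-shaped curve assumption tested in this paper is commonly formalized as the derivative of a logistic, the "Hubbert curve". A small sketch with illustrative parameter values (not fitted to any of the 139 regions) shows two of the tested properties: symmetry about the peak and a cumulative production that approaches the ultimate recoverable resource.

```python
import math

def hubbert_rate(t, urr, b, t_peak):
    """Hubbert curve: derivative of a logistic with ultimate recoverable
    resource urr, steepness b, and peak time t_peak."""
    x = math.exp(-b * (t - t_peak))
    return urr * b * x / (1.0 + x) ** 2

# Illustrative region: URR = 200 units, peak in year 50, 100-year window.
years = range(101)
rates = [hubbert_rate(t, urr=200.0, b=0.15, t_peak=50.0) for t in years]

peak_year = max(years, key=lambda t: rates[t])  # symmetric about t_peak
cumulative = sum(rates)  # unit time steps, so the sum approximates URR
```

Testing the symmetry assumption on real data then amounts to comparing fits of this symmetric form against asymmetric alternatives, as the paper does with six models.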

  5. Useful, Used, and Peer Approved: The Importance of Rigor and Accessibility in Postsecondary Research and Evaluation. WISCAPE Viewpoints

    Science.gov (United States)

    Vaade, Elizabeth; McCready, Bo

    2012-01-01

    Traditionally, researchers, policymakers, and practitioners have perceived a tension between rigor and accessibility in quantitative research and evaluation in postsecondary education. However, this study indicates that both producers and consumers of these studies value high-quality work and clear findings that can reach multiple audiences. The…

  6. Proof testing of CANDU concrete containment structures

    International Nuclear Information System (INIS)

    Pandey, M.D.

    1996-05-01

    Prior to commissioning of a CANDU reactor, a proof pressure test is required to demonstrate the structural integrity of the containment envelope. The test pressure specified by AECB Regulatory Document R-7 (1991) was selected without a rigorous consideration of uncertainties associated with estimates of accident pressure and containment resistance. This study was undertaken to develop a reliability-based philosophy for defining proof testing requirements that are consistent with the current limit states design code for concrete containments (CSA N287.3). It was shown that the updated probability of failure after a successful test is always less than the original estimate
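The reliability argument in this abstract, that the failure probability updated on a survived proof test never exceeds the prior, can be illustrated with a toy model in which the containment resistance is normally distributed. All numbers below are illustrative and are not taken from R-7 or CSA N287.3.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_prob_after_proof(p_accident, p_test, mu_r, sigma_r):
    """P(resistance < p_accident | resistance survived p_test),
    for a normally distributed resistance with mean mu_r, sd sigma_r."""
    pa = norm_cdf((p_accident - mu_r) / sigma_r)  # prior failure probability
    pt = norm_cdf((p_test - mu_r) / sigma_r)      # P(proof test would fail)
    return max(0.0, pa - pt) / (1.0 - pt)

# Pressures normalized to the accident pressure (= 1.0).
prior = norm_cdf((1.0 - 1.5) / 0.2)                # no test information
posterior = failure_prob_after_proof(1.0, 0.9, 1.5, 0.2)
```

A survived test truncates the low tail of the resistance distribution, so the posterior is always at most the prior; if the proof pressure meets or exceeds the accident pressure, the posterior in this deterministic-load toy model drops to zero.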

  7. Evaluation of conceptual and numerical models for arsenic mobilization and attenuation during managed aquifer recharge.

    Science.gov (United States)

    Wallis, Ilka; Prommer, Henning; Simmons, Craig T; Post, Vincent; Stuyfzand, Pieter J

    2010-07-01

    Managed Aquifer Recharge (MAR) is promoted as an attractive technique to meet growing water demands. An impediment to MAR applications, where oxygenated water is recharged into anoxic aquifers, is the potential mobilization of trace metals (e.g., arsenic). While conceptual models for arsenic transport under such circumstances exist, they are generally not rigorously evaluated through numerical modeling, especially at field-scale. In this work, geochemical data from an injection experiment in The Netherlands, where the introduction of oxygenated water into an anoxic aquifer mobilized arsenic, was used to develop and evaluate conceptual and numerical models of arsenic release and attenuation under field-scale conditions. Initially, a groundwater flow and nonreactive transport model was developed. Subsequent reactive transport simulations focused on the description of the temporal and spatial evolution of the redox zonation. The calibrated model was then used to study and quantify the transport of arsenic. In the model that best reproduced field observations, the fate of arsenic was simulated by (i) release via codissolution of arsenopyrite, stoichiometrically linked to pyrite oxidation, (ii) kinetically controlled oxidation of dissolved As(III) to As(V), and (iii) As adsorption via surface complexation on neo-precipitated iron oxides.

  8. Analysis of Earthquake Catalogs for CSEP Testing Region Italy

    International Nuclear Information System (INIS)

    Peresan, A.; Romashkova, L.; Nekrasova, A.; Kossobokov, V.; Panza, G.F.

    2010-07-01

    A comprehensive analysis shows that the set of catalogs provided by the Istituto Nazionale di Geofisica e Vulcanologia (INGV, Italy) as the authoritative database for the Collaboratory for the Study of Earthquake Predictability - Testing Region Italy (CSEP-TRI), is hardly a unified one acceptable for the necessary tuning of models/algorithms, as well as for running rigorous prospective predictability tests at intermediate- or long-term scale. (author)

  9. A numerical library in Java for scientists and engineers

    CERN Document Server

    Lau, Hang T

    2003-01-01

    At last researchers have an inexpensive library of Java-based numeric procedures for use in scientific computation. The first and only book of its kind, A Numeric Library in Java for Scientists and Engineers is a translation into Java of the library NUMAL (NUMerical procedures in ALgol 60). This groundbreaking text presents procedural descriptions for linear algebra, ordinary and partial differential equations, optimization, parameter estimation, mathematical physics, and other tools that are indispensable to any dynamic research group. The book offers test programs that allow researchers to execute the examples provided; users are free to construct their own tests and apply the numeric procedures to them in order to observe a successful computation or simulate failure. The entry for each procedure is logically presented, with name, usage parameters, and Java code included. This handbook serves as a powerful research tool, enabling the performance of critical computations in Java. It stands as a cost-effi...

  10. Hybrid Numerical-Analytical Scheme for Calculating Elastic Wave Diffraction in Locally Inhomogeneous Waveguides

    Science.gov (United States)

    Glushkov, E. V.; Glushkova, N. V.; Evdokimov, A. A.

    2018-01-01

    Numerical simulation of traveling wave excitation, propagation, and diffraction in structures with local inhomogeneities (obstacles) is computationally expensive due to the need for mesh-based approximation of extended domains with the rigorous account for the radiation conditions at infinity. Therefore, hybrid numerical-analytic approaches are being developed based on the conjugation of a numerical solution in a local vicinity of the obstacle and/or source with an explicit analytic representation in the remaining semi-infinite external domain. However, in standard finite-element software, such a coupling with the external field, moreover, in the case of multimode expansion, is generally not provided. This work proposes a hybrid computational scheme that allows realization of such a conjugation using a standard software. The latter is used to construct a set of numerical solutions used as the basis for the sought solution in the local internal domain. The unknown expansion coefficients on this basis and on normal modes in the semi-infinite external domain are then determined from the conditions of displacement and stress continuity at the boundary between the two domains. We describe the implementation of this approach in the scalar and vector cases. To evaluate the reliability of the results and the efficiency of the algorithm, we compare it with a semianalytic solution to the problem of traveling wave diffraction by a horizontal obstacle, as well as with a finite-element solution obtained for a limited domain artificially restricted using absorbing boundaries. As an example, we consider the incidence of a fundamental antisymmetric Lamb wave onto surface and partially submerged elastic obstacles. It is noted that the proposed hybrid scheme can also be used to determine the eigenfrequencies and eigenforms of resonance scattering, as well as the characteristics of traveling waves in embedded waveguides.

  11. Experimental verification of numerical calculations of railway passenger seats

    Science.gov (United States)

    Ligaj, B.; Wirwicki, M.; Karolewska, K.; Jasińska, A.

    2018-04-01

    The construction of railway seats is based on industry regulations and the requirements of end users, i.e. passengers. The two main documents in this context are the UIC 566 (3rd Edition, dated 7 January 1994) and the EN 12663-1: 2010+A1:2014. The study was to carry out static load tests of passenger seat frames. The paper presents the construction of the test bench and the results of experimental and numerical studies of passenger seat rail frames. The test bench consists of a frame, a transverse beam, two electric cylinders with a force value of 6 kN, and a strain gauge amplifier. It has a modular structure that allows for its expansion depending on the structure of the seats. Comparing experimental results with numerical results for points A and B allowed to determine the existing differences. It follows from it that higher stress values are obtained by numerical calculations in the range of 0.2 MPa to 35.9 MPa.

  12. A Multi-Verse Optimizer with Levy Flights for Numerical Optimization and Its Application in Test Scheduling for Network-on-Chip.

    Directory of Open Access Journals (Sweden)

    Cong Hu

    Full Text Available We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO, which incorporates Levy flights into the multi-verse optimizer (MVO algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC. Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
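Levy-flight steps of the kind LFMVO injects are typically generated with Mantegna's algorithm; the sketch below is a generic version of that generator, not the authors' exact parameterization, and the perturbation scale 0.01 is an assumption for illustration.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-flight increment via Mantegna's algorithm: heavy-tailed
    steps whose occasional long jumps help escape stagnation."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# Perturb the current best universe with scaled Levy jumps.
rng = random.Random(42)
best = [0.5, -1.2]
candidate = [x + 0.01 * levy_step(rng=rng) for x in best]
```

Most steps are small, keeping search local around the best universe, while the heavy tail occasionally produces a large jump into unexplored regions.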

  13. Numerical methods in simulation of resistance welding

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi

    2015-01-01

    Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized... From a resistance welding point of view, the most essential coupling between the above mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences... the contact area and the distribution of contact pressure. The numerical simulation of resistance welding is illustrated by a spot welding example that includes subsequent tensile shear testing...

  14. A study into first-year engineering education success using a rigorous mixed methods approach

    DEFF Research Database (Denmark)

    van den Bogaard, M.E.D.; de Graaff, Erik; Verbraek, Alexander

    2015-01-01

    The aim of this paper is to combine qualitative and quantitative research methods into rigorous research into student success. Research methods have weaknesses that can be overcome by clever combinations. In this paper we use a situated study into student success as an example of how methods...... using statistical techniques. The main elements of the model were student behaviour and student disposition, which were influenced by the students’ perceptions of the education environment. The outcomes of the qualitative studies were useful in interpreting the outcomes of the structural equation...

  15. Numerical precision control and GRACE

    International Nuclear Information System (INIS)

    Fujimoto, J.; Hamaguchi, N.; Ishikawa, T.; Kaneko, T.; Morita, H.; Perret-Gallix, D.; Tokura, A.; Shimizu, Y.

    2006-01-01

    The control of the numerical precision of large-scale computations like those generated by the GRACE system for automatic Feynman diagram calculations has become an intrinsic part of those packages. Recently, Hitachi Ltd. has developed in FORTRAN a new library HMLIB for quadruple and octuple precision arithmetic where the number of lost-bits is made available. This library has been tested with success on the 1-loop radiative correction to e⁺e⁻ → e⁺e⁻τ⁺τ⁻. It is shown that the approach followed by HMLIB provides an efficient way to track down the source of numerical significance losses and to deliver high-precision results yet minimizing computing time.
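The lost-bits idea that HMLIB exposes can be mimicked in ordinary double precision: compare a computed value against a more accurate reference and convert the relative error into a count of lost significand bits. The formula below is a generic estimate for illustration, not HMLIB's internal diagnostic.

```python
import math

def lost_bits(reference, computed, mantissa_bits=53):
    """Rough number of significand bits lost, judged against a more
    accurate nonzero reference (53 bits = IEEE double precision)."""
    if computed == reference:
        return 0.0
    rel = abs(computed - reference) / abs(reference)
    return max(0.0, mantissa_bits + math.log2(rel))

# Catastrophic cancellation: (1 + 1e-12) - 1 loses most of its bits.
computed = (1.0 + 1e-12) - 1.0
bits = lost_bits(1e-12, computed)
```

Tracking such a diagnostic through a long calculation points to the operations where significance is lost and where extended precision is actually needed.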

  16. The development of an Infrared Environmental System for TOPEX Solar Panel Testing

    Science.gov (United States)

    Noller, E.

    1994-01-01

    Environmental testing and flight qualification of the TOPEX/POSEIDON spacecraft solar panels were performed with infrared (IR) lamps and a control system that were newly designed and integrated. The basic goal was more rigorous testing of the costly panels' new composite-structure design without jeopardizing their safety. The technique greatly reduces the costs and high risks of testing flight solar panels.

  17. PRO development: rigorous qualitative research as the crucial foundation.

    Science.gov (United States)

    Lasch, Kathryn Eilene; Marquis, Patrick; Vigneux, Marc; Abetz, Linda; Arnould, Benoit; Bayliss, Martha; Crawford, Bruce; Rosa, Kathleen

    2010-10-01

    Recently published articles have described criteria to assess qualitative research in the health field in general, but very few articles have delineated qualitative methods to be used in the development of Patient-Reported Outcomes (PROs). In fact, how PROs are developed with subject input through focus groups and interviews has been given relatively short shrift in the PRO literature when compared to the plethora of quantitative articles on the psychometric properties of PROs. If documented at all, most PRO validation articles give little for the reader to evaluate the content validity of the measures and the credibility and trustworthiness of the methods used to develop them. Increasingly, however, scientists and authorities want to be assured that PRO items and scales have meaning and relevance to subjects. This article was developed by an international, interdisciplinary group of psychologists, psychometricians, regulatory experts, a physician, and a sociologist. It presents rigorous and appropriate qualitative research methods for developing PROs with content validity. The approach described combines an overarching phenomenological theoretical framework with grounded theory data collection and analysis methods to yield PRO items and scales that have content validity.

  18. Paulo Leminski: a study of rigor and looseness in his poetry

    OpenAIRE

    Dhynarte de Borba e Albuquerque

    2005-01-01

    This work examines the trajectory of Paulo Leminski's poetry, seeking to establish the terms of its humour, its metalinguistic inquiry, and its lyric self, a poetry that also displays traces of the marginal poetry of the 1970s. Leminski was an author who pursued concretist rigor through the procedures of more or less relaxed everyday speech. The poetic effort of the Curitiba-born Leminski is a "line that never ends": he wrote poems, novels, advertising pieces, song lyrics, and translations. ...

  19. Numerical homogenization on approach for stokesian suspensions.

    Energy Technology Data Exchange (ETDEWEB)

    Haines, B. M.; Berlyand, L. V.; Karpeev, D. A. (Mathematics and Computer Science); (Department of Mathematics, Pennsylvania State Univ.)

    2012-01-20

    swimming resulting from bacterial alignment can significantly alter other macroscopic properties of the suspension, such as the oxygen diffusivity and mixing rates. In order to understand the unique macroscopic properties of active suspensions, the connection between microscopic swimming and alignment dynamics and the mesoscopic pattern formation must be clarified. This is difficult to do analytically in the fully general setting of moderately dense suspensions because of the large number of bacteria involved (approx. 10¹⁰ cm⁻³ in experiments) and the complex, time-dependent geometry of the system. Many reduced analytical models of bacterial suspensions have been proposed, but all of them require validation. While comparison with experiment is the ultimate test of a model's fidelity, it is difficult to conduct experiments matched to these models' assumptions. Numerical simulation of the microscopic dynamics is an acceptable substitute, but it runs into the problem of having to discretize the fluid domain with a fine-grained boundary (the bacteria) and update the discretization as the domain evolves (bacteria move). This leads to a prohibitively high number of degrees of freedom and prohibitively high setup costs per timestep of simulation. In this technical report we propose numerical methods designed to alleviate these two difficulties. We indicate how to (1) construct an optimal discretization in terms of the number of degrees of freedom per digit of accuracy and (2) optimally update the discretization as the simulation evolves. The technical tool here is the derivation of rigorous bounds on the error in the numerical solution when using our proposed discretization at the initial time as well as after a given elapsed simulation time. These error bounds should guide the construction of practical discretization schemes and update strategies.
Our initial construction is carried out using a theoretically convenient, but practically prohibitive, spectral basis.
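The appeal of a spectral basis "per degree of freedom" can be illustrated with a toy computation (not the report's bacterial-suspension discretization): for a smooth periodic function, the truncation error of a Fourier expansion falls off rapidly as modes are added.

```python
import numpy as np

# Toy illustration: truncation error of a Fourier series for a smooth
# periodic function decays rapidly with the number of retained modes N.
x = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
f = np.exp(np.sin(x))            # smooth periodic test function
F = np.fft.fft(f) / x.size       # normalized Fourier coefficients

def truncation_error(N):
    """Max reconstruction error when keeping only modes -N..N."""
    G = F.copy()
    G[N + 1:-N] = 0.0            # zero out all higher-frequency modes
    return float(np.max(np.abs(np.fft.ifft(G) * x.size - f)))

print(truncation_error(4) > truncation_error(8) > truncation_error(16))
```

Each additional handful of modes buys several more digits of accuracy, which is the sense in which the construction above is "optimal in degrees of freedom per digit of accuracy".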

  20. Derivation of basic equations for rigorous dynamic simulation of cryogenic distillation column for hydrogen isotope separation

    International Nuclear Information System (INIS)

    Kinoshita, Masahiro; Naruse, Yuji

    1981-08-01

    The basic equations are derived for rigorous dynamic simulation of cryogenic distillation columns for hydrogen isotope separation. The model accounts for such factors as differences in latent heat of vaporization among the six isotopic species of molecular hydrogen, decay heat of tritium, heat transfer through the column wall and nonideality of the solutions. Provision is also made for simulation of columns with multiple feeds and multiple sidestreams. (author)

  1. Numerical model of phase transformation of steel C80U during hardening

    Directory of Open Access Journals (Sweden)

    T. Domański

    2007-12-01

    Full Text Available The article concerns numerical modelling of the phase transformations in the solid-state hardening of tool steel C80U. The following transformations were assumed: initial structure – austenite, austenite – pearlite, bainite, and austenite – martensite. The model for evaluating phase fractions and their kinetics is based on the continuous heating diagram (CHT) and the continuous cooling diagram (CCT). Dilatometric tests on a simulator of thermal cycles were performed, and their results were compared with the results of test numerical simulations. In this way the derived models for evaluating phase content and the kinetics of transformations in heating and cooling processes were verified. The results of the numerical simulations confirm the correctness of the algorithms that were worked out. In a numerical example, the phase fractions in a hardened axisymmetric element were estimated.
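As a concrete example of the kind of kinetics such diagrams feed, the athermal austenite–martensite transformation is commonly described by the Koistinen-Marburger relation. The sketch below uses illustrative constants, not values fitted for steel C80U.

```python
import math

def martensite_fraction(T, Ms=210.0, k=0.011):
    """Koistinen-Marburger estimate of the martensite volume fraction
    after quenching to temperature T (deg C).
    Ms (martensite start) and k are illustrative values only."""
    if T >= Ms:
        return 0.0  # no martensite forms above the start temperature
    return 1.0 - math.exp(-k * (Ms - T))

print(round(martensite_fraction(20.0), 3))  # quench to room temperature -> 0.876
```

With these assumed constants, quenching to room temperature leaves a small retained-austenite fraction, which is the qualitative behaviour a CCT-based hardening model must reproduce.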

  2. Implicit and semi-implicit schemes in the Versatile Advection Code : numerical tests

    NARCIS (Netherlands)

    Tóth, G.; Keppens, R.; Bochev, Mikhail A.

    1998-01-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing

  3. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Full Text Available Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effects of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.
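The logarithmic-to-linear shift is typically quantified by fitting both a linear and a logarithmic model to a child's number-line estimates and comparing fit. The data below are invented for illustration: a child with a compressed, logarithmic representation overestimates small numbers.

```python
import numpy as np

# Hypothetical estimates for targets on a 0-100 number line (made-up data).
targets = np.array([2.0, 5.0, 18.0, 34.0, 56.0, 78.0, 96.0])
estimates = np.array([24.0, 41.0, 64.0, 76.0, 85.0, 90.0, 94.0])

def r_squared(x, y):
    """R^2 of a least-squares straight-line fit of y on x."""
    residuals = y - np.polyval(np.polyfit(x, y, 1), x)
    return 1.0 - residuals.var() / y.var()

lin_r2 = r_squared(targets, estimates)          # linear model
log_r2 = r_squared(np.log(targets), estimates)  # logarithmic model
print(f"linear R^2 = {lin_r2:.3f}, log R^2 = {log_r2:.3f}")
```

For this child the logarithmic model fits far better; a shift toward linearity would show up as the linear R² overtaking the logarithmic one.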

  4. Assessment of the Methodological Rigor of Case Studies in the Field of Management Accounting Published in Journals in Brazil

    Directory of Open Access Journals (Sweden)

    Kelly Cristina Mucio Marques

    2015-04-01

    Full Text Available This study aims to assess the methodological rigor of case studies in management accounting published in Brazilian journals. The study is descriptive. The data were collected using documentary research and content analysis, and 180 papers published from 2008 to 2012 in accounting journals rated as A2, B1, and B2 that were classified as case studies were selected. Based on the literature, we established a set of 15 criteria that we expected to be identified (either explicitly or implicitly) in the case studies to classify those case studies as appropriate from the standpoint of methodological rigor. These criteria were partially met by the papers analyzed. The aspects least aligned with those proposed in the literature were the following: little emphasis on justifying the need to understand phenomena in context; lack of explanation of the reason for choosing the case study strategy; the predominant use of questions that do not enable deeper analysis; many studies based on only one source of evidence; little use of data and information triangulation; little emphasis on the data collection method; a high number of cases in which confusion between the case study as a research strategy and as a data collection method was detected; a low number of papers reporting the method of data analysis; few reports on a study's contributions; and a minority highlighting the issues requiring further research. In conclusion, the method used to apply case studies to management accounting must be improved, because few studies showed rigorous application of the procedures that this strategy requires.

  5. Numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1982-01-01

    There are many recent developments in numerical relativity, but there remain important unsolved theoretical and practical problems. The author reviews existing numerical approaches to solution of the exact Einstein equations. A framework for classification and comparison of different numerical schemes is presented. Recent numerical codes are compared using this framework. The discussion focuses on new developments and on currently open questions, excluding a review of numerical techniques. (Auth.)

  6. Experimental and numerical study of the MYRRHA control rod system dynamics

    International Nuclear Information System (INIS)

    Kennedy, G.; Lamberts, D.; Van Tichelen, K.; Profir, M.; Moreau, V.

    2017-01-01

    This paper presents an experimental and numerical investigation of the buoyancy-driven MYRRHA control rod (CR) insertion during an emergency SCRAM. The study aimed to support the MYRRHA reactor design and characterise the hydrodynamic behaviour of the CR system while demonstrating the proof of principle. A full-scale mock-up test section of the MYRRHA CR was constructed to test the hydrodynamics in Lead Bismuth Eutectic over a wide range of operating conditions and to provide experimental data for the qualification of the CR system. A numerical CFD model of the CR test section was also set up in STAR-CCM+. The simulations make use of the recently developed overset mesh method to simulate the dynamic two-way coupling between the moving CR bundle and the fluid domain. The numerical methodology and post-test simulation results are validated against the experimental results. The steady-state hydraulic results and the transient insertion results from both the experimental and numerical efforts are presented, as is the influence of the global process conditions on the CR insertion time. This investigation successfully demonstrates the CR insertion proof of principle during a SCRAM. (author)

  7. Invalid Permutation Tests

    Directory of Open Access Journals (Sweden)

    Mikel Aickin

    2010-01-01

    Full Text Available Permutation tests are often presented in a rather casual manner, in both introductory and advanced statistics textbooks. The appeal of the cleverness of the procedure seems to replace the need for a rigorous argument that it produces valid hypothesis tests. The consequence of this educational failing has been a widespread belief in a “permutation principle”, which is supposed invariably to give tests that are valid by construction, under an absolute minimum of statistical assumptions. Several lines of argument are presented here to show that the permutation principle itself can be invalid, concentrating on the Fisher-Pitman permutation test for two means. A simple counterfactual example illustrates the general problem, and a slightly more elaborate counterfactual argument is used to explain why the main mathematical proof of the validity of permutation tests is mistaken. Two modifications of the permutation test are suggested and appear to be valid in a modest simulation. In instances where simulation software is readily available, investigating the validity of a specific permutation test can be done easily, requiring only a minimal understanding of statistical technicalities.
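For reference, an exact Fisher-Pitman permutation test for a difference in two means can be written in a few lines; whether the resulting test is actually *valid* under given assumptions is precisely what the paper questions.

```python
import itertools
import statistics

def fisher_pitman_p(sample_a, sample_b):
    """Exact two-sided Fisher-Pitman permutation p-value for the
    difference in means (enumerates all regroupings; small samples only)."""
    pooled = sample_a + sample_b
    n, n_a = len(pooled), len(sample_a)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    hits = total = 0
    for idx in itertools.combinations(range(n), n_a):
        group_a = [pooled[i] for i in idx]
        group_b = [pooled[i] for i in range(n) if i not in idx]
        # count regroupings at least as extreme as the observed split
        if abs(statistics.mean(group_a) - statistics.mean(group_b)) >= observed - 1e-12:
            hits += 1
        total += 1
    return hits / total

print(fisher_pitman_p([1.2, 3.4, 2.2], [4.1, 5.0, 6.3, 5.5]))  # 1/35
```

With samples of sizes 3 and 4 there are only C(7,3) = 35 regroupings, and here the observed split is the single most extreme one, so the p-value is 1/35.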

  8. Association Between Maximal Skin Dose and Breast Brachytherapy Outcome: A Proposal for More Rigorous Dosimetric Constraints

    International Nuclear Information System (INIS)

    Cuttino, Laurie W.; Heffernan, Jill; Vera, Robyn; Rosu, Mihaela; Ramakrishnan, V. Ramesh; Arthur, Douglas W.

    2011-01-01

    Purpose: Multiple investigations have used the skin distance as a surrogate for the skin dose and have shown that distances 4.05 Gy/fraction. Conclusion: The initial skin dose recommendations have been based on safe use and the avoidance of significant toxicity. The results from the present study have suggested that patients might further benefit if more rigorous constraints were applied and if the skin dose were limited to 120% of the prescription dose.

  9. Numerical simulation of superelastic shape memory alloys subjected to dynamic loads

    International Nuclear Information System (INIS)

    Cismaşiu, Corneliu; Amarante dos Santos, Filipe P

    2008-01-01

    Superelasticity, a unique property of shape memory alloys (SMAs), allows the material to recover after withstanding large deformations. This recovery takes place without any residual strains, while dissipating a considerable amount of energy. This property makes SMAs particularly suitable for applications in vibration control devices. Numerical models, calibrated with experimental laboratory tests from the literature, are used to investigate the dynamic response of three vibration control devices, built up of austenitic superelastic wires. The energy dissipation and re-centering capabilities, important features of these devices, are clearly illustrated by the numerical tests. Their sensitivity to ambient temperature and strain rate is also addressed. Finally, one of these devices is tested as a seismic passive vibration control system in a simplified numerical model of a railway viaduct, subjected to different ground accelerations

  10. Numerical and experimental analysis of a horizontal ground-coupled heat pump system

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, University of Firat, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, University of Firat, 23119 Elazig (Turkey)

    2007-03-15

    The main objective of this work is to evaluate a heat pump system using the ground as a source of heat. A ground-coupled heat pump (GCHP) system has been installed and tested at the test room, University of Firat, Elazig, Turkey. Results obtained during experimental testing are presented and discussed here. The coefficient of performance (COP{sub sys}) of the GCHP system is determined from the measured data. A numerical model of heat transfer in the ground was developed for determining the temperature distribution in the vicinity of the pipe. The finite difference approximation is used for the numerical analysis. It is observed that the numerical results agree with the experimental results. (author)
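A minimal sketch of the finite-difference approach described above, assuming a 1-D explicit (FTCS) scheme and illustrative soil properties rather than the paper's actual parameters:

```python
import numpy as np

alpha = 1.0e-6          # assumed soil thermal diffusivity, m^2/s
dx, dt = 0.05, 600.0    # grid spacing (m) and time step (s)
r = alpha * dt / dx**2  # FTCS stability requires r <= 1/2
assert r <= 0.5

T = np.full(41, 12.0)   # 2 m of ground, initially at 12 deg C
T[0] = 2.0              # fixed temperature at the pipe wall (Dirichlet)

for _ in range(1000):   # march roughly one week of simulated time
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# The cooling front decays smoothly with distance from the pipe.
print(round(float(T[1]), 2), round(float(T[10]), 2))
```

The same update, applied on a 2-D grid around the buried coil with measured soil properties, gives the kind of temperature field the authors compare against their experiments.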

  11. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  12. Rigorous spin-spin correlation function of Ising model on a special kind of Sierpinski Carpets

    International Nuclear Information System (INIS)

    Yang, Z.R.

    1993-10-01

    We have exactly calculated the rigorous spin-spin correlation function of the Ising model on a special kind of Sierpinski Carpets (SC's) by means of graph expansion and a combinatorial approach, and investigated its asymptotic behaviour in the limit of long distance. The results show that there is no long-range correlation between spins at any finite temperature, which indicates the absence of a phase transition and thus confirms the conclusion produced by the renormalization group method and other physical arguments. (author). 7 refs, 6 figs

  13. Numerical design and test on an assembled structure of a bolted joint with viscoelastic damping

    Science.gov (United States)

    Hammami, Chaima; Balmes, Etienne; Guskov, Mikhail

    2016-03-01

    Mechanical assemblies are subjected to many dynamic loads and modifications are often needed to achieve acceptable vibration levels. While modifications on mass and stiffness are well mastered, damping modifications are still considered difficult to design. The paper presents a case study on the design of a bolted connection containing a viscoelastic damping layer. The notion of junction coupling level is introduced to ensure that sufficient energy is present in the joints to allow damping. Static performance is then addressed and it is shown that localization of metallic contact can be used to meet objectives, while allowing the presence of viscoelastic materials. Numerical prediction of damping then illustrates difficulties in optimizing for robustness. Modal test results of three configurations of an assembled structure, inspired by aeronautic fuselages, are then compared to analyze the performance of the design. While validity of the approach is confirmed, the effect of geometric imperfections is shown and stresses the need for robust design.

  14. Valuing goodwill: not-for-profits prepare for annual impairment testing.

    Science.gov (United States)

    Heuer, Christian; Travers, Mary Ann K

    2011-02-01

    Accounting standards for valuing goodwill and intangible assets are becoming more rigorous for not-for-profit organizations: Not-for-profit healthcare organizations need to test for goodwill impairment at least annually. Impairment testing is a two-stage process: initial analysis to determine whether impairment exists and subsequent calculation of the magnitude of impairment. Certain "triggering" events compel all organizations--whether for-profit or not-for-profit--to perform an impairment test for goodwill or intangible assets.

  15. Increasing rigor in NMR-based metabolomics through validated and open source tools.

    Science.gov (United States)

    Eghbalnia, Hamid R; Romero, Pedro R; Westler, William M; Baskaran, Kumaran; Ulrich, Eldon L; Markley, John L

    2017-02-01

    The metabolome, the collection of small molecules associated with an organism, is a growing subject of inquiry, with the data utilized for data-intensive systems biology, disease diagnostics, biomarker discovery, and the broader characterization of small molecules in mixtures. Owing to their close proximity to the functional endpoints that govern an organism's phenotype, metabolites are highly informative about functional states. The field of metabolomics identifies and quantifies endogenous and exogenous metabolites in biological samples. Information acquired from nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry (MS), and the published literature, as processed by statistical approaches, is driving increasingly wider applications of metabolomics. This review focuses on the role of databases and software tools in advancing the rigor, robustness, reproducibility, and validation of metabolomics studies. Copyright © 2016. Published by Elsevier Ltd.

  16. Cost evaluation of cellulase enzyme for industrial-scale cellulosic ethanol production based on rigorous Aspen Plus modeling.

    Science.gov (United States)

    Liu, Gang; Zhang, Jian; Bao, Jie

    2016-01-01

    Reducing the cost of cellulase enzyme usage has been the central effort in the commercialization of fuel ethanol production from lignocellulose biomass. Establishing an accurate method for evaluating cellulase enzyme cost is therefore crucially important to support the healthy development of the future biorefinery industry. Existing cellulase cost evaluation methods are complicated, and various controversial or even conflicting results have been presented. To give a reliable evaluation of this important topic, a rigorous analysis based on Aspen Plus flowsheet simulation of a commercial-scale ethanol plant is proposed in this study. The minimum ethanol selling price (MESP) is used as the indicator to show the impacts of varying enzyme supply modes, enzyme prices, process parameters, and enzyme loading on the enzyme cost. The results reveal that the enzyme cost drives the cellulosic ethanol price below the minimum profit point when the enzyme is purchased on the current industrial enzyme market. An innovative mode of cellulase production, such as on-site enzyme production, should be explored and tested at industrial scale to yield an economically sound enzyme supply for future cellulosic ethanol production.
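The scale of the problem can be seen from a back-of-envelope calculation; every figure below is an assumption for illustration, not a value from the Aspen Plus model.

```python
# Rough enzyme cost contribution per liter of ethanol (all values assumed).
enzyme_loading = 20.0        # mg enzyme protein per g cellulose
cellulose_per_liter = 1.8    # kg cellulose consumed per liter of ethanol
enzyme_price = 10.0          # USD per kg enzyme protein (purchased enzyme)

protein_per_liter = enzyme_loading * 1e-3 * cellulose_per_liter  # kg protein / L
enzyme_cost_per_liter = protein_per_liter * enzyme_price         # USD / L
print(round(enzyme_cost_per_liter, 2))  # -> 0.36
```

At loadings and prices of this order, the enzyme alone contributes tens of cents per liter, which is why the MESP is so sensitive to the enzyme supply mode and price.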

  17. Numerical relativity

    CERN Document Server

    Shibata, Masaru

    2016-01-01

    This book is composed of two parts. The first part describes the basics of numerical relativity, that is, the formulations and methods for solving Einstein's equation and general relativistic matter field equations; it will be helpful for beginners who would like to understand the content of numerical relativity and its background. The second part focuses on the applications of numerical relativity. A wide variety of scientific numerical results are introduced, focusing in particular on the mergers of binary neutron stars and black holes.

  18. Numerical analysis

    CERN Document Server

    Khabaza, I M

    1960-01-01

    Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and numerical solution of ordinary differential equations. This book is comprised of eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput

  19. Release of major ions during rigor mortis development in kid Longissimus dorsi muscle.

    Science.gov (United States)

    Feidt, C; Brun-Bellut, J

    1999-01-01

    Ionic strength plays an important role in post mortem muscle changes. Its increase is due to ion release during the development of rigor mortis. Twelve alpine kids were used to study the effects of chilling and meat pH on ion release. Free ions were measured in the Longissimus dorsi muscle by capillary electrophoresis after water extraction. All free ion concentrations increased after death, but there were differences between ions. Temperature was not a factor affecting ion release, in contrast to the ultimate pH value. Three release mechanisms are believed to coexist: passive binding to proteins, which stops as pH decreases; active segregation, which stops as ATP disappears; and the production of metabolites by anaerobic glycolysis.

  20. Children’s Mapping between Non-Symbolic and Symbolic Numerical Magnitudes and Its Association with Timed and Untimed Tests of Mathematics Achievement

    Science.gov (United States)

    Brankaer, Carmen; Ghesquière, Pol; De Smedt, Bert

    2014-01-01

    The ability to map between non-symbolic numerical magnitudes and Arabic numerals has been put forward as a key factor in children’s mathematical development. This mapping ability has been mainly examined indirectly by looking at children’s performance on a symbolic magnitude comparison task. The present study investigated mapping in a more direct way by using a task in which children had to choose which of two choice quantities (Arabic digits or dot arrays) matched the target quantity (dot array or Arabic digit), thereby focusing on small quantities ranging from 1 to 9. We aimed to determine the development of mapping over time and its relation to mathematics achievement. Participants were 36 first graders (M = 6 years 8 months) and 46 third graders (M = 8 years 8 months) who all completed mapping tasks, symbolic and non-symbolic magnitude comparison tasks and standardized timed and untimed tests of mathematics achievement. Findings revealed that children are able to map between non-symbolic and symbolic representations and that this mapping ability develops over time. Moreover, we found that children’s mapping ability is related to timed and untimed measures of mathematics achievement, over and above the variance accounted for by their numerical magnitude comparison skills. PMID:24699664

  1. A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation

    Directory of Open Access Journals (Sweden)

    S. Battal Gazi Karakoç

    2016-02-01

    Full Text Available The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying it to two test problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. The findings show that the presented numerical scheme is preferable to some recent numerical methods.
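The error norms mentioned above are straightforward to compute once numerical and exact solutions are available on a grid. In the sketch below the "numerical" solution is a stand-in with an artificial error, and the sech² profile merely mimics a solitary wave.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 801)
h = x[1] - x[0]                              # uniform grid spacing
exact = 1.0 / np.cosh(0.5 * x) ** 2          # solitary-wave-like profile
numerical = exact + 1.0e-4 * np.sin(x)       # pretend solver output

err = numerical - exact
L2 = np.sqrt(h * np.sum(err ** 2))           # discrete L2 norm
Linf = np.max(np.abs(err))                   # discrete L-infinity norm
print(f"L2 = {L2:.3e}, Linf = {Linf:.3e}")
```

In a convergence study, one tabulates these norms against the grid spacing and time step to confirm the scheme's order of accuracy.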

  2. Quasilocal variables in spherical symmetry: Numerical applications to dark matter and dark energy sources

    International Nuclear Information System (INIS)

    Sussman, Roberto A.

    2009-01-01

    A numerical approach is considered for spherically symmetric spacetimes that generalize Lemaitre-Tolman-Bondi dust solutions to nonzero pressure ('LTB spacetimes'). We introduce quasilocal (QL) variables that are covariant LTB objects satisfying evolution equations of Friedman-Lemaitre-Robertson-Walker (FLRW) cosmologies. We prove rigorously that relative deviations of the local covariant scalars from the QL scalars are nonlinear, gauge invariant and covariant perturbations on a FLRW formal background given by the QL scalars. The dynamics of LTB spacetimes is completely determined by the QL scalars and these exact perturbations. Since LTB spacetimes are compatible with a wide variety of ''equations of state,'' either single fluids or mixtures, a large number of known solutions with dark matter and dark energy sources in a FLRW framework (or with linear perturbations) can be readily examined under idealized but nontrivial inhomogeneous conditions. Coordinate choices and initial conditions are derived for a numerical treatment of the perturbation equations, allowing us to study nonlinear effects in a variety of phenomena, such as gravitational collapse, nonlocal effects, void formation, dark matter and dark energy couplings, and particle creation. In particular, the embedding of inhomogeneous regions can be performed by a smooth matching with a suitable FLRW solution, thus generalizing the Newtonian 'top hat' models that are widely used in astrophysical literature. As examples of the application of the formalism, we examine numerically the formation of a black hole in an expanding Chaplygin gas FLRW universe, as well as the evolution of density clumps and voids in an interactive mixture of cold dark matter and dark energy.

  3. Hypnotherapy and Test Anxiety: Two Cognitive-Behavioral Constructs. The Effects of Hypnosis in Reducing Test Anxiety and Improving Academic Achievement in College Students.

    Science.gov (United States)

    Sapp, Marty

    A two-group randomized multivariate analysis of covariance (MANCOVA) was used to investigate the effects of cognitive-behavioral hypnosis in reducing test anxiety and improving academic performance in comparison to a Hawthorne control group. Subjects were enrolled in a rigorous introductory psychology course which covered an entire text in one…

  4. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Directory of Open Access Journals (Sweden)

    Hoľko Michal

    2014-12-01

    Full Text Available The article deals with numerical analyses of a Continuous Flight Auger (CFA pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. Both types of software are different from each other in the way they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed.

  5. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

    The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...

  6. Computational Flame Diagnostics for Direct Numerical Simulations with Detailed Chemistry of Transportation Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Tianfeng [Univ. of Connecticut, Storrs, CT (United States)

    2017-02-16

    The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
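The core of CEMA can be sketched in a few lines: form the Jacobian of the chemical source term and look for an eigenvalue with positive real part (a chemical explosive mode). The Jacobian below is a made-up 3-species example, not one taken from a real mechanism.

```python
import numpy as np

# Hypothetical Jacobian d(omega)/d(y) of a chemical source term
# evaluated at one grid point (illustrative values only).
J = np.array([[ 0.5, -0.2,  0.0],
              [ 0.3, -1.0,  0.1],
              [ 0.0,  0.4, -2.0]])

eigvals = np.linalg.eigvals(J)
lam_e = max(eigvals, key=lambda z: z.real)  # candidate explosive mode
print("explosive:", lam_e.real > 0.0)       # prints "explosive: True"
```

In a DNS dataset, this eigenvalue analysis is repeated at every grid point; the sign and magnitude of the leading eigenvalue segment the flame into pre-ignition (explosive) and post-ignition (non-explosive) zones.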

  7. A rigorous derivation of gravitational self-force

    International Nuclear Information System (INIS)

    Gralla, Samuel E; Wald, Robert M

    2008-01-01

    There is general agreement that the MiSaTaQuWa equations should describe the motion of a 'small body' in general relativity, taking into account the leading order self-force effects. However, previous derivations of these equations have made a number of ad hoc assumptions and/or contain a number of unsatisfactory features. For example, all previous derivations have invoked, without proper justification, the step of 'Lorenz gauge relaxation', wherein the linearized Einstein equation is written in the form appropriate to the Lorenz gauge, but the Lorenz gauge condition is then not imposed, thereby making the resulting equations for the metric perturbation inequivalent to the linearized Einstein equations. (Such a 'relaxation' of the linearized Einstein equations is essential in order to avoid the conclusion that 'point particles' move on geodesics.) In this paper, we analyze the issue of 'particle motion' in general relativity in a systematic and rigorous way by considering a one-parameter family of metrics, g_ab(λ), corresponding to having a body (or black hole) that is 'scaled down' to zero size and mass in an appropriate manner. We prove that the limiting worldline of such a one-parameter family must be a geodesic of the background metric, g_ab(λ = 0). Gravitational self-force, as well as the force due to coupling of the spin of the body to curvature, then arises as a first-order perturbative correction in λ to this worldline. No assumptions are made in our analysis apart from the smoothness and limit properties of the one-parameter family of metrics, g_ab(λ). Our approach should provide a framework for systematically calculating higher order corrections to gravitational self-force, including higher multipole effects, although we do not attempt to go beyond first-order calculations here. The status of the MiSaTaQuWa equations is explained

  8. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

A numerical method has been proposed for resonance integral calculations, and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in the energy domain. The accuracy of the method has been tested by computing resonance integrals for isolated uranium dioxide rods and comparing the results with empirical values. (orig.)
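As a minimal illustration of the kind of quadrature a resonance-integral calculation involves, the sketch below evaluates the infinite-dilution integral RI = ∫ σ_a(E) dE/E for a single-level Breit–Wigner resonance and compares it with the narrow-resonance analytic estimate. All parameter values are made up for illustration; the paper's discretized slowing-down treatment and Bell-factor fit are not reproduced here.

```python
import numpy as np

# Hypothetical single-level Breit-Wigner parameters (illustrative only)
E0 = 6.67        # resonance energy (eV)
Gamma = 0.027    # total width (eV)
Gamma_g = 0.026  # capture width (eV)
sigma0 = 2.0e4   # peak cross section (barns)

def sigma_a(E):
    # Breit-Wigner absorption cross section
    x = 2.0 * (E - E0) / Gamma
    return sigma0 * (Gamma_g / Gamma) / (1.0 + x ** 2)

# Trapezoidal quadrature of RI = ∫ sigma_a(E) dE/E over +/- 200 widths
E = np.linspace(E0 - 200 * Gamma, E0 + 200 * Gamma, 200001)
y = sigma_a(E) / E
RI_num = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))

# Narrow-resonance estimate: freeze 1/E at 1/E0 under the Lorentzian
RI_nr = np.pi * sigma0 * Gamma_g / (2.0 * E0)
print(RI_num, RI_nr)  # the two estimates agree to well under 1%
```

Because the resonance width is tiny compared with E0, the 1/E flux variation across the line is nearly negligible, which is why the narrow-resonance approximation works so well here.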

  9. Boundary integral equation methods and numerical solutions thin plates on an elastic foundation

    CERN Document Server

    Constanda, Christian; Hamill, William

    2016-01-01

    This book presents and explains a general, efficient, and elegant method for solving the Dirichlet, Neumann, and Robin boundary value problems for the extensional deformation of a thin plate on an elastic foundation. The solutions of these problems are obtained both analytically—by means of direct and indirect boundary integral equation methods (BIEMs)—and numerically, through the application of a boundary element technique. The text discusses the methodology for constructing a BIEM, deriving all the attending mathematical properties with full rigor. The model investigated in the book can serve as a template for the study of any linear elliptic two-dimensional problem with constant coefficients. The representation of the solution in terms of single-layer and double-layer potentials is pivotal in the development of a BIEM, which, in turn, forms the basis for the second part of the book, where approximate solutions are computed with a high degree of accuracy. The book is intended for graduate students and r...

  10. Rigorous derivation of the perimeter generating functions for the mean-squared radius of gyration of rectangular, Ferrers and pyramid polygons

    International Nuclear Information System (INIS)

    Lin, Keh Ying

    2006-01-01

We have rigorously derived the perimeter generating functions for the mean-squared radius of gyration of rectangular, Ferrers and pyramid polygons. These functions were recently found by Jensen; his nonrigorous results were based on the analysis of long series expansions. (comment)

  11. Rigorous decoupling between edge states in frustrated spin chains and ladders

    Science.gov (United States)

    Chepiga, Natalia; Mila, Frédéric

    2018-05-01

We investigate the occurrence of exact zero modes in one-dimensional quantum magnets of finite length that possess edge states. Building on conclusions first reached in the context of the spin-1/2 XY chain in a field and then for the spin-1 J1-J2 Heisenberg model, we show that the development of incommensurate correlations in the bulk invariably leads to oscillations in the sign of the coupling between edge states, and hence to exact zero energy modes at the crossing points where the coupling between the edge states rigorously vanishes. This is true regardless of the origin of the frustration (e.g., next-nearest-neighbor coupling or biquadratic coupling for the spin-1 chain), of the value of the bulk spin (we report on spin-1/2, spin-1, and spin-2 examples), and of the value of the edge-state emergent spin (spin-1/2 or spin-1).
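Schematically, the mechanism can be captured by an effective two-spin Hamiltonian for the emergent edge spins (a sketch consistent with the abstract; the amplitude A, decay length ξ, wavevector q and phase φ are model-dependent and assumed here for illustration):

```latex
H_{\mathrm{eff}} = J_{\mathrm{eff}}(L)\,\mathbf{S}_{\mathrm{left}}\cdot\mathbf{S}_{\mathrm{right}},
\qquad
J_{\mathrm{eff}}(L) \sim A\, e^{-L/\xi}\cos\!\left(qL + \varphi\right)
```

Once bulk correlations become incommensurate, the cosine factor makes J_eff(L) oscillate in sign as a function of chain length L, and exact zero modes occur at the lengths where J_eff(L) = 0.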

  12. Rigorous constraints on the matrix elements of the energy–momentum tensor

    Directory of Open Access Journals (Sweden)

    Peter Lowdon

    2017-11-01

The structure of the matrix elements of the energy–momentum tensor plays an important role in determining the properties of the form factors A(q²), B(q²) and C(q²) which appear in the Lorentz covariant decomposition of the matrix elements. In this paper we apply a rigorous frame-independent distributional-matching approach to the matrix elements of the Poincaré generators in order to derive constraints on these form factors as q → 0. In contrast to the literature, we explicitly demonstrate that the vanishing of the anomalous gravitomagnetic moment B(0) and the condition A(0) = 1 are independent of one another, and that these constraints are not related to the specific properties or conservation of the individual Poincaré generators themselves, but are in fact a consequence of the physical on-shell requirement of the states in the matrix elements and the manner in which these states transform under Poincaré transformations.
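For reference, the Lorentz-covariant decomposition referred to above takes, in one common convention for spin-1/2 states, the following form (normalizations of the B and C terms differ between papers, so this particular normalization is an assumption):

```latex
\langle p'|T^{\mu\nu}(0)|p\rangle = \bar{u}(p')\Big[
  A(q^{2})\,\tfrac{1}{2}\big(\gamma^{\mu}P^{\nu}+\gamma^{\nu}P^{\mu}\big)
  + B(q^{2})\,\tfrac{i}{4m}\big(P^{\mu}\sigma^{\nu\rho}+P^{\nu}\sigma^{\mu\rho}\big)q_{\rho}
  + C(q^{2})\,\tfrac{1}{m}\big(q^{\mu}q^{\nu}-g^{\mu\nu}q^{2}\big)
\Big]u(p),
\qquad P = \tfrac{1}{2}(p+p'), \quad q = p'-p
```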

  13. Standards for Radiation Effects Testing: Ensuring Scientific Rigor in the Face of Budget Realities and Modern Device Challenges

    Science.gov (United States)

    Lauenstein, J M.

    2015-01-01

    An overview is presented of the space radiation environment and its effects on electrical, electronic, and electromechanical parts. Relevant test standards and guidelines are listed. Test standards and guidelines are necessary to ensure best practices, minimize and bound systematic and random errors, and to ensure comparable results from different testers and vendors. Test standards are by their nature static but exist in a dynamic environment of advancing technology and radiation effects research. New technologies, failure mechanisms, and advancement in our understanding of known failure mechanisms drive the revision or development of test standards. Changes to standards must be weighed against their impact on cost and existing part qualifications. There must be consensus on new best practices. The complexity of some new technologies exceeds the scope of existing test standards and may require development of a guideline specific to the technology. Examples are given to illuminate the value and limitations of key radiation test standards as well as the challenges in keeping these standards up to date.

  14. An efficient and rigorous thermodynamic library and optimal-control of a cryogenic air separation unit

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp

    2017-01-01

    -linear model based control to achieve optimal techno-economic performance. Accordingly, this work presents a computationally efficient and novel approach for solving a tray-by-tray equilibrium model and its implementation for open-loop optimal-control of a cryogenic distillation column. Here, the optimisation...... objective is to reduce the cost of compression in a volatile electricity market while meeting the production requirements, i.e. product flow rate and purity. This model is implemented in Matlab and uses the ThermoLib rigorous thermodynamic library. The present work represents a first step towards plant...

  15. Numerical solution of special ultra-relativistic Euler equations using central upwind scheme

    Science.gov (United States)

    Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul

    2018-06-01

This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations that describe perfect fluid flow in terms of the particle density, the four-velocity and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme exploits information about the local propagation speeds. Second-order accuracy is obtained by combining a Runge-Kutta time-stepping method with MUSCL-type initial reconstruction. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. In all test cases the proposed scheme agrees closely with the results obtained by well-established algorithms, even for highly relativistic 2D problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The numerical results demonstrate the robustness and efficiency of the central upwind scheme.
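The central-upwind flux can be illustrated on a scalar analogue. Below is a minimal first-order sketch for the inviscid Burgers equation on a periodic grid; it shows only the flux formula, not the relativistic Euler system, Runge-Kutta stepping, or MUSCL reconstruction of the paper.

```python
import numpy as np

# First-order central-upwind (Kurganov-type) step for the scalar Burgers
# equation u_t + (u^2/2)_x = 0 -- a stand-in for the ultra-relativistic
# Euler system, chosen only to illustrate the interface flux.

def f(u):
    return 0.5 * u ** 2

def central_upwind_step(u, dx, dt):
    ul = u               # left state at interface j+1/2 (piecewise constant)
    ur = np.roll(u, -1)  # right state at interface j+1/2
    # one-sided local propagation speeds a+ >= 0 >= a- (f'(u) = u for Burgers)
    ap = np.maximum.reduce([ul, ur, np.zeros_like(u)])
    am = np.minimum.reduce([ul, ur, np.zeros_like(u)])
    denom = ap - am
    denom[denom == 0.0] = 1.0  # zero-speed interfaces carry zero flux anyway
    # central-upwind numerical flux
    H = (ap * f(ul) - am * f(ur)) / denom + (ap * am / denom) * (ur - ul)
    return u - dt / dx * (H - np.roll(H, 1))

N = 400
dx = 1.0 / N
dt = 0.5 * dx               # CFL 0.5, since max |f'(u)| <= 1 here
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)   # smooth periodic data that steepens into a shock
mass0 = u.sum() * dx
for _ in range(200):
    u = central_upwind_step(u, dx, dt)
print(abs(u.sum() * dx - mass0))  # conservation: total mass preserved to round-off
```

Because the update is written in flux-difference form on a periodic grid, the scheme is exactly conservative, which is what makes it shock-capturing in the first place.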

  16. Microscopic assessment of bone toughness using scratch tests

    Directory of Open Access Journals (Sweden)

    Amrita Kataruka

    2017-06-01

Bone is a composite material with five distinct structural levels: collagen molecules, mineralized collagen fibrils, lamellae, osteons and whole bone. However, most fracture testing methods have been limited to the macroscopic scale, and there is a need for advanced characterization methods that assess toughness at the osteon level and below. The goal of this investigation is to present a novel framework for measuring the fracture properties of bone at the microscopic scale using scratch testing. A rigorous experimental protocol is articulated and applied to cortical bone specimens from porcine femurs. The observed fracture behavior is very complex: the response is strongly anisotropic, with toughening mechanisms and a competition between plastic flow and brittle fracture. The challenge then consists in applying nonlinear fracture mechanics methods, such as the J-integral or the energetic Size Effect Law, to quantify the fracture toughness in a rigorous fashion. Our results suggest that mixed-mode fracture is instrumental in determining the fracture resistance, and that there is a pronounced coupling between fracture and elasticity. Our methodology opens the door to fracture assessment at multiple structural levels, microscopic and potentially nanometric, owing to the scalability of scratch tests.
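For context, a scaling relation from the scratch-test fracture literature (e.g., the Akono-Ulm analysis) converts the measured horizontal force into a fracture toughness; whether this exact form matches the protocol of this paper is an assumption here:

```latex
K_{c} = \frac{F_{T}}{\sqrt{2\,p\,A}}
```

where F_T is the horizontal scratch force, p the perimeter of the probe-material contact, and A the projected load-bearing contact area. In the fracture-dominated regime F_T scales as the square root of 2pA, so K_c appears as the plateau of F_T/√(2pA) as scratch depth increases.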

  17. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.
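The model-comparison logic can be sketched in a toy setting: two candidate "toolbox" strategies each make deterministic binary predictions, responses follow the strategy actually used with an unknown error rate eps ~ Uniform(0, 1), and the resulting Beta-Bernoulli marginal likelihood is available in closed form. This is a deliberate simplification for illustration, not the authors' hierarchical implementation.

```python
import numpy as np
from math import lgamma

def log_marginal(n_match, n_trials):
    # ∫_0^1 eps^n_mis * (1 - eps)^n_match d(eps) = Beta(n_mis + 1, n_match + 1)
    n_mis = n_trials - n_match
    return lgamma(n_mis + 1) + lgamma(n_match + 1) - lgamma(n_trials + 2)

rng = np.random.default_rng(0)
n = 100
pred_A = rng.integers(0, 2, n)  # hypothetical predictions of strategy A
pred_B = rng.integers(0, 2, n)  # hypothetical predictions of strategy B
# simulate a participant who uses strategy A with a 10% response error
choices = np.where(rng.random(n) < 0.9, pred_A, 1 - pred_A)

logm_A = log_marginal(int((choices == pred_A).sum()), n)
logm_B = log_marginal(int((choices == pred_B).sum()), n)
post_A = 1.0 / (1.0 + np.exp(logm_B - logm_A))  # equal model priors
print(post_A)  # the data strongly favor strategy A
```

Because the error rate is integrated out rather than fitted, a flexible-but-wrong strategy gains no advantage, which is the essence of how Bayesian inference contains "strategy sprawl." Extending this to a hierarchy amounts to placing group-level priors over the per-participant strategy indicators.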

  18. Description of comprehensive pump test change to ASME OM code, subsection ISTB

    International Nuclear Information System (INIS)

    Hartley, R.S.

    1994-01-01

The American Society of Mechanical Engineers (ASME) Operations and Maintenance (OM) Main Committee and Board on Nuclear Codes and Standards (BNCS) recently approved changes to ASME OM Code-1990, Subsection ISTB, Inservice Testing of Pumps in Light-Water Reactor Power Plants. The changes will be included in the 1994 addenda to ISTB. The changes, designated as the comprehensive pump test, incorporate a new, improved philosophy for testing safety-related pumps in nuclear power plants. An important philosophical difference between the "old code" inservice testing (IST) requirements and these changes is that the changes concentrate on less frequent, more meaningful testing while minimizing damaging and uninformative low-flow testing. The comprehensive pump test change establishes a more involved biannual test for all pumps and significantly reduces the rigor of the quarterly test for standby pumps. The increased rigor and cost of the biannual comprehensive tests are offset by the reduced cost of testing and potential damage to the standby pumps, which comprise a large portion of the safety-related pumps at most plants. This paper provides background on the pump testing requirements, discusses potential industry benefits of the change, describes the development of the comprehensive pump test, and gives examples and reasons for many of the specific changes. This paper also describes additional changes to ISTB that will be included in the 1994 addenda that are associated with, but not part of, the comprehensive pump test.

  19. Early rigorous control interventions can largely reduce dengue outbreak magnitude: experience from Chaozhou, China.

    Science.gov (United States)

    Liu, Tao; Zhu, Guanghu; He, Jianfeng; Song, Tie; Zhang, Meng; Lin, Hualiang; Xiao, Jianpeng; Zeng, Weilin; Li, Xing; Li, Zhihao; Xie, Runsheng; Zhong, Haojie; Wu, Xiaocheng; Hu, Wenbiao; Zhang, Yonghui; Ma, Wenjun

    2017-08-02

Dengue fever is a severe public health challenge in south China. A dengue outbreak was reported in Chaozhou city, China in 2015, and intensified interventions were implemented by the government to control the epidemic. However, it remains unknown to what degree the intensified control measures reduced the size of the epidemic, and when such measures should be initiated to reduce the risk of large dengue outbreaks developing. We selected Xiangqiao district as the study setting because the majority of the indigenous cases (90.6%) in Chaozhou city were from this district. The numbers of daily indigenous dengue cases in 2015 were collected through the national infectious diseases and vectors surveillance system, and daily Breteau Index (BI) data were reported by the local public health department. We used a compartmental dynamic SEIR (Susceptible, Exposed, Infected and Removed) model to assess the effectiveness of the control interventions and to evaluate the effect of intervention timing on the dengue epidemic. A total of 1250 indigenous dengue cases were reported from Xiangqiao district. The SEIR model, using BI as an indicator of the actual control interventions, yielded a total of 1255 dengue cases, close to the reported number (n = 1250). The size and duration of the outbreak were highly sensitive to the intensity and timing of the interventions: the earlier and more rigorous the control interventions, the more effective they were. Even when the interventions were initiated several weeks after the onset of the outbreak, they were still shown to greatly reduce the prevalence and duration of the outbreak. This study suggests that early implementation of rigorous dengue interventions can effectively reduce the epidemic size and shorten the epidemic duration.
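The intervention-timing experiment can be sketched with a minimal SEIR model in which transmission (beta) is cut once controls start, and the final outbreak size is compared for early versus late intervention. All parameter values below are illustrative assumptions, not the values fitted to the Chaozhou surveillance and BI data.

```python
# Minimal SEIR sketch: controls reduce the transmission rate from beta0 to
# beta1 at day t_control; the function returns the cumulative number of
# infections after one simulated year (forward-Euler integration).

def seir_final_size(t_control, beta0=0.6, beta1=0.1, sigma=1.0 / 5.0,
                    gamma=1.0 / 7.0, N=100_000.0, days=365, dt=0.1):
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
    for step in range(int(days / dt)):
        t = step * dt
        beta = beta1 if t >= t_control else beta0  # controls cut transmission
        new_exp = beta * S * I / N * dt  # S -> E
        new_inf = sigma * E * dt         # E -> I (mean latent period 1/sigma)
        new_rec = gamma * I * dt         # I -> R (mean infectious period 1/gamma)
        S -= new_exp
        E += new_exp - new_inf
        I += new_inf - new_rec
        R += new_rec
    return R  # cumulative infections over the simulated year

early = seir_final_size(t_control=30)  # controls start on day 30
late = seir_final_size(t_control=90)   # controls start on day 90
print(early, late)  # earlier control yields a much smaller final size
```

With these illustrative parameters the basic reproduction number before control is beta0/gamma ≈ 4.2 and drops to about 0.7 afterwards, so starting controls while the epidemic is still in its exponential-growth phase shrinks the final size by orders of magnitude, mirroring the paper's conclusion.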

  20. Representational Change and Children's Numerical Estimation

    Science.gov (United States)

    Opfer, John E.; Siegler, Robert S.

    2007-01-01

    We applied overlapping waves theory and microgenetic methods to examine how children improve their estimation proficiency, and in particular how they shift from reliance on immature to mature representations of numerical magnitude. We also tested the theoretical prediction that feedback on problems on which the discrepancy between two…