WorldWideScience

Sample records for average procedures applied

  1. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By using multiple information criteria in the model selection procedure, R-Average identifies the best subset of parameters that accounts for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
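
    The averaging rule at the heart of these models is compact enough to sketch directly. Below is a minimal Python illustration of the model family (not of the R-Average package itself; all numbers are hypothetical):

        import numpy as np

        # Averaging model of Information Integration Theory: the response to a
        # combination of attributes is a weighted mean of scale values,
        #   R = (w0*s0 + sum_i w_i*s_i) / (w0 + sum_i w_i),
        # where (w0, s0) describe the initial state and (w_i, s_i) the attributes.

        def averaging_response(w0, s0, weights, scale_values):
            """Weighted-mean response for one stimulus combination."""
            w = np.asarray(weights, dtype=float)
            s = np.asarray(scale_values, dtype=float)
            return (w0 * s0 + np.sum(w * s)) / (w0 + np.sum(w))

        # Adding a mildly positive cue *lowers* the response under averaging,
        # the kind of interaction pattern an adding model cannot reproduce.
        print(averaging_response(1.0, 5.0, [2.0], [9.0]))            # ~7.67
        print(averaging_response(1.0, 5.0, [2.0, 0.5], [9.0, 6.0]))  # ~7.43

    Parameter estimation then amounts to fitting the weights and scale values to observed responses, which is where the model-selection machinery of R-Average comes in.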

  2. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. The analysis considers levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and differences in performance between male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding of the meaning of average in context.

  3. Effects of Video-Based and Applied Problems on the Procedural Math Skills of Average- and Low-Achieving Adolescents.

    Science.gov (United States)

    Bottge, Brian A.; Heinrichs, Mary; Chan, Shih-Yi; Mehta, Zara Dee; Watson, Elizabeth

    2003-01-01

    This study examined effects of video-based, anchored instruction and applied problems on the ability of 11 low-achieving (LA) and 26 average-achieving (AA) eighth graders to solve computation and word problems. Performance for both groups was higher during anchored instruction than during baseline, but no differences were found between instruction…

  4. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
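
    To make the flavor of such a procedure concrete, here is a rough MSPRT-style sketch (a generic multialternative sequential rule; the stopping statistic and threshold are illustrative, not the paper's exact construction):

        import numpy as np

        # Generic multialternative sequential test: keep observing until the
        # log-likelihood of some hypothesis exceeds the (equal-weight) mixture
        # of the others by a threshold tied to the allowed error probability.

        def sequential_discernment(sample_stream, likelihoods, error_prob=0.01):
            """likelihoods: list of pdf functions, one per hypothesis."""
            k = len(likelihoods)
            log_l = np.zeros(k)                    # running log-likelihoods
            threshold = np.log((k - 1) / error_prob)
            n = 0
            for n, x in enumerate(sample_stream, start=1):
                log_l += np.log([f(x) for f in likelihoods])
                for i in range(k):
                    others = np.delete(log_l, i)
                    mix = np.logaddexp.reduce(others) - np.log(k - 1)
                    if log_l[i] - mix > threshold:
                        return i, n                # decision and sample count
            return None, n                         # stream ended undecided

    The sequential rule's payoff is exactly the effect quoted above: for low error probabilities, the average number of observations drops well below that of a fixed-length test.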

  5. Effects of measurement procedure and equipment on average room acoustic measurements

    DEFF Research Database (Denmark)

    Gade, Anders Christian; Bradley, J S; Siebein, G W

    1993-01-01

    This paper reports the results of a measurement tour of nine U.S. concert halls. Three measurement teams, from the University of Florida, the National Research Council of Canada, and the Technical University of Denmark, made parallel sets of measurements using their own equipment and procedures. In some of the halls, measurements were repeated using the procedures of the other teams to make it possible to separate the effects of different equipment and different procedures. The paper presents position-averaged results from the three teams and discusses reasons for the differences observed. [Work partially supported by the Concert Hall Research Group.]

  6. Nuclear fuel management via fuel quality factor averaging

    International Nuclear Information System (INIS)

    Mingle, J.O.

    1978-01-01

    The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once- and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near-optimal flat power profile under both beginning-of-life and end-of-life conditions. The procedure is easily applied, requiring only the solution of linear algebraic equations. (author)

  7. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Collignan, Bernard; Powaga, Emilie

    2014-01-01

    Risk assessment due to indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because indoor radon levels can vary considerably over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, due to the duration of the measurement and the restriction of the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics, and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values. These results are encouraging and suggest that a procedure with a short measurement time can be used to characterize long-term radon potential in dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied on thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real

  8. Nevada Applied Ecology Group procedures handbook for environmental transuranics

    International Nuclear Information System (INIS)

    White, M.G.; Dunaway, P.B.

    1976-10-01

    The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and other biological material. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications more complex than the routine standard sampling procedures utilized.

  9. Time averaging procedure for calculating the mass and energy transfer rates in adiabatic two phase flow

    International Nuclear Information System (INIS)

    Boccaccini, L.V.

    1986-07-01

    To take advantage of semi-implicit computer models for solving the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases the correlations normally used for the source terms - not time averaged - fail when using the theoretical time step that arises from the linear stability analysis applied to the right-hand side. Such a time averaging procedure is developed with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.)
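
    The limiting idea is easy to sketch (an illustrative relaxation cap, not the report's actual bubbly-flow correlation):

        # Sketch: time-averaged interfacial mass transfer over one step dt.
        # The transfer is capped so the phase cannot overshoot equilibrium
        # within the step, whatever the instantaneous correlation predicts.

        def averaged_mass_transfer_rate(gamma_inst, m_to_equilibrium, dt):
            """
            gamma_inst       : instantaneous rate from a correlation [kg/s]
            m_to_equilibrium : mass to exchange to reach equilibrium from the
                               current non-equilibrium state [kg]
            dt               : time step [s]
            """
            return min(gamma_inst, m_to_equilibrium / dt)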

  10. On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit

    International Nuclear Information System (INIS)

    Heising, C.D.; Zamora-Reyes, J.A.

    1996-01-01

    A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute the deposit-specific information, E2. This information is combined, through a single-stage application of Bayes' theorem, with the more extensive and well-established information on all similar formations in the region, E1. It is assumed that the best estimate for the deposit-specific case should include the relevant experimental evidence collected from other like formations, which gives incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)
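
    A toy version of such a single-stage update, using a conjugate normal model (all numbers hypothetical; E1 supplies the prior, E2 the likelihood):

        import numpy as np

        # Prior from regional experience (E1): grade ~ N(mu0, tau0^2).
        # Deposit-specific drill data (E2):    x_i ~ N(grade, sigma^2).
        mu0, tau0 = 0.20, 0.05     # prior mean and std of ore grade (made up)
        sigma = 0.08               # assumed within-deposit spread
        x = np.array([0.12, 0.17, 0.15, 0.22, 0.14])   # drill-test grades

        n = len(x)
        post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
        post_mean = post_var * (mu0 / tau0**2 + x.sum() / sigma**2)
        print(f"posterior grade: {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")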

  11. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the
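
    A rough sketch of the idea as summarized above (a greedy reconstruction, not Talsma's published algorithm; the noise metric here is a simple residual variance):

        import numpy as np

        # Rank-order trials by their impact on the ERP average, then keep the
        # subset that minimizes the residual background noise of the average.

        def auto_adaptive_average(trials):
            """trials: array (n_trials, n_samples) of single-trial epochs."""
            erp = trials.mean(axis=0)
            impact = ((trials - erp) ** 2).mean(axis=1)   # per-trial deviation
            order = np.argsort(impact)                    # cleanest first
            best_noise, best_k = np.inf, len(trials)
            for k in range(2, len(trials) + 1):
                subset = trials[order[:k]]
                avg = subset.mean(axis=0)
                noise = ((subset - avg) ** 2).mean() / k  # residual noise of avg
                if noise < best_noise:
                    best_noise, best_k = noise, k
            return trials[order[:best_k]].mean(axis=0)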

  12. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
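
    The affine-alignment step can be sketched as follows (a brute-force illustration under assumed parameter grids; the published CISA procedure estimates the warp and mean shape jointly via integral shape functions):

        import numpy as np

        # Align each signal to a reference by an affine time map t -> a*t + b,
        # so jitter (b) and dilation/compression (a) do not smear the average.

        def affine_align(sig, ref, a_grid, b_grid):
            t = np.arange(len(ref), dtype=float)
            best = (np.inf, 1.0, 0.0)
            for a in a_grid:
                for b in b_grid:
                    w = np.interp(a * t + b, t, sig, left=sig[0], right=sig[-1])
                    d = np.sum((w - ref) ** 2)
                    if d < best[0]:
                        best = (d, a, b)
            _, a, b = best
            return np.interp(a * t + b, t, sig, left=sig[0], right=sig[-1])

        def shape_average(signals):
            ref = signals[0]
            a_grid = np.linspace(0.9, 1.1, 21)   # assumed dilation range
            b_grid = np.arange(-5.0, 5.5, 0.5)   # assumed jitter range (samples)
            aligned = [affine_align(s, ref, a_grid, b_grid) for s in signals]
            return np.mean(aligned, axis=0)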

  13. 34 CFR 370.43 - What requirement applies to the use of mediation procedures?

    Science.gov (United States)

    2010-07-01

    § 370.43 What requirement applies to the use of mediation procedures? (a) Each designated agency shall implement procedures designed to ensure that, to the maximum extent possible, good faith negotiations and mediation procedures are...

  14. 20 CFR 668.860 - What cash management procedures apply to INA grant funds?

    Science.gov (United States)

    2010-04-01

    § 668.860 What cash management procedures apply to INA grant funds? INA grantees must... implement the Cash Management Improvement Act, found at 31 CFR part 205, apply by law to most recipients of...

  15. 12 CFR 516.5 - Do the same procedures apply to all applications under this part?

    Science.gov (United States)

    2010-01-01

    Section 516.5, Banks and Banking, OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY, APPLICATION PROCESSING PROCEDURES. § 516.5 Do the same procedures apply to all...

  16. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0, 1], in contrast to the stationary distribution of the inverse power law obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
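
    For reference, the two entropy families involved, in their standard forms (with k_B = 1; background definitions, not notation taken from the paper):

        S_q^{R} = \frac{1}{1-q}\,\ln\Big(\sum_i p_i^{\,q}\Big), \qquad
        S_q^{T} = \frac{1}{q-1}\Big(1 - \sum_i p_i^{\,q}\Big),

    both of which reduce to the Boltzmann-Gibbs entropy -\sum_i p_i \ln p_i in the limit q \to 1.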

  17. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions - nuclear reactors included - has been successfully modeled by autoregressive moving average (ARMA) models due to the versatility of this technique. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods; specifically, particular care must be given to the setting up of the appropriate network topology, the data normalization procedure and the learning code. In the present paper the capability of some neural networks of learning ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)
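
    A minimal sketch of the signal model in question (a plain ARMA(1,1) simulation with hypothetical coefficients, of the kind a network would be trained to forecast):

        import numpy as np

        # Simulate ARMA(1,1): x_t = phi*x_{t-1} + e_t + theta*e_{t-1},
        # the correlated plant noise the paper models and forecasts.
        rng = np.random.default_rng(0)
        phi, theta, n = 0.8, 0.3, 1000        # illustrative coefficients
        e = rng.standard_normal(n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]

        # One-step-ahead forecast from the known model, a natural baseline
        # for a neural network trained on lagged samples of x.
        x_hat = phi * x[:-1] + theta * e[:-1]
        print("baseline RMSE:", np.sqrt(np.mean((x[1:] - x_hat) ** 2)))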

  18. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell.

  1. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT

    International Nuclear Information System (INIS)

    Fouz, M. C.; Puerta Pelayo, J.

    2004-01-01

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the more common problems and possible solutions is given. This document can be considered as a chamber test handbook for beginners. (Author) 3 refs

  2. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  3. Comparative analysis of lockout programs and procedures applied to industrial machines

    Energy Technology Data Exchange (ETDEWEB)

    Chinniah, Y.; Champoux, M.; Burlet-Vienney, D.; Daigle, R. [Institut de recherche Robert-Sauve en sante et en securite du travail, Montreal, PQ (Canada)

    2008-09-15

    In 2005, approximately 20 workers in Quebec were killed by dangerous machines. Approximately 13,000 accidents in the province were linked to the use of machines. The resulting cost associated with these accidents was estimated to be $70 million to the Quebec Occupational Health and Safety Commission (CSST) in compensation and salary replacement. According to article 185 of the Quebec Occupational Health and Safety Regulation (RSST), workers intervening in hazardous zones of machines and processes during maintenance, repairs, and unjamming activities must apply lockout procedures. Lockout is defined as the placement of a lock or tag on an energy-isolating device in accordance with an established procedure, indicating that the energy-isolating device is not to be operated until removal of the lock or tag in accordance with an established procedure. This report presented a comparative analysis of lockout programs and procedures applied to industrial machines. The study attempted to answer several questions regarding the concept of lockout and its definition in the literature; the differences between legal lockout requirements among provinces and countries; different standards on lockout; the contents of lockout programs as described by different documents; and the compliance of lockout programs in a sample of industries in Quebec in terms of Canadian standard on lockout, the CSA Z460-05 (2005). The report discussed the research objectives, methodology, and results of the study. It was concluded that the concept of lockout has different meanings or definitions in the literature, especially in regulations. However, definitions of lockout which are found in standards have certain similarities. 50 refs., 52 tabs., 2 appendices.

  4. 18 CFR 284.502 - Procedures for applying for market-based rates.

    Science.gov (United States)

    2010-04-01

    Section 284.502, Conservation of Power and Water Resources, FEDERAL... POLICY ACT OF 1978 AND RELATED AUTHORITIES, Applications for Market-Based Rates for Storage. § 284.502...

  5. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-10-01

    Public Welfare (Continued), NATIONAL SCIENCE FOUNDATION, INTERGOVERNMENTAL REVIEW OF THE NATIONAL SCIENCE FOUNDATION PROGRAMS AND ACTIVITIES. § 660.6 What procedures apply to the selection of programs and activities...

  6. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  7. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  8. Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom

    Science.gov (United States)

    Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy

    2016-01-01

    The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. Particularly, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…

  9. Procedure manual for the estimation of average indoor radon-daughter concentrations using the filtered alpha-track method

    International Nuclear Information System (INIS)

    George, J.L.

    1988-04-01

    One of the measurement needs of US Department of Energy (DOE) remedial action programs is the estimation of the annual-average indoor radon-daughter concentration (RDC) in structures. The filtered alpha-track method, using a 1-year exposure period, can be used to accomplish RDC estimations for the DOE remedial action programs. This manual describes the procedure used to obtain filtered alpha-track measurements and to derive average RDC estimates from the measurements. Appropriate quality-assurance and quality-control programs are also presented. The 'prompt' alpha-track method of exposing monitors for 2 to 6 months during specific periods of the year is also briefly discussed in this manual. However, the prompt alpha-track method has been validated only for use in the Mesa County, Colorado, area. 3 refs., 3 figs

  10. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single sites' wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single site wind speeds and single site wind power production as input. This solves the problem with longer consecutive periods where the input data
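
    The core of the BMA predictive density is a weighted mixture of member densities; a bare-bones sketch (Gaussian member PDFs and pre-fitted weights are assumptions here; in practice the weights would be fitted on a training period, e.g. by EM):

        import numpy as np
        from scipy.stats import norm

        # BMA predictive PDF: p(y) = sum_k w_k * g(y | f_k), a weighted
        # average of the ensemble members' densities.
        def bma_pdf(y, forecasts, weights, sigma):
            y = np.atleast_1d(y)
            parts = [w * norm.pdf(y, loc=f, scale=sigma)
                     for w, f in zip(weights, forecasts)]
            return np.sum(parts, axis=0)

        forecasts = [0.42, 0.55, 0.38]   # hypothetical normalized wind power
        weights = [0.5, 0.3, 0.2]        # assumed fitted member weights
        grid = np.linspace(0.0, 1.0, 201)
        pdf = bma_pdf(grid, forecasts, weights, sigma=0.08)
        print("predictive mean:", np.trapz(grid * pdf, grid))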

  11. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    Science.gov (United States)

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close geometry gamma spectroscopy. It included the MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated by a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The results of the radioactivity measurement with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test.

  12. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)

  13. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  14. Bayesian Averaging over Many Dynamic Model Structures with Evidence on the Great Ratios and Liquidity Trap Risk

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2008-01-01

    A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is

  15. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    Section 1.959, Telecommunication, FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Wireless Radio Services Applications and Proceedings, Application Requirements and Procedures. § 1.959 Computation of average terrain elevation. Except a...

  16. Applied field test procedures on petroleum release sites

    International Nuclear Information System (INIS)

    Gilbert, G.; Nichols, L.

    1995-01-01

    The effective remediation of petroleum-contaminated soils and ground water is a significant issue for Williams Pipe Line Co. (Williams), costing $6.8 million in 1994. It is in Williams' best interest, then, to adopt approaches and apply technologies that will both be cost-effective and comply with regulations. Williams has found that the use of soil vapor extraction (SVE) and air sparging (AS) field test procedures at the onset of a petroleum release investigation/remediation accomplishes these goals. This paper focuses on the application of AS/SVE as the preferred technology for a specific type of remediation: refined petroleum products. In situ field tests are used prior to designing a full-scale remedial system, first to validate or disprove initial assumptions on the applicability of the technology. During the field test, remedial system design parameters are also collected to tailor the design and operation of a full-scale system to site-specific conditions, minimizing cost and optimizing effectiveness. In situ field tests should be designed and operated to simulate as closely as possible the operation of a full-scale remedial system. The procedures of an in situ field test will be presented. The results of numerous field tests and the associated costs will also be evaluated and compared to full-scale remedial systems and total project costs to demonstrate overall effectiveness. There are many advantages of AS/SVE technologies over conventional fluid extraction or SVE systems alone. However, the primary advantage is the ability to simultaneously reduce volatile and biodegradable compound concentrations in the phreatic, capillary fringe, and unsaturated zones

  17. Applying computer-based procedures in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro V. de; Carvalho, Paulo V.R. de; Santos, Isaac J.A.L. dos; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana], e-mail: mvitor@ien.gov.br, e-mail: paulov@ien.gov.br, e-mail: luquetti@ien.gov.br, e-mail: grecco@ien.gov.br; Bruno, Diego S. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola Politecnica. Curso de Engenharia de Controle e Automacao], e-mail: diegosalomonebruno@gmail.com

    2009-07-01

    Plant operation procedures are used to guide operators in coping with normal, abnormal or emergency situations in a process control system. Historically, plant procedures have been paper-based (PBP); with the digitalisation trend in these complex systems, computer-based procedures (CBPs) are being developed to support procedure use. This work briefly presents the research on CBPs at the Human-System Interface Laboratory (LABIHS). The emergency operation procedure EOP-0 of the LABIHS NPP simulator was implemented in the ImPRO CBP system. The ImPRO system was chosen for testing because it is available for download on the Internet. A preliminary operation test using the implemented procedure in the CBP system was carried out and the results were compared to operation through PBP use. (author)

  18. 14 CFR 382.127 - What procedures apply to stowage of battery-powered mobility aids?

    Science.gov (United States)

    2010-01-01

    Section 382.127, Aeronautics and Space, OFFICE OF THE SECRETARY, DEPARTMENT... DISABILITY IN AIR TRAVEL, Stowage of Wheelchairs, Other Mobility Aids, and Other Assistive Devices. § 382.127 What procedures apply to stowage of battery-powered mobility aids? (a) Whenever baggage compartment...

  19. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, Cumulative Sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for data that are autocorrelated. One researcher stated that this chart is not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to residual processes. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for autocorrelated processes derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov Chain Method.
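
    The EWMA statistic and its limits are compact enough to sketch (standard textbook formulas; adapting sigma to the fitted time-series model, which is the point of the modification, is left out):

        import numpy as np

        # EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, flagged when z_t leaves
        # mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t))). For autocorrelated
        # data, sigma should come from the fitted model (e.g. an AR(1) fit).

        def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
            z = np.empty(len(x))
            prev = mu0
            signals = []
            for t, xt in enumerate(x):
                prev = lam * xt + (1.0 - lam) * prev
                z[t] = prev
                width = L * sigma * np.sqrt(lam / (2.0 - lam)
                                            * (1 - (1 - lam) ** (2 * (t + 1))))
                if abs(z[t] - mu0) > width:
                    signals.append(t)          # out-of-control points
            return z, signals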

  20. 24 CFR 1000.54 - What procedures apply to complaints arising out of any of the methods of providing for Indian...

    Science.gov (United States)

    2010-04-01

    Section 1000.54, Housing and... ACTIVITIES, General. § 1000.54 What procedures apply to complaints arising out of any of the methods of...

  1. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9.

    Science.gov (United States)

    Usta, Murat; Aral, Hale; Mete Çilingirtürk, Ahmet; Kural, Alev; Topaç, Ibrahim; Semerci, Tuna; Hicri Köseoğlu, Mehmet

    2016-11-01

    Average of normals (AON) is a quality control procedure that is sensitive only to systematic errors that can occur in an analytical process in which patient test results are used. The aim of this study was to develop an alternative model in order to apply the AON quality control procedure to datasets that include qualitative values below the limit of detection (LoD). The reported patient test results for tumor markers, such as CA 15-3, CA 125, and CA 19-9, analyzed by two instruments, were retrieved from the information system over a period of 5 months, using calibrator and control materials with the same lot numbers. The median as a measure of central tendency and the median absolute deviation (MAD) as a measure of dispersion were used for the complementary model of the AON quality control procedure. The u_bias values, which were determined for the bias component of the measurement uncertainty, were partially linked to the percentages of the daily median values of the test results that fall within the control limits. The results for these tumor markers, in which lower limits of reference intervals are not medically important for clinical diagnosis and management, showed that the AON quality control procedure, using the MAD around the median, can be applied to datasets including qualitative values below the LoD.
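
    A schematic of the robust AON monitoring described (the control-limit multiplier and baseline choice are illustrative):

        import numpy as np

        # Monitor the daily median of reported patient results; flag a day
        # whose median leaves median +/- k*MAD limits from a stable baseline.
        # Sub-LoD results can enter as a fixed qualitative value without
        # distorting the median, which is why median/MAD replace mean/SD here.

        def aon_limits(baseline_daily_medians, k=3.0):
            med = np.median(baseline_daily_medians)
            mad = np.median(np.abs(baseline_daily_medians - med))
            return med - k * mad, med + k * mad

        def check_day(day_results, limits):
            lo, hi = limits
            m = np.median(day_results)
            return lo <= m <= hi, m   # False suggests a systematic error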

  2. Evaluation of the content of neutral detergent fiber and acid through different procedures applied to forage plants

    Directory of Open Access Journals (Sweden)

    Luiz Juliano Valério Geron

    2014-06-01

    This study aimed to evaluate the content of neutral detergent fiber (NDF) and acid detergent fiber (ADF) of Marandu, Humidicula, Massai and Mombaça grasses obtained by three different procedures: the conventional method (CON), the Filter Bag Technique Ankom® (FBT) and the method adapted by EMBRAPA (EMB). The forage plants were obtained in the Southwest region of Mato Grosso in September; the samples were collected from pastures planted three years earlier. Three batteries (runs) were carried out for each procedure (method) for the determination of NDF and ADF of the different forages. A completely randomized design with three replications was used for each methodology. Statistical analysis of the variables studied was performed by analysis of variance, and the differences obtained were analyzed by the Tukey test at the 5% significance level. The different procedures (CON, FBT and EMB) for determining the NDF content did not differ among themselves for the different forage plants, with average values of 78.61% and 74.96% for the Marandu and Humidicula grasses, respectively, and 76.78% and 73.08% for the Massai and Mombaça grasses, respectively. The ADF contents obtained by the different procedures (CON, FBT and EMB) showed no difference among themselves, with average values of 50.95% and 44.86% for the Marandu and Humidicula grasses, respectively, and 52.21% and 51.56% for the Massai and Mombaça grasses, respectively. Thus, it is concluded that the content of neutral detergent fiber and acid detergent fiber of Marandu, Humidicula, Massai and Mombaça grasses can be determined by the procedure adapted by EMBRAPA, since this did not differ from the conventional methodology and the Filter Bag Technique Ankom®, besides having a lower waste of reagents and consequently lower cost.

  3. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  4. Pretreatment procedures applied to samples to be analysed by neutron activation analysis at CDTN/CNEN

    International Nuclear Information System (INIS)

    Francisco, Dovenir; Menezes, Maria Angela de Barros Correia

    2009-01-01

    The neutron activation technique - using several methods - has been applied in 80% of the analytical demand of the Division for Reactor and Analytical Techniques at CDTN/CNEN, Belo Horizonte, Minas Gerais. This scenario emphasizes the responsibility of the Laboratory to provide and assure the quality of the measurements. The first step to assure the quality of results is the preparation of the samples. Therefore, this paper describes the experimental procedures adopted at CDTN/CNEN in order to establish uniform conditions of analysis and to avoid contamination by elements present everywhere. Some of the procedures are based on methods described in the literature; others are based on many years of experience preparing samples from many kinds of matrices. The procedures described are related to geological materials - soil, sediment, rock, gems, clay, archaeological ceramics and ore - biological materials - hair, fish, plants, food - water, etc. Analytical results in sediment samples are shown as an example, pointing out the efficiency of the experimental procedure. (author)

  5. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
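
    The essential ingredients of such an error formula can be condensed as follows (a linear-in-parameters sketch under assumed inputs; the published WLS-ICE software handles arbitrary fit functions): the fit itself uses only diagonal weights, but the parameter errors are propagated through the full covariance matrix.

        import numpy as np

        # WLS estimate with correlation-aware ("sandwich") error propagation.
        def wls_ice_linear(X, y, C):
            """X: (n,p) design matrix, y: (n,) averages, C: (n,n) covariance."""
            W = np.diag(1.0 / np.diag(C))       # weights neglect correlations
            A = np.linalg.inv(X.T @ W @ X) @ (X.T @ W)
            beta = A @ y                        # ordinary WLS estimate
            cov_beta = A @ C @ A.T              # errors use full covariance
            return beta, np.sqrt(np.diag(cov_beta))

        # Example: fit <x^2(t)> = 2*D*t to synthetic MSD-like data.
        t = np.linspace(1, 10, 10)
        X = t[:, None]                          # single parameter: slope 2D
        C = 0.05 * np.exp(-np.abs(t[:, None] - t[None, :]))  # toy covariance
        rng = np.random.default_rng(1)
        y = 0.8 * t + rng.multivariate_normal(np.zeros(10), C)
        beta, err = wls_ice_linear(X, y, C)
        print(f"2D = {beta[0]:.3f} +/- {err[0]:.3f}")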

  6. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring... identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort.

  7. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    Science.gov (United States)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas besides rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course the simulated model results are affected by errors and uncertainties. Possible sources of uncertainties are the model structure, model parameters and input data. Thus, after the model validation (comparison of simulated water extent to observed extent, taken from airborne pictures), the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum resolution of a DEM required for flood simulation and concerning the best aggregation procedure of a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainties. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
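
    The Monte Carlo step described above reduces to a short loop (all numbers invented; the real study perturbs measured DEMs and runs FLOODMAP rather than a simple threshold test):

        import numpy as np

        # Perturb the DEM with its vertical error, redo the flood test, and
        # turn the ensemble into per-cell flooding probabilities.
        rng = np.random.default_rng(42)
        dem = rng.uniform(50.0, 60.0, size=(100, 100))  # stand-in DEM [m]
        water_level = 55.0                              # flood level [m]
        sigma_dem = 0.5                                 # assumed DEM error [m]
        n_runs = 500

        flooded = np.zeros_like(dem)
        for _ in range(n_runs):
            noisy = dem + rng.normal(0.0, sigma_dem, size=dem.shape)
            flooded += (noisy < water_level)

        prob = flooded / n_runs                         # probability map
        print("fraction flooded with P > 0.5:", np.mean(prob > 0.5))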

  8. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study, optimization of procedures and standardization of an Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods at the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and better irradiation conditions. (author)
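
    As a generic illustration of the 2^k designs mentioned (a full-factorial run matrix; the factor labels echo the study's variables, the levels are placeholders):

        from itertools import product

        # Full 2^3 factorial: every low/high (-1/+1) combination of three
        # of the studied variables. Main effect of a factor = mean response
        # at +1 minus mean response at -1; interactions come from products
        # of level columns.
        factors = ["decay_time", "counting_time", "detector_distance"]
        design = list(product([-1, +1], repeat=len(factors)))

        for run, levels in enumerate(design, start=1):
            print(run, dict(zip(factors, levels)))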

  9. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    Science.gov (United States)

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  10. Containment integrity and leak testing. Procedures applied and experiences gained in European countries

    International Nuclear Information System (INIS)

    1987-01-01

    Containment systems are the ultimate safety barrier for preventing the escape of gaseous, liquid and solid radioactive materials produced in normal operation, not retained in process systems, and for keeping back radioactive materials released by system malfunction or equipment failure. A primary element of the containment shell is therefore its leak-tight design. The report describes the present containment concepts mostly used in European countries. The leak-testing procedures applied and the experiences gained in their application are also discussed. The report refers more particularly to pre-operational testing, periodic testing and extrapolation methods of leak rates measured at test conditions to expected leak rates at calculated accident conditions. The actual problems in periodic containment leak rate testing are critically reviewed. In the appendix to the report a summary is given of the regulations and specifications applied in different member countries

  11. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  12. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    Science.gov (United States)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  14. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  15. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards performs the averaging. This scheme is more economical, in terms of time and algebraic calculations, than the usual procedure of Bogolyubov's method. (Author)

  16. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  17. 28 CFR 30.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false What procedures apply to the selection of programs and activities under these regulations? 30.6 Section 30.6 Judicial Administration DEPARTMENT OF... consult with local elected officials. (b) Each state that adopts a process shall notify the Attorney...

  18. 25 CFR 900.58 - Do the same accountability and control procedures described above apply to Federal property?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Do the same accountability and control procedures described above apply to Federal property? 900.58 Section 900.58 Indians BUREAU OF INDIAN AFFAIRS... Organization Management Systems Property Management System Standards § 900.58 Do the same accountability and...

  19. 49 CFR 17.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false What procedures apply to the selection of programs and activities under these regulations? 17.6 Section 17.6 Transportation Office of the Secretary of Transportation INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF TRANSPORTATION PROGRAMS AND ACTIVITIES § 17.6 What...

  20. Validation procedures of software applied in nuclear instruments. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    2007-09-01

    The IAEA has supported the availability of well functioning nuclear instruments in Member States over more than three decades. Some older or aged instruments are still being used and are still in good working condition. However, those instruments may not in all cases meet modern software requirements of the end-user. Therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet end-user demands. New advanced software is applied not only in new instrumentation, but often also in new and improved applications of modernized and/or refurbished instruments in many Member States, for which, in a few cases, the IAEA also provided support. Modern software applied in nuclear instrumentation plays a key role in safe operation and in the execution of commands in a user-friendly manner. Correct data handling and transfer have to be ensured. Additional features such as data visualization and interfacing to a PC for control and data storage are often included. To finalize the task, where new instrumentation that is not commercially available is used, or where aged instruments are modernized/refurbished, the applied software has to be verified and validated. A Technical Meeting on 'Validation Procedures of Software Applied in Nuclear Instruments' was organized in Vienna, 20-23 November 2006, to discuss the verification and validation process of software applied to the operation and use of nuclear instruments. The presentations at the technical meeting included valuable information, which has been compiled and summarized in this publication and should be useful for technical staff in Member States when modernizing/refurbishing nuclear instruments. 22 experts in the field of modernization/refurbishment of nuclear instruments, as well as users of applied software, presented their latest results. Discussion sessions followed the presentations. This publication is the outcome of deliberations during the meeting

  1. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty and to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work substantially exceed those of the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking unrecognized experimental errors into account. Highlights: a new statistical procedure for processing one-parameter discrepant experimental data is presented; the procedure estimates the contribution of unrecognized errors to the total experimental uncertainty; the procedure was applied to discrepant half-life experimental data; results of the calculations are compared to the ENSDF and DDEP evaluations.
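
    The balance idea behind a Mandel–Paule-type average can be shown compactly: pick a between-experiment variance s^2 so that the weighted sum of squared residuals about the weighted mean equals n - 1. The sketch below is a generic bisection on s^2 with invented input values, not the authors' implementation; when the data are consistent it collapses to the ordinary weighted average.

    ```python
    import numpy as np

    def mandel_paule(x, u, itmax=200, tol=1e-10):
        """Weighted mean with an extra between-experiment variance s2 chosen
        by bisection so that the weighted sum of squared residuals equals
        n - 1 (the Mandel-Paule balance equation)."""
        x, u = np.asarray(x, float), np.asarray(u, float)
        n = len(x)
        lo, hi = 0.0, (x.max() - x.min()) ** 2 + u.max() ** 2   # bracket for s2
        s2 = 0.0
        for _ in range(itmax):
            s2 = 0.5 * (lo + hi)
            w = 1.0 / (u ** 2 + s2)
            m = np.sum(w * x) / np.sum(w)
            f = np.sum(w * (x - m) ** 2) - (n - 1)   # > 0 means s2 is too small
            if abs(f) < tol:
                break
            lo, hi = (s2, hi) if f > 0 else (lo, s2)
        w = 1.0 / (u ** 2 + s2)
        m = np.sum(w * x) / np.sum(w)
        return m, np.sqrt(1.0 / np.sum(w)), np.sqrt(s2)

    # Invented discrepant "half-life" data: values and reported uncertainties.
    mean, internal_unc, s = mandel_paule([432.7, 433.9, 432.2], [0.3, 0.4, 0.8])
    print(f"weighted mean = {mean:.2f} +/- {internal_unc:.2f}, s = {s:.2f}")
    ```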

  2. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  3. Quality control procedures applied to nuclear instruments. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    2008-11-01

    Quality control (QC) test procedures for nuclear instrumentation are important for assuring proper and safe operation of the instruments, especially for equipment related to radiological safety, human health and national safety. Correct measurement of radiation parameters must be ensured, i.e., accurate measurement of the number of radioactive events and of counting times and, in some cases, accurate measurement of the radiation energy and the occurrence time of the nuclear events. There are several kinds of tests applied to nuclear instruments, for example, type-testing done by suppliers, acceptance testing made by the end users, QC tests after repair, and quality assurance/quality control tests made by end users. All of these tests are in many cases based on practical guidelines or on the experience of the individual specialist; the available standards on this topic also need to be adapted to specific instruments. The IAEA has provided nuclear instruments and supported the operational maintenance efforts of the Member States. Although nuclear instrumentation is continuously upgraded, some older or aged instruments are still in use and in good working condition. Some of these instruments may not, however, meet modern requirements of the end-user; therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet end-user demands. As a result, new instrumentation which is not commercially available, or modernized/refurbished instruments, need to be tested or verified with QC procedures to meet national or international certification requirements. A technical meeting on QC procedures applied to nuclear instruments was organized in Vienna from 23 to 24 August 2007. Existing and required QC test procedures necessary for the verification of operation and measurement of the main characteristics of nuclear instruments were the focus of discussion at this meeting. Presentations made at the technical meeting provided

  4. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross-section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distribution of level widths and that of level positions is discussed extensively with an example using 168Er data. 19 figures, 2 tables

  5. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

    Carvedilol is a nonselective beta blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe a carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes the variables of particular importance for the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism, or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is Nonlinear Modeling of Mixed Effects (NONMEM). The analytical method used in the data collection period is of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, to provide quantitative results, and to monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, as it was necessary to perform certain modifications and a validation of the method with the aim of using the obtained results for the purpose of a population pharmacokinetic analysis. The validation process is the logical terminal phase of analytical procedure development that ensures the applicability of the procedure itself. The goal of validation is to ensure consistency of the method and accuracy of results, or to confirm the selection of the analytical method for a given sample

  6. The Safety Assessment of OPR-1000 for Station Blackout Applying Combined Deterministic and Probabilistic Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dong Gu; Ahn, Seung-Hoon; Cho, Dae-Hyung [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    This is termed station blackout (SBO). It does not, however, generally include the loss of available AC power to safety buses fed by station batteries through inverters or by alternate AC sources. Historically, risk analysis results have indicated that SBO is a significant contributor to overall core damage frequency. In this study, the safety assessment of the OPR-1000 nuclear power plant for an SBO accident, which is a typical beyond-design-basis accident (BDBA) and an important contributor to overall plant risk, is performed by applying the combined deterministic and probabilistic procedure (CDPP). In addition, the reevaluation of the SBO risk at OPR-1000 by eliminating excessive conservatism in the existing PSA is discussed. The reference analysis showed that the CDF and CCDP did not meet the acceptable risk, and it was confirmed that the SBO risk should be reevaluated. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system meets the acceptable risk against SBO. In addition, it was demonstrated that the proposed CDPP is applicable to the safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.

  7. Assessment of specific absorbed fractions for photons and electrons using average adult Japanese female phantom

    International Nuclear Information System (INIS)

    Manabe, Kentaro; Sato, Kaoru; Takahashi, Fumiaki

    2016-12-01

    In the 2007 Recommendations of the International Commission on Radiological Protection (ICRP), the procedure for calculating effective doses was modified as follows: equivalent doses are evaluated using male and female voxel phantoms based on reference anatomical data for Caucasians, and effective doses are calculated by applying tissue weighting factors to sex-averaged equivalent doses. Specific absorbed fractions (SAFs), which are essential data for calculating internal doses, depend on the body weights, organ masses, and positional relations of the organs of the phantoms. The dose coefficients (committed effective doses per unit intake of radionuclides) developed by ICRP on the basis of the 2007 Recommendations therefore reflect the physical characteristics of Caucasians and are averaged over the sexes. Meanwhile, the physiques of adult Japanese are generally smaller than those of adult Caucasians, and the organ masses also differ. Therefore, dose coefficients reflecting Japanese physical characteristics differ from those of ICRP. Knowledge of the influence of these differences on dose coefficients is important when applying the sex-averaged dose coefficients of ICRP to the Japanese system of radiation protection. SAF data based on phantoms with Japanese physical characteristics are essential for assessing dose coefficients reflecting Japanese physical characteristics. The Japan Atomic Energy Agency constructed average adult Japanese phantoms, JM-103 (male) and JF-103 (female), and is developing a dose estimation method for internal exposure using these phantoms. This report provides photon and electron SAFs of JF-103. The data of this report and the previously published data of JM-103 are applicable to the evaluation of sex-specific and sex-averaged dose coefficients reflecting the physical characteristics of the average adult Japanese for intakes of radionuclides emitting photons and electrons. Appendix as CD-ROM. (J.P.N.)

  8. Fluxes by eddy correlation over heterogeneous landscape: How shall we apply the Reynolds average?

    Science.gov (United States)

    Dobosy, R.

    2007-12-01

    tilde denotes the departure of base-state (2) from base-state (1). It represents surface C's characteristic bias. The equation is defined only over class C. A similar equation applies to each surface class. The first and second right-hand terms express interaction of the departure quantities with surface C's characteristic bias. These terms are zero if the base states are simple means. The third term becomes important if class C has a significant bias both in vertical motion and in its characteristic values of a. A practical example from 2005 June 18 at 1015 LST in Illinois is illustrative. Turbulence measurements were made by aircraft at 20 m above ground along a 50 km track approximately evenly divided between corn and soybean. Corn (type C) was growing quickly, increasing the mixing ratio of moisture (r) and reducing that of CO2 (a), relative to soybean. Soybean characteristically heated the air and favored updrafts. These biases were evident in the surface-C averages of the departure quantities r*, a*, theta*, and w* relative to their corresponding averages over soybean. In particular, the bias in CO2 mixing ratio, negative over corn and positive over soybean, was about 20% of the standard deviation of a*. Nevertheless, neither surface type strongly favored vertical motion, giving the encouraging result that the two approaches do not differ by more than an insignificant few per cent. The theoretical analysis indicates care, however, where extensive areas of both bare soil and vegetated land may enhance the bias in vertical motion between different components of the landscape.

  9. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    Science.gov (United States)

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  10. Estimating the average grain size of metals - approved standard 1969

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    These methods cover procedures for estimating, and rules for expressing, the average grain size of all metals consisting entirely, or principally, of a single phase. The methods may also be used for any structures having appearances similar to those of the metallic structures shown in the comparison charts. The three basic procedures for grain size estimation discussed are the comparison procedure, the intercept (or Heyn) procedure, and the planimetric (or Jeffries) procedure. For specimens consisting of equiaxed grains, the method of comparing the specimen with a standard chart is most convenient and is sufficiently accurate for most commercial purposes. For high degrees of accuracy in estimating grain size, the intercept or planimetric procedures may be used.
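
    A worked sketch of the intercept (Heyn) procedure follows, assuming the commonly tabulated E112-style relation between the number of intercepts per millimetre and the grain size number; the counts, test-line length and magnification are invented for illustration.

    ```python
    import math

    def heyn_grain_size(intercepts, line_length_mm, magnification):
        """Intercept (Heyn) procedure: N_L is the number of grain-boundary
        intercepts per millimetre of test line at the specimen scale; the
        usual E112-style relation then gives the grain size number G."""
        n_l = intercepts / (line_length_mm / magnification)
        g = 6.643856 * math.log10(n_l) - 3.288
        return g, 1.0 / n_l                  # G and mean lineal intercept (mm)

    # Invented counts: 60 intercepts on a 500 mm test line at 100x magnification.
    g, ell = heyn_grain_size(60, 500.0, 100.0)
    print(f"G = {g:.1f}, mean intercept = {1000.0 * ell:.0f} um")
    ```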

  11. Generation of the covariance matrix for a set of nuclear data produced by collapsing a larger parent set through the weighted averaging of equivalent data points

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
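
    The construction lends itself to a short linear-algebra sketch: place the averaging weights in a collapse matrix W, so the collapsed values are W x and error propagation gives the collapsed covariance W C W^T, whose elements are built from the binary products of weights described above. The Python below uses inverse-variance weights and invented numbers; it illustrates the idea rather than reproducing the paper's procedure.

    ```python
    import numpy as np

    def collapse(x, cov, groups):
        """Weighted-average equivalent points (same group index) of a parent
        data set. With y = W x, error propagation gives C_y = W C W^T, so
        each collapsed covariance element is bilinear in the weights."""
        x, cov = np.asarray(x, float), np.asarray(cov, float)
        groups = np.asarray(groups)
        W = np.zeros((groups.max() + 1, len(x)))
        inv_var = 1.0 / np.diag(cov)     # inverse-variance (least-squares) weights
        for g in range(W.shape[0]):
            idx = groups == g
            W[g, idx] = inv_var[idx] / inv_var[idx].sum()
        return W @ x, W @ cov @ W.T

    # Invented parent set: points 0 and 1 are equivalent; point 2 stands alone.
    x = [1.02, 0.98, 2.50]
    cov = np.diag([0.02 ** 2, 0.03 ** 2, 0.05 ** 2])
    y, cy = collapse(x, cov, [0, 0, 1])
    print(y, np.sqrt(np.diag(cy)))
    ```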

  12. THE ASSESSMENT OF CORPORATE BONDS ON THE BASIS OF THE WEIGHTED AVERAGE

    Directory of Open Access Journals (Sweden)

    Victor V. Prokhorov

    2014-01-01

    The article considers the problem of assessing the interest rate of a public corporate bond issue. The subject of the research is techniques for evaluating the interest rates of corporate bonds. The article discusses the task of developing a methodology for assessing the market interest rate of a corporate bonded loan that takes into account both systematic and specific risks. A technique for evaluating the market interest rate of corporate bonds on the basis of weighted averages is proposed. This procedure uses in the calculation a cumulative barrier interest rate, a sectoral weighted-average interest rate, and an interest rate determined on the basis of the CAPM (Capital Asset Pricing Model). The results indicate that the proposed methodology can be applied to assessing the market interest rate of a public corporate bond issue under Russian conditions. The results may be applicable to Russian industrial enterprises organizing public bond issues, as well as to investment companies acting as organizers of corporate securities loans and other organizations specializing in investments in Russian public corporate bond loans.
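
    As a toy illustration of the weighted-average construction named in the abstract, the sketch below combines a cumulative barrier rate, a sectoral weighted-average rate and a CAPM-based rate; the weights and all numeric inputs are invented placeholders, and only the CAPM relation r = r_f + beta (r_m - r_f) is standard.

    ```python
    def capm_rate(r_free, beta, r_market):
        """Textbook CAPM component: r = r_f + beta * (r_m - r_f)."""
        return r_free + beta * (r_market - r_free)

    def bond_rate(r_barrier, r_sector, r_capm, weights=(0.3, 0.3, 0.4)):
        """Weighted average of the three component rates; the weights here
        are invented placeholders, not the article's calibration."""
        w1, w2, w3 = weights
        return w1 * r_barrier + w2 * r_sector + w3 * r_capm

    # Invented inputs: barrier rate 8.5%, sectoral average 9.2%, CAPM with
    # r_f = 7%, beta = 1.2 and an 11% market return.
    print(f"{bond_rate(0.085, 0.092, capm_rate(0.07, 1.2, 0.11)):.4f}")
    ```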

  13. Photogrammetry procedures applied to anthropometry.

    Science.gov (United States)

    Okimoto, Maria Lúcialeite Ribeiro; Klein, Alison Alfred

    2012-01-01

    This study aims to evaluate the reliability of, and establish procedures for, the use of digital photogrammetry in anthropometric measurements of the human hand. The methodology included the construction of a platform that keeps the hand at a fixed distance from the camera lens and cancels the effects of parallax. We developed software to perform the measurements from the images, and built a proof object cast from a negative mold; this object was subjected to measurements with digital photogrammetry using the data collection platform, with a caliper, and with the Coordinate Measuring Machine (MMC). The results of applying photogrammetry to data collection on the hand segment allow us to conclude that photogrammetry is effective, presenting a precision coefficient below 0.940, within normal and acceptable values given the magnitude of the data used in anthropometry. It was concluded that photogrammetry is reliable, accurate and efficient for carrying out anthropometric surveys of populations, and presents less difficulty for in-place collection.

  14. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of determining whether an appropriate burn-in has been achieved. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  15. 21 CFR 1.383 - What expedited procedures apply when FDA initiates a seizure action against a detained perishable...

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false What expedited procedures apply when FDA initiates a seizure action against a detained perishable food? 1.383 Section 1.383 Food and Drugs FOOD AND... Administrative Detention of Food for Human or Animal Consumption General Provisions § 1.383 What expedited...

  16. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed; namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  17. Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2007-01-01

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent

  18. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions under which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  19. Applying industrial engineering practices to radiology.

    Science.gov (United States)

    Rosen, Len

    2004-01-01

    Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging departments and imaging centers to generate reports that can help them understand the utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first applied industrial engineering methodology to medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than $500,000 of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than $140,000. The medical imaging department in this hospital is only now beginning to apply what it has learned to

  20. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  1. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  2. Procedures for analyzing the effectiveness of siren systems for alerting the public

    International Nuclear Information System (INIS)

    Keast, D.N.; Towers, D.A.; Anderson, G.S.; Kenoyer, J.L.; Desrosiers, A.E.

    1982-09-01

    NUREG-0654, Revision 1 (Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants), Appendix 3, discusses requirements of the licensees to implement a prompt notification system within the 10-mile emergency planning zone (EPZ) surrounding a nuclear facility. Sirens are being installed for use as part of or as the entire notification system by many licensees. This report describes a procedure for predicting siren system effectiveness under defined conditions within the EPZs. The procedure requires a good topographical map and knowledge of the meteorology, demographics, and human activity patterns within the EPZ. The procedure is intended to be applied to systems of sirens and to obtain average results for a large number (30 or more) of listener locations

  3. Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks

    Directory of Open Access Journals (Sweden)

    Shen-Chun Wu

    2003-01-01

    This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.

  4. Trends and the determination of effective doses for standard X-ray procedures

    International Nuclear Information System (INIS)

    Johnson, H.M.; Neduzak, C.; Gallet, J.; Sandeman, J.

    2001-01-01

    Trends in the entrance skin exposures (air kerma) for standard x-ray imaging procedures are reported for the Province of Manitoba, Canada. Average annual data per procedure, obtained using standard phantoms and standard ion chambers, have been recorded since 1981. For example, chest air kerma (backscatter included) has decreased from 0.14 to 0.09 mGy. Confounding factors may negate the gains unless facility quality control programs are maintained. The data were obtained for a quality assurance and regulatory compliance program. Quoting such data for risk evaluation purposes lacks rigor; hence a compartment model for organ apportioning, using organ absorbed doses and weighting factors, has been applied to determine the effective dose per procedure. The effective doses for the standard procedures are presented, including the value of 0.027 mSv (1999) calculated for the effective dose in PA chest imaging. (author)

  5. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
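
    The evaporative-demand step admits a compact worked example. The sketch below applies the standard Hargreaves relation, PET = 0.0023 Ra (Tmean + 17.8) sqrt(Tmax - Tmin), with Ra expressed as its evaporation equivalent, and closes the balance as precipitation minus demand; the inputs are invented and the 30-day month is a simplifying assumption.

    ```python
    import math

    def hargreaves_pet_mm_day(t_max_c, t_min_c, ra_mm_day):
        """Hargreaves estimate of atmospheric evaporative demand:
        PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), with the
        exoatmospheric radiation Ra given as its evaporation equivalent."""
        t_mean = 0.5 * (t_max_c + t_min_c)
        return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max_c - t_min_c)

    # Invented 1 km cell and month: Tmax 22 C, Tmin 8 C, Ra ~ 13.5 mm/day.
    pet_month = 30.0 * hargreaves_pet_mm_day(22.0, 8.0, 13.5)
    precip_month = 60.0                            # monthly precipitation, mm
    print(f"PET = {pet_month:.0f} mm, balance = {precip_month - pet_month:.0f} mm")
    ```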

  6. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
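
    The ray-based averaging can be sketched generically: resample each digitized outline as radius versus angle on a common set of equiangular rays, then average the radii ray by ray within a size group. The Python below is a simplified stand-in, using a single centroid instead of the paper's two ray centres and overlapping arcs, and synthetic outlines instead of digitized footprints.

    ```python
    import numpy as np

    def average_radial_curves(curves, n_rays=72):
        """Resample each closed outline as radius-versus-angle on a common
        set of equiangular rays, then average the radii ray by ray."""
        grid = np.linspace(-np.pi, np.pi, n_rays, endpoint=False)
        radii = []
        for pts in curves:                   # pts: (n, 2) digitized outline
            c = pts.mean(axis=0)             # single centroid as the ray centre
            d = pts - c
            ang = np.arctan2(d[:, 1], d[:, 0])
            r = np.hypot(d[:, 0], d[:, 1])
            order = np.argsort(ang)
            radii.append(np.interp(grid, ang[order], r[order], period=2 * np.pi))
        return grid, np.mean(radii, axis=0)

    # Synthetic outlines: three noisy ellipses standing in for footprints.
    rng = np.random.default_rng(3)
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    curves = [np.column_stack((1.3 * np.cos(t) * (1 + 0.02 * rng.normal(size=t.size)),
                               0.5 * np.sin(t) * (1 + 0.02 * rng.normal(size=t.size))))
              for _ in range(3)]
    grid, mean_r = average_radial_curves(curves)
    print(mean_r[:5])
    ```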

  7. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure
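
    A minimal sketch of the pre-averaging construction, assuming the usual triangular weight g(x) = min(x, 1 - x) and deliberately omitting the bias-correction and scaling constants of the full estimator, is given below with an invented noisy price path.

    ```python
    import numpy as np

    def preaveraged_returns(prices, kn):
        """All overlapping pre-averaged returns with weight g(x) = min(x, 1 - x);
        the bias-correction and scaling constants of the full pre-averaging
        estimator are deliberately omitted in this sketch."""
        r = np.diff(np.log(prices))             # noisy high-frequency returns
        j = np.arange(1, kn)
        g = np.minimum(j / kn, 1.0 - j / kn)    # triangular weight function
        return np.array([g @ r[i:i + kn - 1] for i in range(len(r) - kn + 2)])

    # Invented noisy path: efficient log-price plus i.i.d. microstructure noise.
    rng = np.random.default_rng(2)
    log_eff = np.cumsum(rng.normal(0.0, 1e-4, 23400))
    prices = 100.0 * np.exp(log_eff + rng.normal(0.0, 5e-5, 23400))

    bar = preaveraged_returns(prices, kn=51)
    print(len(bar), np.sum(bar ** 2))   # kn-dependent blocks and their squared sum
    ```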

  8. Quantization Procedures

    International Nuclear Information System (INIS)

    Cabrera, J. A.; Martin, R.

    1976-01-01

    We present in this work a review of the conventional quantization procedure, of the one proposed by I.E. Segal, and of a new quantization procedure similar to the latter for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures presented are equivalent, but for the nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs

  9. Creating an Implicit Measure of Cognition More Suited to Applied Research: A Test of the Mixed Trial-Implicit Relational Assessment Procedure (MT-IRAP)

    Science.gov (United States)

    Levin, Michael E.; Hayes, Steven C.; Waltz, Thomas

    2010-01-01

    The Implicit Relational Assessment Procedure (IRAP) is a promising tool for measuring implicit cognitions in applied research. However, the need for training and block effects can limit its capacity to assess effects with individual stimuli and participants, both of which are important for applied research. We developed a modified IRAP, the Mixed…

  10. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general-purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer ... of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic

  11. Recent advances in applied research on DRO procedures.

    Science.gov (United States)

    Jessel, Joshua; Ingvarsson, Einar T

    2016-12-01

    Differential reinforcement of other behavior (DRO) has been applied to reduce problem behavior in various forms across different populations. We review DRO research from the last 5 years, with a focus on studies that enhance our understanding of the underlying mechanisms of DRO. We also discuss implications for practitioners and applied researchers. © 2016 Society for the Experimental Analysis of Behavior.

  12. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we consider only the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  13. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
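
    A minimal sketch of single-frame phase extraction via the analytic signal is given below; the fringe profile is simulated with an invented phase law, and the additional step of mapping the recovered phase through the Bessel relation to vibration amplitude is not shown.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Simulated single line profile of a fringe pattern, standing in for one
    # row of the recorded time-average interferogram; the phase law is invented.
    x = np.linspace(0.0, 1.0, 2048)
    true_phase = 6.0 * np.pi * x ** 2
    intensity = 1.0 + 0.8 * np.cos(true_phase)

    # Analytic signal of the zero-mean profile gives the wrapped phase from a
    # single frame; unwrapping removes the 2*pi jumps.
    ac = intensity - intensity.mean()
    wrapped = np.angle(hilbert(ac))
    unwrapped = np.unwrap(wrapped)
    print(unwrapped[::512])
    ```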

  14. Microwave-assisted versus conventional decomposition procedures applied to a ceramic potsherd standard reference material by inductively coupled plasma atomic emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Papadopoulou, D.N.; Zachariadis, G.A.; Anthemidis, A.N.; Tsirliganis, N.C.; Stratis, J.A

    2004-03-03

    Inductively coupled plasma atomic emission spectrometry (ICP-AES) is a powerful, sensitive analytical technique with numerous applications in chemical characterization including that of ancient pottery, mainly due to its multi-element character, and the relatively short time required for the analysis. A critical step in characterization studies of ancient pottery is the selection of a suitable decomposition procedure for the ceramic matrix. The current work presents the results of a comparative study of six decomposition procedures applied on a standard ceramic potsherd reference material, SARM 69. The investigated decomposition procedures included three microwave-assisted decomposition procedures, one wet decomposition (WD) procedure by conventional heating, one combined microwave-assisted and conventional heating WD procedure, and one fusion procedure. Chemical analysis was carried out by ICP-AES. Five major (Si, Al, Fe, Ca, Mg), three minor (Mn, Ba, Ti) and two trace (Cu, Co) elements were determined and compared with their certified values. Quantitation was performed at two different spectral lines for each element and multi-element matrix-matched calibration standards were used. The recovery values for the six decomposition procedures ranged between 75 and 110% with a few notable exceptions. Data were processed statistically in order to evaluate the investigated decomposition procedures in terms of recovery, accuracy and precision, and eventually select the most appropriate one for ancient pottery analysis.

  15. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  16. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised-likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
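
    The contrast between a post-model-selection estimate and a model-averaged one can be made concrete in a few lines. The sketch below fits three nested linear models to invented data, takes the AIC-best model for the PMSE-style prediction, and compares it with a smooth-AIC-weighted average; the weighting scheme is one common choice, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    x = rng.uniform(-2.0, 2.0, n)
    y = 1.0 + 0.4 * x + rng.normal(0.0, 1.0, n)     # invented data

    def fit_aic(design):
        """Least-squares fit plus a Gaussian AIC (up to an additive constant)."""
        beta, rss = np.linalg.lstsq(design, y, rcond=None)[:2]
        return beta, n * np.log(rss[0] / n) + 2 * design.shape[1]

    designs = [np.ones((n, 1)),
               np.column_stack([np.ones(n), x]),
               np.column_stack([np.ones(n), x, x ** 2])]
    fits = [fit_aic(d) for d in designs]
    aic = np.array([a for _, a in fits])

    x0 = np.array([1.0, 1.5, 1.5 ** 2])             # predict at x = 1.5
    preds = [x0[:d.shape[1]] @ b for (b, _), d in zip(fits, designs)]

    best = int(np.argmin(aic))                      # post-model-selection choice
    w = np.exp(-0.5 * (aic - aic.min()))            # smooth AIC weights
    w /= w.sum()
    print("selected-model prediction:", preds[best])
    print("model-averaged prediction:", float(np.dot(w, preds)))
    ```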

  17. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
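
    A minimal sketch of the estimation and diagnosis stages, assuming the statsmodels library and an invented GPA-like series in place of the study's data, might look as follows.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Invented GPA-like series; the three Box-Jenkins stages are identification
    # (choose p, d, q from the data), estimation (fit), and diagnosis (residuals).
    rng = np.random.default_rng(1)
    y = 3.0 + np.cumsum(rng.normal(0.0, 0.05, 120))   # slowly drifting sequence

    fit = ARIMA(y, order=(1, 1, 1)).fit()             # estimation stage
    print(fit.params)                                 # fitted AR, MA, variance
    resid = fit.resid                                 # diagnosis stage
    print("lag-1 residual autocorrelation:",
          np.corrcoef(resid[1:], resid[:-1])[0, 1])
    print(fit.forecast(steps=4))                      # forecast from the model
    ```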

  18. Human factoring administrative procedures

    International Nuclear Information System (INIS)

    Grider, D.A.; Sturdivant, M.H.

    1991-01-01

    In nonnuclear business, administrative procedures bring to mind such mundane topics as filing correspondence and scheduling vacation time. In the nuclear industry, on the other hand, administrative procedures play a vital role in assuring the safe operation of a facility. For some time now, industry focus has been on improving technical procedures, and significant efforts are under way to produce better ones. Producing a sound technical procedure requires that a validated technical, regulatory, and administrative basis be developed and that the technical process be established for each procedure. Producing usable technical procedures requires that procedure presentation be engineered to the same human factors principles used in control room design. The vital safety role of administrative procedures requires that they be just as sound, just as rigorously formulated, and just as well documented as technical procedures. Procedure programs at the Tennessee Valley Authority and at Boston Edison's Pilgrim Station demonstrate that human factors engineering techniques can be applied effectively to technical procedures. With a few modifications, those same techniques can be used to produce more effective administrative procedures. Efforts are under way at the US Department of Energy Nuclear Weapons Complex and at some utilities (Boston Edison, for instance) to apply human factors engineering to administrative procedures. The techniques being adapted include the following

  19. 77 FR 7237 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2012-02-10

    ... Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY: In a decision served on... productivity for the 2006-2010 (5-year) averaging period. This represents a 0.6% decrease over the average for...

  20. 78 FR 10262 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2013-02-13

    ... Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board, DOT. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY: In a decision served on... productivity for the 2007-2011 (5-year) averaging period. This represents a 0.1% increase over the average for...

  1. Some implications of batch average burnup calculations on predicted spent fuel compositions

    International Nuclear Information System (INIS)

    Alexander, C.W.; Croff, A.G.

    1984-01-01

    The accuracy of using batch-averaged burnups to determine spent fuel characteristics (such as isotopic composition, activity, etc.) was examined for a typical pressurized-water reactor (PWR) fuel discharge batch by comparing characteristics computed by (a) performing a single depletion calculation using the average burnup of the spent fuel and (b) performing separate depletion calculations based on the relative amounts of spent fuel in each of twelve burnup ranges and summing the results. The computations were done using ORIGEN 2. Procedure (b) showed a significant shift toward a greater quantity of the heavier transuranics, which derive from multiple neutron captures, and a corresponding decrease in the amounts of lower transuranics. Those characteristics which derive primarily from fission products, such as total radioactivity and total thermal power, are essentially identical for the two procedures. Those characteristics that derive primarily from the heavier transuranics, such as spontaneous fission neutrons, are underestimated by procedure (a)
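
    The direction of the effect reported here follows from convexity: a quantity that grows faster than linearly with burnup is understated when the burnup is averaged before the calculation. The toy sketch below, with an invented discharge distribution and a power-law stand-in for k successive captures (not an ORIGEN2 run), makes the point.

    ```python
    import numpy as np

    # Toy model, not an ORIGEN2 run: let the concentration of a nuclide reached
    # by k successive neutron captures scale like burnup**k, and compare
    # procedure (a), one calculation at the batch-average burnup, with
    # procedure (b), a burnup-distribution-weighted sum. Numbers are invented.
    burnup = np.array([20.0, 28.0, 33.0, 38.0, 45.0])   # bin midpoints, GWd/tHM
    frac = np.array([0.10, 0.25, 0.30, 0.25, 0.10])     # batch fraction per bin

    for k in (1, 3, 5):
        single = np.average(burnup, weights=frac) ** k  # procedure (a)
        summed = np.sum(frac * burnup ** k)             # procedure (b)
        print(f"k = {k}: (b)/(a) = {summed / single:.3f}")
    ```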

  2. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2015-01-01

    Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures, published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10), are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update of the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  3. Decision-making Procedures

    DEFF Research Database (Denmark)

    Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander Christopher

    2009-01-01

    define procedures as mechanisms that influence the probabilities of reaching different endnodes. We show that for such procedural games a sequential psychological equilibrium always exists. Applying this approach within a principal-agent context we show that the way less attractive jobs are allocated...

  4. A fiber orientation-adapted integration scheme for computing the hyperelastic Tucker average for short fiber reinforced composites

    Science.gov (United States)

    Goldberg, Niels; Ospald, Felix; Schneider, Matti

    2017-10-01

    In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.

  5. Statistical near-real-time accountancy procedures applied to AGNS [Allied General Nuclear Services] minirun data using PROSA

    International Nuclear Information System (INIS)

    Beedgen, R.

    1988-03-01

    The computer program PROSA (PROgram for Statistical Analysis of near-real-time accountancy data) was developed as a tool to apply statistical test procedures to a sequence of materials balance results for detecting losses of material. First applications of PROSA to model facility data and real plant data showed that PROSA is also usable as a tool for process or measurement control. To deepen the experience for the application of PROSA to real data of bulk-handling facilities, we applied it to uranium data of the Allied General Nuclear Services miniruns, where accountancy data were collected on a near-real-time basis. Minirun 6 especially was considered, and the pulsed columns were chosen as the materials balance area. The structures of the measurement models for flow sheet data and actual operation data are compared, and methods are studied to reduce the error for inventory measurements of the columns.

  6. Radiation exposure of operator during various interventional procedures

    International Nuclear Information System (INIS)

    Yu, In Kyu; Chung, Jin Wook; Han, Joon Koo; Park, Jae Hyung; Kang, Wee Saing

    1994-01-01

    To investigate the levels of radiation exposure of an operator as influenced by the wearing of an apron, the type of procedure, the duration of fluoroscopy, and the operator's skill during various interventional procedures. Radiation doses were measured both inside and outside the apron (0.5 mm lead equivalent) of the operator by a film badge monitoring method, and the duration of fluoroscopy was measured in 96 procedures prospectively. The procedures were 30 transcatheter arterial embolizations (TAE), 25 percutaneous transhepatic biliary drainages (PTBD), 16 stone removals (SR), 15 percutaneous needle aspirations (PCNA) and 10 percutaneous nephrostomies (PCN). To assess the difference in exposure by operator's skill, the procedures of TAE and PTBD were done separately by groups of staff and residents. The average protective effect of the apron was 72.8%. Average radiation exposure (unit: μSv/procedure) was 23.3 in PTBD by residents, 10.0 in PTBD by staff, 10.0 in SR, 8.7 in TAE by residents, 7.3 in TAE by staff, 9.0 in PCN and 6.0 in PCNA. Average radiation exposure of residents was 1.9 times greater than that of staff. Radiation exposure was not proportionally related to the duration of fluoroscopy, but was influenced by the wearing of an apron, the type of procedure and the operator's skill.
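
    The reported protective effect is simply the fractional dose reduction between badges worn outside and inside the apron; a one-line check (the per-badge doses below are hypothetical, chosen to reproduce the quoted average):

    ```python
    # protective effect = fraction of the outside-apron dose stopped by the apron
    outside, inside = 25.7, 7.0          # hypothetical badge doses, uSv/procedure
    protective_effect = 100 * (1 - inside / outside)
    print(f"{protective_effect:.1f}%")   # ~72.8%, as reported on average
    ```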

  7. The heterogeneous response method applied to couple the average pin cell and bulk moderator in cluster geometry

    International Nuclear Information System (INIS)

    Lerner, A.M.

    1986-01-01

    The first step towards evaluation of the neutron flux throughout a fuel cluster usually consists of obtaining the multigroup flux distribution in the average pin cell and in the circular outside system of shroud and bulk moderator. Here, an application of the so-called heterogeneous response method (HRM) is described to find this multigroup flux. The rather complex geometry is reduced to a microsystem, the average pin cell, and the outside or macrosystem of shroud and bulk moderator. In each of these systems, collision probabilities are used to obtain their response fluxes caused by sources and in-currents. The two systems are then coupled by cosine currents across that fraction of the average pin-cell boundary, called 'window', that represents the average common boundary between pin cells and the outside system. (author)

  8. 40 CFR Appendix B to Part 76 - Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1, Boilers

    Science.gov (United States)

    2010-07-01

    40 CFR, Title 40 (Protection of Environment), Appendix B to Part 76: Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1 Boilers. 2010-07-01 edition. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), ACID RAIN NITROGEN OXIDES...

  9. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18-, 27-, and 36-mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations.

  10. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs.

  11. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
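
    The core averaging step can be sketched compactly. The BIC-based weights below are a common approximation to posterior model probabilities and the model scores are hypothetical; the iterative BMA package adds the gene-windowing and model search described above:

    ```python
    # Bayesian model averaging: combine contending models' predictions with
    # weights given by (approximate) posterior model probabilities.
    import numpy as np

    def bma_weights(bic):
        """Posterior model probabilities via the BIC approximation:
        p(M_k | data) ~ exp(-BIC_k / 2), renormalized."""
        b = np.asarray(bic, dtype=float)
        w = np.exp(-0.5 * (b - b.min()))   # subtract min for numerical stability
        return w / w.sum()

    # hypothetical: three contending survival models, their BICs and each
    # model's risk score for one test patient
    bic = [1012.3, 1014.1, 1019.8]
    risk_scores = [1.8, 2.4, 0.9]

    w = bma_weights(bic)
    bma_risk = float(np.dot(w, risk_scores))   # weighted-average risk score
    print(w, bma_risk)  # patients are then thresholded into high/low risk groups
    ```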

  12. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  13. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  14. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
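
    A minimal sketch of the estimator under study, assuming independent segments of a white-noise process (segment length, smoothing weight, and counts are illustrative):

    ```python
    # PSD estimation by exponential averaging of subsequent periodograms:
    #   S_k = (1 - a) * S_{k-1} + a * P_k,
    # where a is set by the time constant of the averaging process.
    import numpy as np

    rng = np.random.default_rng(0)
    n_seg, seg_len, a = 200, 256, 0.1

    psd = np.zeros(seg_len // 2 + 1)
    for k in range(n_seg):
        x = rng.standard_normal(seg_len)                 # independent process
        pk = np.abs(np.fft.rfft(x))**2 / seg_len         # periodogram of segment k
        psd = pk if k == 0 else (1 - a) * psd + a * pk   # exponential averaging

    # for white noise the true PSD is flat; the spread of the estimate around
    # its mean narrows as the time constant grows (smaller a)
    print(psd.mean(), psd.std())
    ```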

  15. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  16. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions to the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  17. Stricture location predicts swallowing outcomes following endoscopic rendezvous procedures.

    Science.gov (United States)

    Adams, Katherine N; Shah, Rupali N; Buckmire, Robert A

    2017-06-01

    Complete pharyngoesophageal strictures may be encountered by the otolaryngologist as a consequence of radiation/chemoradiotherapy therapies for head and neck cancer. A combined anterograde and retrograde dilation procedure (rendezvous procedure) has proven to be a useful surgical intervention in these cases. We assess the long-term swallowing outcomes of this patient cohort including gastrostomy tube (G-tube) reliance, swallowing quality of life, and variables that contribute to improved swallowing outcomes. Retrospective chart review. A retrospective chart review of 18 consecutive patients treated with rendezvous procedures between April 2007 and May 2015 was carried out. Data were collected from chart review and follow-up telephone calls, including demographics, surgical/postoperative course details, and Eating Assessment Tool (EAT-10) (swallowing quality of life) scores. The completion rate of the procedure was 83% (15 completed/3 procedures aborted). Average follow-up was 22 months. Thirteen of 15 (86.7%) achieved an oral diet, and 7/15 (46.7%) had their G-tube removed. G-tube-independent (GTI) patients had an average stricture length of 2.33 cm and an average distance from the incisors of 17.4 cm, compared to G-tube-dependent (GTD) patients, who had an average stricture length of 2.63 cm and a mean distance from the incisors of 14.6 cm (P = .66 and .0343, respectively). Final EAT-10 scores averaged 20.1 in GTI patients and 33.8 in GTD patients (P = .022). Stricture/incisor distance and EAT-10 scores demonstrated a moderate to strong negative correlation (r = -0.67). Following the endoscopic rendezvous procedure, swallowing outcomes and G-tube status are related to the distance of the stricture from the incisors. Level of Evidence: 2b. Laryngoscope, 127:1388-1391, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  18. Effects of different etching methods and bonding procedures on shear bond strength of orthodontic metal brackets applied to different CAD/CAM ceramic materials.

    Science.gov (United States)

    Buyuk, S Kutalmış; Kucukekenci, Ahmet Serkan

    2018-03-01

    To investigate the shear bond strength (SBS) of orthodontic metal brackets applied to different types of ceramic surfaces treated with different etching procedures and bonding agents. Monolithic CAD/CAM ceramic specimens (N = 120; n = 40 each group) of feldspathic ceramic Vita Mark II, resin nanoceramic Lava Ultimate, and hybrid ceramic Vita Enamic were fabricated (14 × 12 × 3 mm). Ceramic specimens were separated into four subgroups (n = 10) according to the type of surface treatment and bonding onto the ceramic surface. Within each group, four subgroups were prepared by phosphoric acid, hydrofluoric acid, Transbond XT primer, and Clearfill Ceramic primer. Mandibular central incisor metal brackets were bonded with light-cure composite. The SBS data were analyzed using three-way analysis of variance (ANOVA) and Tukey HSD tests. The highest SBS was found in the Vita Enamic group, which is a hybrid ceramic, etched with hydrofluoric acid and applied Transbond XT Adhesive primer (7.28 ± 2.49 MPa). The lowest SBS was found in the Lava Ultimate group, which is a resin nano-ceramic, etched with hydrofluoric acid and applied Clearfill ceramic primer (2.20 ± 1.21 MPa). CAD/CAM material types and bonding procedures affected bond strength (P < .05). The use of Transbond XT as a primer bonding agent resulted in higher SBS.

  19. Radiation exposure to staff and patients during two endocrinological procedures

    International Nuclear Information System (INIS)

    Hristova-Popova, J.; Vassileva, J.; Saltirov, I.; Petkova, K.

    2012-01-01

    The purpose of the present work is to obtain information about the exposure of patient and staff during percutaneous nephrolithotripsy (PCNL) and ureteroscopy with intracorporeal lithotripsy (URS) and to search for a correlation between these parameters. The collected data for each procedure consist of the total air kerma-area product, P_KA, cumulative dose, CD, fluoroscopy time, FT, number of images acquired, as well as clinical patient data. Average, minimum, maximum and median values were calculated for 38 patients. Mean values and medians in parentheses were as follows: 355 (383) cGy·cm² (P_KA for PCNL); 433 (286) cGy·cm² (P_KA for URS); 42 (37) mGy (CD for PCNL); 12 (7) mGy (CD for URS); 3.5 (3.0) min (FT for PCNL); 1.4 (1.3) min (FT for URS). The typical operator doses for PCNL and URS were assessed to be 66.1 μSv and 34.3 μSv, respectively, while the maximum doses for the same type of procedures were 152.6 μSv and 124.1 μSv. Good correlation was observed between the staff dose and P_KA for both procedures, while the correlation of staff dose with CD and FT was found to be weak. While applying principles of radiation protection and normal load in the clinic, there is no possibility to exceed the new annual dose limit for the eye lens of 20 mSv per year averaged over 5 years. The correlation of P_KA with FT and CD was also explored and no significant interconnection was observed. (authors)

  20. Automation of the testing procedure

    International Nuclear Information System (INIS)

    Haas, H.; Fleischer, M.; Bachner, E.

    1979-01-01

    For the judgement of technologies applied and the testing of specific components of the HTR primary circuit, complex test procedures and data evaluations are required. Extensive automation of these test procedures is indispensable. (orig.)

  1. Impact of backwashing procedures on deep bed filtration productivity in drinking water treatment.

    Science.gov (United States)

    Slavik, Irene; Jehmlich, Alexander; Uhl, Wolfgang

    2013-10-15

    Backwash procedures for deep bed filters were evaluated and compared by means of a new integrated approach based on productivity. For this, different backwash procedures were experimentally evaluated by using a pilot plant for direct filtration. A standard backwash mode as applied in practice served as a reference and effluent turbidity was used as the criterion for filter run termination. The backwash water volumes needed, duration of the filter-to-waste period, time out of operation, total volume discharged and filter run-time were determined and used to calculate average filtration velocity and average productivity. Results for filter run-times, filter backwash volumes, and filter-to-waste volumes showed considerable differences between the backwash procedures. Thus, backwash procedures with additional clear flushing phases were characterised by an increased need for backwash water. However, this additional water consumption could not be compensated by savings during filter ripening. Compared to the reference backwash procedure, filter run-times were longer for both single-media and dual-media filters when air scour and air/water flush were optimised with respect to flow rates and the proportion of air and water. This means that drinking water production time is longer and less water is needed for filter bed cleaning. Also, backwashing with additional clear flushing phases resulted in longer filter run-times before turbidity breakthrough. However, regarding the productivity of the filtration process, it was shown that it was almost the same for all of the backwash procedures investigated in this study. Due to this unexpected finding, the relationships between filter bed cleaning, filter ripening and filtration performance were considered and important conclusions and new approaches for process optimisation and resource savings were derived. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Genesis of theory and analysis of practice of applying the analytical procedures in auditing

    OpenAIRE

    Сурніна, К. С.

    2012-01-01

    Determination of the concept "analytical procedures" in an audit by different researchers is investigated in the article; the author's own vision of the necessity of wide use of analytical procedures in audit is defined. A classification of analytical procedures is presented, taking into account the specificity of the auditing process as a whole.

  3. Applying a Systemic Procedure to Locate Career Decision-Making Difficulties

    Science.gov (United States)

    Gati, Itamar; Amir, Tamar

    2010-01-01

    Locating clients' career decision-making difficulties is one of the first steps in career counseling. The authors demonstrate the feasibility and utility of a systematic 4-stage procedure for locating and interpreting career decision-making difficulties by analyzing responses of 626 college students (collected by Tai, 2007) to the Career…

  4. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    ...of a UV attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.

  5. Readability of Invasive Procedure Consent Forms.

    Science.gov (United States)

    Eltorai, Adam E M; Naqvi, Syed S; Ghanian, Soha; Eberson, Craig P; Weiss, Arnold-Peter C; Born, Christopher T; Daniels, Alan H

    2015-12-01

    Informed consent is a pillar of ethical medicine which requires patients to fully comprehend relevant issues including the risks, benefits, and alternatives of an intervention. Given that the average reading skill of US adults is at the 8th grade level, the American Medical Association (AMA) and the National Institutes of Health (NIH) recommend that patient information materials not exceed a 6th grade reading level. We hypothesized that text provided in invasive procedure consent forms would exceed recommended readability guidelines for medical information. To test this hypothesis, we gathered procedure consent forms from all surgical inpatient hospitals in the state of Rhode Island. For each consent form, readability was analyzed with the following measures: Flesch Reading Ease Formula, Flesch-Kincaid Grade Level, Fog Scale, SMOG Index, Coleman-Liau Index, Automated Readability Index, and Linsear Write Formula. These readability scores were used to calculate a composite Text Readability Consensus Grade Level. Invasive procedure consent forms were found to be written at an average of 15th grade level (i.e., third year of college), significantly higher than the average US adult reading level of 8th grade and well above the recommended readability guideline for patient materials of 6th grade. Consent forms thus have readability levels which make comprehension difficult or impossible for many patients. Efforts to improve the readability of procedural consent forms should improve patient understanding regarding their healthcare decisions. © 2015 Wiley Periodicals, Inc.
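
    The same battery of measures can be reproduced with the open-source textstat package; a hedged sketch (the study's exact scoring pipeline is not specified beyond the measure names, and the input file is hypothetical):

    ```python
    # Composite readability analysis of a consent form using textstat.
    import textstat

    consent_text = open("consent_form.txt").read()   # hypothetical input file

    scores = {
        "Flesch Reading Ease":   textstat.flesch_reading_ease(consent_text),
        "Flesch-Kincaid Grade":  textstat.flesch_kincaid_grade(consent_text),
        "Gunning Fog":           textstat.gunning_fog(consent_text),
        "SMOG Index":            textstat.smog_index(consent_text),
        "Coleman-Liau Index":    textstat.coleman_liau_index(consent_text),
        "Automated Readability": textstat.automated_readability_index(consent_text),
        "Linsear Write":         textstat.linsear_write_formula(consent_text),
    }
    print(scores)
    # textstat also offers a consensus grade similar to the paper's composite:
    print(textstat.text_standard(consent_text))
    ```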

  6. Text mining applied to electronic cardiovascular procedure reports to identify patients with trileaflet aortic stenosis and coronary artery disease.

    Science.gov (United States)

    Small, Aeron M; Kiss, Daniel H; Zlatsin, Yevgeny; Birtwell, David L; Williams, Heather; Guerraty, Marie A; Han, Yuchi; Anwaruddin, Saif; Holmes, John H; Chirinos, Julio A; Wilensky, Robert L; Giri, Jay; Rader, Daniel J

    2017-08-01

    Interrogation of the electronic health record (EHR) using billing codes as a surrogate for diagnoses of interest has been widely used for clinical research. However, the accuracy of this methodology is variable, as it reflects billing codes rather than severity of disease, and depends on the disease and the accuracy of the coding practitioner. Systematic application of text mining to the EHR has had variable success for the detection of cardiovascular phenotypes. We hypothesize that the application of text mining algorithms to cardiovascular procedure reports may be a superior method to identify patients with cardiovascular conditions of interest. We adapted the Oracle product Endeca, which utilizes text mining to identify terms of interest from a NoSQL-like database, for purposes of searching cardiovascular procedure reports and termed the tool "PennSeek". We imported 282,569 echocardiography reports representing 81,164 individuals and 27,205 cardiac catheterization reports representing 14,567 individuals from non-searchable databases into PennSeek. We then applied clinical criteria to these reports in PennSeek to identify patients with trileaflet aortic stenosis (TAS) and coronary artery disease (CAD). Accuracy of patient identification by text mining through PennSeek was compared with ICD-9 billing codes. Text mining identified 7115 patients with TAS and 9247 patients with CAD. ICD-9 codes identified 8272 patients with TAS and 6913 patients with CAD. 4346 patients with TAS and 6024 patients with CAD were identified by both approaches. A randomly selected sample of 200-250 patients uniquely identified by text mining was compared with 200-250 patients uniquely identified by billing codes for both diseases. We demonstrate that text mining was superior, with a positive predictive value (PPV) of 0.95 compared to 0.53 by ICD-9 for TAS, and a PPV of 0.97 compared to 0.86 for CAD. These results highlight the superiority of text mining algorithms applied to electronic

  7. Radiochromic film for dosimetric measurements in radiation shielding composites synthesized for applied in radiology procedures of high dose

    Energy Technology Data Exchange (ETDEWEB)

    Fontainha, C. C. P. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Baptista N, A. T.; Faria, L. O., E-mail: crissia@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    Full text: Medical radiology offers great benefit to patients. However, although specific high-dose procedures such as fluoroscopy, interventional radiology, and computed tomography (CT) make up a small percentage of imaging procedures, they contribute significantly to the population dose, and patients may suffer tissue damage. The probability of incidence of deterministic effects depends on the type of procedure performed, the exposure time, and the amount of dose applied to the irradiated area. Calibrated radiochromic films can identify the size and distribution of the radiation fields and measure dose intensities. Radiochromic films are sensitive to doses ranging from 0.1 to 20 cGy and have the same response for X-ray effective energies ranging from 20 to 100 keV. New radiation-attenuating materials have been widely investigated, resulting in reduction of the entrance skin dose. In this work, Bi₂O₃ and ZrO₂:8% Y₂O₃ composites were obtained by mixing them into a P(VDF-TrFe) copolymer matrix by the casting method and then characterized by FTIR. Dosimetric measurements were obtained with XR-QA2 Gafchromic radiochromic films. In this setup, one radiochromic film is directly exposed to the X-ray beam and another measures the attenuated beam; both were exposed to an absorbed dose of 10 mGy of RQR5 beam quality (70 kV X-ray beam). Under the same conditions, irradiated XR-QA2 films were stored and scanned in order to obtain a more reliable result. The attenuation factors, evaluated by XR-QA2 radiochromic films, indicate that both composites are good candidates for use as patient radiation shielding in high-dose medical procedures. (Author)

  8. Vibrational Averaging of the Isotropic Hyperfine Coupling Constants for the Methyl Radical

    Science.gov (United States)

    Adam, Ahmad; Jensen, Per; Yachmenev, Andrey; Yurchenko, Sergei N.

    2014-06-01

    Electronic contributions to molecular properties are often considered as the major factor and usually reported in the literature without ro-vibrational corrections. However, there are many cases where the nuclear motion contributions are significant and even larger than the electronic contribution. In order to obtain accurate theoretical predictions, nuclear motion effects on molecular properties need to be taken into account. The computed isotropic hyperfine coupling constants for the nonvibrating methyl radical CH_3 are far from the experimental values. For CH_3, we have calculated the vibrational-state-dependence of the isotropic hyperfine coupling constant in the electronic ground state. The vibrational wavefunctions used in the averaging procedure were obtained variationally with the TROVE program. Analytical representations for the potential energy surfaces and the hyperfine coupling constant surfaces are obtained in least-squares fitting procedures. Thermal averaging has been carried out for molecules in thermal equilibrium, i.e., with Boltzmann-distributed populations. The calculation methods and the results will be discussed in detail.
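
    The thermal-averaging step reduces to a Boltzmann-weighted sum over vibrational levels; a minimal sketch with hypothetical level energies and per-state constants standing in for the TROVE/ab initio output:

    ```python
    # Boltzmann thermal average of a vibrational-state-dependent property A_v:
    #   <A>_T = sum_v A_v exp(-E_v/kT) / sum_v exp(-E_v/kT)   (degeneracies = 1)
    import numpy as np

    k_b = 0.6950348                 # Boltzmann constant, cm^-1 / K

    def thermal_average(a_v, e_v, temperature):
        w = np.exp(-np.asarray(e_v) / (k_b * temperature))
        return float(np.dot(a_v, w) / w.sum())

    e_v = [0.0, 606.0, 1288.0, 1396.0]   # hypothetical term values, cm^-1
    a_v = [26.8, 27.4, 26.1, 27.9]       # hypothetical per-state coupling constants

    print(thermal_average(a_v, e_v, temperature=300.0))
    ```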

  9. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  10. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
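
    A hedged sketch of the idea: each experiment's error model is re-evaluated at the common estimate rather than at its own measured value, and the weighted mean is iterated to convergence (the relative-error models below are hypothetical):

    ```python
    # Averaging with "sliding" errors: weights use sigma_i evaluated at the
    # current estimate of the true value, not at each measurement.
    import numpy as np

    def sliding_error_average(x, sigma_funcs, n_iter=100, tol=1e-12):
        x = np.asarray(x, dtype=float)
        mean = x.mean()                         # starting value
        for _ in range(n_iter):
            s = np.array([f(mean) for f in sigma_funcs])
            w = 1.0 / s**2
            new_mean = np.sum(w * x) / np.sum(w)
            if abs(new_mean - mean) < tol:
                break
            mean = new_mean
        return mean

    x = [9.2, 10.4, 10.9, 9.8]
    # hypothetical per-experiment relative errors: sigma_i(v) = c_i * |v|
    sigma_funcs = [lambda v, c=c: c * abs(v) for c in (0.05, 0.10, 0.08, 0.12)]
    print(sliding_error_average(x, sigma_funcs))
    # Naive weighting by the self-reported sigmas c_i * x_i would pull the
    # average low, since low measurements claim small errors.
    ```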

  11. Program Baseline Change Control Procedure

    International Nuclear Information System (INIS)

    1993-02-01

    This procedure establishes the responsibilities and process for approving initial issues of and changes to the technical, cost, and schedule baselines, and selected management documents developed by the Office of Civilian Radioactive Waste Management (OCRWM) for the Civilian Radioactive Waste Management System. This procedure implements the OCRWM Baseline Management Plan and DOE Order 4700.1, Chg 1. It streamlines the change control process to enhance integration, accountability, and traceability of Level 0 and Level 1 decisions through standardized Baseline Change Proposal (BCP) forms to be used by the Level 0, 1, 2, and 3 Baseline Change Control Boards (BCCBs) and to be tracked in the OCRWM-wide Configuration Information System (CIS) Database. This procedure applies to all technical, cost, and schedule baselines controlled by the Energy System Acquisition Advisory Board (ESAAB) BCCB (Level 0) and the OCRWM Program Baseline Control Board (PBCCB) (Level 1). All baseline BCPs initiated by Level 2 or lower BCCBs, which require approval from ESAAB or PBCCB, shall be processed in accordance with this procedure. This procedure also applies to all Program-level management documents controlled by the OCRWM PBCCB.

  12. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak, k_PPV,kVp, and the average, k_PPV,Uav, conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
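
    In use, the conversion reduces to scaling the calibrated reading by a ripple- and voltage-dependent factor; a sketch with a hypothetical regression standing in for the published coefficients:

    ```python
    # PPV from an average-peak kV-meter reading: apply the meter's calibration
    # coefficient, then the conversion factor k(U, ripple) from the regression.
    def ppv_from_average_peak(reading_kv, ripple, n_cal=1.0):
        u = n_cal * reading_kv                    # calibrated average-peak voltage
        # hypothetical quadratic regression for k_PPV,kVp; the paper tabulates
        # the actual regression coefficients per voltage range
        k = 1.0 - 0.15 * ripple + 0.05 * ripple**2
        return k * u

    print(ppv_from_average_peak(81.4, ripple=0.10))
    ```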

  13. Quantization Procedures; Sistemas de cuantificacion

    Energy Technology Data Exchange (ETDEWEB)

    Cabrera, J. A.; Martin, R.

    1976-07-01

    We present in this work a review of the conventional quantization procedure, of the one proposed by I.E. Segal, and of a new quantization procedure similar to the latter for use in non-linear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures exposed are equivalent, but for the non-linear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs.

  14. AN OVERVIEW ON AVERAGE SPEED ENFORCEMENT SYSTEM AND ROAD SAFETY EFFECTS

    OpenAIRE

    ILGAZ, ARZU; SALTAN, MEHMET

    2017-01-01

    Average speed enforcement systems are a new intelligent transportation system application that has gained popularity all over the world, following Europe and Australia, and is recently being applied in Turkey as well. The main task of the system is measuring the average speeds of motorized vehicles for the purpose of traffic sanctions. A literature survey related to average speed enforcement systems was carried out in this study at an international scale. In addition to providing a comprehensive summ...

  15. Efficiency of performing pulmonary procedures in a shared endoscopy unit: procedure time, turnaround time, delays, and procedure waiting time.

    Science.gov (United States)

    Verma, Akash; Lee, Mui Yok; Wang, Chunhong; Hussein, Nurmalah B M; Selvi, Kalai; Tee, Augustine

    2014-04-01

    The purpose of this study was to assess the efficiency of performing pulmonary procedures in the endoscopy unit in a large teaching hospital. A prospective study from May 20 to July 19, 2013, was designed. The main outcome measures were procedure delays and their reasons, duration of procedural steps starting from patient's arrival to endoscopy unit, turnaround time, total case durations, and procedure wait time. A total of 65 procedures were observed. The most common procedure was BAL (61%) followed by TBLB (31%). Overall procedures for 35 (53.8%) of 65 patients were delayed by ≥ 30 minutes, 21/35 (60%) because of "spillover" of the gastrointestinal and surgical cases into the time block of pulmonary procedure. Time elapsed between end of pulmonary procedure and start of the next procedure was ≥ 30 minutes in 8/51 (16%) of cases. In 18/51 (35%) patients there was no next case in the room after completion of the pulmonary procedure. The average idle time of the room after the end of pulmonary procedure and start of next case or end of shift at 5:00 PM if no next case was 58 ± 53 minutes. In 17/51 (33%) patients the room's idle time was >60 minutes. A total of 52.3% of patients had the wait time >2 days and 11% had it ≥ 6 days, reason in 15/21 (71%) being unavailability of the slot. Most pulmonary procedures were delayed due to spillover of the gastrointestinal and surgical cases into the block time allocated to pulmonary procedures. The most common reason for difficulty encountered in scheduling the pulmonary procedure was slot unavailability. This caused increased procedure waiting time. The strategies to reduce procedure delays and turnaround times, along with improved scheduling methods, may have a favorable impact on the volume of procedures performed in the unit thereby optimizing the existing resources.

  16. THE PROCEDURE APPLIED IN TRANSLATING JARGON IN ENGLISH PARLIAMENTARY DEBATING INTO INDONESIAN

    Directory of Open Access Journals (Sweden)

    Ni Luh Putu Krisnawati

    2017-05-01

    Full Text Available At present, competition in English debating is a common thing. All countries are competing in the World Debating Competition at either high school or university level. The spread of this "popular culture" has led other countries to adopt the English debating system and translate that system into their native languages. However, it cannot be denied that there are also many jargons that need to be translated into the native language without changing the meaning. This research is focused on the jargon of English parliamentary debating and its translation into Indonesian. The aims of this study are to identify the jargon of English parliamentary debating and its equivalents in Indonesian, and to determine the procedures used in translating this jargon into Indonesian. The theory used for this study is that proposed by Peter Newmark (1988) regarding translation procedures. The findings show that five translation procedures are used in translating the jargon of English parliamentary debating into Indonesian, namely literal translation, functional equivalent, couplets, transference, and naturalization.

  17. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  18. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply.

  19. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply.

  20. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  1. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    Science.gov (United States)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different reasons leading to errors in determining the grain boundary diffusivity D_GB when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln C_av versus y^(6/5) plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (C_av = averaged concentration, y = coordinate in diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited for small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
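
    For contrast, the standard profile analysis that the improved relation refines can be sketched as follows, using a commonly quoted form of Le Claire's relation, s·δ·D_GB = 0.66 · (-d ln C_av / d y^(6/5))^(-5/3) · (4D/t)^(1/2), where D is the bulk diffusivity and t the anneal time; the profile data and constants below are hypothetical:

    ```python
    # Grain boundary diffusivity from the slope of ln(C_av) vs y**(6/5).
    import numpy as np

    y = np.linspace(2e-6, 10e-6, 20)           # depth, m
    c_av = 1e3 * np.exp(-4.0e6 * y**1.2)       # hypothetical averaged profile

    # slope d ln(C_av) / d y^(6/5) from a linear fit (6/5 = 1.2)
    slope = np.polyfit(y**1.2, np.log(c_av), 1)[0]

    d_bulk, t = 1e-19, 3600.0                  # bulk diffusivity (m^2/s), time (s)
    sdD_gb = 0.66 * (-slope)**(-5.0 / 3.0) * np.sqrt(4.0 * d_bulk / t)
    print(sdD_gb)   # the triple product s*delta*D_GB, in m^3/s
    ```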

  2. Evidence on a Real Business Cycle Model with Neutral and Investment-Specific Technology Shocks using Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2010-01-01

    The empirical support for a real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure. This procedure makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is

  3. Developing Competency in Payroll Procedures

    Science.gov (United States)

    Jackson, Allen L.

    1975-01-01

    The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)

  4. Proof Rules for Recursive Procedures

    NARCIS (Netherlands)

    Hesselink, Wim H.

    1993-01-01

    Four proof rules for recursive procedures in a Pascal-like language are presented. The main rule deals with total correctness and is based on results of Gries and Martin. The rule is easier to apply than Martin's. It is introduced as an extension of a specification format for Pascal-procedures, with

  5. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    Science.gov (United States)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  6. Zernike phase-contrast electron cryotomography applied to marine cyanobacteria infected with cyanophages.

    Science.gov (United States)

    Dai, Wei; Fu, Caroline; Khant, Htet A; Ludtke, Steven J; Schmid, Michael F; Chiu, Wah

    2014-11-01

    Advances in electron cryotomography have provided new opportunities to visualize the internal 3D structures of a bacterium. An electron microscope equipped with Zernike phase-contrast optics produces images with markedly increased contrast compared with images obtained by conventional electron microscopy. Here we describe a protocol to apply Zernike phase plate technology for acquiring electron tomographic tilt series of cyanophage-infected cyanobacterial cells embedded in ice, without staining or chemical fixation. We detail the procedures for aligning and assessing phase plates for data collection, and methods for obtaining 3D structures of cyanophage assembly intermediates in the host by subtomogram alignment, classification and averaging. Acquiring three or four tomographic tilt series takes ∼12 h on a JEM2200FS electron microscope. We expect this time requirement to decrease substantially as the technique matures. The time required for annotation and subtomogram averaging varies widely depending on the project goals and data volume.

  7. Procedure and apparatus for back projection

    International Nuclear Information System (INIS)

    1977-01-01

    (1) The procedure is for back projection, in the form of a tomographic picture, of a member, characterised in that the strip pictures are written onto the signal plate by a conversion pick-up unit, and that voltages representing a rotating coordinate system are applied to the address inputs of the pick-up unit. (2) Procedure following claim 1, characterised in that the voltages are applied as sawtooth-waveform horizontal and vertical television deflections, respectively. (3) Procedure following claims 1 and 2, characterised in that, in order to correct the television deflection voltages for the effect of a fan-shaped radiation beam, first one and then the other of the amplitudes is modulated. (G.C.)

  8. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
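
    The flavor of the accelerated coupling iteration can be sketched in a simplified Anderson-type form (shown on a toy fixed-point map; this is not the authors' NKA implementation and it omits the cross-timestep subspace retention described above):

    ```python
    # Accelerated fixed-point iteration for a coupling map g(x), e.g. 3D CFD
    # pressures -> lower-dimensional ODE flows -> updated pressures. Stored
    # update/residual differences supply a Krylov-like correction subspace.
    import numpy as np

    def accelerated_fixed_point(g, x0, m=5, n_iter=50, tol=1e-10):
        x = np.asarray(x0, dtype=float)
        f = g(x) - x                          # residual of the coupling map
        dx_hist, df_hist = [], []
        for _ in range(n_iter):
            x_new = x + f                     # plain Picard step
            if df_hist:                       # correct within stored subspace
                F = np.column_stack(df_hist)
                gamma, *_ = np.linalg.lstsq(F, f, rcond=None)
                x_new = x_new - (np.column_stack(dx_hist) + F) @ gamma
            f_new = g(x_new) - x_new
            dx_hist.append(x_new - x)
            df_hist.append(f_new - f)
            dx_hist, df_hist = dx_hist[-m:], df_hist[-m:]   # keep last m pairs
            x, f = x_new, f_new
            if np.linalg.norm(f) < tol:
                break
        return x

    # toy "coupled system": a linear contraction with a known fixed point
    g = lambda x: np.array([0.5 * x[0] + 0.1 * x[1] + 1.0,
                            0.2 * x[0] + 0.4 * x[1] + 2.0])
    print(accelerated_fixed_point(g, np.zeros(2)))
    ```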

  9. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  10. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2013-01-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  11. GRID PRICING VERSUS AVERAGE PRICING FOR SLAUGHTER CATTLE: AN EMPIRICAL ANALYSIS

    OpenAIRE

    Fausti, Scott W.; Qasmi, Bashir A.

    1999-01-01

    The paper compares weekly producer revenue under grid pricing and average dressed weight pricing methods for 2560 cattle over a period of 102 weeks. Regression analysis is applied to identify factors affecting the revenue differential.

  12. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    Science.gov (United States)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

    Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered as a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has, in many cases, been the reason for rejecting large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be subjected to several conditions or tests. After this checking, data that are not flagged by any of the tests are released as plausible data. In this work, a bibliographical survey of quality control tests has been carried out for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (the horizontal global and diffuse components of solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify that values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic. They have been applied to ground measurements in several geographic locations
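    As an illustration of the three test families just listed, here is a minimal sketch (thresholds and function names are our own illustrative choices, not the paper's):

```python
import numpy as np

def range_check(x, lo, hi):
    """Flag values outside physically plausible limits (extrema-based)."""
    return (x < lo) | (x > hi)

def step_check(x, max_step):
    """Flag unrealistic jumps between consecutive samples."""
    flags = np.zeros(len(x), dtype=bool)
    flags[1:] = np.abs(np.diff(x)) > max_step
    return flags

def stagnation_check(x, window=5):
    """Flag runs of identical values (e.g. a stuck sensor)."""
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x) + 1):
        if np.ptp(x[i - window:i]) == 0:
            flags[i - window:i] = True
    return flags

def consistency_check(global_h, diffuse_h):
    """Flag samples where the diffuse component exceeds the global one."""
    return diffuse_h > global_h

# Example: hourly ambient temperature series with an outlier and a stuck run
t = np.array([11.2, 11.5, 11.8, 55.0, 12.1, 12.1, 12.1, 12.1, 12.1, 12.4])
suspect = range_check(t, -60.0, 50.0) | step_check(t, 10.0) | stagnation_check(t)
print(np.where(suspect)[0])   # indices of flagged samples: [3 4 5 6 7 8]
```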

  13. The average concentrations of 226Ra and 210Pb in foodstuff cultivated in the Pocos de Caldas plateau

    International Nuclear Information System (INIS)

    Hollanda Vasconcellos, L.M. de.

    1984-01-01

    The average concentrations of 226Ra and 210Pb in vegetables cultivated in the Pocos de Caldas plateau (mainly potatoes, carrots, beans and corn) were determined, and the average soil-to-foodstuff transfer factors for both radionuclides were estimated. The total 226Ra and 210Pb content in the soil was determined by gamma spectrometry. The exchangeable fraction was obtained by the classical radon emanation procedure, and the 210Pb was isolated by a radiochemical procedure and determined by radiometry of the beta emissions of its daughter 210Bi with a Geiger-Muller counter. (M.A.C.) [pt]

  14. 75 FR 5170 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2010-02-01

    ...)] Railroad Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board, DOT. ACTION: Proposed Railroad Cost Recovery Procedures Productivity Adjustment. SUMMARY: In a decision served... railroad productivity for the 2004-2008 (5-year) averaging period. This is a decline of 0.5 of a percentage...

  15. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT; Procedimientos de Control de Calildad de las Camaras de Muones del Experimento CMS Construidas en el CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Fouz, M. C.; Puerta Pelayo, J.

    2004-07-01

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the more common problems and possible solutions is given. This document can be considered a chamber test handbook for beginners. (Author) 3 refs.

  16. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
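    As a generic illustration of the weighting step in Bayesian model averaging (not the authors' code), posterior model probabilities can be approximated from BIC values and then used to blend the competing clusterings:

```python
import numpy as np

def bma_weights(bic):
    """Approximate posterior model probabilities from BIC values.

    Uses p(M_k | data) proportional to exp(-BIC_k / 2), assuming equal
    prior odds across models.
    """
    bic = np.asarray(bic, dtype=float)
    w = np.exp(-0.5 * (bic - bic.min()))   # shift for numerical stability
    return w / w.sum()

# Illustrative BICs for two clustering models of the same phenotype data
weights = bma_weights([1032.4, 1030.1])
print(weights)   # ~[0.24, 0.76]
# Model-averaged membership: P(cluster) = sum_k weights[k] * P_k(cluster)
```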

  17. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  18. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e., they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  19. Transversal of Procedure Elements in a Computerized Procedure System

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Yeonsub [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The APR1400 MCR (Main Control Room) has introduced a CPS (Computerized Procedure System). The APR1400 CPS is framed with an overview pane, a step detail pane, and a monitoring pane. The overview shows the steps in a one-dimensional array. The diagram in the step detail pane is called a flowlogic diagram. The flowlogic diagram has the advantage of being intuitive and not being segmented, so that the interfaces are easy to follow and understand. While developing the CPS software or training operators on these interfaces, however, there are still areas where the behavior of the flowlogic diagram needs refinement. If a flowlogic diagram has contingency instructions, it is sometimes difficult to interpret properly. This paper therefore explains the behavior of the flowlogic diagram in view of procedure elements. Procedure execution can be regarded as a transversal of the steps and instructions in the flowlogic diagram. Both the hierarchy of procedure elements and the flowlogic diagram provide a useful framework to test transversal for completeness. Navigation button behaviors are interpreted in view of the flowlogic diagram. The framework and the flowlogic diagram can be further applied to improve the CPS user interfaces.

  20. Handbook of Applied Analysis

    CERN Document Server

    Papageorgiou, Nikolaos S

    2009-01-01

    Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.

  1. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
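    As a generic illustration (a plain Robbins-Monro setting, not the paper's SAMCMC algorithm), the trajectory averaging estimator is simply the running mean of the iterates, which is typically more stable than the final iterate:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(theta):
    """Stochastic gradient of f(theta) = 0.5 * (theta - 3)^2 (illustrative)."""
    return (theta - 3.0) + rng.normal(scale=1.0)

theta, theta_bar = 0.0, 0.0
n_iter = 100_000
for k in range(1, n_iter + 1):
    theta -= (1.0 / k**0.7) * noisy_gradient(theta)   # Robbins-Monro step
    theta_bar += (theta - theta_bar) / k              # running trajectory average

print(theta, theta_bar)   # theta_bar is usually the better estimate of 3.0
```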

  2. Zernike Phase Contrast Electron Cryo-Tomography Applied to Marine Cyanobacteria Infected with Cyanophages

    Science.gov (United States)

    Dai, Wei; Fu, Caroline; Khant, Htet A.; Ludtke, Steven J.; Schmid, Michael F.; Chiu, Wah

    2015-01-01

    Advances in electron cryo-tomography have provided a new opportunity to visualize the internal 3D structures of a bacterium. An electron microscope equipped with Zernike phase contrast optics produces images with dramatically increased contrast compared to images obtained by conventional electron microscopy. Here we describe a protocol to apply Zernike phase plate technology for acquiring electron tomographic tilt series of cyanophage-infected cyanobacterial cells embedded in ice, without staining or chemical fixation. We detail the procedures for aligning and assessing phase plates for data collection, and methods to obtain 3D structures of cyanophage assembly intermediates in the host, by subtomogram alignment, classification and averaging. Acquiring three to four tomographic tilt series takes approximately 12 h on a JEM2200FS electron microscope. We expect this time requirement to decrease substantially as the technique matures. Time required for annotation and subtomogram averaging varies widely depending on the project goals and data volume. PMID:25321408

  3. Impression Procedures for Metal Frame Removable Partial Dentures as Applied by General Dental Practitioners.

    Science.gov (United States)

    Fokkinga, Wietske A; van Uchelen, Judith; Witter, Dick J; Mulder, Jan; Creugers, Nico H J

    2016-01-01

    This pilot study analyzed impression procedures for conventional metal frame removable partial dentures (RPDs). Heads of RPD departments of three dental laboratories were asked to record features of all incoming impressions for RPDs during a 2-month period. Records included: (1) impression procedure, tray type (stock/custom), impression material (elastomer/alginate), use of border-molding material (yes/no); and (2) RPD type requested (distal-extension/tooth-bounded/combination). Of the 132 total RPD impressions, 111 (84%) involved custom trays, of which 73 (55%) were combined with an elastomer. Impression border-molding material was used in 4% of the cases. Associations between impression procedure and RPD type or dentists' year/university of graduation were not found.

  4. The Effects of Average Revenue Regulation on Electricity Transmission Investment and Pricing

    OpenAIRE

    Isamu Matsukawa

    2005-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occur...

  5. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    Energy Technology Data Exchange (ETDEWEB)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08 (France); Brousmiche, Sébastien [Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Romero, Edward; Vila Oliva, Marc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08, France and Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Kellner, Daniel; Deutschmann, Heinz; Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies, Paracelsus Medical University, Salzburg 5020 (Austria)

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver were used, combined with a dosimeter sensitive to the range of voltages of interest. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in

  6. Unified Subharmonic Oscillation Conditions for Peak or Average Current Mode Control

    OpenAIRE

    Fang, Chung-Chieh

    2013-01-01

    This paper is an extension of the author's recent research in which only buck converters were analyzed. Similar analysis can be equally applied to other types of converters. In this paper, a unified model is proposed for buck, boost, and buck-boost converters under peak or average current mode control to predict the occurrence of subharmonic oscillation. Based on the unified model, the associated stability conditions are derived in closed forms. The same stability condition can be applied to ...

  7. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images
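    The abstract does not reproduce the transformation itself; purely as a sketch of the cited building block, a modified Gram-Schmidt orthogonalization (the numerically stable variant) can be written as:

```python
import numpy as np

def modified_gram_schmidt(A):
    """Orthonormalize the columns of A with modified Gram-Schmidt.

    The 'modified' update order (projecting the partially orthogonalized
    vector) improves numerical stability over the classical formulation.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]   # remove component along q_i
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Illustrative case: two feature signatures spanning a subspace
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = modified_gram_schmidt(A)
print(Q.T @ Q)   # ~ 2x2 identity
```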

  8. Averaging problem in general relativity, macroscopic gravity and using Einstein's equations in cosmology.

    Science.gov (United States)

    Zalaletdinov, R. M.

    1998-04-01

    The averaging problem in general relativity is briefly discussed. A new setting of the problem as that of macroscopic description of gravitation is proposed. A covariant space-time averaging procedure is described. The structure of the geometry of macroscopic space-time, which follows from averaging Cartan's structure equations, is described and the correlation tensors present in the theory are discussed. The macroscopic field equations (averaged Einstein's equations) derived in the framework of the approach are presented and their structure is analysed. The correspondence principle for macroscopic gravity is formulated and a definition of the stress-energy tensor for the macroscopic gravitational field is proposed. It is shown that the physical meaning of using Einstein's equations with a hydrodynamic stress-energy tensor in looking for cosmological models means neglecting all gravitational field correlations. The system of macroscopic gravity equations to be solved when the correlations are taken into consideration is given and described.

  9. High-precision prostate cancer irradiation by clinical application of an offline patient setup verification procedure, using portal imaging

    International Nuclear Information System (INIS)

    Bel, Arjan; Vos, Pieter H.; Rodrigus, Patrick T. R.; Creutzberg, Carien L.; Visser, Andries G.; Stroom, Joep C.; Lebesque, Joos V.

    1996-01-01

    Purpose: To investigate in three institutions, The Netherlands Cancer Institute (Antoni van Leeuwenhoek Huis [AvL]), Dr. Daniel den Hoed Cancer Center (DDHC), and Dr. Bernard Verbeeten Institute (BVI), how much the patient setup accuracy for irradiation of prostate cancer can be improved by an offline setup verification and correction procedure, using portal imaging. Methods and Materials: The verification procedure consisted of two stages. During the first stage, setup deviations were measured during a number (N_max) of consecutive initial treatment sessions. The length of the average three-dimensional (3D) setup deviation vector was compared with an action level for corrections, which shrank with the number of setup measurements. After a correction was applied, N_max measurements had to be performed again. Each institution chose different values for the initial action level (6, 9, and 10 mm) and N_max (2 and 4). The choice of these parameters was based on a simulation of the procedure, using as input pre-estimated values of random and systematic deviations in each institution. During the second stage of the procedure, with weekly setup measurements, the AvL used a different criterion ('outlier detection') for corrective actions than the DDHC and the BVI ('sliding average'). After each correction the first stage of the procedure was restarted. The procedure was tested for 151 patients (62 in AvL, 47 in DDHC, and 42 in BVI) treated for prostate carcinoma. Treatment techniques and portal image acquisition and analysis were different in each institution. Results: The actual distributions of random and systematic deviations without corrections were estimated by eliminating the effect of the corrections. The percentage of mean (systematic) 3D deviations larger than 5 mm was 26% for the AvL and the DDHC, and 36% for the BVI. The setup accuracy after application of the procedure was considerably improved (the percentage of mean 3D deviations larger than 5 mm was 1.6% in the
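    The abstract implies a decision rule that compares the running mean 3D setup deviation with an action level that shrinks as measurements accumulate. The sketch below assumes the common alpha/sqrt(N) shrinkage; that choice, and all names, are illustrative rather than the institutions' exact protocol:

```python
import numpy as np

def needs_correction(deviations_mm, alpha=9.0):
    """First-stage check: compare the mean 3D setup deviation vector with a
    shrinking action level alpha / sqrt(N) (illustrative rule).

    deviations_mm: array of shape (N, 3), one (x, y, z) deviation per session.
    Returns the correction vector, or None if no correction is triggered.
    """
    d = np.asarray(deviations_mm, dtype=float)
    mean_vec = d.mean(axis=0)
    action_level = alpha / np.sqrt(len(d))
    if np.linalg.norm(mean_vec) > action_level:
        return -mean_vec     # shift the setup to cancel the systematic error
    return None

# Two initial sessions (N_max = 2) with a systematic ~7 mm lateral offset
print(needs_correction([[7.1, 1.0, -0.5], [6.8, 0.4, 0.2]]))
```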

  10. Human Reliability Analysis For Computerized Procedures

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Gertman, David I.; Le Blanc, Katya

    2011-01-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  11. Evaluation of the occupational dose in hemodynamic procedures

    International Nuclear Information System (INIS)

    Silva, Amanda J. da; Fernandes, Ivani M.; Sordi, Gian Maria A.A.; Carneiro, Janete C. Gaburo

    2010-01-01

    The purpose of this study was to evaluate the dose received by health professionals in a hemodynamic service. It was necessary to know the profile of these professionals, to survey the occupational external doses during the years 2000 to 2009, and to evaluate the distribution of the effective dose from the special procedures guided by fluoroscopy. A self-applied questionnaire was used to delineate the profile of the health professionals, taking into account variables such as gender, age, individual monitoring time, number of jobs and tasks performed in the sector. In addition, the external individual monitoring doses were examined from the records of the institution. The sample was composed of 35 professionals, 11 males and 24 females, with a mean age of (43.0 ± 10.4) years. The average monitoring time of the individuals analysed within the institution was (11.3 ± 9.1) years, considering the period before the study. The minimum recorded dose was 0.2 mSv and the maximum dose was 22.7 mSv. Doctors and nursing assistants were the professionals most exposed to radiation, probably because they remain closer to the examination table and X-ray tube during interventional procedures. (author)

  12. Aromatherapy for reducing colonoscopy related procedural anxiety and physiological parameters: a randomized controlled study.

    Science.gov (United States)

    Hu, Pei-Hsin; Peng, Yen-Chun; Lin, Yu-Ting; Chang, Chi-Sen; Ou, Ming-Chiu

    2010-01-01

    Colonoscopy is generally well tolerated, but some patients regard the procedure as unpleasant and painful, and it is generally performed with the patient sedated and receiving analgesics. The effect of sedation and analgesia for colonoscopy is limited. Aromatherapy has also been applied to gastrointestinal endoscopy to reduce procedural anxiety, but there is a lack of information about aromatherapy specific to colonoscopy. In this study, we performed a randomized controlled study to investigate the effect of aromatherapy on anxiety, stress and physiological parameters during colonoscopy. A randomized controlled trial was carried out, with data collected in 2009 and 2010. The participants were randomized into two groups. Aromatherapy was carried out by inhalation of sunflower oil (control group) or neroli oil (experimental group). The anxiety index was evaluated by the State Trait Anxiety Inventory-state (STAI-S) score before aromatherapy and after colonoscopy, and post-procedural pain by a visual analogue scale (VAS). Physiological indicators, such as blood pressure (systolic and diastolic), heart rate and respiratory rate, were evaluated before and after aromatherapy. The participants were 27 subjects, 13 in the control group and 14 in the neroli group, with an average age of 52.26 ± 17.79 years. There was no significant difference in procedural anxiety by STAI-S score or in procedural pain by VAS. The physiological parameters showed significantly lower pre- and post-procedural systolic blood pressure in the neroli group than in the control group. Aromatic care for colonoscopy, although it has no significant effect on procedural anxiety, is an inexpensive, effective and safe pre-procedural technique that can decrease systolic blood pressure.

  13. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t ≥ 0} is a fixed realization, i.e., sample-path, of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
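    In symbols, the equality studied is the following (our transcription of the standard sample-path statement, valid under the paper's conditions):

```latex
% Long-run time average equals the mean under the long-run frequency distribution
\[
  \lim_{t\to\infty}\frac{1}{t}\int_{0}^{t} f\bigl(X(s)\bigr)\,ds
  \;=\; \int_{S} f(x)\,dF(x)
\]
```

    where F is the long-run frequency distribution of {X(t), t ≥ 0} and f is a measurable function for which both sides exist.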

  14. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  15. 13 CFR 124.507 - What procedures apply to competitive 8(a) procurements?

    Science.gov (United States)

    2010-01-01

    ... boundaries are eligible to submit offers. (3) Competition for all non-construction requirements. Except for... procedures. Procuring activities will conduct competitions among and evaluate offers received from... the Participant submitted its initial offer which includes price. (5) If the procuring activity...

  16. Improving the timeliness of procedures in a pediatric endoscopy suite.

    Science.gov (United States)

    Tomer, Gitit; Choi, Steven; Montalvo, Andrea; Sutton, Sheila; Thompson, John; Rivas, Yolanda

    2014-02-01

    Pediatric endoscopic procedures are essential in the evaluation and treatment of gastrointestinal diseases in children. Although pediatric endoscopists are greatly interested in increasing efficiency and throughput in pediatric endoscopy units, there is scarcely any literature on this critical process. The goal of this study was to improve the timeliness of pediatric endoscopy procedures at Children's Hospital at Montefiore. In June 2010, a pediatric endoscopy quality improvement initiative was formed at Children's Hospital at Montefiore. We identified patient-, equipment-, and physician-related causes of case delays. Pareto charts, cause-and-effect diagrams, process flow mapping, and statistical process control charts were used for analysis. From June 2010 to December 2012, we were able to significantly decrease the first-case endoscopy delay from an average of 17 to 10 minutes (P < .001), the second-case delay from 39 to 25 minutes (P = .01), the third-case delay from 61 to 45 minutes (P = .05), and the fourth-case delay from 79 to 51 minutes (P = .05). Total delay time decreased from 196 to 131 minutes, a reduction of 65 minutes (P = .02). From June 2010 to August 2011 (preintervention period), an average of 36% of first endoscopy cases started within 5 minutes, 51% within 10 minutes, and 61% within 15 minutes of the scheduled time. From September 2011 to December 2012 (postintervention period), the percentage of cases starting within 5 minutes, 10 minutes, and 15 minutes increased to 47% (P = .07), 61% (P = .04), and 79% (P = .01), respectively. Applying quality improvement methods and tools helped improve pediatric endoscopy timeliness and significantly decreased total delays.

  17. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
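    The √N scaling behind these gains is easy to reproduce. The toy example below (our own construction, not the authors' acquisition pipeline) averages repeated noisy frames of a fixed fluorescence profile:

```python
import numpy as np

rng = np.random.default_rng(1)

profile = np.exp(-0.5 * ((np.arange(256) - 128) / 4.0) ** 2)   # one analyte peak
noise_sd = 0.5

def snr(n_frames):
    """Signal-to-noise ratio of the peak after averaging n_frames images."""
    frames = profile + rng.normal(scale=noise_sd, size=(n_frames, 256))
    avg = frames.mean(axis=0)
    return avg.max() / avg[:64].std()    # baseline region estimates the noise

for n in (1, 25, 500):
    print(n, round(snr(n), 1))   # S/N grows roughly like sqrt(n)
```

    With 500 frames the noise standard deviation drops by about a factor of 22, consistent in magnitude with the 20-24-fold improvements reported above.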

  18. GRUKON - A package of applied computer programs system input and operating procedures of functional modules

    International Nuclear Information System (INIS)

    Sinitsa, V.V.; Rineiskij, A.A.

    1993-04-01

    This manual describes a software package for the production of multigroup neutron cross-sections from evaluated nuclear data files. It presents the information necessary to implement the program's modules in the framework of the execution of the program, including: the operating procedures of the program, the data input, the macrocommand language, and the assignment of the system's procedures. The report also presents the methodology used in the coding of the individual modules: the rules, the syntax, and the procedure conventions. An example of the application of the data processing module is also presented. (author)

  19. 22 CFR 309.17 - Procedures for salary offset.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Procedures for salary offset. 309.17 Section 309.17 Foreign Relations PEACE CORPS DEBT COLLECTION Salary Offset § 309.17 Procedures for salary offset. Unless otherwise provided by statute or contract, the following procedures apply to salary offset: (a...

  20. Coarse-mesh discretized low-order quasi-diffusion equations for subregion averaged scalar fluxes

    International Nuclear Information System (INIS)

    Anistratov, D. Y.

    2004-01-01

    In this paper we develop a homogenization procedure and discretization for the low-order quasi-diffusion equations on coarse grids for core-level reactor calculations. The system of discretized equations of the proposed method is formulated in terms of the subregion-averaged group scalar fluxes. The coarse-mesh solution is consistent with a given fine-mesh discretization of the transport equation in the sense that it preserves a set of average values of the fine-mesh transport scalar flux over subregions of coarse-mesh cells, as well as the surface currents and the eigenvalue. The developed method generates a numerical solution that mimics the large-scale behavior of the transport solution within assemblies. (authors)

  1. Token Economy: A Systematic Review of Procedural Descriptions.

    Science.gov (United States)

    Ivy, Jonathan W; Meindl, James N; Overley, Eric; Robson, Kristen M

    2017-09-01

    The token economy is a well-established and widely used behavioral intervention. A token economy is comprised of six procedural components: the target response(s), a token that functions as a conditioned reinforcer, backup reinforcers, and three interconnected schedules of reinforcement. Despite decades of applied research, the extent to which the procedures of a token economy are described in complete and replicable detail has not been evaluated. Given the inherent complexity of a token economy, an analysis of the procedural descriptions may benefit future token economy research and practice. Articles published between 2000 and 2015 that included implementation of a token economy within an applied setting were identified and reviewed with a focus on evaluating the thoroughness of procedural descriptions. The results show that token economy components are regularly omitted or described in vague terms. Of the articles included in this analysis, only 19% (18 of 96 articles reviewed) included replicable and complete descriptions of all primary components. Missing or vague component descriptions could negatively affect future research or applied practice. Recommendations are provided to improve component descriptions.

  2. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    International Nuclear Information System (INIS)

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on the average

  3. 21 CFR 60.44 - Hearing procedures.

    Science.gov (United States)

    2010-04-01

    ... RESTORATION Due Diligence Hearings § 60.44 Hearing procedures. The due diligence hearing shall be conducted in accordance with this part, supplemented by the nonconflicting procedures in part 16. During the due diligence... requesting a hearing under part 16. The standard of due diligence set forth in § 60.36 will apply in the due...

  4. The safety assessment of OPR-1000 nuclear power plant for station blackout accident applying the combined deterministic and probabilistic procedure

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dong Gu, E-mail: littlewing@kins.re.kr [Korea Institute of Nuclear Safety, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2014-08-15

    Highlights: • The combined deterministic and probabilistic procedure (CDPP) was proposed for safety assessment of BDBAs. • The safety assessment of the OPR-1000 nuclear power plant for an SBO accident is performed by applying the CDPP. • By estimating the offsite power restoration time appropriately, the SBO risk is reevaluated. • It is concluded that the CDPP is applicable to safety assessment of BDBAs without significant erosion of the safety margin. - Abstract: Station blackout (SBO) is a typical beyond design basis accident (BDBA) and a significant contributor to overall plant risk. The risk analysis of SBO could be an important basis for rulemaking, accident mitigation strategy, etc. Recently, studies on integrated approaches of deterministic and probabilistic methods for nuclear safety in nuclear power plants have been carried out, and among them, the combined deterministic and probabilistic procedure (CDPP) was proposed for safety assessment of BDBAs. In the CDPP, the conditional exceedance probability obtained by the best estimate plus uncertainty method acts as a go-between for deterministic and probabilistic safety assessments, resulting in more reliable values of core damage frequency and conditional core damage probability. In this study, the safety assessment of the OPR-1000 nuclear power plant for an SBO accident was performed by applying the CDPP. It was confirmed that the SBO risk should be reevaluated by eliminating excessive conservatism in the existing probabilistic safety assessment to meet the targeted core damage frequency and conditional core damage probability. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system lies within the acceptable risk against SBO. In addition, it is concluded that the CDPP is applicable to safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.

  5. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2013-12-15

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.

  6. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Kelly, H.

    2013-01-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.

  7. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    Science.gov (United States)

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.

  8. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.

  9. Uncommon combinations of ICD10-PCS or ICD-9-CM operative procedure codes account for most inpatient surgery at half of Texas hospitals.

    Science.gov (United States)

    O'Neill, Liam; Dexter, Franklin; Park, Sae-Hwan; Epstein, Richard H

    2017-09-01

    Recently, there has been interest in activity-based cost accounting for inpatient surgical procedures to facilitate "value based" analyses. Research 10-20 years ago, performed using data from 3 large teaching hospitals, found that activity-based cost accounting was practical and useful for modeling surgeons and subspecialties, but inaccurate for individual procedures. We hypothesized that these older results would apply to hundreds of hospitals, currently evaluable using administrative databases. Observational study. State of Texas hospital discharge abstract data for the 1st quarter of 2016, 4th quarter of 2015, 1st quarter of 2015, and 4th quarter of 2014. Discharged from an acute care hospital in Texas with at least 1 major therapeutic ("operative") procedure. Counts of discharges for each procedure or combination of procedures, classified by ICD-10-PCS or ICD-9-CM. At the average hospital, most surgical discharges were for procedures performed at most once a month at the hospital (54%, 95% confidence interval [CI] 51% to 55%). At the average hospital, approximately 90% of procedures were performed at most once a month at the hospital (93%, CI 93% to 94%). The percentages were insensitive to the quarter of the year. The percentages were 3% to 6% greater with ICD-10-PCS than for the superseded ICD-9-CM. There are many different procedure codes, and many different combinations of codes, relative to the number of different hospital discharges. Since most procedures at most hospitals are performed no more than once a month, activity-based cost accounting with a sample size sufficient to be useful is impractical for the vast majority of procedures, in contrast to analysis by surgeon and/or subspecialty. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space.

    Science.gov (United States)

    Feng, Lei; Jeon, Tina; Yu, Qiaowen; Ouyang, Minhui; Peng, Qinmu; Mishra, Virendra; Pletikos, Mihovil; Sestan, Nenad; Miller, Michael I; Mori, Susumu; Hsiao, Steven; Liu, Shuwei; Huang, Hao

    2017-12-01

    Animal models of the rhesus macaque (Macaca mulatta), the most widely used nonhuman primate, have been irreplaceable in neurobiological studies. However, a population-averaged macaque brain diffusion tensor imaging (DTI) atlas, including comprehensive gray and white matter labeling as well as bony and facial landmarks for guiding invasive experimental procedures, is not available. The macaque white matter tract pathways and microstructures have rarely been recorded. Here, we established a population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space, incorporating bony and facial landmarks, and delineated the microstructures and three-dimensional pathways of major white matter tracts. In vivo MRI/DTI and ex vivo (postmortem) DTI of ten rhesus macaque brains were acquired. A single-subject macaque brain DTI template was obtained by transforming the postmortem high-resolution DTI data into in vivo space. Ex vivo DTI of the ten macaque brains was then averaged in the in vivo single-subject template space to generate the population-averaged macaque brain DTI atlas. The white matter tracts were traced with DTI-based tractography. One hundred and eighteen neural structures, including all cortical gyri, white matter tracts and subcortical nuclei, were labeled manually on the population-averaged DTI-derived maps. The in vivo microstructural metrics of fractional anisotropy and axial, radial and mean diffusivity of the traced white matter tracts were measured. The population-averaged digital atlas integrated into in vivo space can be used to label the experimental macaque brain automatically. Bony and facial landmarks will be available for guiding invasive procedures. The DTI metric measurements offer unique insights into the heterogeneous microstructural profiles of different white matter tracts.

  11. A model to determine payments associated with radiology procedures.

    Science.gov (United States)

    Mabotuwana, Thusitha; Hall, Christopher S; Thomas, Shiby; Wald, Christoph

    2017-12-01

    Across the United States, there is a growing number of patients in Accountable Care Organizations and under risk contracts with commercial insurance. This is due to the proliferation of new value-based payment models and care delivery reform efforts. In this context, the business model of radiology within a hospital or health system context is shifting from a primary profit-center to a cost-center with a goal of cost savings. Radiology departments need to increasingly understand how the transactional nature of the business relates to financial rewards. The main challenge with current reporting systems is that the information is presented only at an aggregated level, and often not broken down further, for instance by type of exam. As such, the primary objective of this research is to provide better visibility into payments associated with individual radiology procedures in order to better calibrate the expense/capital structure of the imaging enterprise to the actual revenue or value-add to the organization it belongs to. We propose a methodology that can be used to determine technical payments at a procedure level. We use a proportion-based model to allocate payments to individual radiology procedures based on total charges (which also include non-radiology-related charges). Using a production dataset containing 424,250 radiology exams, we calculated the overall average technical charge for radiology to be $873.08 per procedure and the corresponding average payment to be $326.43 (range: $48.27 for XR and $2750.11 for PET/CT), resulting in an average payment percentage of 37.39% across all exams. We describe how charges associated with a procedure can be used to approximate technical payments at a more granular level, with a focus on radiology. The methodology is generalizable to approximate payments for other services as well. Understanding the payments associated with each procedure can be useful during strategic practice planning. The charge-to-total-charge ratio can be used to
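    The allocation rule described is a simple proportional split by charges. A minimal sketch, with illustrative field names rather than the authors' schema:

```python
def allocate_payment(total_payment, charges_by_procedure):
    """Allocate an encounter-level payment to individual procedures in
    proportion to their charges (charge-to-total-charge ratio)."""
    total_charges = sum(charges_by_procedure.values())
    return {proc: total_payment * charge / total_charges
            for proc, charge in charges_by_procedure.items()}

# One discharge: a $1200 payment split across radiology and non-radiology charges
print(allocate_payment(1200.0, {"CT head": 900.0, "XR chest": 300.0, "lab": 1800.0}))
# -> {'CT head': 360.0, 'XR chest': 120.0, 'lab': 720.0}
```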

  12. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
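    The ARMA route models the tissue concentration curve as the output of a linear difference equation driven by the AIF. The sketch below fits such a model by ordinary least squares (an ARX-style stand-in with illustrative orders, not the paper's estimator):

```python
import numpy as np

def fit_arma(u, y, na=2, nb=2):
    """Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j].

    u: input (arterial input function), y: output (tissue concentration).
    Returns (a, b) coefficient arrays.
    """
    rows, rhs = [], []
    for t in range(max(na, nb), len(y)):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - j] for j in range(1, nb + 1)]
        rows.append(past_y + past_u)
        rhs.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta[:na], theta[na:]

# Synthetic check: y is u passed through a known first-order system
t = np.arange(100)
u = np.exp(-0.5 * ((t - 20) / 5.0) ** 2)      # gamma-variate-like AIF
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1]
a, b = fit_arma(u, y, na=1, nb=1)
print(a, b)   # recovers approximately [0.8] and [0.3]
```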

  13. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE (flat interface nerve electrode) in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed a significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The presented design was shown to be capable of delivering the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
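    A toy simulation of the hardware-averaging effect stated above: averaging N parallel channels reduces uncorrelated amplifier noise by up to 1/√N, while noise from a shared source resistance is correlated across channels and does not average out, which is why the reduction is "1/√N or less". The noise levels are illustrative, not measured amplifier specifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
amp_noise_rms = 1.0      # per-amplifier input-referred noise (arbitrary units)
source_noise_rms = 0.3   # noise from the shared source resistance (correlated)

common = rng.normal(0, source_noise_rms, n_samples)   # seen by all amplifiers
for N in (1, 2, 4, 8):
    # Each channel sees the common source noise plus its own amplifier noise
    channels = common + rng.normal(0, amp_noise_rms, (N, n_samples))
    averaged = channels.mean(axis=0)
    print(f"N={N}: total noise RMS = {averaged.std():.3f} "
          f"(uncorrelated part ~ {amp_noise_rms / np.sqrt(N):.3f})")
```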

  14. Vibrationally averaged dipole moments of methane and benzene isotopologues

    Energy Technology Data Exchange (ETDEWEB)

    Arapiraca, A. F. C. [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Centro Federal de Educação Tecnológica de Minas Gerais, Coordenação de Ciências, CEFET-MG, Campus I, 30.421-169 Belo Horizonte, MG (Brazil); Mohallem, J. R., E-mail: rachid@fisica.ufmg.br [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil)

    2016-04-14

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% in the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C₆H₃D₃ is about twice as large as the measured dipole moment of C₆H₅D. Computational progress is reported concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing the vibrational averaging at the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis-set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  15. Large eddy simulation in a rotary blood pump: Viscous shear stress computation and comparison with unsteady Reynolds-averaged Navier-Stokes simulation.

    Science.gov (United States)

    Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik

    2018-06-01

    Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure for investigating the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations; investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operation points of a blood pump. The flow was simulated on a 100M-element mesh for the large eddy simulation and a 20M-element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing the internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences were found for the shear stresses. The results show the potential of the large eddy simulation as a high-quality comparative case for checking the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for blood damage prediction.

  16. Calibration and verification of surface contamination meters --- Procedures and techniques

    International Nuclear Information System (INIS)

    Schuler, C.; Butterweck, G.; Wernli, C.; Bochud, F.; Valley, J.-F.

    2007-03-01

    A standardised measurement procedure for surface contamination meters (SCM) is presented. The procedure aims at rendering surface contamination measurements simply and safely interpretable. Essential for the approach is the introduction and common use of the radionuclide-specific quantity 'guideline value', specified in the Swiss Radiation Protection Ordinance, as the unit for the measurement of surface activity. The corresponding radionuclide-specific 'guideline value count rate' can be summarized as a verification reference value for a group of radionuclides ('basis guideline value count rate'). The concept can be generalized for SCM of the same type, or for SCM of different types using the same principle of detection. An SCM multi-source calibration technique is applied for the determination of the instrument efficiency. Four different electron radiation energy regions, four different photon radiation energy regions and an alpha radiation energy region are represented by a set of calibration sources built according to ISO standard 8769-2. A guideline value count rate, representing the activity per unit area of a surface contamination of one guideline value, can be calculated for any radionuclide using the instrument efficiency, radionuclide decay data, contamination source efficiency, guideline value averaging area (100 cm²), and the radionuclide-specific guideline value. In this way, instrument responses for the evaluation of surface contaminations are obtained for radionuclides without available calibration sources, as well as for short-lived radionuclides, for which the continuous replacement of certified calibration sources can lead to unreasonable costs. SCM verification is based on the surface emission rates of reference sources with an active area of 100 cm². The verification for a given list of radionuclides is based on the radionuclide-specific quantity guideline value count rate. Guideline value count rates for groups of radionuclides can be represented within the maximum
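    A hedged sketch of the guideline value count rate computation described above: the net count rate an SCM would display for a surface contaminated at exactly one guideline value. The decay-data handling is simplified to a single emission probability per decay, and all numerical values are illustrative assumptions, not values from the Swiss ordinance.

```python
# Count rate for a contamination of one guideline value, computed from the
# quantities named in the abstract. Numbers below are illustrative only.

def guideline_value_count_rate(instrument_efficiency,
                               source_efficiency,
                               emission_probability,
                               guideline_value_bq_per_cm2,
                               averaging_area_cm2=100.0):
    """Count rate [s^-1] an SCM would show for one guideline value."""
    activity = guideline_value_bq_per_cm2 * averaging_area_cm2   # [Bq]
    emission_rate = activity * emission_probability              # particles/s
    return emission_rate * source_efficiency * instrument_efficiency

# Example: hypothetical beta emitter with a guideline value of 3 Bq/cm^2
rate = guideline_value_count_rate(instrument_efficiency=0.35,
                                  source_efficiency=0.5,
                                  emission_probability=1.0,
                                  guideline_value_bq_per_cm2=3.0)
print(f"guideline value count rate: {rate:.0f} counts/s")
```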

  17. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  18. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have thus far been investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four main submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to the shoreline change problem...

  19. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method previously used for the determination of the average neutron flux within bulky samples has been applied to the measurement of the hydrogen content of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and the thermal neutron reflection methods are compared

  20. The 'Own Children' fertility estimation procedure: a reappraisal.

    Science.gov (United States)

    Avery, Christopher; St Clair, Travis; Levin, Michael; Hill, Kenneth

    2013-07-01

    The Full Birth History has become the dominant source of estimates of fertility levels and trends for countries lacking complete birth registration. An alternative, the 'Own Children' method, derives fertility estimates from household age distributions, but is now rarely used, partly because of concerns about its accuracy. We compared the estimates from these two procedures by applying them to 56 recent Demographic and Health Surveys. On average, 'Own Children' estimates of recent total fertility rates are 3 per cent lower than birth-history estimates. Much of this difference stems from selection bias in the collection of birth histories: women with more children are more likely to be interviewed. We conclude that full birth histories overestimate total fertility, and that the 'Own Children' method gives estimates of total fertility that may better reflect overall national fertility. We recommend the routine application of the 'Own Children' method to census and household survey data to estimate fertility levels and trends.

  1. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of application of inspection procedures. A method is described to ensure that the inspection of defects assessed by fracture mechanics is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC). 3 refs.

  2. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of application of inspection procedures. A method is described to ensure that the inspection of defects assessed by fracture mechanics is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC)

  3. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to apply the mean-reverting Ornstein-Uhlenbeck process to describe the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city, and the slow convergence of the price of the HDD call can be observed over 100,000 simulation runs. The methods of the research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
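    A compact sketch of the pipeline the abstract describes: simulate daily average temperature with a mean-reverting Ornstein-Uhlenbeck process around a seasonal mean, then Monte Carlo an HDD call payoff. All parameter values (mean-reversion speed, volatility, seasonal curve, strike, tick) are illustrative assumptions, not the Zhengzhou estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, sigma = 0.25, 2.0                 # mean-reversion speed, volatility
days = np.arange(30)                     # a 30-day contract month
seasonal = 5.0 + 8.0 * np.sin(2 * np.pi * (days + 300) / 365)   # deg C

def simulate_hdd(n_paths, base=18.0):
    T = np.full(n_paths, seasonal[0])
    hdd = np.zeros(n_paths)
    for d in days:
        # Euler step of dT = kappa*(seasonal - T) dt + sigma dW, with dt = 1 day
        T += kappa * (seasonal[d] - T) + sigma * rng.standard_normal(n_paths)
        hdd += np.maximum(base - T, 0.0)   # accumulate daily heating degree days
    return hdd

strike, tick, r = 350.0, 20.0, 0.03      # HDD strike, payout per HDD, rate
hdd = simulate_hdd(100_000)
payoff = tick * np.maximum(hdd - strike, 0.0)
price = np.exp(-r * 30 / 365) * payoff.mean()
print(f"HDD call price estimate: {price:,.2f}")
```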

  4. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  5. The step complexity measure for emergency operating procedures: measure verification

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea; Ha, Jaejoo; Park, Changkue

    2002-01-01

    In complex systems, such as nuclear power plants (NPPs) or airplane control systems, human errors play a major role in many accidents. Therefore, to prevent the occurrence of accidents or to ensure system safety, extensive effort has been made to identify significant factors that can cause human errors. According to related studies, written manuals or operating procedures are revealed as one of the most important factors, and understandability is pointed out as one of the major reasons for procedure-related human errors. Many qualitative checklists have been suggested to evaluate the emergency operating procedures (EOPs) of NPPs. However, since qualitative evaluations using checklists have some drawbacks, a quantitative measure that can quantify the complexity of EOPs is necessary to compensate for them. In order to quantify the complexity of steps included in EOPs, Park et al. suggested the step complexity (SC) measure. In addition, to ascertain the appropriateness of the SC measure, averaged step performance time data obtained from emergency training records for the loss-of-coolant accident and the excess steam dump event were compared with estimated SC scores. Although the averaged step performance time data show good correlation with the estimated SC scores, conclusions on some important issues that have to be clarified to ensure the appropriateness of the SC measure could not properly be drawn because of a lack of backup data. In this paper, to clarify the remaining issues, additional activities to verify the appropriateness of the SC measure are performed using averaged step performance time data obtained from emergency training records. The total number of available records is 36, and the training scenarios are the steam generator tube rupture and the loss of all feedwater, with 18 scenarios each. From these emergency training records, averaged step performance time data for 30 steps are retrieved. As a result, the SC measure shows a statistically meaningful correlation with the averaged step performance time data.

  6. Procedure Redesign Methods : E3-Control: a redesign methodology for control procedures

    NARCIS (Netherlands)

    Liu, J.; Hofman, W.J.; Tan, Y.H.

    2011-01-01

    This chapter highlights the core research methodology, e3-control, that is applied throughout the ITAIDE project for the purpose of control procedure redesign. We present the key concept of the e3-control methodology and its technical guidelines. Based on the output of this chapter, domain experts

  7. An Adaptive Off-Line Procedure for Radiotherapy of Prostate Cancer

    International Nuclear Information System (INIS)

    Nuver, Tonnis T.; Hoogeman, Mischa S.; Remeijer, Peter; Herk, Marcel van; Lebesque, Joos V.

    2007-01-01

    Purpose: To determine the planning target volume (PTV) margin for an adaptive radiotherapy procedure that uses five computed tomography (CT) scans to calculate an average prostate position and rectum shape. To evaluate alternative methods to determine an average rectum based on a single delineation. Methods and Materials: Repeat CT scans (8-13) of 19 patients were used. The contoured prostates of the first four scans were matched on the planning CT (pCT) prostate contours. With the resulting translations and rotations the average prostate position was determined. An average rectum was obtained by either averaging the coordinates of corresponding points on the rectal walls or by selecting the 'best' rectum or transforming the pCT rectum. Dose distributions were calculated for various expanded average prostates. The remaining CT scans were used to determine the dose received by prostate and rectum during treatment. Results: For the prostate of the pCT scan and a 10-mm margin, all patients received more than 95% of the prescribed dose to 95% of the prostate. For the average prostate, a margin of 7 mm was needed to obtain a similar result (average PTV reduction 30%). The average rectum overestimated the mean dose to the rectum by 0.4 ± 1.6 Gy, which was better than the pCT rectum (2.1 ± 3.0 Gy) and the alternative average rectums (1.0 ± 2.6 Gy and 1.4 ± 3.2 Gy). Conclusions: Our adaptive procedure allows for reduction of the PTV margin to 7 mm without decreasing prostate coverage during treatment. For accurate estimation of the rectum dose, rectums need to be delineated and averaged over multiple scans

  8. A comparison of average wages with age-specific wages for assessing indirect productivity losses: analytic simplicity versus analytic precision.

    Science.gov (United States)

    Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J

    2017-07-01

    Numerous approaches are used to estimate indirect productivity losses, applying various wage estimates to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess the variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages, for males and females combined, were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6% were applied to the projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where the average wage overestimated wages by 15%, and for 40-year-olds, where it underestimated wages by 14%. Large differences in projected productivity losses exist when the average wage is applied over a lifetime. Specifically, the use of average wages overestimates productivity losses by between 8 and 15% for childhood illnesses. Furthermore, during prime working years, the use of average wages will underestimate productivity losses by 14%. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
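    A hedged sketch of the comparison above: convert 5-year banded wages to annual age-specific wages by polynomial interpolation, then project discounted lifetime wage losses from a given onset age under the human capital approach. The banded wages, working-life span, and discount rate are illustrative assumptions, not ONS figures.

```python
import numpy as np

# Hypothetical 5-year wage bands (midpoint age, annual wage)
band_midpoints = np.array([19.5, 24.5, 29.5, 34.5, 39.5,
                           44.5, 49.5, 54.5, 59.5, 64.5])
band_wages = np.array([15, 22, 28, 32, 34, 34, 33, 31, 28, 24], float) * 1000

coeffs = np.polyfit(band_midpoints, band_wages, deg=4)   # smooth the bands
ages = np.arange(18, 68)                                  # working life 18-67
age_wage = np.polyval(coeffs, ages)                       # annual age-specific wages
avg_wage = band_wages.mean()                              # flat average wage

def lifetime_loss(onset_age, wages, discount=0.03):
    """Discounted future wages lost from onset_age to retirement."""
    future = ages >= onset_age
    years = ages[future] - onset_age
    return np.sum(wages[future] / (1 + discount) ** years)

for onset in (20, 40, 60):
    specific = lifetime_loss(onset, age_wage)
    flat = lifetime_loss(onset, np.full_like(age_wage, avg_wage))
    print(f"onset {onset}: age-specific {specific:,.0f} vs average {flat:,.0f}")
```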

  9. Loop Electrosurgical Excision Procedure (LEEP)

    Science.gov (United States)

    ... applied to your cervix to stop any bleeding. Electrocautery also may be used to control bleeding. The ... the uterus at the top of the vagina. Electrocautery: A procedure in which an instrument works with ...

  10. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
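    A toy numerical contrast of the two averages discussed above, over many realizations of a control volume. The data and the correlation between particle count and velocity are synthetic assumptions; the point is only that the phasic and mass-weighted averages coincide when n is constant and diverge otherwise.

```python
import numpy as np

rng = np.random.default_rng(2)
realizations = 1000
n = rng.integers(5, 50, realizations).astype(float)    # grains per control volume
u = 1.0 + 0.05 * n + rng.normal(0, 0.1, realizations)  # velocity correlated with n

phasic_avg = u.mean()                        # plain ensemble (phasic) average
mass_weighted_avg = (n * u).sum() / n.sum()  # concentration-weighted average

print(f"phasic average:        {phasic_avg:.3f}")
print(f"mass-weighted average: {mass_weighted_avg:.3f}")
# The two coincide only if n is identical in every realization.
```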

  11. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  12. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
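    A minimal sketch of model-averaged prediction as described above: each submodel's predictions are weighted by its posterior model probability. Here the PMPs are approximated from BIC values, a common shortcut rather than the article's full Bayesian machinery; the models and data are toy examples.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 1.0, n)   # X[:, 2] is noise

def fit_bic(cols):
    """OLS fit on a column subset; return predictions and BIC."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / n
    bic = n * np.log(sigma2) + A.shape[1] * np.log(n)
    return A @ beta, bic

models = [c for k in range(1, 4) for c in combinations(range(3), k)]
preds, bics = zip(*(fit_bic(m) for m in models))
bics = np.array(bics)
pmp = np.exp(-0.5 * (bics - bics.min()))
pmp /= pmp.sum()                                   # approximate PMPs

bma_pred = sum(w * p for w, p in zip(pmp, preds))  # model-averaged prediction
for m, w in zip(models, pmp):
    print(f"model {m}: PMP = {w:.3f}")
print("BMA prediction for first observation:", round(bma_pred[0], 3))
```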

  13. Procedures to evaluate the efficiency of protective clothing worn by operators applying pesticide.

    Science.gov (United States)

    Espanhol-Soares, Melina; Nociti, Leticia A S; Machado-Neto, Joaquim Gonçalves

    2013-10-01

    The evaluation of the efficiency of whole-body protective clothing against pesticides has already been carried out through field tests and procedures defined by international standards, but there is a need to determine the useful life of these garments to ensure worker safety. The aim of this article is to compare the procedures for evaluating the efficiency of two whole-body protective garments, both new and previously used by applicators of herbicides, using a laboratory test with a mannequin and a field test with the operator. The evaluation of the efficiency of protective clothing used both quantitative and qualitative methodologies, leading to a proposal for classification according to efficiency and a determination of the useful life of protective clothing for use against pesticides, based on the quantitative assessment. The procedures used were in accordance with the standards of the modified American Society for Testing and Materials (ASTM) F 1359:2007 and International Organization for Standardization (ISO) 17491-4. The protocol used in the field was World Health Organization Vector Biology and Control (VBC)/82.1. The clothing tested was water-repellent, pesticide-protective personal clothing in two fabric varieties: Beige (100% cotton) and Camouflaged (31% polyester and 69% cotton). The efficiency in exposure control of the personal protective clothing was measured before use and after 5, 10, 20, and 30 uses and washes under field conditions. The personal protective clothing was worn by workers in the field during the application of the herbicide glyphosate to weed species in mature sugar cane plantations using a knapsack sprayer. The modified ASTM 1359:2007 procedure was chosen as the most appropriate due to its greater repeatability (lower coefficient of variation). This procedure provides the quantitative evaluation needed to determine the efficiency and useful life of individual protective clothing, not just at specific points of failure, but according to dermal

  14. Subsea HIPPS design procedure

    International Nuclear Information System (INIS)

    Aaroe, R.; Lund, B.F.; Onshus, T.

    1995-01-01

    The paper is based on a feasibility study investigating the possibilities of using a HIPPS (High Integrity Pressure Protection System) to protect a subsea pipeline that is not rated for full wellhead shut-in pressure. The study was called the 'Subsea OPPS Feasibility Study' and was performed by SINTEF, Norway. Here, OPPS is an acronym for Overpressure Pipeline Protection System. A design procedure for a subsea HIPPS is described, based on the experience and knowledge gained through the 'Subsea OPPS Feasibility Study'. Before a subsea HIPPS can be applied, its technical feasibility, reliability and profitability must be demonstrated. The subsea HIPPS design procedure will help to organize and plan the design activities, both with respect to the development and the verification of a subsea HIPPS. The paper also gives examples of how some of the discussed design steps were performed in the 'Subsea OPPS Feasibility Study'. Finally, further work required to apply a subsea HIPPS is discussed

  15. Magnetic fusion energy. Disaster operation procedures

    International Nuclear Information System (INIS)

    1986-06-01

    In a major disaster such as an earthquake, toxic chemical release, or fire, these Disaster Operations Procedures can be used, in combination with good judgment, to minimize the risk of injury to personnel and of property damage in our laboratory, shop, and office areas. These emergency procedures apply to all personnel working within MFE/Zone-11 area including visitors, program contract personnel, and construction contract personnel

  16. Step Complexity Measure for Emergency Operating Procedures - Determining Weighting Factors

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea; Kim, Jaewhan; Ha, Jaejoo

    2003-01-01

    In complex systems, such as nuclear power plants (NPPs) or airplane control systems, human error has been regarded as the primary cause of many events. Therefore, to ensure system safety, extensive effort has been made to identify the significant factors that can cause human error. According to related studies, written manuals or operating procedures are revealed as one of the important factors, and the understandability is pointed out as one of the major reasons for procedure-related human errors. Many qualitative checklists have been suggested to evaluate emergency operating procedures (EOPs) of NPPs so as to minimize procedure-related human errors. However, since qualitative evaluations using checklists have some drawbacks, a quantitative measure that can quantify the complexity of EOPs is indispensable. From this necessity, Park et al. suggested the step complexity (SC) measure to quantify the complexity of procedural steps included in EOPs. To verify the appropriateness of the SC measure, averaged step performance time data obtained from emergency training records of the loss-of-coolant accident (LOCA) and the excess steam demand event were compared with estimated SC scores. However, although averaged step performance time data and estimated SC scores show meaningful correlation, some important issues such as determining proper weighting factors have to be clarified to ensure the appropriateness of the SC measure. These were not properly dealt with due to a lack of backup data. In this paper, to resolve one of the important issues, emergency training records are additionally collected and analyzed in order to determine proper weighting factors. The total number of collected records is 66, and the training scenarios cover five emergency conditions including the LOCA, the steam generator tube rupture, the loss of all feedwater, the loss of off-site power, and the station blackout. From these records, average step performance time data are retrieved, and new
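    A heavily hedged sketch of the weighting-factor idea: if each procedural step has several sub-complexity scores and an averaged performance time from training records, candidate weights can be estimated by least squares. The three sub-measures, their values, and the fitting approach here are illustrative assumptions, not the actual SC definition or the paper's training data.

```python
import numpy as np

rng = np.random.default_rng(4)
steps = 30
sub_scores = rng.uniform(1, 5, size=(steps, 3))  # three hypothetical sub-complexities
true_w = np.array([0.5, 0.3, 0.2])               # weights assumed for the toy data
perf_time = sub_scores @ true_w + rng.normal(0, 0.2, steps)  # averaged step times

# Least-squares estimate of the weighting factors, normalized to sum to 1
w, *_ = np.linalg.lstsq(sub_scores, perf_time, rcond=None)
w /= w.sum()
print("estimated weighting factors:", np.round(w, 3))
```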

  17. Radiation doses to patients in haemodynamic procedures

    International Nuclear Information System (INIS)

    Canadillas-Perdomo, B.; Catalan-Acosta, A.; Hernandez-Armas, J.; Perez-Martin, C.; Armas-Trujillo, D. de

    2001-01-01

    Interventional radio-cardiology gives high doses to patients due to long fluoroscopy times and large series of radiographic images. The main objective of the present work is the determination of the dose-area product (DAP) in patients in three different types of cardiology procedures using X-rays. The effective doses were estimated through organ dose values measured with suitably calibrated thermoluminescent dosimeters (TLD-100) placed in a Rando-type phantom, which was submitted to the same radiological conditions as the procedures performed on patients. The average effective dose was 6.20 mSv for CAD Seldinger procedures and 1.85 mSv for pacemaker implants. (author)

  18. Radiation doses to patients in haemodynamic procedures

    Energy Technology Data Exchange (ETDEWEB)

    Canadillas-Perdomo, B; Catalan-Acosta, A; Hernandez-Armas, J [Servicio de Fisica Medica, Hospital Universitario de Canarias, La Laguna, Tenerife (Spain); Perez-Martin, C [Servicio de Ingenieria Biomedica, Hospital Universitario de Canarias, La Laguna, Tenerife (Spain); Armas-Trujillo, D de [Servicio de Cardiologia, Hospital Universitario de Canarias, La Laguna, Tenerife (Spain)

    2001-03-01

    Interventional radio-cardiology gives high doses to patients due to long fluoroscopy times and large series of radiographic images. The main objective of the present work is the determination of the dose-area product (DAP) in patients in three different types of cardiology procedures using X-rays. The effective doses were estimated through organ dose values measured with suitably calibrated thermoluminescent dosimeters (TLD-100) placed in a Rando-type phantom, which was submitted to the same radiological conditions as the procedures performed on patients. The average effective dose was 6.20 mSv for CAD Seldinger procedures and 1.85 mSv for pacemaker implants. (author)

  19. 49 CFR 1572.405 - Procedures for collection by TSA.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Procedures for collection by TSA. 1572.405 Section... Procedures for collection by TSA. This section describes the procedures that an individual, who applies to obtain or renew an HME for a CDL, must follow if a TSA agent collects and transmits the Information...

  20. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

    Two distinct problems in the development and application of averaging techniques in photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. The effects of including the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable-state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis, techniques are developed for calculating averaged cross sections directly, without first calculating a detailed cross section as an intermediate step. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium

  1. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both ... for hierarchical data structures, reflecting increasingly common types of assay data. We illustrate the usefulness of the methodology by means of a cytotoxicology example where the sensitivity of two types of assays is evaluated and compared. By means of a simulation study, we show that the proposed framework ...

  2. A procedure for noise uncoupling in laser interferometry

    CERN Document Server

    Barone, F; Rosa, R D; Eleuteri, A; Milano, L; Qipiani, K

    2002-01-01

    A numerical procedure for noise recognition and uncoupling is described. The procedure is applied to a Michelson interferometer and is effective in uncoupling seismic and acoustic noise from the output signal of the interferometer. Due to the low data flow coming from the instrumentation, this uncoupling can be performed in real time, and it is useful as a data-quality procedure for the interferometer data output.

  3. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index n_s using all of the data has a lower bound of 0.940, where n_s is specified at a pivot scale of 0.015 Mpc⁻¹. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.

  4. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold. The article shows that the common approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
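    A minimal sketch of the barycenter approach discussed above: average unit quaternions arithmetically and re-normalize. This is the simple method the article compares against the Riemannian mean, and it works well for tightly clustered rotations; the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_quat_near_identity(scale=0.1):
    """Unit quaternion (w, x, y, z) representing a small random rotation."""
    q = np.array([1.0, *rng.normal(0, scale, 3)])
    return q / np.linalg.norm(q)

quats = np.array([random_quat_near_identity() for _ in range(20)])

# Fix the sign ambiguity (q and -q encode the same rotation) before averaging
quats *= np.sign(quats @ quats[0])[:, None]

mean_q = quats.mean(axis=0)          # barycenter in quaternion space
mean_q /= np.linalg.norm(mean_q)     # the re-normalization "correction" step
print("barycenter estimate of the mean rotation:", np.round(mean_q, 4))
```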

  5. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.

  6. Radon and radon daughters indoors, problems in the determination of the annual average

    International Nuclear Information System (INIS)

    Swedjemark, G.A.

    1984-01-01

    The annual average of the concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and for comparison with limits. For practical reasons, measurements are often carried out during a time period shorter than a year. Methods are described for estimating the uncertainties, due to temporal variations, of an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results from long-term measurements of radon-222 in a few houses. The possibility of using correction factors in order to obtain a more adequate annual average has also been studied and some examples are given. (orig.)
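    A toy sketch of the problem described above: a short winter measurement over- or underestimates the annual mean unless a seasonal correction factor is applied. The monthly profile is synthetic, and the correction factor is derived here from the same synthetic profile purely for illustration; in practice it would come from reference data.

```python
import numpy as np

rng = np.random.default_rng(6)
months = np.arange(12)
# Synthetic monthly radon concentrations [Bq/m^3] with a winter peak
monthly = 120 + 60 * np.cos(2 * np.pi * months / 12) + rng.normal(0, 15, 12)
annual_avg = monthly.mean()

# A 3-month winter measurement (Dec-Feb) overestimates the annual mean ...
winter = monthly[[0, 1, 11]].mean()
# ... unless a seasonal correction factor is applied
correction = annual_avg / winter
print(f"annual average:  {annual_avg:.0f} Bq/m^3")
print(f"winter estimate: {winter:.0f}, corrected: {winter * correction:.0f}")
```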

  7. The relationship between frontal and temporal lobe lesions in traumatic brain injury and procedural memory

    International Nuclear Information System (INIS)

    Kato, Noriaki; Okazaki, Tetsuya; Hachisuka, Kenji

    2008-01-01

    We examined the correlation between the location of chronic-phase brain damage identified by head MRI and procedural memory test results in patients who have sustained a traumatic brain injury (TBI). Subjects were 27 patients with TBI who completed all three procedural memory tasks (mirror-reading, mirror-drawing, and Tower of Toronto). Using head MRI, the presence or absence of lesions in the frontal lobe and the temporal lobe was determined. To evaluate declarative memory, we administered the Wechsler Memory Scale-Revised (WMS-R), the Rivermead Behavioral Memory Test (RBMT), and the Rey-Osterrieth Complex Figure Test (3-minute delayed recall). All three procedural memory tasks were repeated 3 times a day for 3 consecutive days. The rate of improvement (%) on each procedural memory task was determined as ((average of the results on the first day - average of the results on the third day)/average of the results on the first day) x 100. The patients were divided according to the existence of frontal and temporal lobe lesions on brain MRI, and the rates of improvement were then compared according to the existence of a frontal or temporal lesion using the Mann-Whitney test. In the results, the average value of the declarative memory tests was within the impaired range for all items. On the procedural memory tasks, the rate of improvement did not significantly decrease in the presence of frontal or temporal lobe lesions. It is believed that the basal ganglia and the cerebellum are significantly involved in procedural memory, and in TBI patients procedural memory tends to be retained. Our results suggest that frontal and temporal lobe lesions, which are frequently found in traumatic brain injury, are not likely to be related to procedural memory. (author)

  8. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity energy storage. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  9. Radiologist and angiographic procedures. Absorbed radiation dose

    International Nuclear Information System (INIS)

    Tryhus, M.; Mettler, F.A. Jr.; Kelsey, C.

    1987-01-01

    The radiation dose absorbed by the angiographer during angiographic procedures is of vital importance to the radiologist. Nevertheless, most articles on the subject are incomplete, and few measure gonadal dose. In this study, three TLDs were used for each of the following sites: the radiologist's eyes, thyroid, gonads (with and without a shielding apron), and hands. The average dose during carotid angiograms was 2.6, 4.1, 0.4, 4.7, and 7.1 mrads to the eyes, thyroid, gonads with and without 0.5 mm of lead shielding, and hands, respectively. The average dose during abdominal and peripheral vascular angiographic procedures was 5.2, 7.5, 1.2, 8.5, and 39.9 mrads to the eyes, thyroid, gonads with and without shielding, and hands, respectively. A literature review demonstrates a significant reduction in radiation dose to the angiographer after the advent of automated injectors. Our measured doses for carotid angiography are compatible with contemporary reported values. There was poor correlation between fluoroscopy time and measured dose to the angiographer

  10. efficient screening procedure for black sigatoka disease of banana

    African Journals Online (AJOL)

    ACSS

    3 University of Copenhagen, Faculty of Science, Department of Plant and Environmental Sciences, ... procedure for black sigatoka disease in order to provide a reliable controlled environment ..... Inoculation of emerging leaves, on average.

  11. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
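    A heavily hedged sketch of the estimator described above: compare mean longitudinal outcomes between equivalent fractions of the longest-surviving patients in each arm. The data-generating process is synthetic, and the paper's bias expressions and sensitivity analyses are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def arm(n, surv_scale, effect):
    """Synthetic arm: survival times and an outcome correlated with survival."""
    surv = rng.exponential(surv_scale, n)
    outcome = 5.0 + effect + 0.5 * surv + rng.normal(0, 1, n)
    return surv, outcome

surv_t, out_t = arm(500, surv_scale=6.0, effect=1.0)   # treatment arm
surv_c, out_c = arm(500, surv_scale=5.0, effect=0.0)   # control arm

q = 0.5   # compare the longest-surviving 50% of each arm
top_t = out_t[surv_t >= np.quantile(surv_t, 1 - q)]
top_c = out_c[surv_c >= np.quantile(surv_c, 1 - q)]
print(f"balanced-SACE estimate (q={q}): {top_t.mean() - top_c.mean():.2f}")
```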

  12. Outcome of Boyd-McLeod procedure for recalcitrant lateral epicondylitis of elbow.

    Science.gov (United States)

    Reddy, V R M; Satheesan, K S; Bayliss, N

    2011-08-01

    Various surgical procedures, including percutaneous and open release and arthroscopic procedures, have been described to treat recalcitrant tennis elbow. We present the outcome of the Boyd-McLeod surgical procedure for tennis elbow resistant to non-operative treatment in twenty-seven patients (twenty-nine limbs). The Boyd-McLeod procedure involves excision of the proximal portion of the annular ligament, release of the origin of the extensor muscles, excision of the bursa if present, and excision of the synovial fringe. The average time interval from the onset of symptoms of tennis elbow until surgery was 28 months (range 8-72 months). Of those patients, 91% reported complete relief of symptoms with return to full normal activities including sports. The average post-operative time for return to professional/recreational activity was 5 weeks. One patient developed pain secondary to ectopic bone formation after surgery, which settled after excision, and in another there was no pain relief with the Boyd-McLeod procedure. Two patients had scar tenderness that did not affect the final outcome. We conclude that the Boyd-McLeod procedure is an effective treatment option in patients with resistant lateral epicondylitis.

  13. Performance of the Seven-step Procedure in Problem-based Hospitality Management Education

    Directory of Open Access Journals (Sweden)

    Wichard Zwaal

    2016-12-01

    Full Text Available The study focuses on the seven-step procedure (SSP) in problem-based learning (PBL). The way students apply the seven-step procedure will help us understand how students work in a problem-based learning curriculum. So far, little is known about how students rate the performance and importance of the different steps, the amount of time they spend on each step and the perceived quality of execution of the procedure. A survey was administered to a sample of 101 students enrolled in a problem-based hospitality management program. Results show that students consider step six (Collect additional information outside the group) to be most important. The highest performance rating is for step two (Define the problem) and the lowest for step four (Draw a systematic inventory of explanations from step three). Step seven is rated low in performance and high in importance, indicating urgent attention. The average amount of time spent on the seven steps is 133 minutes, with the largest part of the time spent on self-study outside the group (42 minutes). The assessment of the execution of a set of specific guidelines (the Blue Card) did not completely match the overall performance ratings for the seven steps. The SSP could be improved by reducing the number of steps and incorporating more attention to group dynamics.

  14. Applying Petroleum the Pressure Buildup Well Test Procedure on Thermal Response Test—A Novel Method for Analyzing Temperature Recovery Period

    Directory of Open Access Journals (Sweden)

    Tomislav Kurevija

    2018-02-01

    Full Text Available The theory of Thermal Response Testing (TRT) is a well-known part of the sizing process for geothermal exchange systems. Multiple parameters influence the accuracy of the effective ground thermal conductivity measurement, such as testing time, variable power, climate interferences, the groundwater effect, etc. To improve the accuracy of the TRT, we introduced a procedure to additionally analyze the falloff temperature decline after the power test. The method is based on the premise of an analogy between TRT and petroleum well testing, since the origin of both procedures lies in the diffusivity equation, with solutions for heat conduction or pressure analysis during radial flow. By applying pressure build-up test interpretation techniques to borehole heat exchanger testing, greater accuracy could be achieved, since the ground conductivity could also be obtained from this period. The analysis was conducted on a coaxial exchanger with five different power steps, and with both direct and reverse flow regimes. Each test consisted of 96 h of classical TRT, followed by 96 h of temperature decline, making for almost 2000 h of cumulative borehole testing. Results showed that the ground conductivity value could vary by as much as 25%, depending on test time, seasonal period and power fluctuations, while the thermal conductivity obtained from the falloff period provided more stable values, with only a 10% variation.
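    A sketch of the semilog slope method that underlies both the classical TRT evaluation and the buildup/falloff analogy above: under the line-source assumption, the mean fluid temperature grows linearly in ln(t) during heating with slope k = Q/(4πHλ), and the same slope reappears during falloff against the Horner-style time ln((tp + dt)/dt). The data are synthetic and the parameters (Q, H, λ) are illustrative, not the paper's coaxial test values.

```python
import numpy as np

rng = np.random.default_rng(8)
Q, H, lam = 4000.0, 100.0, 2.2          # power [W], borehole length [m], W/(m K)
k_true = Q / (4 * np.pi * H * lam)      # expected semilog slope

tp = 96 * 3600.0                        # 96 h of heating [s]
t_heat = np.linspace(10 * 3600, tp, 400)
T_heat = 12.0 + k_true * np.log(t_heat) + rng.normal(0, 0.02, 400)

dt = np.linspace(3600, 96 * 3600, 400)  # elapsed falloff time [s]
horner = np.log((tp + dt) / dt)         # Horner-style superposition time
T_fall = 12.0 + k_true * horner + rng.normal(0, 0.02, 400)

for name, x, T in (("heating vs ln(t)", np.log(t_heat), T_heat),
                   ("falloff vs Horner time", horner, T_fall)):
    slope = np.polyfit(x, T, 1)[0]      # semilog slope from a straight-line fit
    print(f"{name}: lambda = {Q / (4 * np.pi * H * slope):.2f} W/(m K)")
```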

  15. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  16. Approximative Krieger-Nelkin orientation averaging and anisotropy of water molecules vibrations

    International Nuclear Information System (INIS)

    Markovic, M.I.

    1974-01-01

    A quantum-mechanical treatment of water-molecule dynamics should be taken into account for precise theoretical calculation of neutron differential scattering cross sections. Krieger and Nelkin have proposed an approximate method for averaging the molecular orientations with respect to the directions of the incoming and scattered neutron. This paper shows that their approach can be successfully applied for a general form of water-molecule vibrational anisotropy

  17. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise in an image is conducted by applying the interquartile range (IQR), which is one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimate for each noisy pixel was obtained by local averaging. The essential...
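    A minimal sketch of the filter described above: within each k×k window, pixels falling outside the IQR fences are treated as noisy and replaced by the local average of the remaining pixels. Border handling and the fence factor of 1.5 are our assumptions, not details from the paper.

```python
import numpy as np

def iqr_filter(img, k=3, fence=1.5):
    """Replace IQR-outlier pixels with the local average of inlier pixels."""
    out = img.astype(float).copy()
    r = k // 2
    padded = np.pad(img.astype(float), r, mode="reflect")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k].ravel()
            q1, q3 = np.percentile(win, [25, 75])
            lo, hi = q1 - fence * (q3 - q1), q3 + fence * (q3 - q1)
            if img[i, j] < lo or img[i, j] > hi:          # outlier pixel
                inliers = win[(win >= lo) & (win <= hi)]
                out[i, j] = inliers.mean() if inliers.size else win.mean()
    return out

noisy = np.full((32, 32), 100.0)
noisy[8, 8] = 255.0                                       # impulse noise
print("before:", noisy[8, 8], "after:", iqr_filter(noisy)[8, 8])
```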

  18. Dose evaluation in special fluoroscopy procedures: Hysterosalpingography and Dacryocystography

    International Nuclear Information System (INIS)

    Lopes, Cintya Carolina Barbosa

    2006-04-01

    Hysterosalpingography (HSG) and dacryocystography (DCG) are among the special fluoroscopy procedures. HSG is a radiodiagnostic technique used to detect uterine and tubal pathologies and is fundamental for the investigation of infertility. DCG is a form of lacrimal system imaging, important for showing the level of obstruction and the presence of dilatation of the lacrimal sac, as well as alterations in nearby structures. In this research, the skin entrance dose was evaluated for these two special fluoroscopy procedures, together with the doses to the staff who perform the exams. The exams of 22 HSG patients and 8 DCG patients were evaluated using TLD-100 dosimeters attached to the patients' skin at the anatomical landmarks involved in each exam. In the case of HSG, the results showed that skin entrance doses varied from 0.5 mGy to 73.4 mGy, with an average value of 22.1 mGy. The estimated uterus dose was 5.5 mGy, and 6.6 mGy was the average dose estimated for the ovaries. The skin entrance doses of patients undergoing DCG examinations varied from 2.1 mGy to 10.6 mGy, and the average eye dose was 6.1 mGy. The results for staff doses showed that, in HSG, the average dose to the doctor's right hand was 4.3 mGy per examination. This value is due to the fact that the physician introduces the contrast manually during all contrast exposures. In relation to DCG, the staff dose values were close to background radiation, evidencing that, within the permitted limits, there is no risk to the physicians in this procedure. (author)

  19. A more general expression for the average X-ray diffraction intensity of crystals with an incommensurate one-dimensional modulation

    International Nuclear Information System (INIS)

    Lam, E.J.W.; Beurskens, P.T.; Smaalen, S. van

    1994-01-01

    Statistical methods are used to derive an expression for the average X-ray diffraction intensity, as a function of (sinθ)/λ, of crystals with an incommensurate one-dimensional modulation. Displacive and density modulations are considered, as well as a combination of these two. The atomic modulation functions are given by truncated Fourier series that may contain higher-order harmonics. The resulting expression for the average X-ray diffraction intensity is valid for main reflections and low-order satellite reflections. The modulation of individual atoms is taken into account by the introduction of overall modulation amplitudes. The accuracy of this expression for the average X-ray diffraction intensity is illustrated by comparison with model structures. A definition is presented for normalized structure factors of crystals with an incommensurate one-dimensional modulation that can be used in direct-methods procedures for solving the phase problem in X-ray crystallography. A numerical fitting procedure is described that can extract a scale factor, an overall temperature parameter and overall modulation amplitudes from experimental reflection intensities. (orig.)
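
    The final numerical fitting step is conceptually close to a classical Wilson plot, which extracts a scale factor and an overall temperature parameter from shell-averaged intensities. The sketch below shows only that classical special case (no modulation amplitudes); `wilson_fit` and its inputs are hypothetical, not the paper's generalized expression.

```python
import numpy as np

def wilson_fit(s, I_avg, f2_sum):
    """Classical Wilson-plot fit of ln(<I>/sum f^2) = ln K - 2*B*s^2,
    with s = (sin theta)/lambda per resolution shell; returns scale K
    and overall temperature parameter B (no modulation amplitudes)."""
    x = np.asarray(s) ** 2
    y = np.log(np.asarray(I_avg) / np.asarray(f2_sum))
    A = np.column_stack([np.ones_like(x), -2.0 * x])
    (lnK, B), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(lnK), B
```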

  20. Variational method for inverting the Kohn-Sham procedure

    International Nuclear Information System (INIS)

    Kadantsev, Eugene S.; Stott, M.J.

    2004-01-01

    A procedure based on a variational principle is developed for determining the local Kohn-Sham (KS) potential corresponding to a given ground-state electron density. This procedure is applied to calculate the exchange-correlation part of the effective Kohn-Sham (KS) potential for the neon atom and the methane molecule

  1. Implementation of procedures to NPP Krsko INTRANEK

    International Nuclear Information System (INIS)

    Gradisar, D.; Krajnc, M.; Kocnar, R.; Spiler, J.

    1999-01-01

    Part of the NEK documentation, such as the USAR, Technical Specifications, QA Plan and some frequently used series of drawings, has already been published on the NEK Intranet. The publication of all procedures (hereinafter INTRANEK procedures) is currently in progress. The purpose of this project is the publication of 1600 procedures with an average size of 30 pages, which amounts to more than 48,000 pages altogether. Adobe PDF (Portable Document Format) has been chosen as the most suitable format for the presentation of procedures on INTRANEK. The PDF format meets the following criteria: the appearance of a document page is always the same as the original and cannot be changed without control. In addition, full-text search is available, as well as easy jumps from procedure to procedure. Some changes to the working process for internal procedures had to be made before the project start, determining the responsibility of individual users in the process. A workflow enabling easy daily maintenance was prepared, the rules for procedure numbering and folder contents/names were set, and the server was selected. The project was managed and implemented with extensive use of computer-aided management, document distribution and control, databases, electronic mail and Intranet tools. The results of the practical implementation of NEK procedures and our experience with INTRANEK are presented in this paper. (author)

  2. Environmental protection and procedural law

    International Nuclear Information System (INIS)

    Mutschler, U.

    1978-01-01

    For the power industry, which is 'independent of licensing', the Ule/Laubinger statement as well as its discussion at the 52nd German legal experts' day are of considerable importance. It is therefore absolutely necessary to critically investigate the statements of this expert opinion and the considerations on which they are based. This investigation is limited to those licensing procedures which, in the terminology of the experts, are 'similar to the plan approval procedure'. This applies mainly to the procedures according to paragraph 4 ff. of the Federal Act on Protection Against Nuisances and paragraph 7 of the Atomic Energy Law: preliminaries, publication of documents, inspection of files, public hearing, taking of evidence, persons with special responsibilities, administrative proceedings, actions by associations. The deficiencies in the execution of environmental procedural law are briefly mentioned. The notes in the article refer only to air pollution. (orig./HP) [de

  3. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  4. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
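
    The stated result can be written as avg_w − avg_v = Cov_v(x, w/v) / E_v(w/v), where avg_w and avg_v are the averages of x under weighting functions w and v, and Cov_v and E_v are the covariance and mean under the v-weighting. A quick numerical check of this identity (with made-up data) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)              # the variable of interest
w = rng.uniform(0.5, 2.0, size=1000)   # weighting function 1
v = rng.uniform(0.5, 2.0, size=1000)   # weighting function 2

avg_w = np.average(x, weights=w)
avg_v = np.average(x, weights=v)

r = w / v                              # ratio of the weighting functions
Ev = lambda q: np.average(q, weights=v)
identity = (Ev(x * r) - Ev(x) * Ev(r)) / Ev(r)   # Cov_v(x, r) / E_v(r)

assert np.isclose(avg_w - avg_v, identity)
```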

  5. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. Semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function in the average atom into hydrogen-like one. The full equations for the model are enumerated, and more attentions are paid upon the detailed procedures including the numerical techniques and computer code design. When the temperature of plasmas is comparatively high, the semi-analytical results agree quite well with those obtained by using a full numerical method for the same model and with those calculated by just a little different physical models, and the result's accuracy and computation efficiency are worthy of note. The drawbacks for this model are also analyzed. (authors)

  6. Modified Kidner procedure utilizing a Mitek bone anchor.

    Science.gov (United States)

    Dawson, D M; Julsrud, M E; Erdmann, B B; Jacobs, P M; Ringstrom, J B

    1998-01-01

    The recent development of small bone suture anchors has created several potential applications in reconstructive surgery of the foot. Mitek bone anchors are simple to insert, require less aggressive dissection and surgical time than reefing of the redundant posterior tibial tendon, and are a reliable method of tendon-to-bone fixation. Mitek bone anchors are an excellent technique for the treatment of redundant tibialis posterior tendon following a modified Kidner procedure. In modified Kidner procedures involving an excessively large os tibiale externum, Mitek anchoring of the redundant tibialis posterior tendon to the navicular bone is an excellent means for secure plication of the posterior tibial tendon in cases involving intraoperative tendon laxity. A description of the Mitek Anchor System and technique of application in a modified Kinder procedure is presented. The purpose of this study was to describe patient satisfaction and long-term clinical outcomes of the modified Kinder procedure with and without the Mitek bone anchoring system. A retrospective study of the modified Kinder procedure was performed with 13 patients being evaluated, seven with Mitek anchoring and six without. The University of Maryland 100-point Painful Foot Center Scoring System was modified to be more specific to the modified Kinder procedure for assessment of subjective long-term results. Patient overall satisfaction was rated good to excellent by 85.6% of patients in the Mitek group and by 100% of patients in the non-Mitek group. Use of the Mitek anchor allowed for quicker postoperative recovery to resumption of ambulation without assistive devices (average of 3 weeks vs. 4.42 weeks) and a quicker return to pain-free ambulation in normal shoegear (average of 4 weeks vs. 6 weeks). Mitek anchoring of the tibialis posterior tendon, theoretically, increases medial arch support as evidenced by 14% of the Mitek group and 67% of the non-Mitek group requiring postoperative orthotics.

  7. Exponentially Weighted Moving Average Chart as a Suitable Tool for Nuchal Translucency Quality Review

    Czech Academy of Sciences Publication Activity Database

    Hynek, M.; Smetanová, D.; Stejskal, D.; Zvárová, Jana

    2014-01-01

    Roč. 34, č. 4 (2014), s. 367-376 ISSN 0197-3851 Institutional support: RVO:67985807 Keywords : nuchal translucency * exponentially weighted moving average model * statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.268, year: 2014
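
    For reference, the EWMA statistic underlying such a chart is z_t = λ·x_t + (1 − λ)·z_{t−1}, plotted against control limits that widen with t. A minimal sketch (generic textbook form, not the authors' nuchal translucency implementation; λ and L are illustrative defaults) is:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1 - lam)*z_{t-1}, with the exact
    time-varying limits mu +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)   # in practice: from a reference sample
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1 - lam) * mu
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu - half_width, mu + half_width   # statistic, LCL, UCL
```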

  8. 19 CFR 191.102 - Procedure.

    Science.gov (United States)

    2010-04-01

    ... (CONTINUED) DRAWBACK Internal Revenue Tax on Flavoring Extracts and Medicinal or Toilet Preparations (Including Perfumery) Manufactured From Domestic Tax-Paid Alcohol § 191.102 Procedure. (a) General. Other provisions of this part relating to direct identification drawback (see subpart B of this part) shall apply...

  9. Required number of records for ASCE/SEI 7 ground-motion scaling procedure

    Science.gov (United States)

    Reyes, Juan C.; Kalkan, Erol

    2011-01-01

    The procedures and criteria in the 2006 IBC (International Council of Building Officials, 2006) and 2007 CBC (International Council of Building Officials, 2007) for the selection and scaling of ground motions for use in nonlinear response history analysis (RHA) of structures are based on ASCE/SEI 7 provisions (ASCE, 2005, 2010). According to ASCE/SEI 7, earthquake records should be selected from events of magnitudes, fault distance, and source mechanisms that comply with the maximum considered earthquake, and then scaled so that the average value of the 5-percent-damped response spectra for the set of scaled records is not less than the design response spectrum over the period range from 0.2Tn to 1.5Tn (where Tn is the fundamental vibration period of the structure). If at least seven ground motions are analyzed, the design values of engineering demand parameters (EDPs) are taken as the average of the EDPs determined from the analyses. If fewer than seven ground motions are analyzed, the design values of EDPs are taken as the maximum values of the EDPs. ASCE/SEI 7 requires a minimum of three ground motions. These limits on the number of records in the ASCE/SEI 7 procedure are based on engineering experience, rather than on a comprehensive evaluation. This study statistically examines the required number of records for the ASCE/SEI 7 procedure, such that the scaled records provide accurate, efficient, and consistent estimates of "true" structural responses. Based on elastic-perfectly-plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI 7 scaling procedure is applied to 480 sets of ground motions. The number of records in these sets varies from three to ten. The records in each set were selected either (i) randomly, (ii) considering their spectral shapes, or (iii) considering their spectral shapes and design spectral-acceleration value, A(Tn). As compared to benchmark (that is, "true") responses from unscaled records using a larger catalog of ground
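
    The core scaling requirement quoted above, that the average 5-percent-damped spectrum of the scaled set must not fall below the design spectrum over 0.2Tn to 1.5Tn, reduces to finding the governing spectral ratio. A simplified sketch, assuming a single uniform factor is applied to the whole set (real applications scale records individually), is:

```python
import numpy as np

def set_scale_factor(periods, avg_spectrum, design_spectrum, Tn):
    """Smallest single factor making the average 5%-damped spectrum of the
    record set not less than the design spectrum over 0.2*Tn to 1.5*Tn."""
    T = np.asarray(periods)
    mask = (T >= 0.2 * Tn) & (T <= 1.5 * Tn)
    ratio = np.asarray(design_spectrum)[mask] / np.asarray(avg_spectrum)[mask]
    return ratio.max()   # governed by the period where the set is weakest
```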

  10. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
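
    Of the averaging techniques compared, Granger-Ramanathan Averaging is the simplest to reproduce: the member simulations are combined with least-squares weights fitted against observations. A minimal sketch (the unconstrained variant with no intercept; both choices are our assumptions) is:

```python
import numpy as np

def gra_weights(F, y):
    """Granger-Ramanathan averaging: least-squares weights for the ensemble
    members (columns of F) fitted against the observations y."""
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

# combined simulation for a new period: F_new @ gra_weights(F_cal, y_cal)
```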

  11. Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.

    Science.gov (United States)

    Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas

    2015-02-01

    The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons.

  12. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  13. Influence of dispatching rules on average production lead time for multi-stage production systems.

    Science.gov (United States)

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant under the applied dispatching rule, whereas for single-stage production systems it can be used as a dispatching-rule-independent indicator.
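
    As a toy illustration of the single-stage case, the snippet below compares the average lead time of the same job set under FIFO and SPT sequencing, with all jobs available at time zero (a deliberate simplification of the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)

def average_lead_time(jobs, rule):
    """All jobs available at time 0 on one machine; lead time = completion.
    'FIFO' keeps the given order, 'SPT' sorts by processing time."""
    seq = np.sort(jobs) if rule == "SPT" else jobs
    return np.cumsum(seq).mean()

jobs = rng.exponential(scale=1.0, size=10000)      # processing times
print("FIFO:", average_lead_time(jobs, "FIFO"))
print("SPT :", average_lead_time(jobs, "SPT"))     # SPT minimizes the average
```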

  14. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
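
    The building block of KARMA is the Kumaraswamy density, f(x; a, b) = a·b·x^(a−1)·(1 − x^a)^(b−1) on (0, 1). The sketch below fits only this static density by maximum likelihood, leaving out the ARMA dynamics and link function of the full model; the data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def kumaraswamy_nll(params, x):
    """Negative log-likelihood of f(x; a, b) = a*b*x^(a-1)*(1-x^a)^(b-1)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

# illustrative data in (0, 1); a real KARMA fit would model the median
# dynamically rather than fit one static distribution
x = np.random.default_rng(2).beta(2.0, 5.0, size=500)
fit = minimize(kumaraswamy_nll, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
a_hat, b_hat = fit.x
```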

  15. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

    Qualitative visual inspections and linear metric measurements have been predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms both from computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between 1 reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than +/-30 microm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.

  16. 44 CFR 78.7 - Grant application procedures.

    Science.gov (United States)

    2010-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program FLOOD MITIGATION ASSISTANCE § 78.7 Grant application procedures. States will apply for Technical Assistance and...

  17. Predictive Trailing-Edge Modulation Average Current Control in DC-DC Converters

    Directory of Open Access Journals (Sweden)

    LASCU, D.

    2013-11-01

    Full Text Available The paper investigates predictive digital average current control (PDACC) in dc/dc converters using trailing-edge modulation (TEM). The study focuses on the duty-cycle recurrence equation, after which stability analysis is performed. It is demonstrated that average current control using trailing-edge modulation is stable over the whole range of the duty cycle, so that design problems are greatly reduced. The analysis is carried out in a general manner, independent of converter topology, and therefore the results can easily be applied to a particular converter (buck, boost, buck-boost, etc.). The theoretical considerations are confirmed for a boost converter, first using a MATLAB program based on state-space equations and finally with the CASPOC circuit simulation package.

  18. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    International Nuclear Information System (INIS)

    Ingram, A.; Golovchak, R.; Kostrzewa, M.; Wacke, S.; Shpotyuk, M.; Shpotyuk, O.

    2012-01-01

    The compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to run opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  19. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15 al. Armii Krajowej, Czestochowa, PL-42201 (Poland)

    2012-02-15

    The compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to run opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  20. Economic analysis of medical management applied for left colostomy.

    Science.gov (United States)

    Savlovschi, C; Serban, D; Andreescu, Cv; Dascalu, Am; Pantu, H

    2013-01-01

    This paper presents an analysis of surgical treatment costs for left colostomy, aiming to calculate an average cost per procedure and to identify means to optimize the economic management of this type of surgical procedure. A retrospective study was conducted on a group of 8 patients hospitalized in the 4th Surgery Department, Emergency University Hospital Bucharest, during the year 2012 for left colic neoplasms with obstruction signs, who were operated on with a left colostomy. The parameters followed in the studied group of patients were the medical expenses, divided into preoperative, intraoperative and immediate postoperative (postoperative hospitalization). Two major types of colostomy were performed: left loop colostomy with intact tumour for 6 patients, and left end colostomy with tumour resection (Hartmann's procedure) for 2 patients. The average cost of this type of surgical intervention was 4396.807 RON, representing 1068.742 euro. Statistical analysis did not reveal average costs varying with the type of procedure. The age of the study subjects was between 49 and 88, with an average of 61 years, and no correlation could be established between patient age and the level of medical spending. Reducing the costs involved by left colostomy can be efficiently achieved by decreasing the number of days of hospitalisation in the following ways: preoperative preparation and assessment of the subject in an outpatient regimen; and the accuracy of the surgical procedure, with a decrease in early postoperative complications and antibiotherapy, the second major cause of increased postoperative costs.

  1. Mixture-based gatekeeping procedures in adaptive clinical trials.

    Science.gov (United States)

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.

  2. 12 CFR 995.4 - Book-entry procedure for Financing Corporation obligations.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Book-entry procedure for Financing Corporation... FINANCING CORPORATION OPERATIONS § 995.4 Book-entry procedure for Financing Corporation obligations. (a) Authority. Any Federal Reserve Bank shall have authority to apply book-entry procedure to Financing...

  3. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition of ...

  4. Proposal of a segmentation procedure for skid resistance data

    International Nuclear Information System (INIS)

    Tejeda, S. V.; Tampier, Hernan de Solominihac; Navarro, T.E.

    2008-01-01

    Skid resistance of pavements presents high spatial variability along a road. This pavement characteristic is directly related to wet-weather accidents; therefore, it is important to identify and characterize the skid resistance of homogeneous segments along a road in order to implement proper road safety management. Several data segmentation methods have been applied to other pavement characteristics (e.g. roughness). However, no application to skid resistance data was found during the literature review for this study. Typical segmentation methods are either too general or too specific to ensure a detailed segmentation of skid resistance data that can be used for managing pavement performance. The main objective of this paper is to propose a procedure for segmenting skid resistance data, based on existing data segmentation methods. The procedure needs to be efficient and to fulfill road management requirements. The proposed procedure considers the Leverage method to identify outlier data, the CUSUM method to accomplish initial data segmentation, and a statistical method to group consecutive segments that are statistically similar. The statistical method applies Student's t-test for the equality of means, along with analysis of variance and the Tukey test for multiple comparisons of means. The proposed procedure was applied to a sample of skid resistance data measured with SCRIM (Side Force Coefficient Routine Investigatory Machine) on a 4.2 km section of Chilean road and was compared to conventional segmentation methods. Results showed that the proposed procedure is more efficient than the conventional segmentation procedures, achieving the minimum weighted sum of square errors (SSEp) with all the identified segments statistically different. Due to its mathematical basis, the proposed procedure can be easily adapted and programmed for use in road safety management. (author)
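
    The CUSUM-plus-grouping logic described above can be sketched compactly: the extreme of the cumulative sum of deviations from the mean proposes a boundary, and a test of mean equality decides whether to keep it. The recursion below is our simplified reading of the procedure (a plain t-test in place of the full ANOVA/Tukey grouping):

```python
import numpy as np
from scipy import stats

def cusum_boundary(x):
    """Candidate segment boundary: extreme of the CUSUM of deviations."""
    s = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(s))) + 1

def segment(x, start=0, min_len=20, alpha=0.05, out=None):
    """Recursive segmentation; a split is kept only if both parts are long
    enough and a t-test finds their means statistically different."""
    if out is None:
        out = []
    k = cusum_boundary(x)
    left, right = x[:k], x[k:]
    if (len(left) >= min_len and len(right) >= min_len
            and stats.ttest_ind(left, right).pvalue < alpha):
        segment(left, start, min_len, alpha, out)
        segment(right, start + k, min_len, alpha, out)
    else:
        out.append((start, start + len(x)))   # homogeneous segment
    return out
```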

  5. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  6. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    Science.gov (United States)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (−x, −y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
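
    The sharp bound mean(z^3) ≤ (r − 1)^3 is easy to spot-check numerically; the snippet below is only a consistency check on one chaotic trajectory, not the SDP proof.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, r = 8.0 / 3.0, 10.0, 28.0      # standard chaotic parameters

def lorenz(t, u):
    x, y, z = u
    return [sigma * (y - x), x * (r - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 500.0), [1.0, 1.0, 1.0], max_step=0.01)
z = sol.y[2, sol.t > 50.0]                  # discard the transient
print(np.mean(z ** 3), "<=", (r - 1) ** 3)  # time average vs. the sharp bound
```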

  7. 21 CFR 1303.22 - Procedure for applying for individual manufacturing quotas.

    Science.gov (United States)

    2010-04-01

    ... manufacturing quotas. Any person who is registered to manufacture any basic class of controlled substance listed in Schedule I or II and who desires to manufacture a quantity of such class shall apply on DEA Form 189 for a manufacturing quota for such quantity of such class. Copies of DEA Form 189 may be obtained...

  8. Radiation exposures to technologists from nuclear medicine imaging procedures

    International Nuclear Information System (INIS)

    Sloboda, R.S.; Schmid, M.G.; Willis, C.P.

    1986-05-01

    Radiation exposures incurred by nuclear medicine technologists during diagnostic imaging and gamma camera quality control (QC) were measured on a procedural basis over a three-month period using a portable, low-range, self-reading ion chamber. A total of more than 400 measurements were made for 15 selected procedures. From these, mean procedural exposures and standard deviations were calculated. The results show that daily flood phantom QC, at 0.58 mR, and gated cardiac studies, at 0.45 mR, were the two greatest sources of exposure. Other procedures resulted in exposures varying roughly from 0.10 to 0.20 mR. Difficult patients were responsible for a doubling of technologist exposure for many procedures. Standard deviations were large for all procedures, averaging 65% of the mean values. Comparison of technologist exposure inferred from the procedural measurements with the time coincident collective dose equivalent recorded by the TLD service of the Radiation Protection Bureau indicates that approximately half of the collective technologist exposure arose from patient handling and flood QC

  9. Safety analysis procedures for PHWR

    International Nuclear Information System (INIS)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements, such as the public dose, the integrity of fuel and fuel channels, the integrity of containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for safety analyses of CANDU reactors in Korea. In this regard, the development of safety analysis procedures for CANDU reactors is being conducted not only to establish the safety analysis system, but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures for deterministic safety analyses were developed. The general safety procedures cover the specification of the initial event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general safety analysis procedures are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, 4

  10. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  11. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure.

  12. On various validity criteria for the configuration average in collisional-radiative codes

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M [Commissariat a l' Energie Atomique, Service ' Photons, Atomes et Molecules' , Centre d' Etudes de Saclay, F91191 Gif-sur-Yvette Cedex (France)

    2008-01-28

    The characterization of out-of-local-thermal-equilibrium plasmas requires the use of collisional-radiative kinetic equations. This leads to the solution of large linear systems, for which statistical treatments such as the configuration average may bring considerable simplification. In order to check the validity of this procedure, a criterion based on the comparison between a partial-rate system and the Saha-Boltzmann solution is discussed in detail here. Several forms of this criterion are discussed. The interest of these variants is that they each involve one type of relevant transition (collisional or radiative), which allows one to check separately the influence of each of these processes on the validity of the configuration average. The method is illustrated by a charge-distribution analysis in carbon and neon plasmas. Finally, it is demonstrated that when the energy dispersion of every populated configuration is smaller than the electron thermal energy, the proposed criterion is fulfilled in each of its forms.

  13. Evaluation of the radiation levels in different surgical procedures

    International Nuclear Information System (INIS)

    Silva Santos, M.C.P. da; Almeida, A. de; Belluci, A.D.; Ghilardi Neto, T.

    1989-01-01

    The exposure of a medical team to radiation during surgical procedures evidently depends on the duration and difficulties of the particular procedure. We have made a dosimetric study of the average equivalent doses received by the surgical teams during 10 different surgical procedures over a period of one month. Our results demonstrate that the levels of ionizing radiation received by the members of the teams typically do not surpass the stipulated limits for the following surgical procedures: nephrolithotomy, percutaneous ureterolithotomy, ureteroscopy combined with removal of stones in the inferior third of the ureter, osteosynthesis of the tibial plateau together with an articular toilet, and minireconstruction of the knee. For compound surgical procedures, for example, osteosynthesis of a femur fracture together with biopsy of an exposed femur or percutaneous nephrolithotomy (complex cases), the dose limits have been surpassed owing to the additional fluoroscopy needed in those procedures. (author) [pt

  14. Comparison of mass transport using average and transient rainfall boundary conditions

    International Nuclear Information System (INIS)

    Duguid, J.O.; Reeves, M.

    1976-01-01

    A general two-dimensional model for simulation of saturated-unsaturated transport of radionuclides in ground water has been developed and is currently being tested. The model is being applied to study the transport of radionuclides from a waste-disposal site where field investigations are currently under way to obtain the necessary model parameters. A comparison of the amount of tritium transported is made using both average and transient rainfall boundary conditions. The simulations indicate that there is no substantial difference in the transport for the two conditions tested. However, the values of dispersivity used in the unsaturated zone caused more transport above the water table than has been observed under actual conditions. This deficiency should be corrected and further comparisons should be made before average rainfall boundary conditions are used for long-term transport simulations

  15. Online Self-Organizing Network Control with Time Averaged Weighted Throughput Objective

    Directory of Open Access Journals (Sweden)

    Zhicong Zhang

    2018-01-01

    Full Text Available We study an online multisource multisink queueing network control problem characterized by a self-organizing network structure and self-organizing job routing. We decompose the self-organizing queueing network control problem into a series of interrelated Markov Decision Processes and construct a control decision model for them based on the coupled reinforcement learning (RL) architecture. To maximize the mean time-averaged weighted throughput of the jobs through the network, we propose a reinforcement learning algorithm with time-averaged reward to deal with the control decision model and obtain a control policy integrating the job routing selection strategy and the job sequencing strategy. Computational experiments verify the learning ability and the effectiveness of the proposed reinforcement learning algorithm applied to the investigated self-organizing network control problem.
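
    Average-reward RL of this flavor is commonly implemented with an R-learning-style update, which tracks the average reward ρ alongside the action values. The sketch below is a generic tabular version (Schwartz-style), not the authors' coupled algorithm; the toy MDP at the end is invented for illustration.

```python
import numpy as np

def r_learning(P, R, steps=200000, alpha=0.1, beta=0.01, eps=0.1, seed=0):
    """Tabular average-reward RL (Schwartz-style R-learning sketch):
    Q(s,a) += alpha * (r - rho + max Q(s') - Q(s,a)); rho is nudged
    toward r + max Q(s') - max Q(s) on greedy (non-exploratory) steps."""
    rng = np.random.default_rng(seed)
    nS, nA = R.shape
    Q, rho, s = np.zeros((nS, nA)), 0.0, 0
    for _ in range(steps):
        greedy = rng.random() >= eps
        a = int(Q[s].argmax()) if greedy else int(rng.integers(nA))
        s2 = int(rng.choice(nS, p=P[s, a]))
        r, q_s_max = R[s, a], Q[s].max()
        Q[s, a] += alpha * (r - rho + Q[s2].max() - Q[s, a])
        if greedy:
            rho += beta * (r + Q[s2].max() - q_s_max - rho)
        s = s2
    return Q, rho  # rho estimates the achievable average reward

# invented toy MDP: 2 states, 2 actions; P[s, a] is the next-state distribution
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
Q, rho = r_learning(P, R)
```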

  16. [Conversional and endoscopic procedures following bariatric surgery].

    Science.gov (United States)

    Zorron, R; Bothe, C; Junghans, T; Pratschke, J; Benzing, C; Krenzien, F

    2016-10-01

    The Roux-en-Y gastric bypass (RYGB) is the therapy of choice in bariatric surgery. Sleeve gastrectomy and gastric banding are showing higher rates of treatment failure, reducing obesity-associated morbidity and body weight insufficiently. Moreover, gastroesophageal reflux disease (GERD) can occur refractory to medication. Therefore, a laparoscopic conversion to RYGB can be reasonable as long as specific conditions are fulfilled.Endoscopic procedures are currently being applied to revise bariatric procedures. Therapy failure following RYGB occurs in up to 20 % of cases. Transoral outlet reduction is the minimally invasive method of choice to reduce gastrojejunal anastomosis of the alimentary limb. The diameter of a gastric sleeve can be unwantedly enlarged as well; that can be reduced by placement of a longitudinal full-thickness suture.Severe hypoglycemic episodes can be present in patients following RYGB. Hypoglycemic episodes have to be diagnosed first and can be treated conventionally. Alternatively, a laparoscopic approach according to Branco-Zorron can be used for non-responders. Hypoglycemic episodes can thus be prevented and body weight reduction can be assured.Conversional and endoscopic procedures can be used in patients with treatment failure following bariatric surgery. Note that non-invasive approaches should have been applied intensively before a revisional procedure is performed.

  17. Applied in vitro radio bioassay

    International Nuclear Information System (INIS)

    Gaburo, J.C.G.; Sordi, G.M.A.A.

    1992-11-01

    The aim of this publication is to present the concepts and in vitro bioassay techniques, as well as the experimental procedures, related to the evaluation of internal contamination. The main routes of intake, the metabolic behavior, and the possible types of bioassay samples that can be collected for radionuclide analysis are described. Both the biological processes and the chemical and physical behavior of the radioactive material of interest are considered, and the capabilities of analytical techniques to detect and quantify the radionuclides are discussed. Next, the need for quality assurance throughout the procedures is considered and, finally, a summary of the techniques applied to the routine internal monitoring of IPEN workers is given. (author)

  18. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
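
    The core DMA recursion is short: model probabilities are flattened by a forgetting factor α between observations and re-weighted by each model's predictive likelihood. A minimal sketch (without the dynamic Occam's window extension proposed in the paper) is:

```python
import numpy as np

def dma_weights(pred_liks, alpha=0.99):
    """Dynamic Model Averaging recursion: forgetting step, then Bayesian
    update by each model's one-step predictive likelihood.
    pred_liks: (T, K) array, predictive density of model k at time t."""
    T, K = pred_liks.shape
    w = np.full(K, 1.0 / K)          # initial model probabilities
    out = np.empty((T, K))
    for t in range(T):
        pred = w ** alpha            # forgetting flattens the distribution
        pred /= pred.sum()
        w = pred * pred_liks[t]      # update with observed likelihoods
        w /= w.sum()
        out[t] = w
    return out                       # model probabilities over time
```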

  19. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
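
    The moving-average smoothing MARD performs can be emulated in a few lines: bin the directions, then average each bin with its neighbours over the chosen aperture, wrapping around 360°. The sketch below is a uniform-weight (unweighted) version; the parameter names are ours, not MARD's.

```python
import numpy as np

def smoothed_rose(angles_deg, bin_width=10, aperture=30, weights=None):
    """Moving-average rose sketch: histogram the directions, then smooth
    each bin with its neighbours over `aperture` degrees, wrapping at 360."""
    edges = np.arange(0, 360 + bin_width, bin_width)
    counts, _ = np.histogram(np.mod(angles_deg, 360), bins=edges,
                             weights=weights)
    half = aperture // (2 * bin_width)          # neighbour bins per side
    if half == 0:                               # aperture narrower than a bin
        return counts.astype(float), edges
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    wrapped = np.concatenate([counts[-half:], counts, counts[:half]])
    return np.convolve(wrapped, kernel, mode="valid"), edges
```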

  20. Dosimetric evaluation of mammary tomosynthesis procedures

    International Nuclear Information System (INIS)

    Silva, Rayre Janaína Vieira; Perini, Ana Paula; Santos, William de Souza; Vedovato, Uly P.; Neves, Lucio Pereira

    2017-01-01

    This work presents the results of research on the evaluation of the radiation doses usually applied in mammary tomosynthesis procedures, using the Monte Carlo method. A virtual environment was created to mimic the procedure room, including the room, its components, the patient and the source. The spectrum was obtained from the literature. The percentage of energy deposited, relative to the energy deposited in the breast, was determined, and the scattered radiation absorbed in specific regions was evaluated. The regions of the head and neck were the most affected by scattered radiation. (author)

  1. Impression Procedures for Metal Frame Removable Partial Dentures as Applied by General Dental Practitioners.

    NARCIS (Netherlands)

    Fokkinga, W.A.; Uchelen, J. van; Witter, D.J.; Mulder, J.; Creugers, N.H.J.

    2016-01-01

    This pilot study analyzed impression procedures for conventional metal frame removable partial dentures (RPDs). Heads of RPD departments of three dental laboratories were asked to record features of all incoming impressions for RPDs during a 2-month period. Records included: (1) impression

  2. Single-stage Modified Duhamel procedure for Hirschsprung's disease: Our experience

    Directory of Open Access Journals (Sweden)

    Paras R Kothari

    2012-01-01

    Full Text Available Introduction: Primary single-stage pull-through for Hirschsprung's disease (HD) has been reported to give surgical outcomes comparable to staged operations, with less morbidity. Herein, we present our experience with the single-stage Modified Duhamel procedure for the management of HD. Patients and Methods: This was a review of 48 cases of HD who underwent the single-stage Modified Duhamel procedure without a protective colostomy. Results: The age at surgery ranged from 6 months to 10 years (median: 9 months, mean: 2.3 years). The average weight of the child was 7.2 kg (range: 4.9-22 kg). 38 (79.2%) patients had classical rectosigmoid HD, the rest having long-segment HD (the proximal-most level being the splenic flexure). The average duration of surgery was 175 minutes (range: 130-245 minutes). The average blood loss was 45 ml. The average hospital stay was 7.2 days (range: 6-10 days). The major postoperative complications (n=3) included postoperative adhesive intestinal obstruction, anastomotic leak and persistent constipation due to residual aganglionosis; each required a re-exploration. Minor complications included surgical site infection (n=3) and postoperative enterocolitis (n=3), which were managed conservatively. Six patients had constipation for a limited period postoperatively. All patients have a satisfactory functional outcome and normal development and growth. Conclusions: For HD, we recommend the single-stage Modified Duhamel procedure as the preferred approach in view of its low morbidity, satisfactory functional outcome, avoidance of a stoma and multiple surgeries, and the economic benefit of a decreased hospital stay.

  3. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    Science.gov (United States)

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  4. Endovascular thrombectomy and post-procedural headache.

    Science.gov (United States)

    Khan, Sabrina; Amin, Faisal Mohammad; Holtmannspötter, Markus; Hansen, Klaus; Florescu, Anna Maria; Fakhril-Din, Zainab; Petersen, Julie Falkenberg; Ghanizada, Hashmat; Ayata, Cenk; Gaist, David; Ashina, Messoud

    2017-12-01

    We investigated the prevalence of post-procedural headache in patients who have undergone thrombectomy for ischemic stroke, and correlated history of migraine with risk of peri-procedural complications. A total of 314 patients underwent thrombectomy at the Danish National Hospital from January 2012 to December 2014. Eligible subjects were phone-interviewed using a purpose-developed semi-structured questionnaire according to the International Classification of Headache Disorders 3, beta version criteria. Among 96 eligible subjects, there was a significant decrease in migraine (p = 0.022) within the first 3 months after EVT compared to 1 year before treatment, which was further evident at interview time (on average 1.6 years after EVT, p = 0.013). A minority of patients experienced headaches for the first time within 3 months of their EVT (migraine 2, TTH 9), which persisted at interview time for subjects with migraine. Out of 12 subjects with peri-procedural complications, 2 had a history of migraine with aura. Thrombectomy leads to a significant decrease in previously known migraine, and new onset of headache in a small subset of patients. A history of migraine does not appear to predispose to peri-procedural complications.

  5. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. To propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
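
    The basic BOOT idea (before the double-bootstrap extension) can be sketched generically: on each resample, select the best model by AIC and record its effect estimate, then average across resamples. The snippet uses iid resampling for brevity, whereas a time-series application would resample blocks; `fit` is a user-supplied hypothetical callable.

```python
import numpy as np

def boot_model_average(X_list, y, fit, n_boot=500, seed=0):
    """BOOT-style sketch: per bootstrap resample, keep the effect estimate
    of the AIC-best candidate model; average the kept estimates.
    fit(X, y) -> (aic, effect) is a user-supplied (hypothetical) callable."""
    rng = np.random.default_rng(seed)
    n, estimates = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(n, size=n)          # iid resampling for brevity
        aics, effects = zip(*(fit(X[idx], y[idx]) for X in X_list))
        estimates.append(effects[int(np.argmin(aics))])
    return float(np.mean(estimates))
```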

  6. A benchmarking procedure for PIGE related differential cross-sections

    Science.gov (United States)

    Axiotis, M.; Lagoyannis, A.; Fazinić, S.; Harissopulos, S.; Kokkoris, M.; Preketes-Sigalas, K.; Provatas, G.

    2018-05-01

    The application of standard-less PIGE requires the a priori knowledge of the differential cross section of the reaction used for the quantification of each detected light element. Towards this end, numerous datasets have been published in recent years by several laboratories around the world. The discrepancies often found between different measured cross sections can be resolved by applying a rigorous benchmarking procedure through the measurement of thick target yields. Such a procedure is proposed in the present paper and is applied in the case of the 19F(p,p′γ)19F reaction.
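
    The benchmark quantity behind such a procedure is the thick-target yield: for a beam of energy E0 slowing down to rest in the sample, Y(E0) = ∫ σ(E)/ε(E) dE, with ε(E) the stopping cross section. A toy numerical sketch follows; the σ(E) and ε(E) shapes are invented placeholders in arbitrary consistent units, not evaluated 19F(p,p′γ)19F data.

        # Thick-target yield Y(E0) = integral of sigma(E)/eps(E) from 0 to E0;
        # a measured thick-target yield can then be compared against this
        # prediction to benchmark the differential cross-section data.
        import numpy as np

        E = np.linspace(0.5, 3.0, 501)                        # proton energy (MeV)
        sigma = 1.0 + 5.0 * np.exp(-((E - 1.4) / 0.05) ** 2)  # toy resonance on a flat background
        eps = 40.0 / np.sqrt(E)                               # toy stopping cross section

        Y = np.trapz(sigma / eps, E)                          # yield at E0 = 3 MeV (arb. units)
        print("predicted thick-target yield:", Y)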

  7. Statistical Techniques Used in Three Applied Linguistics Journals: "Language Learning," "Applied Linguistics" and "TESOL Quarterly," 1980-1986: Implications for Readers and Researchers.

    Science.gov (United States)

    Teleni, Vicki; Baldauf, Richard B., Jr.

    A study investigated the statistical techniques used by applied linguists and reported in three journals, "Language Learning," "Applied Linguistics," and "TESOL Quarterly," between 1980 and 1986. It was found that 47% of the published articles used statistical procedures. In these articles, 63% of the techniques used could be called basic, 28%…

  8. 10 CFR 708.8 - Does this part apply to pending cases?

    Science.gov (United States)

    2010-01-01

    The procedures in this part apply prospectively in any complaint proceeding pending on the effective date of this part.

  9. Comparison of different spatial transformations applied to EEG data: A case study of error processing.

    Science.gov (United States)

    Cohen, Michael X

    2015-09-01

    The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations.
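
    Two of the compared transformations are one-liners on a channels-by-samples array; the sketch below shows an average reference and a linked-earlobes-style reference on synthetic data (the earlobe channel indices are assumptions for illustration).

        # Re-referencing sketch on a hypothetical (channels x samples) array.
        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(size=(64, 1000))     # 64 channels, 1000 samples

        # Average reference: subtract the instantaneous mean across channels.
        avg_ref = data - data.mean(axis=0, keepdims=True)

        # Linked earlobes: subtract the mean of the two earlobe channels
        # (indices 0 and 1 assumed here for illustration).
        a1, a2 = 0, 1
        linked = data - (data[a1] + data[a2]) / 2

        print(avg_ref.shape, linked.shape)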

  10. Reductions in Average Lengths of Stays for Surgical Procedures Between the 2008 and 2014 United States National Inpatient Samples Were Not Associated With Greater Incidences of Use of Postacute Care Facilities.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H

    2018-03-01

    Diagnosis-related group (DRG) based reimbursement creates incentives for reduction in hospital length of stay (LOS). Such reductions might be accomplished by lesser incidences of discharges to home. However, we previously reported that, while controlling for DRG, each 1-day decrease in hospital median LOS was associated with lesser odds of transfer to a postacute care facility (P = .0008). The result, though, was limited to elective admissions, 15 common surgical DRGs, and the 2013 US National Readmission Database. We studied the same potential relationship between decreased LOS and postacute care using different methodology and over 2 different years. The observational study was performed using summary measures from the 2008 and 2014 US National Inpatient Sample, with 3 types of categories (strata): (1) Clinical Classifications Software's classes of procedures (CCS), (2) DRGs including a major operating room procedure during hospitalization, or (3) CCS limiting patients to those with US Medicare as the primary payer. Greater reductions in the mean LOS were associated with smaller percentages of patients with disposition to postacute care. Analyzed using 72 different CCSs, 174 DRGs, or 70 CCSs limited to Medicare patients, each pairwise reduction in the mean LOS by 1 day was associated with an estimated 2.6% ± 0.4%, 2.3% ± 0.3%, or 2.4% ± 0.3% (absolute) pairwise reduction in the mean incidence of use of postacute care, respectively. These 3 results obtained using bivariate weighted least squares linear regression were all P < .0001, as were the corresponding results obtained using unweighted linear regression or the Spearman rank correlation. In the United States, reductions in hospital LOS, averaged over many surgical procedures, are not accomplished through a greater incidence of use of postacute care.
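
    The reported bivariate weighted least squares step might look as follows on synthetic stratum-level data; weighting each stratum by its patient count is an assumption for illustration.

        # Weighted least squares sketch: regress the change in postacute-care
        # use on the change in mean LOS across strata. All numbers are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        delta_los = rng.normal(-0.5, 0.8, 72)                  # change in mean LOS per stratum (days)
        delta_pac = 2.5 * delta_los + rng.normal(0, 1.5, 72)   # change in % using postacute care
        weights = rng.integers(100, 10000, 72)                 # stratum patient counts (assumed weights)

        model = sm.WLS(delta_pac, sm.add_constant(delta_los), weights=weights).fit()
        print(model.params)   # slope: percentage-point change per 1-day LOS change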

  11. Average System Cost Methodology : Administrator's Record of Decision.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  12. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
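
    The core bias is Jensen's inequality: exponentiating the mean of the log-abundances (which is what averaging logarithmic retrievals produces) gives the geometric mean, which lies below the arithmetic mean, increasingly so as variability grows. A toy demonstration:

        # Linear vs. logarithmic averaging of mixing ratios (synthetic values).
        import numpy as np

        rng = np.random.default_rng(3)
        vmr = rng.lognormal(mean=np.log(300e-9), sigma=0.5, size=10000)  # mixing ratios

        linear_mean = vmr.mean()
        log_mean = np.exp(np.log(vmr).mean())   # what log-averaging yields

        print(f"linear average: {linear_mean:.3e}")
        print(f"log average:    {log_mean:.3e}")  # lower; gap grows with variability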

  13. The geometrically averaged density of states calculated from the local Green's function as a measure of localization

    International Nuclear Information System (INIS)

    Wortis, R.; Song Yun; Atkinson, W.A.

    2008-01-01

    With the goal of measuring localization in disordered interacting systems, we examine the finite-size scaling of the geometrically averaged density of states calculated from the local Green's function with finite energy resolution. Our results show that, unlike in a simple energy binning procedure, there is no limit in which the finite energy resolution is irrelevant.
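
    The quantity in question is the geometric mean of the local density of states over sites, with a Lorentzian broadening playing the role of the finite energy resolution. A toy sketch with random level positions (not an interacting model):

        # Geometrically averaged ("typical") DOS vs. arithmetic average.
        # The geometric mean suppresses rare large local values and is
        # therefore used as a localization measure.
        import numpy as np

        rng = np.random.default_rng(4)
        levels = rng.uniform(-2, 2, size=(500, 10))   # toy level positions per site
        eta = 0.05                                    # finite energy resolution

        def local_dos(E, site_levels):
            return (eta / np.pi) / ((E - site_levels) ** 2 + eta**2)

        E = 0.0
        rho_i = np.array([local_dos(E, lv).sum() for lv in levels])
        rho_arith = rho_i.mean()
        rho_geom = np.exp(np.log(rho_i).mean())
        print(rho_arith, rho_geom)   # geometric << arithmetic when localized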

  14. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2008-01-01

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε (∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical value of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε (∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  15. Methods and procedures to apply Probabilistic Safety Assessment (PSA) techniques to the cobalt-therapy process: Cuban experience

    International Nuclear Information System (INIS)

    Vilaragut Llanes, J.J.; Ferro Fernandez, R.; Lozano Lima, B; De la Fuente Puch, A.; Dumenigo Gonzalez, C.; Troncoso Fleitas, M.; Perez Reyes, Y.

    2003-01-01

    This paper presents the results of the Probabilistic Safety Analysis (PSA) of the cobalt therapy process, which was performed as part of the International Atomic Energy Agency's Coordinated Research Project (CRP) to Investigate Appropriate Methods and Procedures to Apply Probabilistic Safety Assessment (PSA) Techniques to Large Radiation Sources. The primary methodological tools used in the analysis were Failure Modes and Effects Analysis (FMEA), Event Trees and Fault Trees. These tools were used to evaluate occupational, public and medical exposures during cobalt therapy treatment. The emphasis of the study was on the radiological protection of patients. During the course of the PSA, several findings were analysed concerning the cobalt treatment process. Regarding the undesired event probabilities, the lowest exposure probabilities correspond to public exposures during the treatment process (Z21), at around 10⁻¹⁰ per year, while worker exposures (Z11) are around 10⁻⁴ per year. Regarding the patient, the Z33 (undesired dose to normal tissue) and Z34 (unirradiated portion of the target volume) probabilities prevail. Patient accidental exposures are also classified in terms of the extent to which the error is likely to affect individual treatments, individual patients, or all the patients treated on a specific unit. Sensitivity analyses were performed to determine the influence of certain tasks or critical stages on the results. As a conclusion, the study establishes that PSA techniques may effectively and reasonably determine the risk associated with the cobalt-therapy treatment process, though there are some weaknesses in its methodological application for this kind of study requiring further research. These weaknesses are due to the fact that traditional PSA has been mainly applied to complex hardware systems designed to operate with a high automation level, whilst the cobalt therapy treatment is a relatively simple hardware system with a

  16. On the Averaging of Cardiac Diffusion Tensor MRI Data: The Effect of Distance Function Selection

    Science.gov (United States)

    Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.

    2016-01-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) metrics were judged by quantitative, rather than qualitative, criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the “swelling effect” occurrence following Euclidean averaging was found to be too unimportant to be worth consideration. PMID:27754986
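
    Two of the three compared means are easy to sketch for synthetic symmetric positive-definite tensors: the Euclidean (arithmetic) mean and the log-Euclidean mean, i.e., the matrix exponential of the averaged matrix logarithms. The determinant comparison hints at the "swelling effect" mentioned above. The affine-invariant Riemannian mean requires an iterative solver and is omitted here.

        # Euclidean vs. log-Euclidean averaging of SPD diffusion tensors.
        import numpy as np
        from scipy.linalg import logm, expm

        def random_spd(rng):
            A = rng.normal(size=(3, 3))
            return A @ A.T + 3 * np.eye(3)      # well-conditioned SPD tensor

        rng = np.random.default_rng(5)
        tensors = [random_spd(rng) for _ in range(3)]

        euclid = sum(tensors) / len(tensors)
        log_euclid = expm(sum(logm(D) for D in tensors) / len(tensors))

        print("det Euclidean mean:    ", np.linalg.det(euclid))
        print("det log-Euclidean mean:", np.linalg.det(log_euclid))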

  19. Entrance surface dose in cerebral interventional radiology procedures

    International Nuclear Information System (INIS)

    Barrera-Rico, M.; López-Rendón, X.; Rivera-Ordóñez, C. E.; Gamboa-deBuen, I.

    2012-01-01

    At the Instituto Nacional de Neurología y Neurocirugía (INNN), diagnostic as well as therapeutic procedures of interventional radiology are carried out. Since the procedures can last from a few minutes to several hours, the absorbed dose to the patient can increase dangerously. An investigation was undertaken to determine the entrance surface dose (ESD) using 25 thermoluminescent dosimeters (TLD-100) and 8 strips of 15 × 1 cm² of Gafchromic XR-QA2 film bound in a holder of 15 × 15 cm², placed in the posteroanterior (PA) and lateral (LAT) positions during the whole procedure. The results show that the maximum ESD could range from 0.9 to 2.9 Gy for the PA position and between 1.6 and 2.5 Gy for the lateral position. The average ESD in a therapeutic procedure was between 0.7 and 1.3 Gy for the PA position, and from 0.44 to 1.1 Gy for the lateral position.

  20. Application of Real-Time 3D Navigation System in CT-Guided Percutaneous Interventional Procedures: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    Priya Bhattacharji

    2017-01-01

    Introduction. To evaluate the accuracy of a quantitative 3D navigation system for CT-guided interventional procedures in a two-part study. Materials and Methods. Twenty-two procedures were performed in abdominal and thoracic phantoms. Accuracies of the 3D anatomy map registration and navigation were evaluated. Time used for the navigated procedures was recorded. In the IRB-approved clinical evaluation, 21 patients scheduled for CT-guided thoracic and hepatic biopsy and ablations were recruited. CT-guided procedures were performed without following the 3D navigation display. Accuracy of navigation as well as workflow fitness of the system was evaluated. Results. In phantoms, the average 3D anatomy map registration error was 1.79 mm. The average navigated needle placement accuracy for one-pass and two-pass procedures, respectively, was 2.0 ± 0.7 mm and 2.8 ± 1.1 mm in the liver and 2.7 ± 1.7 mm and 3.0 ± 1.4 mm in the lung. The average accuracy of the 3D navigation system in human subjects was 4.6 ± 3.1 mm for all procedures. The system fits the existing workflow of CT-guided interventions with minimum impact. Conclusion. A 3D navigation system can be operated within the existing workflow and has the potential to guide precise needle placement in CT-guided interventional procedures.

  1. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
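
    The barycenter-style average the paper critiques can be sketched directly: sign-align the quaternions (q and -q encode the same rotation), average arithmetically, and renormalize. This approximates the Riemannian mean only for tightly clustered rotations.

        # Quaternion barycenter average (an approximation, not the Riemannian mean).
        import numpy as np

        def average_quaternions(quats):
            quats = np.asarray(quats, dtype=float)
            # align signs with the first quaternion before averaging
            signs = np.sign(quats @ quats[0])
            signs[signs == 0] = 1.0
            mean = (quats * signs[:, None]).mean(axis=0)
            return mean / np.linalg.norm(mean)

        # three unit quaternions near the identity, ordered (w, x, y, z)
        qs = [[0.999, 0.02, 0.01, 0.0], [0.998, -0.03, 0.02, 0.0], [0.997, 0.01, -0.04, 0.0]]
        qs = [np.array(q) / np.linalg.norm(q) for q in qs]
        print(average_quaternions(qs))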

  2. Operating procedures for the Pajarito Site Critical Assembly Facility

    International Nuclear Information System (INIS)

    Malenfant, R.E.

    1983-03-01

    Operating procedures consistent with DOE Order 5480.2, Chapter VI, and the American National Standard Safety Guide for the Performance of Critical Experiments are defined for the Pajarito Site Critical Assembly Facility of the Los Alamos National Laboratory. These operating procedures supersede and update those previously published in 1973 and apply to any criticality experiment performed at the facility.

  3. A new tool for the evaluation of the analytical procedure: Green Analytical Procedure Index.

    Science.gov (United States)

    Płotka-Wasylka, J

    2018-05-01

    A new means for assessing analytical protocols relating to green analytical chemistry attributes has been developed. The new tool, called GAPI (Green Analytical Procedure Index), evaluates the green character of an entire analytical methodology, from sample collection to final determination, and was created using such tools as the National Environmental Methods Index (NEMI) or the Analytical Eco-Scale to provide not only general but also qualitative information. In GAPI, a specific symbol with five pentagrams can be used to evaluate and quantify the environmental impact involved in each step of an analytical methodology, coloured from green through yellow to red to depict low, medium, and high impact, respectively. The proposed tool was used to evaluate analytical procedures applied in the determination of biogenic amines in wine samples, and in polycyclic aromatic hydrocarbon determination by EPA methods. The GAPI tool not only provides an immediately perceptible perspective to the user/reader but also offers exhaustive information on the evaluated procedures.

  4. Laparoscopic Whipple procedure: review of the literature.

    Science.gov (United States)

    Gagner, Michel; Palermo, Mariano

    2009-01-01

    Laparoscopic pancreatic surgery represents one of the most advanced applications of laparoscopic surgery currently in use. In the past, minimally invasive techniques were only used for diagnostic laparoscopy, staging of pancreatic cancer, and palliative procedures for unresectable pancreatic cancer. With new advances in technology and instrumentation, some sophisticated procedures are currently available, such as the Whipple procedure, one of the most sophisticated applications of minimally invasive surgery. A review of the literature shows that 146 laparoscopic Whipple procedures have been published worldwide since 1994. The authors analyzed blood loss, mean operating time, hospital stay, conversion rate, mean age, mortality rate, lymph nodes in the pathologic findings, follow-up, and complications. Mean age was 59.1 years; mean operating time was 439 min. The average blood loss in the reviewed literature was 143 mL; median hospital stay was 18 days; the conversion rate was 46%; the number of lymph nodes in the pathologic findings was 19; mortality related to the procedure was low (2 patients, 1.3%); and the complication rate was 16% (23/146 patients). Complications included 2 hemorrhages, 4 bowel obstructions, 1 stress ulcer, 1 case of delayed gastric emptying, 4 pneumonias, and 11 leaks. This review demonstrates that the laparoscopic Whipple procedure is not only feasible but also safe, with low mortality and an acceptable rate of complications.

  5. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  6. Efficient Reanalysis Procedures in Structural Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded

    This thesis examines efficient solution procedures for the structural analysis problem within topology optimization. The research is motivated by the observation that when the nested approach to structural optimization is applied, most of the computational effort is invested in repeated solutions ... on approximate reanalysis. For cases where memory limitations require the utilization of iterative equation solvers, we suggest efficient procedures based on alternative termination criteria for such solvers. These approaches are tested on two- and three-dimensional topology optimization problems including ...

  7. [Knapp procedure and modified Knapp procedure with Foster suture for the treatment of double elevator palsy].

    Science.gov (United States)

    Li, Y P; Zhang, W; Ding, J; Ma, H Z; Zhao, K X

    2017-12-11

    Objective: To investigate the efficacy of the Knapp procedure and a modified Knapp procedure with Foster suture in the treatment of double elevator palsy (DEP). Methods: Retrospective study. Twenty-two patients with congenital DEP underwent the Knapp procedure (n = 15) or the modified Knapp procedure (n = 7). The clinical data were retrospectively analyzed, including the preoperative and postoperative vertical deviation in the primary position, ocular motility, and binocular vision. Results: The average vertical deviation in the primary position was (34.7±8.6) prism diopters (PD) before surgery and (6.5±6.5) PD after surgery (t = 30.41, P = 0.00) in the group that underwent the Knapp procedure. The mean preoperative and postoperative deviations in the modified-procedure group were (38.6±14.6) PD and (5.7±9.3) PD, respectively (t = 15.33, P = 0.00). The mean corrected vertical deviation of the latter, (32.8±5.7) PD, was greater than that of the former, (28.1±3.6) PD (t = -2.39, P = 0.03). The mean improvement in upgaze in the modified group (2.6±0.5) was more marked than that in the Knapp group (1.9±0.6) (t = 2.41, P = 0.02). There was no significant difference in the surgical effect on downgaze between the two groups (U = 43.00, P = 0.54). Seven patients who had binocular vision with abnormal head posture (AHP) before surgery obtained binocular single vision in the primary and reading positions after the operations. AHP disappeared or was reduced to less than 5°. The surgical outcomes were satisfactory in 72.7% of patients, but the patients with ≥40 PD preoperative vertical deviation were under-corrected and needed further operations. Conclusions: The Knapp procedure and the modified Knapp procedure with Foster suture are efficient procedures for the treatment of DEP without restriction of the ipsilateral inferior rectus. Both procedures can markedly correct the vertical deviation and improve upgaze without remarkable limitation of downgaze, which is conducive to obtaining binocular single vision in

  8. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify the analysis, we did not consider any land use effects, ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates up to 9 times greater than using yearly averaged data. In all cases, an increase in the averaging period of the meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone, subject to upward flow and evaporation.
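
    A toy threshold model illustrates the mechanism: averaging hourly rainfall into daily means removes the intense bursts that drive deep infiltration, so a fixed-threshold "recharge" proxy drops sharply. The 2 mm/h threshold and the rainfall statistics below are arbitrary assumptions, and this is of course far simpler than a HYDRUS-1D simulation.

        # Effect of temporal averaging on a threshold infiltration proxy.
        import numpy as np

        rng = np.random.default_rng(6)
        hours = 24 * 365
        rain = np.where(rng.random(hours) < 0.02,
                        rng.exponential(6.0, hours), 0.0)   # mm/h, sparse bursts

        threshold = 2.0   # mm/h assumed lost to evaporation/runoff
        recharge_hourly = np.clip(rain - threshold, 0, None).sum()

        daily = rain.reshape(365, 24).mean(axis=1)          # daily-averaged intensity
        recharge_daily = np.clip(daily - threshold, 0, None).sum() * 24

        print(f"hourly forcing:  {recharge_hourly:.0f} mm/yr")
        print(f"daily averaging: {recharge_daily:.0f} mm/yr (much lower)")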

  9. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced for the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  10. 16 CFR 1510.4 - Test procedure.

    Science.gov (United States)

    2010-01-01

    ... plane surface. Under its own weight and in a non-compressed state apply any portion of the test sample ...

  11. 75 FR 32273 - Representation Election Procedure

    Science.gov (United States)

    2010-06-08

    AGENCY: National Mediation Board. ACTION: Final rule; delay of effective date. SUMMARY: The National Mediation Board (NMB) is delaying the effective date of its rule regarding ... participants under the Railway Labor Act (RLA) that the rule will apply to applications filed on or after June ...

  12. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer's datasheet. Since datasheets are often not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short circuit current, open circuit voltage and maximum power point). The procedure is validated by using manufacturers' data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the determination coefficient is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users in deriving similar models for other types of photovoltaic modules.
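
    For reference, this is the single-diode ("five-parameter") model the paper solves, evaluated here in the forward direction for an assumed parameter set; the inverse problem of extracting the five parameters from datasheet values is what the paper addresses, and all numbers below are illustrative.

        # Forward evaluation of I = Iph - I0*(exp((V+I*Rs)/(n*Ns*Vt)) - 1) - (V+I*Rs)/Rsh
        # for an assumed parameter set; each current is found by root-finding.
        import numpy as np
        from scipy.optimize import brentq

        Iph, I0, Rs, Rsh, n = 8.2, 1e-9, 0.3, 250.0, 1.2   # assumed parameters
        Ns, Vt = 60, 0.02585                                # cells in series, thermal voltage
        nVt = n * Ns * Vt

        def current(V):
            f = lambda I: Iph - I0 * np.expm1((V + I * Rs) / nVt) - (V + I * Rs) / Rsh - I
            return brentq(f, -1.0, Iph + 1.0)               # bracket contains the root

        V = np.linspace(0, 38, 50)
        I = np.array([current(v) for v in V])
        P = V * I
        print(f"Isc ~ {current(0):.2f} A, Pmax ~ {P.max():.1f} W at {V[P.argmax()]:.1f} V")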

  13. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, and displays plant operating information along with the procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error would lead to operator misjudgement and inadequate control. In this paper we present a modeling method for the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri Nets when both are applied to an Emergency Operating Computerized Procedure. A program for converting a Computerized Procedure (CP) to STPN has also been developed. The formal verification and validation methods of CP with STPN increase the safety of a nuclear power plant and provide the digital quality assurance means that are needed as the role and function of the CPS increase.

  14. Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2012-06-01

    Batik is a fabric or garment made with a special dyeing technique called wax-resist dyeing, and is a piece of cultural heritage of high artistic value. In order to improve efficiency and give better semantics to the images, some researchers apply clustering algorithms to manage images before they can be retrieved. Image clustering is a process of grouping images based on their similarity. In this paper we attempt to provide an alternative method of grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. The FCM clustering algorithm works with fuzzy models that assign each data point a degree of membership between 0 and 1 in every cluster. Log-average luminance (LAL) is the average value of the lighting in an image; it allows the lighting of different images to be compared. From the experiments performed, it can be concluded that the fuzzy c-means algorithm can be used for batik image clustering based on the log-average luminance of each image.
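
    The pipeline can be sketched end to end on synthetic data: reduce each image to its log-average luminance, LAL = exp(mean(log(δ + L))), then cluster the one-dimensional features with a minimal fuzzy c-means loop (fuzzifier m = 2). Real batik images are replaced by random luminance arrays here.

        # Log-average luminance features clustered with a minimal FCM loop.
        import numpy as np

        rng = np.random.default_rng(7)
        images = [rng.uniform(0, 1, (64, 64)) * b for b in (0.2, 0.25, 0.8, 0.9, 0.5)]

        def log_average_luminance(L, delta=1e-6):
            return np.exp(np.mean(np.log(delta + L)))

        x = np.array([log_average_luminance(im) for im in images])  # 1-D features

        def fcm_1d(x, c=2, m=2.0, iters=100):
            centers = np.linspace(x.min(), x.max(), c)
            for _ in range(iters):
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u = 1.0 / d ** (2 / (m - 1))
                u /= u.sum(axis=1, keepdims=True)            # memberships in [0, 1]
                centers = (u**m * x[:, None]).sum(0) / (u**m).sum(0)
            return u, centers

        u, centers = fcm_1d(x)
        print("cluster centers:", centers)
        print("memberships:\n", u.round(2))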

  15. The average inter-crossing number of equilateral random walks and polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Stasiak, A

    2005-01-01

    In this paper, we study the average inter-crossing number between two random walks and two random polygons in the three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in terms of n and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation result shows that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well.
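
    Generating the equilateral random walks studied here is straightforward: each step is an independent unit vector drawn uniformly on the sphere. (Closing such a walk into an equilateral polygon requires extra machinery and is not attempted here.)

        # Equilateral random walk: unit steps uniform on the unit sphere.
        import numpy as np

        def equilateral_walk(n, rng):
            z = rng.uniform(-1, 1, n)                 # uniform z gives uniform sphere
            phi = rng.uniform(0, 2 * np.pi, n)
            r = np.sqrt(1 - z**2)
            steps = np.column_stack((r * np.cos(phi), r * np.sin(phi), z))
            return np.vstack(([0, 0, 0], np.cumsum(steps, axis=0)))

        rng = np.random.default_rng(8)
        walk = equilateral_walk(100, rng)
        print(np.allclose(np.linalg.norm(np.diff(walk, axis=0), axis=1), 1.0))  # unit steps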

  16. Optimization of procedure for calibration with radiometer/photometer

    International Nuclear Information System (INIS)

    Detilly, Isabelle

    2009-01-01

    A test procedure for the calibration of International Light radiometers/photometers at the Laboratorio de Fotometria y Tecnologia Laser (LAFTA) of the Escuela de Ingenieria Electrica of the Universidad de Costa Rica is established. Two photometric benches were used as the experimental setup, and two calibrations of the International Light instrument were performed. A basic procedure established in the laboratory is used for calibration from measurements of illuminance and luminous intensity. The results showed some variations dependent on the photometric benches used in the calibration process, the programming of the radiometer/photometer, and the applied methodology. The calibration procedure can be improved by optimizing the programming of the measurement instrument, and possible errors can be minimized by following the recommended procedure. (author)

  17. A Primer on Disseminating Applied Quantitative Research

    Science.gov (United States)

    Bell, Bethany A.; DiStefano, Christine; Morgan, Grant B.

    2010-01-01

    Transparency and replication are essential features of scientific inquiry, yet scientific communications of applied quantitative research are often lacking in much-needed procedural information. In an effort to promote researchers' dissemination of their quantitative studies in a cohesive, detailed, and informative manner, the authors delineate…

  18. Automatic segmentation of rotational x-ray images for anatomic intra-procedural surface generation in atrial fibrillation ablation procedures.

    Science.gov (United States)

    Manzke, Robert; Meyer, Carsten; Ecabert, Olivier; Peters, Jochen; Noordhoek, Niels J; Thiagalingam, Aravinda; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2010-02-01

    Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique for intra-procedural surface generation which integrates fully-automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisitions in combination with filtered-back-projection-based reconstruction allow for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and reduced signal-to-noise. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data. Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data and 3

  19. The concept of the average stress in the fracture process zone for the search of the crack path

    Directory of Open Access Journals (Sweden)

    Yu.G. Matvienko

    2015-10-01

    The concept of the average stress has been employed to propose the maximum average tangential stress (MATS) criterion for predicting the direction of the fracture angle. This criterion states that a crack grows when the maximum average tangential stress in the fracture process zone ahead of the crack tip reaches its critical value, and that the crack growth direction coincides with the direction of the maximum average tangential stress along a constant radius around the crack tip. The tangential stress is described by the singular and nonsingular (T-stress) terms in the Williams series solution. To demonstrate the validity of the proposed MATS criterion, this criterion is directly applied to experiments reported in the literature for the mixed mode I/II crack growth behavior of Guiting limestone. The predicted directions of fracture angle are consistent with the experimental data. The concept of the average stress has also been employed to predict the surface crack path under rolling-sliding contact loading. The proposed model considers the size and orientation of the initial crack, normal and tangential loading due to rolling–sliding contact as well as the influence of fluid trapped inside the crack by a hydraulic pressure mechanism. The MATS criterion is directly applied to an equivalent contact model for surface crack growth on a gear tooth flank.
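
    A numerical sketch of the criterion: average the Williams-expansion tangential stress over a process zone 0 < r ≤ d (the singular 1/√r term averages analytically to 2/√d) and take the angle that maximizes it. The values of K_I, K_II, T and d below are invented for illustration, not data from the paper, and the stress expression used is the standard two-term Williams form.

        # MATS sketch: maximize the process-zone-averaged tangential stress
        #   sigma_tt = cos(t/2)*(K1*cos^2(t/2) - 1.5*K2*sin(t))/sqrt(2*pi*r) + T*sin^2(t)
        import numpy as np

        K1, K2, T, d = 1.0, 0.5, -0.2, 0.02   # MPa*sqrt(m), MPa, m (assumed)

        def avg_tangential_stress(theta):
            # (1/d) * integral_0^d r**-0.5 dr = 2/sqrt(d)
            singular = (2 / np.sqrt(d)) / np.sqrt(2 * np.pi) * np.cos(theta / 2) * (
                K1 * np.cos(theta / 2) ** 2 - 1.5 * K2 * np.sin(theta))
            return singular + T * np.sin(theta) ** 2

        thetas = np.linspace(-0.99 * np.pi, 0.99 * np.pi, 2001)
        fracture_angle = thetas[np.argmax(avg_tangential_stress(thetas))]
        print(f"predicted fracture angle: {np.degrees(fracture_angle):.1f} deg")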

  20. New procedure for departure formalities

    CERN Multimedia

    HR & GS Departments

    2011-01-01

    As part of the process of simplifying procedures and rationalising administrative processes, the HR and GS Departments have introduced new personalised departure formalities on EDH. These new formalities have applied to students leaving CERN since last year and from 17 October 2011 this procedure will be extended to the following categories of CERN personnel: Staff members, Fellows and Associates. It is planned to extend this electronic procedure to the users in due course. What purpose do departure formalities serve? The departure formalities are designed to ensure that members of the personnel contact all the relevant services in order to return any necessary items (equipment, cards, keys, dosimeter, electronic equipment, books, etc.) and are aware of all the benefits to which they are entitled on termination of their contract. The new departure formalities on EDH have the advantage of tailoring the list of services that each member of the personnel must visit to suit his individual contractual and p...

  1. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    Science.gov (United States)

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m⁻² s⁻¹ estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR, and in determining wind speed using A-TOM, are systematically quantified. The maximum uncertainty in CO2 concentration, arising from environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at approximately 30 % for a single

  2. Interactive stereotaxic teleassistance of remote experts during arthroscopic procedures.

    Science.gov (United States)

    Wagner, Arne; Undt, Gerhard; Schicho, Kurt; Wanschitz, Felix; Watzinger, Franz; Murakami, Kenichiro; Czerny, Christian; Ewers, Rolf

    2002-01-01

    This article describes the technical setup for stereotaxic telesurgical assistance of arthroscopic procedures. It also outlines the current state, limitations, and feasibility of this technical development. Teleassistance or teleconsultation implemented in endoscopic or arthroscopic procedures has not yet been reported. In this study, 7 computer-assisted arthroscopies of the temporomandibular joint were supported by extramural experts via interactive stereotaxic teleconsultation from distant locations. The external experts were supplied with close to real-time video, audio, and stereotaxic navigation data directly from the operation site. This setup allows the surgeons and external experts to interactively determine portals, target structures, and instrument positions relative to the patient's anatomy and to discuss any step of the procedures. Optoelectronic tracking interfaced to computer-based navigation technology allowed precise positioning of instruments for single or multiple temporomandibular joint punctures. The average error of digitizing probe measurements was 1.3 mm (range, 0.0 to 2.5 mm) and the average standard deviation was 0.7 mm (range, 0.4 to 0.9 mm). Evaluation of the reliability and accuracy of this technique suggests that it is sufficient for controlled navigation, even inside the small temporomandibular joint, a fact that encourages further applications for arthroscopy in general. The minimum requirement for high-quality video transmission in teleassisted procedures is an integrated services digital network (ISDN) connection. Conventional ISDN-based videoconferencing can be combined with computer-aided intraoperative navigation. Transmission control protocol/internet protocol (TCP/IP)-based stereotaxic teleassistance data transmission via ATM or satellite seems to be a promising technique to considerably improve the field of arthroscopy.

  3. Size and power of pretest procedures

    NARCIS (Netherlands)

    Albers, Willem/Wim; Boon, P.C.; Kallenberg, W.C.M.

    2000-01-01

    A pre-test procedure consists of a preliminary test on a nuisance parameter, investigating whether it equals a given value or not, followed by the main testing problem on the parameter of interest. In case of acceptance of the preliminary test, the main test is applied in the restricted family with

  4. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
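
    One way to make the notion of a spacetime average precise (a hedged reconstruction; the paper's exact definition may differ in its choice of integration domain) is the volume-weighted average of a quantity $A$,

        \[
        \langle A \rangle \;=\; \frac{\int A \,\sqrt{-g}\;\mathrm{d}^4x}{\int \sqrt{-g}\;\mathrm{d}^4x},
        \]

    so that a big-rip, where the scale factor and hence the volume element diverge at the singularity, produces divergent averages of curvature quantities, while a big-bang, with vanishing scale factor, contributes only a measure-zero region and averages to zero.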

  5. Application of average adult Japanese voxel phantoms to evaluation of photon specific absorbed fractions

    International Nuclear Information System (INIS)

    Sato, Kaoru; Manabe, Kentaro; Endo, Akira

    2012-01-01

    Average adult Japanese male (JM-103) and female (JF-103) voxel (volume pixel) phantoms newly constructed at the Japan Atomic Energy Agency (JAEA) have the average characteristics of body size and organ mass of adult Japanese. In JM-103 and JF-103, several organs and tissues were newly modeled for dose assessments based on the tissue weighting factors of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). In this study, SAFs for the thyroid, stomach, lungs and lymphatic nodes of the JM-103 and JF-103 phantoms were calculated and compared with those of other adult Japanese phantoms based on individual medical images. In most cases, differences in SAFs between JM-103, JF-103 and the other phantoms were of several tens of percent, and were mainly attributed to mass differences of organs, tissues and contents. It was therefore concluded that the SAFs of JM-103 and JF-103 represent those of average adult Japanese, and that the two phantoms can be applied to dose assessment for average adult Japanese on the basis of the 2007 Recommendations. (author)

  6. Procedural learning is impaired in dyslexia: Evidence from a meta-analysis of serial reaction time studies

    Science.gov (United States)

    Lum, Jarrad A.G.; Ullman, Michael T.; Conti-Ramsden, Gina

    2013-01-01

    A number of studies have investigated procedural learning in dyslexia using serial reaction time (SRT) tasks. Overall, the results have been mixed, with evidence of both impaired and intact learning reported. We undertook a systematic search of studies that examined procedural learning using SRT tasks, and synthesized the data using meta-analysis. A total of 14 studies were identified, representing data from 314 individuals with dyslexia and 317 typically developing control participants. The results indicate that, on average, individuals with dyslexia have worse procedural learning abilities than controls, as indexed by sequence learning on the SRT task. The average weighted standardized mean difference (the effect size) was found to be 0.449 (CI95: .204, .693), and was significant (p < .001), consistent with a procedural learning deficit in dyslexia. PMID:23920029
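
    The pooling step behind such an effect size is inverse-variance weighting of the per-study standardized mean differences; a minimal fixed-effect sketch with invented numbers (not the 14 SRT studies):

        # Fixed-effect meta-analysis: inverse-variance weighted pooling.
        import numpy as np

        effect = np.array([0.30, 0.55, 0.20, 0.70, 0.45])   # per-study SMDs (toy)
        var = np.array([0.04, 0.09, 0.05, 0.12, 0.06])      # their variances (toy)

        w = 1.0 / var
        pooled = np.sum(w * effect) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        print(f"pooled SMD = {pooled:.3f}, 95% CI = "
              f"({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")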

  7. Orbit-averaged quantities, the classical Hellmann-Feynman theorem, and the magnetic flux enclosed by gyro-motion

    Energy Technology Data Exchange (ETDEWEB)

    Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M. [Applied Physics and Materials Science, California Institute of Technology, Pasadena, California 91125 (United States)

    2015-02-15

    Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection to an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral in regards to obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.
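
    The classical theorem in question can be stated compactly (standard textbook form; the paper rederives and then modifies it): for a periodic orbit of $H(p,q;\lambda)$ with action $I = \oint p\,\mathrm{d}q / 2\pi$ and energy $E(I,\lambda)$, the time average over one period obeys

        \[
        \left\langle \frac{\partial H}{\partial \lambda} \right\rangle
        \;=\; \left( \frac{\partial E}{\partial \lambda} \right)_{I},
        \]

    the derivative being taken at fixed action.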

  8. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing; Elbaz, Ayman M.; Roberts, William L.; Im, Hong G.

    2016-01-01

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed, and applied to the experimental data of the heavy fuel oil measured by thermogravimetry. The new procedure is based on the pseudocomponents representing

  9. Development of ISA procedure for uranium fuel fabrication and enrichment facilities: overview of ISA procedure and its application

    International Nuclear Information System (INIS)

    Yamate, Kazuki; Yamada, Takashi; Takanashi, Mitsuhiro; Sasaki, Noriaki

    2013-01-01

    Integrated Safety Analysis (ISA) procedure for uranium fuel fabrication and enrichment facilities has been developed for aiming at applying risk-informed regulation to these uranium facilities. The development has carried out referring to the ISA (NUREG-1520) by the Nuclear Regulatory Commission (NRC). The paper presents purpose, principles and activities for the development of the ISA procedure, including Risk Level (RL) matrix and grading evaluation method of IROFS (Items Relied on for Safety), as well as general description and features of the procedure. Also described in the paper is current status in application of risk information from the ISA. Japanese four licensees of the uranium facilities have been conducting ISA for their representative processes using the developed procedure as their voluntary safety activities. They have been accumulating experiences and knowledge on the ISA procedure and risk information through the field activities. NISA (Nuclear and Industrial Safety Agency) and JNES (Japan Nuclear Energy Safety Organization) are studying how to use such risk information for the safety regulation of the uranium facilities, taking into account the licensees' experiences and knowledge. (authors)

  10. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  11. Less Physician Practice Competition Is Associated With Higher Prices Paid For Common Procedures.

    Science.gov (United States)

    Austin, Daniel R; Baker, Laurence C

    2015-10-01

    Concentration among physician groups has been steadily increasing, which may affect prices for physician services. We assessed the relationship in 2010 between physician competition and prices paid by private preferred provider organizations for fifteen common, high-cost procedures to understand whether higher concentration of physician practices, and the accompanying increase in market power, was associated with higher prices for services. Using county-level measures of the concentration of physician practices and county average prices, and statistically controlling for a range of other regional characteristics, we found that physician practice concentration and prices were significantly associated for twelve of the fifteen procedures we studied. For these procedures, counties with the highest average physician concentrations had prices 8-26 percent higher than prices in counties with the lowest concentrations. We concluded that the degree of physician competition is frequently associated with the prices paid for common procedures. Policies that would influence physician practice organization should take this into consideration.

  12. A Comparison of Standard-Setting Procedures for an OSCE in Undergraduate Medical Education.

    Science.gov (United States)

    Kaufman, David M.; Mann, Karen V.; Muijtjens, Arno M. M.; van der Vleuten, Cees P. M.

    2000-01-01

    Compared four standard-setting procedures for an objective structured clinical examination (OSCE) in medical education. Applied Angoff, borderline, relative, and holistic procedures to the data used to establish a cutoff score for a pass/fail decision. The Angoff and borderline procedures gave similar results; however, the relative and holistic…

  13. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomena under conditions representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to the choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered on issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure models.

  14. Assessment of Conventional Teaching Procedures: Implications for Gifted Learners

    Science.gov (United States)

    Alenizi, Mogbel Aid K.

    2016-01-01

    The present research aims to assess conventional teaching procedures in the development of mathematical skills of students with learning difficulties. The study group was made up of all the children with academic learning disorders in KSA. The research questions were examined using the averages and standard deviations of the…

  15. 26 CFR 48.6416(h)-1 - Accounting procedures for like articles.

    Science.gov (United States)

    2010-04-01

    26 CFR 48.6416(h)-1 - Accounting procedures for like articles. (a) Identification of manufacturer. In applying section 6416 and the regulations thereunder, a person who has purchased like articles from various manufacturers may determine the…

  16. Quantifying walking and standing behaviour of dairy cows using a moving average based on output from an accelerometer

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Pedersen, Asger Roer; Herskin, Mette S

    2010-01-01

    …in sequences of approximately 20 s for a period of 10 min. Afterwards the cows were stimulated to move/lift the legs while standing in a cubicle. The behaviour was video recorded, and the recordings were analysed second by second for walking and standing behaviour as well as the number of steps taken. … Various algorithms for predicting walking/standing status were compared. The algorithms were all based on a limit of a moving average calculated using one of two outputs of the accelerometer, either a motion index or a step count, and applied over periods of 3 or 5 s. Furthermore, we investigated the effect of additionally applying the rule: a walking period must last at least 5 s. The results indicate that the lowest misclassification rate (10%) of walking and standing was obtained based on the step count with a moving average of 3 s and with the rule applied. However, the rate of misclassification…
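
    A minimal sketch of the best-performing rule as described in this abstract: a 3 s moving average of per-second step counts, thresholded, with walking bouts shorter than 5 s discarded. The 0.5 steps-per-second threshold and the input format are assumptions for illustration, not the paper's calibrated values.

```python
# Sketch of a moving-average walking/standing classifier, assuming
# per-second step counts as input. Threshold is a placeholder value.
import numpy as np

def classify_walking(step_counts, window=3, threshold=0.5, min_bout=5):
    """Return a boolean array: True where the cow is judged to be walking."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(step_counts, kernel, mode="same")  # moving average
    walking = smoothed > threshold
    # Enforce the rule: a walking period must last at least `min_bout` seconds.
    start = None
    for i, w in enumerate(np.append(walking, False)):
        if w and start is None:
            start = i
        elif not w and start is not None:
            if i - start < min_bout:
                walking[start:i] = False
            start = None
    return walking

steps = np.array([0, 0, 1, 2, 2, 1, 2, 0, 0, 0, 1, 0])  # steps per second
print(classify_walking(steps).astype(int))
```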

  17. Performance and Health Test Procedure for Grid Energy Storage Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Baggu, Murali M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Kandler A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Friedl, Andrew [San Diego Gas and Electric; Bialek, Thomas [San Diego Gas and Electric; Schimpe, Michael Robert [Technical University of Munich

    2017-07-27

    A test procedure to evaluate the performance and health of field installations of grid-connected battery energy storage systems (BESS) is described. The performance and health metrics captured in the procedures are: round-trip efficiency, standby losses, response time/accuracy, and useable energy/state of charge at different discharge/charge rates over the system's lifetime. The procedures are divided into Reference Performance Tests, which require the system to be put into a test mode and are to be conducted at intervals, and Real-time Monitoring tests, which collect data during normal operation without interruption. The procedures can be applied to a wide array of BESS with little modification and can thus support BESS operators in the management of BESS field installations with minimal interruption and expenditure.
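
    As a hedged illustration of one metric named in the procedure, the sketch below computes round-trip efficiency from a power time series; the sign convention (positive = charging) and the one-hour sampling interval are assumptions, not part of the published test protocol.

```python
# Round-trip efficiency from a power profile: energy delivered while
# discharging divided by energy absorbed while charging.
import numpy as np

def round_trip_efficiency(power_kw, dt_h=1.0):
    """power_kw: positive = charging, negative = discharging."""
    p = np.asarray(power_kw, dtype=float)
    e_in = np.sum(p[p > 0]) * dt_h        # kWh absorbed while charging
    e_out = -np.sum(p[p < 0]) * dt_h      # kWh delivered while discharging
    return e_out / e_in

cycle = [50, 50, 50, 0, -45, -45, -44]    # toy charge/rest/discharge profile
print(f"RTE = {round_trip_efficiency(cycle):.1%}")
```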

  18. Endovascular thrombectomy and post-procedural headache

    DEFF Research Database (Denmark)

    Khan, Sabrina; Amin, Faisal Mohammad; Holtmannspötter, Markus

    2017-01-01

    BACKGROUND: We investigated the prevalence of post-procedural headache in patients who have undergone thrombectomy for ischemic stroke, and correlated history of migraine with risk of peri-procedural complications. A total of 314 patients underwent thrombectomy at the Danish National Hospital from January 2012 to December 2014. Eligible subjects were phone-interviewed using a purpose-developed semi-structured questionnaire according to the International Classification of Headache Disorders 3, beta version criteria. FINDINGS: Among 96 eligible subjects, there was a significant decrease in migraine (p = 0.022) within the first 3 months after EVT compared to 1 year before treatment, which was further evident at interview time (on average 1.6 years after EVT, p = 0.013). A minority of patients experienced headaches for the first time within 3 months of their EVT (migraine 2, TTH 9), which…

  19. Aortic root reimplantation procedure: a new milestone in aortic valve-sparing operations

    Directory of Open Access Journals (Sweden)

    А. М. Чернявский

    2016-08-01

    Full Text Available Aim: Emphasis in this study was placed on the clinical and functional assessment of a modified "Florida Sleeve" procedure during surgical correction of ascending aorta aneurysms with concomitant aortic insufficiency. Methods: 32 patients with an aneurysm of the ascending aorta and aortic insufficiency underwent a modified "Florida Sleeve" procedure. The average follow-up was 17 (0-60) months. The average age of the patients was 57±13 (23-73) years. Results: The expected 4-year cumulative survival rate was 84.3%. Overall freedom from aortic insufficiency in the late period was 88.9%. Median aortic regurgitation was 1+ (1; 2). Long-term follow-up revealed no valve-associated complications. Conclusion: The aortic root reimplantation procedure enables optimal correction of the existing lesions of the aortic root without performing aortic valve replacement and demonstrates stable clinical and functional outcomes in the long-term period. Key words: aortic aneurysm; aortic valve; valve-sparing operations. Funding: The study had no sponsorship. Conflict of interest: The authors declare no conflict of interest.

  20. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially for human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less-studied application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  1. Phantoms for quality control procedures in digital breast tomosynthesis: dose assessment

    NARCIS (Netherlands)

    Bouwman, R. W.; Diaz, O.; van Engen, R. E.; Young, K. C.; den Heeten, G. J.; Broeders, M. J. M.; Veldkamp, W. J. H.; Dance, D. R.

    2013-01-01

    The recent introduction of digital breast tomosynthesis into clinical practice requires quality control procedures. In this study we have investigated whether the assessment of the average glandular dose for modelled standard breasts can be performed using a combination of polymethyl methacrylate…

  2. Exfoliative Skin-peeling, Benefits from This Procedure and Our Experience

    OpenAIRE

    Grajqevci-Kotori, Merita; Kocinaj, Allma

    2015-01-01

    The peeling procedure is a valuable method for aged skin, photoaging, acne scars and melasma. It should be performed by a dermatologist. Pre-peeling preparation of the skin is very important to ready the skin for this procedure; it can be achieved by applying a mask with acids twice a week. Care of the skin after the procedure is also very important for the success of the treatment. Some cases may have complications, but managing them is also a challenge for the dermatologist.

  3. Exfoliative Skin-peeling, Benefits from This Procedure and Our Experience.

    Science.gov (United States)

    Grajqevci-Kotori, Merita; Kocinaj, Allma

    2015-12-01

    The peeling procedure is a valuable method for aged skin, photoaging, acne scars and melasma. It should be performed by a dermatologist. Pre-peeling preparation of the skin is very important to ready the skin for this procedure; it can be achieved by applying a mask with acids twice a week. Care of the skin after the procedure is also very important for the success of the treatment. Some cases may have complications, but managing them is also a challenge for the dermatologist.

  4. Discharges with surgical procedures performed less often than once per month per hospital account for two-thirds of hospital costs of inpatient surgery.

    Science.gov (United States)

    O'Neill, Liam; Dexter, Franklin; Park, Sae-Hwan; Epstein, Richard H

    2017-09-01

    Most surgical discharges (54%) at the average hospital are for procedures performed no more often than once per month at that hospital. We hypothesized that such uncommon procedures would be associated with an even greater percentage of the total cost of performing all surgical procedures at that hospital. Design: observational study. Setting: State of Texas hospital discharge abstract data, 4th quarter of 2015 and 1st quarter of 2016. Patients: inpatients discharged with a major therapeutic ("operative") procedure. For each of N=343 hospitals, counts of discharges, sums of lengths of stay (LOS), sums of diagnosis related group (DRG) case-mix weights, and sums of charges were obtained for each procedure or combination of procedures, classified by the International Classification of Diseases version 10 Procedure Coding System (ICD-10-PCS). Each discharge was classified into 2 categories, uncommon versus not, defined as a procedure performed at most once per month versus more often than once per month. Major procedures performed at most once per month per hospital accounted for an average among hospitals of 68% of the total inpatient costs associated with all major therapeutic procedures. On average, the percentage of total costs associated with uncommon procedures was 26% greater than expected based on their share of total discharges. Thus, approximately two-thirds of inpatient costs among surgical patients can be attributed to procedures performed at most once per month per hospital. The finding that such uncommon procedures account for a large percentage of costs is important because methods of cost accounting by procedure are generally unsuitable for them.

  5. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
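
    The averaging primitive behind the GEN is a weighted harmonic mean, which is dominated by its smallest (best-connected) entries. The toy function below shows only this building block under that reading of the abstract, not the authors' full graph recursion.

```python
# Weighted harmonic mean: sum(w) / sum(w / x). Small values pull the
# average down strongly, so one strong connection dominates many weak ones.
def weighted_harmonic_mean(values, weights):
    assert all(v > 0 for v in values), "defined for positive values only"
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# One short "distance" (1.0) dominates two long ones (10.0 each):
print(weighted_harmonic_mean([1.0, 10.0, 10.0], [1.0, 1.0, 1.0]))  # 2.5
```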

  6. Differentiating intraparenchymal hemorrhage from contrast extravasation on post-procedural noncontrast CT scan in acute ischemic stroke patients undergoing endovascular treatment

    Energy Technology Data Exchange (ETDEWEB)

    Payabvash, Seyedmehdi [Zeenat Qureshi Stroke Institute, Minneapolis, MN (United States); University of Minnesota, Department of Radiology, Minneapolis, MN (United States); Qureshi, Mushtaq H.; Khan, Shayaan M.; Khan, Mahnoor; Majidi, Shahram; Pawar, Swaroop; Qureshi, Adnan I. [Zeenat Qureshi Stroke Institute, Minneapolis, MN (United States)

    2014-09-15

    This study aimed to identify the imaging characteristics that can help differentiate intraparenchymal hemorrhage from benign contrast extravasation on post-procedural noncontrast CT scan in acute ischemic stroke patients after endovascular treatment. We reviewed the clinical and imaging records of all acute ischemic stroke patients who underwent endovascular treatment in two hospitals over a 3.5-year period. The immediate post-procedural CT scan was evaluated for the presence of hyperdense lesion(s). The average attenuation of the lesion(s) was measured. Intraparenchymal hemorrhage was defined as a persistent hyperdensity visualized on follow-up CT scan, 24 h or greater after the procedure. Of the 135 patients studied, 74 (55 %) patients had hyperdense lesion(s) on immediate post-procedural CT scan. Follow-up scans confirmed the diagnosis of intraparenchymal hemorrhage in 20 of these 74 patients. A receiver operating characteristic analysis showed that the average attenuation of the most hyperdense lesion can differentiate intraparenchymal hemorrhage from contrast extravasation with an area under the curve of 0.78 (p = 0.001). An average attenuation of <50 Hounsfield units (HU) in the most visually hyperattenuating hyperdense lesion had 100 % specificity and 56 % sensitivity for identification of contrast extravasations. Petechial hyperdensity was seen in 46/54 (85 %) patients with contrast extravasation versus 9/20 (45 %) patients with intraparenchymal hemorrhage on the immediate post-procedural CT scan (p < 0.001). An average attenuation <50 HU of the most hyperattenuating hyperdense parenchymal lesion on immediate post-procedural CT scan was very specific for differentiating contrast extravasation from intraparenchymal hemorrhage in acute ischemic stroke patients after endovascular treatment. (orig.)

  7. Differentiating intraparenchymal hemorrhage from contrast extravasation on post-procedural noncontrast CT scan in acute ischemic stroke patients undergoing endovascular treatment

    International Nuclear Information System (INIS)

    Payabvash, Seyedmehdi; Qureshi, Mushtaq H.; Khan, Shayaan M.; Khan, Mahnoor; Majidi, Shahram; Pawar, Swaroop; Qureshi, Adnan I.

    2014-01-01

    This study aimed to identify the imaging characteristics that can help differentiate intraparenchymal hemorrhage from benign contrast extravasation on post-procedural noncontrast CT scan in acute ischemic stroke patients after endovascular treatment. We reviewed the clinical and imaging records of all acute ischemic stroke patients who underwent endovascular treatment in two hospitals over a 3.5-year period. The immediate post-procedural CT scan was evaluated for the presence of hyperdense lesion(s). The average attenuation of the lesion(s) was measured. Intraparenchymal hemorrhage was defined as a persistent hyperdensity visualized on follow-up CT scan, 24 h or greater after the procedure. Of the 135 patients studied, 74 (55 %) patients had hyperdense lesion(s) on immediate post-procedural CT scan. Follow-up scans confirmed the diagnosis of intraparenchymal hemorrhage in 20 of these 74 patients. A receiver operating characteristic analysis showed that the average attenuation of the most hyperdense lesion can differentiate intraparenchymal hemorrhage from contrast extravasation with an area under the curve of 0.78 (p = 0.001). An average attenuation of <50 Hounsfield units (HU) in the most visually hyperattenuating hyperdense lesion had 100 % specificity and 56 % sensitivity for identification of contrast extravasations. Petechial hyperdensity was seen in 46/54 (85 %) patients with contrast extravasation versus 9/20 (45 %) patients with intraparenchymal hemorrhage on the immediate post-procedural CT scan (p < 0.001). An average attenuation <50 HU of the most hyperattenuating hyperdense parenchymal lesion on immediate post-procedural CT scan was very specific for differentiating contrast extravasation from intraparenchymal hemorrhage in acute ischemic stroke patients after endovascular treatment. (orig.)

  8. Use of a Modified Chaining Procedure with Textual Prompts to Establish Intraverbal Storytelling.

    Science.gov (United States)

    Valentino, Amber L; Conine, Daniel E; Delfs, Caitlin H; Furlow, Christopher M

    2015-06-01

    Echoic, tact, and textual transfer procedures have been proven successful in establishing simple intraverbals (Braam & Poling, Applied Research in Mental Retardation, 4, 279-302, 1983; Luciano, Applied Research in Mental Retardation, 102, 346-357, 1986; Watkins et al., The Analysis of Verbal Behavior, 7, 69-81, 1989). However, these strategies may be ineffective for some children due to the complexity of the targeted intraverbals. The current study investigated the use of a novel procedure, which included a modified chaining procedure and textual prompts, to establish intraverbal behavior in the form of telling short stories. Visual prompts and rule statements were used with some of the participants in order to produce the desired behavior change. Results indicated that the procedure was effective for teaching the retelling of short stories to three children with autism.

  9. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • Introduces a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Presents a forecast method for predicting its movement based on the extended Fourier-series model in the least-squares sense. • Shows that its movement is well described by a low number of harmonics, with an approximately 6-term Fourier series. • Predicts its movement most accurately with fewer than 6 Fourier terms. Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimal Fourier coefficients in the least-squares sense based on previous monthly movements. The proposed method is applied to experimental data and yields satisfactory results for different cities (states). It is shown that the monthly movement of annual average solar insolation is well described by a low number of harmonics, with an approximately 6-term Fourier series, and that the extended Fourier forecasting model predicts the movement best with fewer than 6 Fourier terms.
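
    A minimal sketch of the least-squares Fourier fit described above, assuming a 12-month period and the 6-term truncation mentioned in the abstract; the input series is synthetic, not the paper's insolation data.

```python
# Least-squares fit of a truncated Fourier series (period 12 months).
import numpy as np

def fit_fourier(y, n_harmonics=6, period=12.0):
    t = np.arange(len(y))
    cols = [np.ones_like(t, dtype=float)]          # mean term
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares coefficients
    return coef, A @ coef                          # coefficients, fitted values

rng = np.random.default_rng(0)
months = np.arange(48)
y = 5 + 2 * np.sin(2 * np.pi * months / 12) + 0.2 * rng.standard_normal(48)
coef, y_hat = fit_fourier(y)
print(f"RMSE = {np.sqrt(np.mean((y - y_hat) ** 2)):.3f}")
```

    Forecasting then amounts to evaluating the same design matrix at future months with the fitted coefficients.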

  10. Averaged currents induced by alpha particles in an InSb compound semiconductor detector

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Hishiki, Shigeomi; Kogetsu, Yoshitaka; Nakamura, Tatsuya; Katagiri, Masaki

    2008-01-01

    Very fast pulses due to alpha particle incidence were observed by an undoped-type InSb Schottky detector. This InSb detector was operated without applying bias voltage and its depletion layer thickness was less than the range of alpha particles. The averaged current induced by alpha particles was analyzed as a function of operating temperature and was shown to be proportional to the Hall mobility of InSb. (author)

  11. Procedure for determination of alpha emitters in urine and dregs samples

    International Nuclear Information System (INIS)

    Serdeiro, Nelida H.

    2005-01-01

    The purpose of this work is to establish a procedure for the identification and quantification of alpha-emitting radionuclides in urine and dregs samples. The procedure is applied in all laboratories of the countries participating in Project ARCAL LXXVII that determine alpha-emitting radionuclides in biological samples for biological assessment.

  12. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  13. The use of difference spectra with a filtered rolling average background in mobile gamma spectrometry measurements

    International Nuclear Information System (INIS)

    Cresswell, A.J.; Sanderson, D.C.W.

    2009-01-01

    The use of difference spectra, with a filtering of a rolling average background, as a variation of the more common rainbow plots to aid in the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment, and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve signal to background, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure utilising more intelligent filters and spatial averaging of the background are identified.
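
    A hedged sketch of the idea as described: keep a rolling average of recent spectra as the background estimate and inspect the difference spectrum. The simple gross-count outlier filter below stands in for whatever filtering the survey system actually uses; all data are synthetic.

```python
# Difference spectrum against a filtered rolling-average background.
import numpy as np

def difference_spectrum(spectra, window=30, filter_sigma=3.0):
    """spectra: (n_records, n_channels) counts. Returns the last record
    minus a filtered rolling-average background built from prior records."""
    history, current = spectra[:-1], spectra[-1]
    gross = history.sum(axis=1)
    # Crude filter: drop records with anomalous gross counts (e.g. sources).
    keep = np.abs(gross - gross.mean()) < filter_sigma * gross.std()
    background = history[keep][-window:].mean(axis=0)   # filtered rolling mean
    return current - background

rng = np.random.default_rng(1)
records = rng.poisson(5.0, size=(200, 512)).astype(float)
records[-1, 100:110] += 40.0                 # injected anomaly (a "source")
diff = difference_spectrum(records)
print("peak channel:", int(np.argmax(diff)))
```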

  14. 45 CFR 98.13 - Applying for Funds.

    Science.gov (United States)

    2010-10-01

    ... General Application Procedures § 98.13 Applying for Funds. The Lead Agency of a State or Territory shall... lobbying payments; (3) An assurance that the Lead Agency provides a drug-free workplace pursuant to 45 CFR... in § 98.17; and (d) Such other information as specified by the Secretary. ...

  15. Line-averaging measurement methods to estimate the gap in the CO2 balance closure – possibilities, challenges, and uncertainties

    Directory of Open Access Journals (Sweden)

    A. Ziemann

    2017-11-01

    Full Text Available An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks, although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closure gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool, as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m−2 s−1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated, due to environmental parameters, instrumental characteristics, and the retrieval procedure, at a total of approximately…

  16. A procedure for effective Dancoff factor calculation

    International Nuclear Information System (INIS)

    Milosevic, M.

    2001-01-01

    In this paper, a procedure for Dancoff factor calculation based on the equivalence principle, and its application in the SCALE-4.3 code system, is described. The procedure is founded on the principle of conservation of neutron absorption in the resolved resonance range between a heterogeneous medium and an equivalent medium consisting of an infinite array of two-region pin cells, where the presence of other fuel rods is taken into account through a Dancoff factor. The neutron absorption in both media is obtained using a fine-group elastic slowing-down calculation. The procedure is implemented in a design-oriented lattice physics code, which is applicable to any geometry where the collision probability method can be used to obtain a flux solution. The proposed procedure was benchmarked on a recent exercise that represents a system with fuel double heterogeneity, i.e., fuel in solid form (pellets) surrounded by fissile material in solution, and on a 5x5 irregular pressurised water reactor assembly, which requires different Dancoff factors. (author)

  17. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and for understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the location and delineation of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure such as the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
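
    A minimal sketch of a class-weighted random forest for imbalanced sinkhole/non-sinkhole data, using scikit-learn's class_weight option as one common way to implement the weighting; the synthetic features are placeholders, not the study's 11 predictors.

```python
# Class-weighted random forest for an imbalanced binary problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 11))               # 11 geometric/contextual features
y = (rng.random(2000) < 0.1).astype(int)      # ~10% "sinkholes": imbalanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced",   # up-weight the rare sinkhole class
    random_state=0,
).fit(X_tr, y_tr)
print(f"test accuracy = {clf.score(X_te, y_te):.3f}")  # synthetic data: score is illustrative only
```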

  18. Field and laboratory procedures used in a soil chronosequence study

    Science.gov (United States)

    Singer, Michael J.; Janitzky, Peter

    1986-01-01

    obtain and analyze soil samples for this study. The soils analyzed had a wide range of characteristics (such as clay content, mineralogy, salinity, and acidity). Initially, a major task was to test and select methods that could be applied and interpreted similarly for the various types of soils. Tests were conducted to establish the effectiveness and comparability of analytical techniques, and the data for such tests are included in figures, tables, and discussions. In addition, many replicate analyses of samples have established a "standard error" or "coefficient of variance" which indicates the average reproducibility of each laboratory procedure. These averaged errors are reported as percentage of a given value. For example, in particle-size determination, 3 percent error for 10 percent clay content equals 10 ± 0.3 percent clay. The error sources were examined to determine, for example, if the error in particle-size determination was dependent on clay content. No such biases were found, and data are reported as percent error in the text and in tables of reproducibility.

  19. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop an LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact produced regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author)

  20. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of autoregressive (AR) models to estimating noise measurements has achieved many successes in reactor noise analysis over the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and then a method for its solution is given. Numerical results show the applications of the proposed method. (author)
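
    A sketch of the AR side of the estimation problem: AR coefficients from sample autocovariances via the ordinary Yule-Walker equations. The generalized (ARMA) system referenced in the abstract adds moving-average terms; this minimal version only shows the structure that extension builds on.

```python
# Ordinary Yule-Walker estimation of AR coefficients from autocovariances.
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, order):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r(0..order).
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    return solve_toeplitz(r[:-1], r[1:])   # solve Toeplitz system R phi = r

rng = np.random.default_rng(3)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):                   # simulate AR(2): 0.6, -0.3
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]
print(yule_walker(x, 2))                   # expect approximately [0.6, -0.3]
```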

  1. Optimization procedures in mammography: First results

    International Nuclear Information System (INIS)

    Espana Lopez, M. L.; Marcos de Paz, L.; Martin Rincon, C.; Jerez Sainz, I.; Lopez Franco, M. P.

    2001-01-01

    Optimization procedures in mammography using equipment with a single target/filter combination can be carried out by varying such diverse factors as target optical density, exposure technique factors, screen-film combination or processing cycle, in order to obtain an image adequate for diagnosis with an acceptable risk-benefit balance. Diverse studies show an increase in the standardised detection rate of invasive carcinomas with an increase in optical density, among other factors. In our hospital an optimisation process has been established and, as a preliminary step, the target optical density has been increased to 1.4 OD. The aim of this paper is to evaluate the impact of the optical density variation both on image quality and on the entrance surface dose and the average dose to the glandular tissue, comparing them with the results obtained in a previous study. The study was carried out on a sample of 106 patients, with an average age of 53.4 years, considering 212 clinical images corresponding to the two projections of the same breast with an average compressed thickness of 4.86 cm. An increase of 16.6% in the entrance surface dose and of 18% in the average dose to the glandular tissue has been recorded. All the clinical images were evaluated by the physician as adequate for diagnosis. (Author) 16 refs

  2. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  3. 40 CFR 60.50Da - Compliance determination procedures and methods.

    Science.gov (United States)

    2010-07-01

    ... probe and filter holder heating system in the sampling train may be set to provide an average gas... correction factor, integrated or grab sampling and analysis procedures of Method 3B of appendix A of this... fuel oil, etc.), coal pulverizers, and bottom and fly ash interactions. This determination is optional...

  4. Analysis of Rainfall and SWAT Model Discharge with the Moving Average Method in the Ciliwung Hulu Watershed

    Directory of Open Access Journals (Sweden)

    Defri Satiya Zuma

    2017-09-01

    Full Text Available A watershed can be regarded as a hydrological system that transforms rainwater as input into outputs such as flow and sediment. The transformation of inputs into outputs has specific forms and properties, and involves many processes, including processes occurring on the land surface, in river basins, and in soil and aquifers. This study aimed to apply the SWAT model in the Ciliwung Hulu Watershed and to assess the effect of average rainfall over 3, 5, 7 and 10 days on the hydrological characteristics of the watershed. The correlation coefficient (r) between rainfall and discharge was positive, indicating a direct relationship between rainfall and discharge in the upstream, midstream and downstream parts of the watershed. The upper-limit ratio of discharge had a downward trend from upstream to downstream, while the lower-limit ratio of discharge had an upward trend from upstream to downstream. This shows that the discharge peak in the Ciliwung Hulu Watershed has a downward trend from upstream to downstream, while baseflow has an upward trend from upstream to downstream. Because the upstream part of the watershed had the highest ratio of discharge peak to baseflow, it needs soil and water conservation measures and civil engineering works. The discussion concluded that the SWAT model could be applied well in the Ciliwung Hulu Watershed and that the average rainfall most affecting the hydrological characteristics was the 10-day average; for the 10-day average rainfall, all components contributed maximally to river discharge.
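
    A minimal sketch of the averaging step described above: n-day moving averages of daily rainfall computed with pandas. The data and column names are assumptions for illustration, not the study's observations.

```python
# n-day moving averages of daily rainfall (3, 5, 7, 10 days).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({"rain_mm": rng.gamma(2.0, 5.0, size=365)},
                  index=pd.date_range("2016-01-01", periods=365))

for n in (3, 5, 7, 10):
    df[f"rain_ma{n}"] = df["rain_mm"].rolling(n).mean()

print(df.tail(3).round(1))
```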

  5. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  6. Effects of average degree of network on an order–disorder transition in opinion dynamics

    International Nuclear Information System (INIS)

    Cun-Fang, Feng; Jian-Yue, Guan; Ying-Hai, Wang; Zhi-Xi, Wu

    2010-01-01

    We have investigated the influence of the average degree ⟨k⟩ of a network on the location of an order-disorder transition in opinion dynamics. For this purpose, a variant of the majority rule (VMR) model is applied to Watts-Strogatz (WS) small-world networks and Barabási-Albert (BA) scale-free networks, which may describe some non-trivial properties of social systems. Using Monte Carlo simulations, we find that the order-disorder transition point of the VMR model is greatly affected by the average degree ⟨k⟩ of the networks; a larger value of ⟨k⟩ results in a more ordered state of the system. Comparing WS networks with BA networks, we find WS networks have better orderliness than BA networks when the average degree ⟨k⟩ is small. With the increase of ⟨k⟩, BA networks reach a more ordered state. By implementing finite-size scaling analysis, we also obtain critical exponents β/ν, γ/ν and 1/ν for several values of the average degree ⟨k⟩. Our results may be helpful for understanding structural effects on the order-disorder phase transition in the context of the majority rule model. (general)

  7. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averages are considered the most reliable approach for simulating both present-day and future climates, and have been a primary reference for the conclusions of major coordinated studies such as the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  8. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  9. Calculations of the properties of superconducting alloys via the average T-matrix approximation

    International Nuclear Information System (INIS)

    Chatterjee, P.

    1980-01-01

    The theoretical formula of McMillan, modified via the multiple-scattering theory by Gomersall and Gyorffy, has been very successful in computing the electron-phonon coupling constant (lambda) and the transition temperature (T_c) of many superconducting elements and compounds. For disordered solids, such as substitutional alloys, however, this theory fails because of the breakdown of the translational symmetry used in the multiple-scattering theory. Under these conditions the problem can still be solved if the t-matrix is averaged in the random phase approximation (average T-matrix approximation). Gomersall and Gyorffy's expression for lambda is reformulated in the random phase approximation. This theory is applied to calculate lambda and T_c of the binary substitutional NbMo alloy system at different concentrations. The results appear to be in fair agreement with experiments. (author)

  10. On the assessment of extremely low breakdown probabilities by an inverse sampling procedure [gaseous insulation]

    DEFF Research Database (Denmark)

    Thyregod, Poul; Vibholm, Svend

    1991-01-01

    First breakdown voltages obtained under the inverse sampling procedure assuming a double exponential flashover probability function are discussed. An inverse sampling procedure commences the voltage application at a very low level, followed by applications at stepwise increased levels until … the flashover probability function and the corresponding distribution of first breakdown voltages under the inverse sampling procedure, and show how this relation may be utilized to assess the single-shot flashover probability corresponding to the observed average first breakdown voltage. Since the procedure is based on voltage applications in the neighbourhood of the quantile under investigation, the procedure is found to be insensitive to the underlying distributional assumptions.

  11. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    Science.gov (United States)

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including simplification, polynomial expansion, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and to evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are considered to exert the greatest meteorological influence on the cross-regional transportation of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 µg/m³. For single-pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of the proposed model.
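
    A hedged sketch of the stacked generalization described above: level-0 learners combined by an SVR meta-learner. A gradient-boosting regressor stands in for XGBoost/Adaboost/GA-MLP, and the data are synthetic placeholders, not the Beijing features.

```python
# Stacked generalization: level-0 base models blended by a level-1 SVR.
import numpy as np
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 8))                     # pollutant/meteorology proxies
y = X[:, 0] * 3 + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=500)

stack = StackingRegressor(
    estimators=[("lasso", Lasso(alpha=0.01)),
                ("gbr", GradientBoostingRegressor(random_state=0))],
    final_estimator=SVR(kernel="rbf"),            # level-1 combiner
    cv=5,                                         # out-of-fold level-0 predictions
)
stack.fit(X, y)
print(f"train R^2 = {stack.score(X, y):.3f}")
```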

  12. Development of quality assurance procedures for production of sealed radiation source

    CERN Document Server

    Nam, J H; Cho, W K; Han, H S; Hong, S B; Kim, K H; Kim, S D; Lee, Y G; Lim, N J

    2001-01-01

    Quality assurance procedures for sealed radiation source production using HANARO and RIPF have been developed. Detailed quality assurance procedures are essential to manage the whole work process effectively and to ensure the quality of the produced sealed sources. By applying these quality assurance procedures to the entire production process for sealed radiation sources, it is expected that the quality of the products, the safety of the work and the satisfaction of the customers will be increased.

  13. How Does the Supply Requisitioning Process Affect Average Customer Wait Time Onboard U.S. Navy Destroyers?

    Science.gov (United States)

    2013-06-01

    …repairs faster and increase readiness levels across the fleet. Applying a six sigma define, measure, analyze, improve and control (DMAIC) process approach, this report describes current procedures from initial demand to issue of repair parts…

  14. 25 CFR 20.600 - Who can apply for financial assistance or social services?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Who can apply for financial assistance or social services... FINANCIAL ASSISTANCE AND SOCIAL SERVICES PROGRAMS Administrative Procedures § 20.600 Who can apply for financial assistance or social services? (a) You can apply for financial assistance or social services under...

  15. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n-n0)ln(n-n0) + b(n-n0) + c, where a, b and c are constants depending on K and n0 is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length ne(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.

  16. Sample preparation procedure for PIXE elemental analysis on soft tissues

    International Nuclear Information System (INIS)

    Kubica, B.; Kwiatek, W.M.; Dutkiewicz, E.M.; Lekka, M.

    1997-01-01

    Trace element analysis is one of the most important fields in analytical chemistry. There are several instrumental techniques that are applied for the determination of microscopic elemental content. The PIXE (Proton Induced X-ray Emission) technique is one of the nuclear techniques commonly applied for this purpose due to its multielemental analysis capability. The aim of this study was to establish the optimal conditions for the target preparation procedure. In this paper two different approaches to the topic are presented and widely discussed. The first approach was the traditional pellet technique and the second was a mineralization procedure. Soft tissue, such as liver, was used for the analysis. Some results on water samples are also presented. (author)

  17. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan averaging variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
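
    A minimal sketch of the least-squares weighting idea behind the Granger-Ramanathan family (the exact variants A-C differ in intercept and constraint handling, which this sketch does not reproduce; the data below are synthetic):

```python
# Unconstrained OLS weights for combining member hydrographs, in the spirit
# of Granger-Ramanathan averaging, plus the Nash-Sutcliffe Efficiency metric.
import numpy as np

def gr_weights(sims, obs):
    """sims: (T, K) matrix of K member hydrographs; obs: (T,) observations.
    Returns OLS weights minimizing ||sims @ w - obs||^2."""
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency between simulated and observed series."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Calibrate weights on one period, then reuse them in validation mode.
rng = np.random.default_rng(0)
obs_cal = rng.gamma(2.0, 5.0, 365)
sims_cal = np.column_stack([obs_cal + rng.normal(0, s, 365) for s in (2, 4, 6)])
w = gr_weights(sims_cal, obs_cal)
print("weights:", w, " NSE:", nse(sims_cal @ w, obs_cal))
```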

  18. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    Science.gov (United States)

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting its development. This study examined the incorporation of value indicators, together with data on basic manpower expended, time spent, technical difficulty, and degree of risk, into the latest standards for the price of medical procedures in China, and it offers a price adjustment model with the relative price ratio as a key index. The study examined 144 TCM procedures and found that their prices were mainly based on the value of the medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low; the current price accounted for 56% of the standardized value of a procedure, on average. Current price levels accounted for a markedly lower share of the standardized value for acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. The study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized. Price adjustment based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and for other countries and regions that mainly have fee-for-service (FFS) medical care.

  19. Comparison of Reynolds-averaged Navier-Stokes based simulation and large eddy simulation for one isothermal swirling flow

    DEFF Research Database (Denmark)

    Yang, Yang; Kær, Søren Knudsen

    2012-01-01

    The flow structure of one isothermal swirling case in the Sydney swirl flame database was studied using two numerical methods. Results from the Reynolds-averaged Navier-Stokes (RANS) approach and large eddy simulation (LES) were compared with experimental measurements. The simulations were applied...

  20. 78 FR 15807 - Energy Conservation Program: Test Procedures for Television Sets

    Science.gov (United States)

    2013-03-12

    ... displaying dynamic visual content from wired or wireless sources, including but not limited to ... DOE proposes to incorporate the Consumer Electronics Association (CEA) standard ``CEA-2037-A, Determination of Television Average Power Consumption'' into the SNOPR. In today's SNOPR, DOE proposes to update the input power requirements in the TVs test procedure NOPR by referencing...

  1. Premenarchal vaginal discharge: findings of procedures to rule out foreign bodies.

    Science.gov (United States)

    Smith, Yolanda R; Berman, Deborah R; Quint, Elisabeth H

    2002-08-01

    Vaginal discharge in children is a common gynecologic complaint and may be resistant to symptomatic and/or antibiotic treatment. In recurrent or unresponsive cases, an evaluation to rule out a foreign body is traditionally recommended. The objective of this study was to review cases of vaginal discharge referred to our institution and to assess outcome and diagnosis in those who required irrigation or vaginoscopy to rule out a foreign body. A retrospective chart review was performed on all premenarchal girls identified through the University of Michigan Pediatric and Adolescent Gynecology Clinic database who were seen for evaluation of vaginal discharge between June 1996 and December 2001. The records were reviewed for age, duration of discharge, characteristics of the discharge, procedures done to rule out foreign bodies, and findings of such procedures. The study was performed in a tertiary care university hospital. Forty-one premenarchal girls were evaluated for vaginal discharge. The average age was 6.0 yr (range 3 months-11 yr). The average duration of vaginal discharge prior to presentation was 13.7 months (range 1-42 months). Of the 41 girls, 18 underwent 1 procedure each, 2 underwent 2 procedures each, and 1 underwent 5 procedures. Ten vaginal irrigations in clinic were performed in 7 girls, 3 by the referring physician and 7 by us. These irrigations removed a foreign body (tissue paper) in 4 of 10 (40%) cases, 3 at our institution and 1 at an outside institution. In the three irrigation cases with foreign bodies performed at our institution, the foreign body was visible on genital examination prior to the irrigation. Seventeen vaginoscopies under anesthesia were performed in 16 girls, 5 by the referring physician and 12 by us. In the girls who underwent a vaginoscopy under anesthesia, a foreign body was found in 3 of 17 (17.6%). The other findings of the vaginoscopies included: biopsy-proven severe dermatitis with no infection in 1 patient

  2. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  3. Control procedure for well drilling operations

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, J C

    1988-09-09

    A control procedure for rotary drilling operations is proposed. It uses the drill-off test, which makes it possible to determine the variation of the rock drilling speed as a function of the weight applied on top of the pipe. From this, a rock-drill wear parameter can be deduced. The method makes it possible to prevent a rupture and its grave economic consequences.

  4. Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de

    2003-01-01

    In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the SN equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTSN method, first applying the Laplace transform to the set of nodal SN equations and then obtaining the solution by symbolic computation. We include the LTSN method by diagonalization to solve the nodal neutron transport equation and then outline the convergence of these nodal-LTSN approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  5. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    40 CFR 600.510-12, Calculation of average fuel economy and average carbon-related exhaust emissions (Protection of Environment; Environmental Protection Agency; Energy Policy; Fuel Economy and Carbon-Related Exhaust Emissions). ... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  6. Advances in the EDM-DEDM procedure.

    Science.gov (United States)

    Caliandro, Rocco; Carrozzini, Benedetta; Cascarano, Giovanni Luca; Giacovazzo, Carmelo; Mazzone, Anna Maria; Siliqi, Dritan

    2009-03-01

    The DEDM (difference electron-density modification) algorithm has been described in a recent paper [Caliandro et al. (2008), Acta Cryst. A64, 519-528]: it breaks down the collinearity between model structure phases and difference structure phase estimates. The new difference electron-density produced by DEDM, summed to the calculated Fourier maps, is expected to provide a representation of the full structure that is more accurate than that obtained by the observed Fourier synthesis. In the same paper, the DEDM algorithm was combined with the EDM (electron-density modification) approach to give the EDM-DEDM procedure which, when applied to practical molecular-replacement cases, was able to improve the model structures. In this paper, it is shown that EDM-DEDM suffers from some critical points that did not allow cyclical application of the procedure. These points are identified and modifications are made to allow iteration of the procedure. The applications indicate that EDM-DEDM may become a fundamental tool in protein crystallography.

  7. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
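
    A toy sketch of the kind of robust pixelwise averaging described, assuming NaN-marked voids and hypothetical rejection thresholds (the published algorithm's criteria and alignment-drift handling differ in detail):

```python
# Robust averaging of a stack of phase maps: reject whole maps with too much
# invalid area, then prune per-pixel outliers against the median map.
import numpy as np

def robust_phase_average(maps, map_reject_frac=0.2, pixel_sigma=3.0):
    """maps: (N, H, W) stack of phase maps with NaN marking voids/defects."""
    # Reject whole maps whose invalid (NaN) area is too large.
    valid_frac = 1.0 - np.isnan(maps).mean(axis=(1, 2))
    maps = maps[valid_frac >= 1.0 - map_reject_frac]
    # Prune per-pixel outliers via a median/MAD test before averaging.
    med = np.nanmedian(maps, axis=0)
    mad = np.nanmedian(np.abs(maps - med), axis=0) + 1e-12
    pruned = np.where(np.abs(maps - med) > pixel_sigma * 1.4826 * mad,
                      np.nan, maps)
    return np.nanmean(pruned, axis=0), np.nanstd(pruned, axis=0)

avg_map, sigma_map = robust_phase_average(np.random.randn(20, 64, 64))
```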

  8. Simplified Transit Procedure in Railway Transport

    Directory of Open Access Journals (Sweden)

    Željko Kokorović

    2008-11-01

    Full Text Available The current transit procedure in railway transport that iscarried out on the basis of the Customs Act [ 11 of the Republicof Croatia is applied only up to the border, i. e. the issued documentsand guarantees are valid only up to the border, and byjoining the Convention on Common transit procedure, i. e. integrationof the Republic of Croatia in the European Union, theRepublic of Croatia will also have to implement the regulationsand rules of Simplified transit procedure valid in each of thethirty member states. In international railway traffic, the transportof goods is regulated by the Convention concerning InternationalCarriage by Rail- COT IF [2 1 and usage of the CIMwaybill (Contract for International Carriage of Goods by Rail.If the goods are transported in Simplified transit procedure, theformalities regarding the transport of goods performed by railcarriers using the international waybill CIM will be significantlysimplified and accelerated. In principle there are no delays dueto customs on the borders when crossing the EU borders andborders of the Convention member states, contributing greatlyto the acceleration of the transport of goods, reduction of waitingcosts and paperwork, as well as influence on the schedulereliability.

  9. A statistical-dynamical downscaling procedure for global climate simulations

    International Nuclear Information System (INIS)

    Frey-Buness, A.; Heimann, D.; Sausen, R.; Schumann, U.

    1994-01-01

    A statistical-dynamical downscaling procedure for global climate simulations is described. The procedure is based on the assumption that any regional climate is associated with a specific frequency distribution of classified large-scale weather situations. The frequency distributions are derived from multi-year episodes of low-resolution global climate simulations. Highly resolved regional distributions of wind and temperature are calculated with a regional model for each class of large-scale weather situation. They are statistically evaluated by weighting them with the corresponding climate-specific frequency. As an example, the procedure is applied to the Alpine region for a global climate simulation of the present climate. (orig.)

  10. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, generation of biasing parameters, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that, with the automated biasing procedure, Monte Carlo shielding calculations of spent-fuel casks can be performed with minimum effort and that accurate results can be obtained at reasonable computing cost
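
    The flavor of the biasing idea can be conveyed with a one-dimensional importance-sampling toy problem; the adjoint-flux machinery above is far more elaborate, and the tilted sampling density below is a stand-in assumption:

```python
# Toy deep-penetration estimate: P(X > d) for an exponential path length,
# where analog sampling almost never scores but a biased (stretched) density
# with likelihood-ratio weights scores efficiently.
import numpy as np

rng = np.random.default_rng(1)
d, n = 20.0, 100_000          # depth in mean free paths, number of histories

# Analog sampling: essentially no scores at depth d.
analog = (rng.exponential(1.0, n) > d).mean()

# Biased sampling from g(x) = lam * exp(-lam x) with lam = 1/d; each sample
# carries the weight f(x)/g(x) so the estimator stays unbiased.
lam = 1.0 / d
x = rng.exponential(1.0 / lam, n)
weights = np.exp(-x) / (lam * np.exp(-lam * x))
biased = np.mean(weights * (x > d))

print(f"analog: {analog:.3e}  biased: {biased:.3e}  exact: {np.exp(-d):.3e}")
```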

  11. Assessment of the application of an ecotoxicological procedure to screen illicit toxic discharges in domestic septic tank sludge.

    Science.gov (United States)

    López-Gastey, J; Choucri, A; Robidoux, P Y; Sunahara, G I

    2000-06-01

    An innovative screening procedure has been developed to detect illicit toxic discharges in domestic septic tank sludge hauled to the Montreal Urban Community waste-water treatment plant. This new means of control is based on an integrative approach, using bioassays and chemical analyses. Conservative criteria are applied to detect abnormal toxicity with great reliability while avoiding false positive results. The complementary data obtained from toxicity tests and chemical analyses support the use of this efficient and easy-to-apply procedure. This study assesses the control procedure in which 231 samples were analyzed over a 30-month period. Data clearly demonstrate the deterrent power of an efficient control procedure combined with a public awareness campaign among the carriers. In the first 15 months of application, between January 1996 and March 1997, approximately 30% of the 123 samples analyzed showed abnormal toxicity. Between April 1997 and June 1998, that is, after a public hearing presentation of this procedure, this proportion dropped significantly to approximately 9% based on 108 analyzed samples. The results of a 30-month application of this new control procedure show the superior efficiency of the ecotoxicological approach compared with the previously used chemical control procedure. To be able to apply it effectively and, if necessary, to apply the appropriate coercive measures, ecotoxicological criteria should be included in regulatory guidelines.

  12. Improved Design of Crew Operation in Computerized Procedure System of APR1400

    Energy Technology Data Exchange (ETDEWEB)

    Seong, No Kyu; Jung, Yeon Sub; Sung, Chan Ho [KHNP, Daejeon (Korea, Republic of)

    2016-05-15

    In an analog-based conventional main control room (MCR), operators perform paper-based procedures relying only on communication among operators, apart from a procedure controller such as the Shift Supervisor (SS); in a digital-based MCR, however, operators can view the procedures simultaneously on their own consoles when the procedure controller opens the computerized procedure (CP). Synchronization and asynchronization functions between the procedure controller and the other operators have to be considered to support crew operation. This paper suggests an improved design of crew operation in the computerized procedure system (CPS) of APR1400. These improvements can help operators perform crew procedures more efficiently, and they reduce the burden of communication and the risk of misunderstanding computerized procedures. They can be applied to the CPS after human factors engineering verification and validation.

  13. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Full Text Available Separation procedures in drug Distribution Centers (DC) are manual activities prone to failures such as shipping exchanged, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable the generation of an index to identify the operators recommended to perform the procedures. FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes related to financial issues and damage to the company's image in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
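
    A minimal sketch of the two building blocks combined here: a log-linear (Wright-type) learning-curve fit per operator and an FMEA risk priority number with a split severity index. The specific LC model and the averaging of the two severity sub-indexes are assumptions, not the paper's formulas:

```python
# Learning-curve fit t(x) = t1 * x**b by least squares in log space, plus a
# risk priority number (RPN) with severity split into financial/image parts.
import numpy as np

def fit_learning_curve(units, times):
    """Returns (t1, b) for t(x) = t1 * x**b."""
    b, log_t1 = np.polyfit(np.log(units), np.log(times), 1)
    return np.exp(log_t1), b

def rpn(sev_financial, sev_image, occurrence, detection):
    """FMEA RPN with a split severity index (averaging is an assumption)."""
    severity = 0.5 * (sev_financial + sev_image)
    return severity * occurrence * detection

units = np.arange(1, 21)
times = 10 * units ** -0.3 * np.random.default_rng(6).lognormal(0, 0.05, 20)
t1, b = fit_learning_curve(units, times)
print(f"t1={t1:.2f}, b={b:.2f}, RPN example: {rpn(8, 6, 4, 3)}")
```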

  14. Use of performance curves in estimating number of procedures required to achieve proficiency in coronary angiography

    DEFF Research Database (Denmark)

    Räder, Sune B E W; Jørgensen, Erik; Bech, Bo

    2011-01-01

    Background: Current guidelines in cardiology training programs recommend 100-300 coronary angiography procedures for certification. We aimed to assess the number of procedures needed to reach sufficient proficiency. Methods: Procedure time, fluoroscopy time, dose area product (DAP), and contrast media volume were recorded... Results: The number of procedures needed for trainees to reach recommended reference levels was estimated as 226 and 353 for DAP and use of contrast media, respectively. After 300 procedures, trainees' procedure time, fluoroscopy time, DAP, and contrast media volume were significantly higher compared with experts' performance, P < .001 for all parameters. To approach the experts' level of DAP and contrast media use, trainees need 394 and 588 procedures, respectively. Performance curves showed large individual differences in the development of competence. Conclusion: On average, trainees needed 300 procedures to reach a sufficient level...

  15. HYPNOSIS FOR ACUTE PROCEDURAL PAIN: A Critical Review

    Science.gov (United States)

    Kendrick, Cassie; Sliwinski, Jim; Yu, Yimin; Johnson, Aimee; Fisher, William; Kekecs, Zoltán; Elkins, Gary

    2015-01-01

    Clinical evidence for the effectiveness of hypnosis in the treatment of acute, procedural pain was critically evaluated based on reports from randomized controlled clinical trials (RCTs). Results from the 29 RCTs meeting inclusion criteria suggest that hypnosis decreases pain compared to standard care and attention control groups and that it is at least as effective as comparable adjunct psychological or behavioral therapies. In addition, applying hypnosis in multiple sessions prior to the day of the procedure produced the highest percentage of significant results. Hypnosis was most effective in minor surgical procedures. However, interpretations are limited by considerable risk of bias. Further studies using minimally effective control conditions and systematic control of intervention dose and timing are required to strengthen conclusions. PMID:26599994

  16. Hypnosis for Acute Procedural Pain: A Critical Review.

    Science.gov (United States)

    Kendrick, Cassie; Sliwinski, Jim; Yu, Yimin; Johnson, Aimee; Fisher, William; Kekecs, Zoltán; Elkins, Gary

    2016-01-01

    Clinical evidence for the effectiveness of hypnosis in the treatment of acute procedural pain was critically evaluated based on reports from randomized controlled clinical trials (RCTs). Results from the 29 RCTs meeting inclusion criteria suggest that hypnosis decreases pain compared to standard care and attention control groups and that it is at least as effective as comparable adjunct psychological or behavioral therapies. In addition, applying hypnosis in multiple sessions prior to the day of the procedure produced the highest percentage of significant results. Hypnosis was most effective in minor surgical procedures. However, interpretations are limited by considerable risk of bias. Further studies using minimally effective control conditions and systematic control of intervention dose and timing are required to strengthen conclusions.

  17. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  18. Exposure from diagnostic nuclear medicine procedures

    International Nuclear Information System (INIS)

    Iacob, O.; Diaconescu, C.; Isac, R.

    2002-01-01

    According to our last national study on population exposure from natural and artificial sources of ionizing radiation, 16% of the overall annual collective effective dose represents the contribution of diagnostic medical exposures. Of this value, 92% is due to diagnostic X-ray examinations and only 8% arises from diagnostic nuclear medicine procedures. This small contribution to the collective dose is mainly the result of their lower frequency compared to that of X-ray examinations, the doses delivered to patients being, on average, ten times higher. The purpose of this review was to reassess the population exposure from in vivo diagnostic nuclear medicine procedures and to evaluate the temporal trends of the diagnostic usage of radiopharmaceuticals in Romania. The current survey is the third conducted in the last decade. As in the previous ones (1990 and 1995), the contribution of the Radiation Hygiene Laboratories Network of the Ministry of Health and Family in collecting data from nuclear medicine departments in hospitals was very important

  19. Artificial intelligence applied to the automatic analysis of absorption spectra. Objective measurement of the fine structure constant

    Science.gov (United States)

    Bainbridge, Matthew B.; Webb, John K.

    2017-06-01

    A new and automated method is presented for the analysis of high-resolution absorption spectra. Three established numerical methods are unified into one `artificial intelligence' process: a genetic algorithm (Genetic Voigt Profile FIT, gvpfit); non-linear least-squares with parameter constraints (vpfit); and Bayesian model averaging (BMA). The method has broad application but here we apply it specifically to the problem of measuring the fine structure constant at high redshift. For this we need objectivity and reproducibility. gvpfit is also motivated by the importance of obtaining a large statistical sample of measurements of Δα/α. Interactive analyses are both time consuming and complex, and automation makes obtaining a large sample feasible. In contrast to previous methodologies, we use BMA to derive results using a large set of models and show that this procedure is more robust than a human picking a single preferred model, since BMA avoids the systematic uncertainties associated with model choice. Numerical simulations provide stringent tests of the whole process and we show using both real and simulated spectra that the unified automated fitting procedure outperforms a human interactive analysis. The method should be invaluable in the context of future instrumentation like ESPRESSO on the VLT and indeed future ELTs. We apply the method to the zabs = 1.8389 absorber towards the zem = 2.145 quasar J110325-264515. The derived constraint of Δα/α = (3.3 ± 2.9) × 10^-6 is consistent with no variation and also consistent with the tentative spatial variation reported in Webb et al. and King et al.
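
    The BMA step can be sketched as information-criterion weighting of per-model estimates; the model list below is purely hypothetical, and gvpfit/vpfit themselves are external programs not reproduced here:

```python
# Minimal sketch of Bayesian model averaging over candidate Voigt-profile
# fits: combine per-model Delta-alpha/alpha estimates with AIC weights.
import numpy as np

# Hypothetical (estimate, variance, chi2, n_params) per candidate model.
models = [(3.1e-6, 8e-12, 410.2, 12),
          (3.5e-6, 9e-12, 402.7, 15),
          (2.8e-6, 7e-12, 405.9, 14)]

est = np.array([m[0] for m in models])
var = np.array([m[1] for m in models])
aic = np.array([m[2] + 2 * m[3] for m in models])   # AIC = chi2 + 2k (assumed)

w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()

bma_mean = np.sum(w * est)
# BMA variance: within-model variance plus between-model spread.
bma_var = np.sum(w * (var + (est - bma_mean) ** 2))
print(f"Delta-alpha/alpha = {bma_mean:.2e} +/- {np.sqrt(bma_var):.2e}")
```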

  20. Development of an analytical procedure for plutonium in the concentration range of femtogram/gram and its application to environmental samples

    International Nuclear Information System (INIS)

    Schuettelkopf, H.

    1981-09-01

    To study the behaviour of plutonium in the environment and to measure plutonium in the vicinity of nuclear facilities, a quick, sensitive analytical method is required which can be applied to all sample materials found in the environment. For a sediment contaminated with plutonium, a boiling-out method using first HNO3/HF and subsequently HNO3/Al(NO3)3 was found to be successful. The leaching solution was then extracted with TOPO and the plutonium back-extracted with ascorbic acid/HCl. Several purification steps and finally electroplating from ammonium oxalate led to an optimum sample for the α-spectroscopic determination of plutonium. An analytical method was thus worked out for plutonium which can be applied to all materials found in the environment. The sample size is 100 g but may also be much greater. The average chemical yield is between 70 and 80%. The detection limit is 0.1 fCi/g for soil samples and 0.5 fCi/g for plant samples. One technician can perform eight analyses per working day. The analytical procedure was applied to a large number of environmental samples and the results of these analyses are given. (orig./RB) [de

  1. Procedures for Selecting Items for Computerized Adaptive Tests.

    Science.gov (United States)

    Kingsbury, G. Gage; Zara, Anthony R.

    1989-01-01

    Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)

  2. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and use crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Last, fuzzy extents of extremely low, high, and very high are the most popular. These results are helpful in investment decisions.
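
    A condensed sketch of the strategy's core: a moving-average spread graded by one trapezoidal fuzzy rule into a recommended trading volume. The rule parameters and volume scale below are hypothetical; the paper optimizes them with a genetic algorithm:

```python
# One fuzzy rule over a normalized moving-average spread: "if the spread is
# strongly positive, buy with high volume".
import numpy as np

def sma(prices, k):
    """Simple moving average with window k."""
    return np.convolve(prices, np.ones(k) / k, mode="valid")

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership on [a, d] with plateau [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (d - x) / (d - c + 1e-12)), 0.0, 1.0)

prices = np.cumsum(np.random.default_rng(2).normal(0, 1, 500)) + 100
fast, slow = sma(prices, 5), sma(prices, 20)
spread = (fast[-len(slow):] - slow) / slow      # align series at the end

strength = trapezoid(spread, 0.00, 0.01, 0.05, 0.10)  # fuzzy signal strength
volume = 100 * strength                                # lots to buy
print("today's buy volume:", volume[-1])
```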

  3. Performance of interventional procedures in a day-hospital system

    International Nuclear Information System (INIS)

    Bae, Jae Ik; Park, Auh Whan; Cho, Hye Seon; Park, Eun Hee; Choi, Gap Suk; Lee, Seon Ju; Kim, Yong Woo; Juhn, Je Ryang

    2007-01-01

    We describe the practice and results of applying a day-hospital system in an interventional radiology clinic. From Oct. 2004 to Dec. 2005, the day-hospital system was applied to various interventional procedures, with part of the recovery room of an angiography suite used as a facility for hospital admission. The study included 91 cases in 73 patients. The source of patient referral, the procedures, hospital courses and complications were analyzed, and questionnaire surveys were conducted in the 55 available patients. Among the cases, 70% (n=64) were referred from other departments, 5% (n=5) from other hospitals, 5% (n=4) were new patients and 20% (n=18) were re-admissions. The procedures included gastrointestinal, biliary, urinary, hemodialysis-related and implantable-port-related interventions. In 96% (n=87) of the cases the patient was successfully discharged within a day; admission to the general ward occurred in only 4% (n=4). Minor complications occurred after discharge in 3% (n=3). The questionnaire survey revealed that 96% (n=53) of the patients were satisfied with the service and were not anxious after discharge. Most common interventional procedures were safely performed under the day-hospital system, with high patient satisfaction. The day-hospital system can be a good tool for establishing admitting privileges for an interventional radiology clinic

  4. Applied Behavior Analysis: Current Myths in Public Education

    Science.gov (United States)

    Fielding, Cheryl; Lowdermilk, John; Lanier, Lauren L.; Fannin, Abigail G.; Schkade, Jennifer L.; Rose, Chad A.; Simpson, Cynthia G.

    2013-01-01

    The effective use of behavior management strategies and related policies continues to be a debated issue in public education. Despite overwhelming evidence espousing the benefits of the implementation of procedures derived from principles based on the science of applied behavior analysis (ABA), educators often indicate many common misconceptions…

  5. Applying coda envelope measurements to local and regional waveforms for stable estimates of magnitude, source spectra and energy

    International Nuclear Information System (INIS)

    Hofstetter, R.; Mayeda, K.; Rodgers, A.; Walter, W.

    1999-01-01

    Magnitude estimation forms an integral part of any seismic monitoring endeavor. For monitoring compliance with the Comprehensive Nuclear-Test-Ban Treaty, regional seismic discriminants are often functions of magnitude, such as m_b:M_0 and high-to-low spectral ratios, as is nuclear yield estimation. For small-to-moderate magnitude events that cannot be studied by a large regional or global network of stations, there is a need for stable magnitudes that can be obtained from as few as one station. To date, magnitudes based on coda envelopes are by far the most stable because of the coda's averaging properties. Unlike conventional magnitudes which utilize the direct phases such as P (P_n, P_g) or S (S_n, L_g), or M_g, a coda envelope magnitude is not as sensitive to the undesirable effects of source radiation pattern, 3-D path heterogeneity, and constructive/destructive interference near the recording site. The stability of the coda comes from a time-domain measurement made over a large portion of the seismogram, thereby averaging over the scattered wavefield. This approach has been applied to earthquakes in the western United States, where it was found that a single-station coda magnitude was approximately equivalent to an average over a 64-station network which used only the direct waves such as L_g (Mayeda and Walter, JGR, 1996). In this paper we describe in detail our calibration procedure, starting with a broadband recording, correlation with independent moment estimates, formation of narrowband envelopes, coda envelope fitting with synthetics, and finally the resultant moment-rate spectra. Our procedure accounts for all propagation, site, and S-to-coda transfer function effects. The resultant coda-derived moment-rate spectra are then used to estimate the seismic moment (M_0), narrowband magnitudes such as m_b or M_L, and the total seismic energy. For the eastern Mediterranean region a preliminary study was completed for
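
    For flavor, the first processing step named above, forming a smoothed narrowband coda envelope from a broadband record, might look like this (sampling rate, band, and smoothing window are hypothetical; the calibration and fitting stages are not reproduced):

```python
# Narrowband envelope: bandpass filter, analytic-signal (Hilbert) envelope,
# then moving-average smoothing.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def narrowband_envelope(trace, fs, f_lo, f_hi, smooth_s=2.0):
    """Bandpass the trace, take the envelope, and smooth over smooth_s seconds."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, trace)))
    k = max(1, int(smooth_s * fs))
    return np.convolve(env, np.ones(k) / k, mode="same")

fs = 40.0                                    # Hz, hypothetical sampling rate
t = np.arange(0, 300, 1 / fs)
trace = np.random.default_rng(4).normal(size=t.size) * np.exp(-t / 60.0)
env = narrowband_envelope(trace, fs, f_lo=1.0, f_hi=2.0)
```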

  6. In-Office Endoscopic Laryngeal Laser Procedures: A Patient Safety Initiative.

    Science.gov (United States)

    Anderson, Jennifer; Bensoussan, Yael; Townsley, Richard; Kell, Erika

    2018-05-01

    Objective To review complications of in-office endoscopic laryngeal laser procedures after implementation of a standardized safety protocol. Methods A retrospective review was conducted of the first 2 years of in-office laser procedures at St. Michael's Hospital after the introduction of a standardized safety protocol. The protocol included patient screening and a procedure checklist with standardized reporting of processes, medications, and complications. The primary outcomes measured were complication rates of in-office laryngeal laser procedures. Secondary outcomes included hemodynamic changes, local anesthetic dose, laser settings, total laser/procedure time, and incidence of sedation. Results A total of 145 in-office KTP procedures performed on 65 patients were reviewed. In 98% of cases, the safety protocol was fully implemented. The overall complication rate was 4.8%. No major complications were encountered. Minor complications included vasovagal episodes and patient intolerance. The rate of patient intolerance resulting in early termination of the anticipated procedure was 13.1%. The total local anesthetic dose averaged 172.9 mg lidocaine per procedure. The mean amount of laser energy dispersed was 261.2 J, with a mean total procedure time of 48.3 minutes. Sixteen percent of patients had preprocedure sedation. Vital signs varied modestly; systolic blood pressure was lower postprocedure in 13.8% of patients and symptomatic in 4.1%. Discussion The review of our standardized safety protocol revealed that in-office laser treatment for laryngeal pathology has very low complication rates and safe patient outcomes. Implications for Practice The trend of shifting procedures out of the operating room into the office/clinic setting requires new processes designed to promote patient safety.

  7. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    40 CFR 76.11, Emissions averaging (Protection of Environment; Environmental Protection Agency; Air Programs; Acid Rain Nitrogen Oxides Emission Reduction Program). (a) General...

  8. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE we incorporate aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of an FSO communication system.
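
    The ORA average spectral efficiency is essentially E[log2(1 + γ)] over the fading distribution. A Monte Carlo sketch under a simple unit-mean lognormal irradiance assumption (the paper's turbulence statistics and SNR mapping may differ):

```python
# Monte Carlo estimate of ORA average spectral efficiency E[log2(1 + gamma)]
# for a lognormal irradiance fade with a given scintillation index.
import numpy as np

def ase_ora(mean_snr_db, scint_index, n=1_000_000, seed=3):
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + scint_index)      # log-irradiance variance
    # Unit-mean lognormal irradiance I; received SNR taken to scale as I^2
    # for an intensity-modulated link (an assumption of this sketch).
    I = rng.lognormal(-sigma2 / 2.0, np.sqrt(sigma2), n)
    gamma = 10 ** (mean_snr_db / 10.0) * I ** 2
    return np.mean(np.log2(1.0 + gamma))    # bits/s/Hz

print("ASE:", ase_ora(mean_snr_db=20.0, scint_index=0.5))
```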

  9. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
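
    One family treated in the book, ordered weighted averaging (OWA), is compact enough to sketch here; note that OWA averages the sorted inputs, so the weights select order statistics rather than particular arguments:

```python
# OWA operator: OWA(x) = sum_i w_i * x_(i), with x sorted in decreasing order
# and nonnegative weights summing to 1.
import numpy as np

def owa(x, w):
    return np.sort(np.asarray(x, float))[::-1] @ np.asarray(w, float)

x = [0.3, 0.9, 0.5, 0.1]
print(owa(x, [1, 0, 0, 0]))        # recovers the maximum
print(owa(x, [0, 0, 0, 1]))        # recovers the minimum
print(owa(x, [0.25] * 4))          # recovers the arithmetic mean
```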

  10. THE VALUE OF REMOVING DAILY OBSTACLES VIA EVERYDAY PROBLEM SOLVING THEORY: DEVELOPING AN APPLIED NOVEL PROCEDURE TO INCREASE SELF-EFFICACY FOR EXERCISE

    Directory of Open Access Journals (Sweden)

    Daniele eArtistico

    2013-01-01

    Full Text Available The objective of the study was to develop a novel procedure to increase self-efficacy for exercise. Gains in one's ability to resolve day-to-day obstacles for entering an exercise routine were expected to cause an increase in self-efficacy for exercise. Fifty-five sedentary participants (did not exercise regularly for at least 4 months prior to the study) who expressed an intention to exercise in the near future were selected for the study. Participants were randomly assigned to one of three conditions: (1) an Experimental Group in which they received a problem-solving training session to learn new strategies for solving day-to-day obstacles that interfere with exercise, (2) a Control Group with Problem-Solving Training which received a problem-solving training session focused on a typical day-to-day problem unrelated to exercise, or (3) a Control Group which did not receive any problem-solving training. Assessment of obstacles to exercise and perceived self-efficacy for exercise was conducted at baseline; perceived self-efficacy for exercise was reassessed post-intervention (one week later). No differences in perceived challenges posed by obstacles to exercise or self-efficacy for exercise were observed across groups at baseline. The Experimental Group reported greater improvement in self-efficacy for exercise compared to the Control Group with Training (p < 0.01) and the Control Group (p < 0.01). Results of this study suggest that a novel procedure that focuses on removing obstacles to intended planned fitness activities is effective in increasing self-efficacy to engage in exercise among sedentary adults. Implications of these findings for use in applied settings and treatment studies are discussed.

  11. The Value of Removing Daily Obstacles via Everyday Problem-Solving Theory: Developing an Applied Novel Procedure to Increase Self-Efficacy for Exercise.

    Science.gov (United States)

    Artistico, Daniele; Pinto, Angela Marinilli; Douek, Jill; Black, Justin; Pezzuti, Lina

    2013-01-01

    The objective of the study was to develop a novel procedure to increase self-efficacy for exercise. Gains in one's ability to resolve day-to-day obstacles for entering an exercise routine were expected to cause an increase in self-efficacy for exercise. Fifty-five sedentary participants (did not exercise regularly for at least 4 months prior to the study) who expressed an intention to exercise in the near future were selected for the study. Participants were randomly assigned to one of three conditions: (1) an Experimental Group in which they received a problem-solving training session to learn new strategies for solving day-to-day obstacles that interfere with exercise, (2) a Control Group with Problem-Solving Training which received a problem-solving training session focused on a typical day-to-day problem unrelated to exercise, or (3) a Control Group which did not receive any problem-solving training. Assessment of obstacles to exercise and perceived self-efficacy for exercise were conducted at baseline; perceived self-efficacy for exercise was reassessed post-intervention (1 week later). No differences in perceived challenges posed by obstacles to exercise or self-efficacy for exercise were observed across groups at baseline. The Experimental Group reported greater improvement in self-efficacy for exercise compared to the Control Group with Training and the Control Group. Results of this study suggest that a novel procedure that focuses on removing obstacles to intended planned fitness activities is effective in increasing self-efficacy to engage in exercise among sedentary adults. Implications of these findings for use in applied settings and treatment studies are discussed.

  12. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2561, Average moisture content (United States Standards for Grades of Shelled Pistachio Nuts). (a) Determining the average moisture content of the lot is not a requirement of the grades, except when...

  13. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Kravtsenyuk Olga V

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method recently presented by us is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for the least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.

  14. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Vladimir V. Lyubimov

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method recently presented by us is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for the least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
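
    One of the two solvers named above, conjugate gradients applied to the least-squares normal equations (often called CGLS), can be sketched generically; the blur operator below is a toy stand-in for the stacked spatially variant point spread functions:

```python
# CGLS: minimize ||A x - y||^2 via conjugate gradients on A^T A x = A^T y,
# without forming A^T A explicitly.
import numpy as np

def cgls(A, y, iters=50):
    x = np.zeros(A.shape[1])
    r = y - A @ x
    p = s = A.T @ r
    norm_s = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = norm_s / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        p = s + (norm_s_new / norm_s) * p
        norm_s = norm_s_new
    return x

A = np.tril(np.ones((100, 100))) / 100.0   # toy smoothing (blur) operator
x_true = np.zeros(100); x_true[40:60] = 1.0
x_rec = cgls(A, A @ x_true, iters=200)     # deblur the blurred profile
```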

  15. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
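
    Both quantities are easy to sketch numerically for a generic undirected graph (the dendrimer/Husimi construction itself is not reproduced; the balanced tree below is a stand-in):

```python
# APL via shortest paths; ATT to a fixed trap via the standard linear system
# for mean first-passage times of an unbiased random walk: (I - Q) t = 1,
# where Q is the walk's transition matrix with the trap row/column removed.
import numpy as np
import networkx as nx

def att(G, trap):
    """Mean first-passage time to `trap`, averaged over all other nodes."""
    nodes = [v for v in G if v != trap]
    idx = {v: i for i, v in enumerate(nodes)}
    Q = np.zeros((len(nodes), len(nodes)))
    for v in nodes:
        for u in G[v]:
            if u != trap:
                Q[idx[v], idx[u]] = 1.0 / G.degree(v)
    t = np.linalg.solve(np.eye(len(nodes)) - Q, np.ones(len(nodes)))
    return t.mean()

G = nx.balanced_tree(r=2, h=4)      # a stand-in for a dendrimer-like tree
print("APL:", nx.average_shortest_path_length(G))
print("ATT to root:", att(G, 0))
```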

  16. Residual stress effects in LMFBR fracture assessment procedures

    International Nuclear Information System (INIS)

    Hooton, D.G.

    1984-01-01

    Two post-yield fracture mechanics methods, which have been developed into fully detailed failure assessment procedures for ferritic structures, have been reviewed from the point of view of the manner in which as-welded residual stress effects are incorporated, and comparisons then made with finite element and theoretical models of centre-cracked plates containing residual/thermal stresses in the form of crack-driving force curves. Applying the procedures to austenitic structures, comparisons are made in terms of failure assessment curves and it is recommended that the preferred method for the prediction of critical crack sizes in LMFBR austenitic structures containing as-welded residual stresses is the CEGB-R6 procedure based on a flow stress defined at 3% strain in the parent plate. When the prediction of failure loads in such structures is required, it is suggested that the CEGB-R6 procedure be used with residual/thermal stresses factored to give a maximum total stress of flow stress magnitude

  17. Robot-assisted laparoscopic (RAL) procedures in general surgery.

    Science.gov (United States)

    Alimoglu, Orhan; Sagiroglu, Julide; Atak, Ibrahim; Kilic, Ali; Eren, Tunc; Caliskan, Mujgan; Bas, Gurhan

    2016-09-01

    Robotics was introduced in clinical practice more than two decades ago, and it has gained remarkable popularity for a wide variety of laparoscopic procedures. We report our results of robot-assisted laparoscopic surgery (RALS) in the most commonly applied general surgical procedures. Ninety-seven patients underwent RALS from 2009 to 2012. Indications for RALS were cholelithiasis, gastric carcinoma, splenic tumors, colorectal carcinoma, benign colorectal diseases, non-toxic nodular goiter and incisional hernia. Records of patients were analyzed for demographic features, intraoperative and postoperative complications and conversion to open surgery. Forty-six female and 51 male patients were operated on; the mean age was 58.4 (range: 25-88). Ninety-three out of 97 procedures (96%) were completed robotically, 4 were converted to open surgery and there were 15 postoperative complications. There was no mortality. A wide variety of general surgical procedures can be managed safely and effectively by RALS. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Application of structured illumination to gas phase thermometry using thermographic phosphor particles: a study for averaged imaging

    Science.gov (United States)

    Zentgraf, Florian; Stephan, Michael; Berrocal, Edouard; Albert, Barbara; Böhm, Benjamin; Dreizler, Andreas

    2017-07-01

    Structured laser illumination planar imaging (SLIPI) is combined with gas phase thermometry measurements using thermographic phosphor (TGP) particles. The technique is applied to a heated jet surrounded by a coflow which is operated at ambient temperature. The respective air flows are seeded with a powder of BaMgAl10O17:Eu2+ (BAM) which is used as temperature-sensitive gas phase tracer. Upon pulsed excitation in the ultraviolet spectral range, the temperature is extracted based on the two-color ratio method combined with SLIPI. The main advantage of applying the SLIPI approach to phosphor thermometry is the reduction of particle-to-particle multiple light scattering and diffuse wall reflections, yielding a more robust calibration procedure as well as improving the measurement accuracy, precision, and sensitivity. For demonstration, this paper focuses on sample-averaged measurements of temperature fields in a jet-in-coflow configuration. Using the conventional approach, which in contrast to SLIPI is based on imaging with an unmodulated laser light sheet, we show that for the present setup typically 40% of the recorded signal is affected by the contribution of multiply scattered photons. At locations close to walls even up to 75% of the apparent signal is due to diffuse reflection and wall luminescence of BAM sticking at the surface. Those contributions lead to erroneous temperature fields. Using SLIPI, an unbiased two-color ratio field is recovered allowing for two-dimensional mean temperature reconstructions which exhibit a more realistic physical behavior. This is in contrast to results deduced by the conventional approach. Furthermore, using the SLIPI approach it is shown that the temperature sensitivity is enhanced by a factor of up to 2 at 270 °C. Finally, an outlook towards instantaneous SLIPI phosphorescence thermometry is provided.

  19. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  20. The European Procedure on Reduced Value Claims

    Directory of Open Access Journals (Sweden)

    Alexandrina Zaharia

    2009-06-01

    Full Text Available Maintaining and developing the area of freedom, security and justice is a major objective of the European Community, which guarantees the free movement of persons. As a result of litigations regarding applications with a reduced value arising among physical or legal persons, the need was felt for a community legislation that would guarantee identical conditions, both for creditors and debtors, throughout the entire European Union territory. The European procedure regarding the recovery of debts of reduced value facilitates access to justice and is characterized by simplifying and expediting the settling of transboundary litigations, reducing costs, and the recognition and execution in one Member State of a court order given in another Member State. This procedure is available to litigants as an alternative to the procedures provided by the laws of Member States. Regulation (EC) no. 861/2007, establishing a European procedure regarding applications with reduced value, applies in civil and commercial matters in transboundary cases, regardless of the nature of the court, when the application value, without taking into account the interest, expenditures and other costs, does not exceed 2000 Euro at the time the application form is received by the competent court. This procedure does not apply to revenue, customs or administrative matters, or in regard to state responsibility for acts or omissions in exercising public authority, and other matters specifically referred to in the Regulation. A cause is transboundary in nature when one of the parties has its habitual residence in a Member State other than the one where the court receives the application. The proper procedure of application resolution for the recovery of debts with reduced value is governed by the rules of procedural law of the Member State in which the proceedings are conducted, and the execution of the court order is governed by the legislation of the state in which it takes place.

  1. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
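
    A toy numerical illustration of the "mean diverges" regime (not the authors' models): for a Pareto law with tail index α ≤ 1 the mean is infinite, so the running average of samples never settles, in contrast to the finite-mean cases.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

def pareto(alpha, size):
    """Pareto(alpha) samples via the inverse CDF: x = u**(-1/alpha), x >= 1."""
    return rng.random(size) ** (-1.0 / alpha)

for alpha in (3.0, 1.5, 0.8):     # finite mean, finite mean, infinite mean
    x = pareto(alpha, n)
    running = np.cumsum(x) / np.arange(1, n + 1)
    print(f"alpha={alpha}: running mean at n/10 = {running[n//10 - 1]:.2f}, "
          f"at n = {running[-1]:.2f}")
# For alpha=0.8 the two values keep drifting apart as n grows:
# the 'average' is over.
```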

  2. 9 CFR 147.14 - Procedures to determine status and effectiveness of sanitation monitored program.

    Science.gov (United States)

    2010-01-01

    ... status and effectiveness of sanitation monitored program. The following monitoring procedures 10 may be applied at the discretion of the Official State Agency: 10 Laboratory procedures for monitoring operations... egg conalbumin; and (ii) Tetrathionate selective enrichment broths, competitor-controlling plating...

  3. Revised guideline for the approval procedure of package designs in Germany

    International Nuclear Information System (INIS)

    Nitsche, F.; Roedel, R.

    2004-01-01

    The IAEA Regulations for the Safe Transport of Radioactive Material, TS-R-1, are applied in Germany through the implementation of the Dangerous Goods Transport Regulations for class 7 of the international modal organisations (ADR, RID, IMDG-Code, ICAO-TI). Based on this, the approval procedures for package designs applied in Germany are in compliance with the provisions of TS-R-1. The Guideline R 003, issued by the Ministry of Transport, Building and Housing (BMVBW) in 1991, is the basis for the package design approval procedures in Germany. This Guideline has been reviewed and revised to reflect the latest developments in the regulations as well as in regulatory practice. In particular, it has been extended to the approval procedures for Type C packages, packages subject to transitional arrangements, and special form and low dispersible radioactive material, and it provides more detailed information to the applicant about the requested documentation. Publication of this revised guideline has been delayed, but it is expected to take place in October 2004. The paper gives an overview of the main parts and provisions of this revised Guideline R 003 with the focus on package design approval procedures

  4. Amplitude differences least squares method applied to temporal cardiac beat alignment

    International Nuclear Information System (INIS)

    Correa, R O; Laciar, E; Valentinuzzi, M E

    2007-01-01

    High-resolution averaged ECG is an important diagnostic technique in post-infarcted and/or chagasic patients with high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested in high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results point out that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative
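
    The two alignment criteria can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: both methods scan a lag window and score each candidate alignment, CC by maximizing the inner product with the template and LSAD by minimizing the sum of squared amplitude differences; the test signal and noise level are invented.

```python
import numpy as np

def align(beat, template, max_shift=50, method="lsad"):
    """Return the lag (in samples) that best aligns `beat` to `template`.

    method="cc" maximizes the cross-correlation; method="lsad" minimizes
    the sum of squared amplitude differences between segment and template.
    """
    n = len(template)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_shift, max_shift + 1):
        seg = beat[max_shift + lag : max_shift + lag + n]
        score = (np.dot(seg, template) if method == "cc"
                 else -np.sum((seg - template) ** 2))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Hypothetical test: a noisy, shifted copy of a synthetic spike-like beat.
rng = np.random.default_rng(1)
t = np.linspace(-1.0, 1.0, 200)
template = np.exp(-(t * 20.0) ** 2)
true_lag = 7
beat = np.zeros(300)
beat[50 + true_lag : 250 + true_lag] = template
beat += rng.normal(0.0, 0.1, beat.size)

print(align(beat, template, method="cc"),
      align(beat, template, method="lsad"))   # both should report lag 7
```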

  5. Response surface methodology applied to the study of the microwave-assisted synthesis of quaternized chitosan.

    Science.gov (United States)

    dos Santos, Danilo Martins; Bukzem, Andrea de Lacerda; Campana-Filho, Sérgio Paulo

    2016-03-15

    A quaternized derivative of chitosan, namely N-(2-hydroxy)-propyl-3-trimethylammonium chitosan chloride (QCh), was synthesized by reacting glycidyltrimethylammonium chloride (GTMAC) and chitosan (Ch) in acid medium under microwave irradiation. A full-factorial 2³ central composite design and response surface methodology (RSM) were applied to evaluate the effects of GTMAC/Ch molar ratio, reaction time and temperature on the reaction yield, average degree of quaternization (DQ) and intrinsic viscosity ([η]) of QCh. The GTMAC/Ch molar ratio was the most important factor affecting the response variables, and RSM results showed that highly substituted QCh (DQ = 71.1%) was produced at high yield (164%) when the reaction was carried out for 30 min at 85°C using a GTMAC/Ch molar ratio of 6/1. Results showed that the microwave-assisted synthesis is much faster (≤30 min) than conventional reaction procedures (>4 h) carried out in similar conditions except for the use of microwave irradiation. Copyright © 2015 Elsevier Ltd. All rights reserved.
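
    To make the design-and-fit step concrete, here is a minimal sketch of a rotatable central composite design for three coded factors with a full quadratic response-surface fit. The response values are simulated from an invented surface, not the paper's data.

```python
import numpy as np

# Central composite design on coded factors (hypothetical data):
# x1 = GTMAC/Ch molar ratio, x2 = reaction time, x3 = temperature.
alpha = 1.682                                   # rotatable axial distance, k = 3
factorial = [[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
axial = [[alpha if i == j else 0.0 for j in range(3)] for i in range(3)]
axial += [[-alpha if i == j else 0.0 for j in range(3)] for i in range(3)]
center = [[0.0, 0.0, 0.0]] * 3
X = np.array(factorial + axial + center)        # 8 + 6 + 3 = 17 runs

rng = np.random.default_rng(3)
y = (50 + 12*X[:, 0] + 4*X[:, 1] + 3*X[:, 2]    # invented true surface
     - 5*X[:, 0]**2 + rng.normal(0, 1.0, len(X)))

def quad_design(X):
    """Full quadratic model matrix: 1, linear, squared, two-way interactions."""
    cols = ([np.ones(len(X))] + [X[:, i] for i in range(3)]
            + [X[:, i]**2 for i in range(3)]
            + [X[:, i]*X[:, j] for i in range(3) for j in range(i+1, 3)])
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
print(np.round(beta, 2))   # the x1 (molar ratio) terms dominate, as in the study
```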

  6. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), the models most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model is applicable to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA

  7. Uranium hexafluoride: handling procedures and container criteria

    International Nuclear Information System (INIS)

    1977-04-01

    The U.S. Energy Research and Development Administration's (ERDA) procedures for packaging, measuring, and transferring uranium hexafluoride (UF6) have been undergoing continual review and revision for several years to keep them in phase with developing agreements for the supply of enriched uranium. This report, first issued in 1966, was reissued in 1967 to make editorial changes and to provide for minor revisions in procedural information. In 1968 and 1972, Revisions 2 and 3, respectively, were issued as part of the continuing effort to present updated information. This document, Revision 4, includes primarily revisions to UF6 cylinders, valves, and methods of use. This revision supersedes all previous issues of this report. The procedures will normally apply in all transactions involving receipt or shipment of UF6 by ERDA, unless stipulated otherwise by contracts or agreements with ERDA or by notices published in the Federal Register

  8. An interactive simulation-based education system for BWR emergency, procedure guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Tanikawa, Naoshi; Shida, Touichi [Hitachi Ltd (Japan). Hitachi Works; Ujita, Hiroshi; Yokota, Takeshi; Kato, Kanji [Hitachi Ltd, (Japan). Energy Research Lab.

    1994-12-31

    When applying EPGs (Emergency Procedure Guidelines), an operator decides on the operational procedure by predicting the change of parameters from the plant status, because EPGs are described in a symptom style for emergency conditions. Technical knowledge of plant behavior and operation is necessary for operators to understand the EPGs. An interactive simulation-based education system, EPG-ICAI (Intelligent Computer Assisted Instruction), has been developed for BWR plant operators to acquire the knowledge of EPGs. EPG-ICAI is designed to realize effective education through step-by-step study using an interactive real-time simulator, and individual education by applying an intelligent tutoring function. (orig.) (2 refs., 7 figs., 1 tab.).

  9. An interactive simulation-based education system for BWR emergency, procedure guidelines

    International Nuclear Information System (INIS)

    Tanikawa, Naoshi; Shida, Touichi; Ujita, Hiroshi; Yokota, Takeshi; Kato, Kanji

    1994-01-01

    When applying EPGs (Emergency Procedure Guidelines), an operator decides on the operational procedure by predicting the change of parameters from the plant status, because EPGs are described in a symptom style for emergency conditions. Technical knowledge of plant behavior and operation is necessary for operators to understand the EPGs. An interactive simulation-based education system, EPG-ICAI (Intelligent Computer Assisted Instruction), has been developed for BWR plant operators to acquire the knowledge of EPGs. EPG-ICAI is designed to realize effective education through step-by-step study using an interactive real-time simulator, and individual education by applying an intelligent tutoring function. (orig.) (2 refs., 7 figs., 1 tab.)

  10. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
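
    The three average-sample types compared above are easy to reproduce. The sketch below builds hourly spot, 1-h boxcar, and Gaussian average-samples from synthetic 1-min data; the signal and the Gaussian width are hypothetical choices for illustration only.

```python
import numpy as np

# One synthetic day of 1-min geomagnetic data (1440 samples, nT).
rng = np.random.default_rng(7)
minutes = np.arange(1440)
field = 50.0*np.sin(2*np.pi*minutes/1440) + rng.normal(0, 2.0, 1440)

spot = field[::60]                            # instantaneous top-of-hour sample
boxcar = field.reshape(24, 60).mean(axis=1)   # simple 1-h boxcar average

# Gaussian average-sample: weighted mean centred on each hour. The width
# sigma (in minutes) is a hypothetical choice; the paper optimizes this
# distortion/aliasing trade-off.
sigma = 25.0
centers = np.arange(0, 1440, 60)
w = np.exp(-0.5*((minutes[None, :] - centers[:, None])/sigma)**2)
gauss = (w*field).sum(axis=1)/w.sum(axis=1)

print(spot[:3], boxcar[:3], gauss[:3], sep="\n")
```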

  11. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  12. Ruthenium oxide/carbon composites with microporous or mesoporous carbon as support and prepared by two procedures. A comparative study as supercapacitor electrodes

    International Nuclear Information System (INIS)

    Pico, F.; Morales, E.; Fernandez, J.A.; Centeno, T.A.; Ibanez, J.; Rojas, R.M.; Amarilla, J.M.; Rojo, J.M.

    2009-01-01

    Composites are prepared by deposition of nanoparticles of RuO2·xH2O (1-4 nm) on two carbons: microporous carbon (1.3 nm average micropore size) and mesoporous carbon (11 nm average mesopore size). Two preparation procedures are used: (i) procedure A, consisting of repetitive impregnations of the carbons with RuCl3·0.5H2O solutions, and (ii) procedure B, based on impregnation of the carbons with Ru(acac)3 vapour. Procedure B leads to supported RuO2·xH2O particles that appear more crystalline than those obtained by procedure A. Specific capacitance and specific surface area of the composites are discussed as functions of the RuO2 content, and different dependences for the composites derived from the two carbons are found. Mesoporous carbon is a better support than microporous carbon. Procedure A leads to supported RuO2·xH2O particles with higher specific capacitance than the particles deposited by procedure B

  13. Ruthenium oxide/carbon composites with microporous or mesoporous carbon as support and prepared by two procedures. A comparative study as supercapacitor electrodes

    Energy Technology Data Exchange (ETDEWEB)

    Pico, F. [Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Cientificas (CSIC), Sor Juana Ines de la Cruz 3, Cantoblanco, E-28049-Madrid (Spain); Morales, E. [Instituto de Ciencia y Tecnologia de Polimeros (ICTP), CSIC, Juan de la Cierva 3, E-28006-Madrid (Spain); Fernandez, J.A.; Centeno, T.A. [Instituto Nacional del Carbon (INCAR), CSIC, Francisco Pintado Fe 26, E-33011-Oviedo (Spain); Ibanez, J. [Centro Nacional de Investigaciones Metalurgicas (CENIM), CSIC, Avda. Gregorio del Amo 8, E-28040-Madrid (Spain); Rojas, R.M.; Amarilla, J.M. [Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Cientificas (CSIC), Sor Juana Ines de la Cruz 3, Cantoblanco, E-28049-Madrid (Spain); Rojo, J.M. [Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Cientificas (CSIC), Sor Juana Ines de la Cruz 3, Cantoblanco, E-28049-Madrid (Spain)], E-mail: jmrojo@icmm.csic.es

    2009-03-01

    Composites are prepared by deposition of nanoparticles of RuO2·xH2O (1-4 nm) on two carbons: microporous carbon (1.3 nm average micropore size) and mesoporous carbon (11 nm average mesopore size). Two preparation procedures are used: (i) procedure A, consisting of repetitive impregnations of the carbons with RuCl3·0.5H2O solutions, and (ii) procedure B, based on impregnation of the carbons with Ru(acac)3 vapour. Procedure B leads to supported RuO2·xH2O particles that appear more crystalline than those obtained by procedure A. Specific capacitance and specific surface area of the composites are discussed as functions of the RuO2 content, and different dependences for the composites derived from the two carbons are found. Mesoporous carbon is a better support than microporous carbon. Procedure A leads to supported RuO2·xH2O particles with higher specific capacitance than the particles deposited by procedure B.

  14. 40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false What transient duty cycles apply for... Procedures § 1048.510 What transient duty cycles apply for laboratory testing? (a) Starting with the 2007 model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in...

  15. A proposal of group decision making procedure for supporting social consensus making

    International Nuclear Information System (INIS)

    Shimizu, Yoshiaki

    1996-01-01

    Being interested in supporting social consensus making, in this paper we have proposed a group decision making procedure through conflict resolution for the following situation: each group has a different privilege with respect to decision making, and the final goal should be evaluated by a few qualitative sub-goals besides quantitative ones. For this purpose, we have developed a step-wise procedure of the kind commonly adopted for complicated, large-scale problem solving. At the decision making phase, as well as at the value system design phase, we applied the analytic hierarchy process (AHP) to decide the weights standing for the privileges. Then, after rearranging the hierarchy of the sub-goals depending on their nature, we provide an iterative procedure to derive a final solution from a discrete optimization problem. To reduce the difficulties of multi-objective decision making, we adopted a scoring method for total evaluation and applied a genetic algorithm as the solution method. Through numerical experiments applied to a planning problem of a radioactive waste management system, we have shown numerically that the proposed approach is very promising for social consensus making. (author)

  16. Weld Design, Testing, and Assessment Procedures for High Strength Pipelines

    Science.gov (United States)

    2011-12-20

    Long-distance high-strength pipelines are increasingly being constructed for the efficient transportation of energy products. While the high-strength linepipe steels and high productivity welding processes are being applied, the procedures employed f...

  17. Estimation of equivalent dose to the extremities of hemodynamic physicians during neurological procedures

    International Nuclear Information System (INIS)

    Squair, Peterson L.; Souza, Luiz C. de; Oliveira, Paulo Marcio C. de

    2005-01-01

    The estimation of doses to the hands of physicians during hemodynamic procedures is important to verify the application of radiation protection related to optimization and dose limits, principles required by Portaria 453/98 of the Ministry of Health/ANVISA, Brazil. The exposure levels of the hands of doctors during the use of the equipment in hemodynamic neurological procedures were checked by means of dosimetric rings with thermoluminescent dosimeter detectors of LiF:Mg,Ti (TLD-100), calibrated in personal dose equivalent Hp(0.07). The average equivalent dose to the extremity was 41.12 μSv per scan, with an expanded uncertainty of 20% for k = 2. This value refers to the hemodynamic neurology procedure carried out using accessible radiological protection procedures to minimize the dose

  18. August Dvorak (1894-1975): Early expressions of applied behavior analysis and precision teaching

    Science.gov (United States)

    Joyce, Bonnie; Moxley, Roy A.

    1988-01-01

    August Dvorak is best known for his development of the Dvorak keyboard. However, Dvorak also adapted and applied many behavioral and scientific management techniques to the field of education. Taken collectively, these techniques are representative of many of the procedures currently used in applied behavior analysis, in general, and especially in precision teaching. The failure to consider Dvorak's instructional methods may explain some of the discrepant findings in studies which compare the efficiency of the Dvorak to the standard keyboard. This article presents a brief background on the development of the standard (QWERTY) and Dvorak keyboards, describes parallels between Dvorak's teaching procedures and those used in precision teaching, reviews some of the comparative research on the Dvorak keyboard, and suggests some implications for further research in applying the principles of behavior analysis. PMID:22477993

  19. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups (European and Japanese) and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome), as well as a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
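
    The core averaging step, taking the mean of corresponding depth (z) coordinates across co-registered scans, amounts to a per-pixel average. A minimal sketch, assuming the scans are already registered on a common grid (the grid size, synthetic "faces", and NaN handling are illustrative assumptions):

```python
import numpy as np

def average_face(depth_maps):
    """Average co-registered 3D face scans stored as depth maps z(x, y).

    For each (x, y) grid point the corresponding z coordinates are averaged
    across faces; no warping or interpolation is applied.
    """
    stack = np.stack(depth_maps)        # shape: (n_faces, ny, nx)
    return np.nanmean(stack, axis=0)    # NaN marks points missing in a scan

# Hypothetical input: 14 registered scans on a common 128x128 grid.
rng = np.random.default_rng(5)
base = np.fromfunction(lambda y, x: np.hypot(x - 64, y - 64) / 10, (128, 128))
faces = [base + rng.normal(0, 0.3, base.shape) for _ in range(14)]
archetype = average_face(faces)
print(archetype.shape)
```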

  20. Average and dispersion of the luminosity-redshift relation in the concordance model

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Dayan, I. [DESY Hamburg (Germany). Theory Group; Gasperini, M. [Bari Univ. (Italy). Dipt. di Fisica; Istituto Nazionale di Fisica Nucleare, Bari (Italy); Marozzi, G. [College de France, 75 - Paris (France); Geneve Univ. (Switzerland). Dept. de Physique Theorique and CAP; Nugier, F. [Ecole Normale Superieure CNRS, Paris (France). Laboratoire de Physique Theorique; Veneziano, G. [College de France, 75 - Paris (France); CERN, Geneva (Switzerland). Physics Dept.; New York Univ., NY (United States). Dept. of Physics

    2013-03-15

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order 10⁻³-10⁻⁵, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account the appropriate corrections arising in the non-linear regime, we predict an irreducible scatter of the data approaching the 10% level which, for limited statistics, will necessarily limit the attainable precision. The predicted dispersion appears to be in good agreement with current observational estimates of the distance-modulus variance due to Doppler and lensing effects (at low and high redshifts, respectively), and represents a challenge for future precision measurements.

  1. Notes on Well-Posed, Ensemble Averaged Conservation Equations for Multiphase, Multi-Component, and Multi-Material Flows

    International Nuclear Information System (INIS)

    Ray A. Berry

    2005-01-01

    At the INL researchers and engineers routinely encounter multiphase, multi-component, and/or multi-material flows. Some examples include: reactor coolant flows; molten corium flows; dynamic compaction of metal powders; spray forming and thermal plasma spraying; plasma quench reactors; subsurface flows, particularly in the vadose zone; internal flows within fuel cells; black liquor atomization and combustion; wheat-chaff classification in combine harvesters; and Generation IV pebble bed, high temperature gas reactors. The complexity of these flows dictates that they be examined in an averaged sense. Typically one would begin with known (or at least postulated) microscopic flow relations that hold on the "small" scale. These include continuum level conservation of mass, balance of species mass and momentum, conservation of energy, and a statement of the second law of thermodynamics, often in the form of an entropy inequality (such as the Clausius-Duhem inequality). The averaged or macroscopic conservation equations and entropy inequalities are then obtained from the microscopic equations through suitable averaging procedures. At this stage a stronger form of the second law may also be postulated for the mixture of phases or materials. To render the evolutionary material flow balance system unique, constitutive equations and phase or material interaction relations are introduced from experimental observation, or by postulation, through strict enforcement of the constraints or restrictions resulting from the averaged entropy inequalities. These averaged equations form the governing equation system for the dynamic evolution of these mixture flows. Most commonly, the averaging technique utilized is either volume or time averaging or a combination of the two. The flow restrictions required for volume and time averaging to be valid can be severe, and violations of these restrictions are often found. A more general, less restrictive (and far less commonly used) type of averaging is known as ensemble averaging

  2. Development of an ICF-based eligibility procedure for education in Switzerland.

    Science.gov (United States)

    Hollenweger, Judith

    2011-05-31

    Starting in January 2011, Switzerland will implement a multidimensional, context-sensitive procedure to establish eligibility in education systems. This paper provides a brief overview of the different eligibility-related practices with a special focus on children with disabilities. The paper then outlines the philosophical and conceptual framework of the eligibility procedure based on the International Classification of Functioning, Disability and Health, and the UN Convention on the Rights of Persons with Disability. The different components and methodology applied to organise information in the process towards establishing eligibility are also presented. Finally, some observations are made regarding transparent and just applications of the eligibility procedure, and the implementation of this new eligibility procedure.

  3. Development of a leak-before-break procedure for pressurised components

    International Nuclear Information System (INIS)

    Langston, D.B.

    1989-05-01

    For pressurised components there is an increasing interest in the use of leak-before-break arguments to show that defects will behave in a "failsafe" manner by growing in such a way as to cause a detectable leak before a disruptive failure of the pressure boundary can occur. The CEGB operates a wide variety of plant and has recognised the need for a flexible leak-before-break procedure which can be applied in a variety of different situations, including part-through defects. This paper describes the development of such a procedure and discusses some of the key aspects of the leak-before-break procedure. (author)

  4. Gap nonunion of tibia treated by Huntington′s procedure

    Directory of Open Access Journals (Sweden)

    Zile S Kundu

    2012-01-01

    Full Text Available Background: Gap nonunion, which may occur following trauma or infection, is a challenging problem to treat. Patients with an intact or united fibula, preserved sensation in the sole, and adequate vascularity were managed by tibialization (medialization) of the fibula (Huntington's procedure) to restore continuity of the tibia. The goal of this retrospective analysis is to report the mid-term results following Huntington's procedure. Materials and Methods: 22 patients (20 males and 2 females, aged 16-34 years) with segmental tibial loss of more than 6 cm were operated on for tibialization of the fibula. The procedure was two-staged in seven patients and single-staged in the remaining 15, where the lateral aspect of the leg was relatively supple. In the two-staged procedure, the distal tibiofibular synostosis was performed six to eight weeks after the proximal procedure. Protected weightbearing was started in a long leg cast six to eight weeks after the second stage and continued for six to eight months, followed by the use of a brace. Results: The fibula started showing signs of hypertrophy within the first year after the procedure and was more than double in breadth after the four-year period. Full and unprotected weightbearing on the operated leg was achieved at an average time of 16 months. At the final followup, ten patients were very satisfied, seven satisfied, and five fairly satisfied. One patient had persistent nonunion at the proximal synostotic site even after bone grafting and secondary fixation. Conclusion: Huntington's procedure is a safe and simple salvage procedure and remains an excellent option for treating difficult infected nonunion of the tibia in selected indications.

  5. Medication-related risks of CT-procedures in neonates and young infants

    International Nuclear Information System (INIS)

    Abel, M.

    1985-01-01

    In very young pediatric patients CT-investigations require sedative-hypnotic drug treatment to ensure complete immobilisation during scanning. The case report of a neonate with respiratory arrest after a repeated CT-premedication underlines the high risk of these procedures, especially in patients with central nervous system disorders. We compared organisational requirements, risks and complication rates of 146 oral and intramuscular promazine medications for CT-scanning of the head in 146 infants and neonates (93.8% adequate sedation response) to those of reported alternative methods. Oral promazine proved to be a very effective and safe medication (average dosage in 57 patients without complications: 5.2 mg/kg body weight/90 minutes before CT-scanning; 96% successful sedation procedures) in comparison to 89 patients with i.m. promazine (average dosage: 2.3 mg/kg body weight/45 min before CT with 92% adequate sedations but a complication rate of 7.9%). For neuropediatric examinations of outpatients fast recovery and EEG-compatibility are further important advantages of oral promazine CT-medication. (orig.) [de

  6. Procedural virtual reality simulation in minimally invasive surgery.

    Science.gov (United States)

    Våpenstad, Cecilie; Buzink, Sonja N

    2013-02-01

    Simulation of procedural tasks has the potential to bridge the gap between basic skills training outside the operating room (OR) and performance of complex surgical tasks in the OR. This paper provides an overview of procedural virtual reality (VR) simulation currently available on the market and presented in scientific literature for laparoscopy (LS), flexible gastrointestinal endoscopy (FGE), and endovascular surgery (EVS). An online survey was sent to companies and research groups selling or developing procedural VR simulators, and a systematic search was done in the PUBMED and SCOPUS databases for scientific publications presenting or applying VR simulators to train or assess procedural skills. The results of five simulator companies were included in the survey. In the literature review, 116 articles were analyzed (45 on LS, 43 on FGE, 28 on EVS), presenting a total of 23 simulator systems. The companies stated that they altogether offer 78 procedural tasks (33 for LS, 12 for FGE, 33 for EVS), of which 17 were also found in the literature review. Although study type and outcome measures vary between the three fields, approximately 90% of the studies presented in the retrieved publications for LS found convincing evidence to confirm the validity or added value of procedural VR simulation. This was the case in approximately 75% for FGE and EVS. Procedural training using VR simulators has been found to improve clinical performance. There is nevertheless a large number of simulated procedural tasks that have not been validated. Future research should focus on the optimal use of procedural simulators in the most effective training setups and further investigate the benefits of procedural VR simulation to improve clinical outcome.

  7. All-Arthroscopic Revision Eden-Hybinette Procedure for Failed Instability Surgery: Technique and Preliminary Results.

    Science.gov (United States)

    Giannakos, Antonios; Vezeridis, Peter S; Schwartz, Daniel G; Jany, Richard; Lafosse, Laurent

    2017-01-01

    To describe the technique of an all-arthroscopic Eden-Hybinette procedure in the revision setting for treatment of a failed instability procedure, particularly after failed Latarjet, as well as to present preliminary results of this technique. Between 2007 and 2011, 18 shoulders with persistent instability after failed instability surgery were treated with an arthroscopic Eden-Hybinette technique using an autologous bicortical iliac crest bone graft. Of 18 patients, 12 (9 men, 3 women) were available for follow-up. The average follow-up was 28.8 months (range, 15 to 60 months). A Latarjet procedure was performed as an index surgery in 10 patients (83%). Two patients (17%) had a prior arthroscopic Bankart repair. Eight patients (67%) obtained a good or excellent result, whereas 4 patients (33%) reported a fair or poor result. Seven patients (58%) returned to sport activities. A positive apprehension test persisted in 5 patients (42%), including 2 patients (17%) with recurrent subluxations. The Rowe score increased from 30.00 to 78.33 points, and the Western Ontario Shoulder Instability Index score showed a good result of 28.71% (603 points). The average anterior flexion was 176° (range, 150° to 180°), and the average external rotation was 66° (range, 0° to 90°). Two patients (16.67%) showed a progression of glenohumeral osteoarthritic changes, with each patient increasing by one stage in the Samilson-Prieto classification. All 4 patients (33%) with a fair or poor result had a nonunion identified on postoperative computed tomography scan. An all-arthroscopic Eden-Hybinette procedure in the revision setting for failed instability surgery, although technically demanding, is a safe, effective, and reproducible technique. Although the learning curve is considerable, this procedure offers all the advantages of arthroscopic surgery and allows reconstruction of glenoid defects and restoration of shoulder stability in this challenging patient population. In our hands, this procedure yields good results.

  8. Radiation dose electrophysiology procedures

    International Nuclear Information System (INIS)

    Hernandez-Armas, J.; Rodriguez, A.; Catalan, A.; Hernandez Armas, O.; Luque Japon, L.; Moral, S.; Barroso, L.; Rfuez-Hdez, R.

    2006-01-01

    The aim of this paper has been to measure and analyse some of the parameters directly related to the doses given to patients in two electrophysiology procedures: diagnosis and ablation with radiofrequency. 16 patients were considered in this study; 13 of them had an ablation with radiofrequency at the Unit of Electrophysiology at the University Hospital of the Canaries, La Laguna, Tenerife. In some of the ablation cases skin doses were higher than 2 Gy (the threshold of some deterministic effects); the average value was 1.1 Gy. The personal doses, measured under the lead apron, for the physician and nurses were 4 and 3 microsievert, respectively. These results emphasised the necessity of radiation protection measures in order to reduce, as much as possible, the doses to patients. (Author)

  9. Conformational energy calculations on polypeptides and proteins: use of a statistical mechanical procedure for evaluating structure and properties.

    Science.gov (United States)

    Scheraga, H A; Paine, G H

    1986-01-01

    We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.
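
    As a generic illustration of the statistical-mechanical idea, sampling conformations with Boltzmann weight and averaging over them, the sketch below runs a plain Metropolis walk over two dihedral angles on an invented energy surface. It is not the ECEPP-style potential or the adaptive importance-sampling algorithm of the work above.

```python
import numpy as np

rng = np.random.default_rng(11)

def energy(phi, psi):
    """Hypothetical two-dihedral energy landscape (arbitrary units)."""
    return 2.0*np.cos(phi) + 1.5*np.cos(2.0*psi) + 0.5*np.cos(phi - psi)

kT = 0.6
phi, psi = 0.0, 0.0
samples = []
for step in range(50_000):
    cand = (phi + rng.normal(0, 0.3), psi + rng.normal(0, 0.3))
    dE = energy(*cand) - energy(phi, psi)
    if dE <= 0 or rng.random() < np.exp(-dE/kT):   # Metropolis acceptance
        phi, psi = cand
    if step >= 5_000:                              # discard burn-in
        samples.append((phi, psi))

samples = np.array(samples)
# Equilibrium (Boltzmann-weighted) average of the energy:
print("mean energy:", energy(samples[:, 0], samples[:, 1]).mean())
```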

  10. Dosimetry in medical specialist in procedures of interventionist radiology

    International Nuclear Information System (INIS)

    Gaona, E.; Vazquez V, J. A.; Rivera M, T.; Izeta G, A. C.; Azorin V, J. C.; Arreola, M.

    2014-08-01

    This work presents experimental results on the doses received in different body parts by medical specialists during implantation procedures of permanent cardiac pacemakers, measured with thermoluminescent dosimeters. The medical personnel were monitored in ten intervention procedures, recorded according to procedure type, pathological indication, fluoroscopy time, and machine settings, generating estimates of the patient doses. The doses to the extremities of the cardiologist were measured by means of thermoluminescent dosimeter (TLD) rings. The highest readings on the hands were at the index fingers of the left and the right hand. Mean skin doses at the eyes are reported, together with the dose received in the thyroid gland and fingers of the cardiologists during each type of intervention procedure. The results represent the integrated dose to the cardiologist received during the implantation procedures of permanent cardiac pacemakers in the same patient. For a mean fluoroscopy time of 70 minutes per patient, the mean skin dose to the right and left hands, measured under the glove, was 1.4 mSv. In conclusion, the average dose for the dosimeters of the thyroid gland and forehead varied from 0.41 to 1.14 mSv per study. Exposure to X-rays is an increasingly important consideration in the development of minimally invasive systematic procedures, including angiography and catheterization, for both worker and patient. (Author)

  11. 50 CFR 23.32 - How do I apply for a U.S. CITES document?

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false How do I apply for a U.S. CITES document... FAUNA AND FLORA (CITES) Application Procedures, Criteria, and Conditions § 23.32 How do I apply for a U.S. CITES document? (a) To apply for a U.S. CITES document, you must complete a standard application...

  12. Regulations and Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    Young, Lydia J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2011-07-25

    The purpose of the Regulations and Procedures Manual (RPM) is to provide LBNL personnel with a reference to University and Lawrence Berkeley National Laboratory (LBNL or Laboratory) policies and regulations by outlining normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory organizations. Much of the information in this manual has been condensed from detail provided in LBNL procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. RPM sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the LBNL organization responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which organization is responsible for a policy, please contact Requirements Manager Lydia Young or the RPM Editor.

  13. Regulations and Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    Young, Lydia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2010-09-30

    The purpose of the Regulations and Procedures Manual (RPM) is to provide Laboratory personnel with a reference to University and Lawrence Berkeley National Laboratory policies and regulations by outlining the normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory departments. Much of the information in this manual has been condensed from detail provided in Laboratory procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. The sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the department responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which department should be called, please contact the Associate Laboratory Director of Operations.

  14. An Analysis of Methods Section of Research Reports in Applied Linguistics

    OpenAIRE

    Patrícia Marcuzzo

    2011-01-01

    This work aims at identifying the analytical categories and research procedures adopted in the analysis of research articles in Applied Linguistics/EAP in order to propose a systematization of the research procedures in Genre Analysis. For that purpose, 12 research reports and interviews with four authors were analyzed. The analysis showed that the studies are concentrated on the investigation of the macrostructure or on the microstructure of research articles in different fields. Studies about th...

  15. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
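
    For reference, a minimal sketch of the first-order detrending moving average estimate of H (the higher-order variant discussed above replaces the moving average with a moving polynomial fit). The window set and test signal are illustrative choices.

```python
import numpy as np

def dma_hurst(y, windows=(4, 8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by the detrending moving average (DMA)
    method: sigma_DMA(n) ~ n**H, using a simple backward moving average."""
    sigmas = []
    for n in windows:
        kernel = np.ones(n)/n
        trend = np.convolve(y, kernel, mode="valid")   # moving average
        resid = y[n-1:] - trend                        # align window ends
        sigmas.append(np.sqrt(np.mean(resid**2)))
    slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
    return slope

# Brownian-like test signal: cumulative sum of white noise has H = 0.5.
rng = np.random.default_rng(2)
bm = np.cumsum(rng.normal(size=100_000))
print(f"estimated H = {dma_hurst(bm):.2f}")   # expect ~0.5
```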

  16. 78 FR 19981 - Federal Open Market Committee; Rules of Procedure

    Science.gov (United States)

    2013-04-03

    .... FOR FURTHER INFORMATION CONTACT: Alicia S. Foster, Senior Special Counsel (202-452-5289), Legal... provisions of the Administrative Procedure Act do not apply to the amended rule. See 5 U.S.C. 553(b) and (d...

  17. Yahtzee: an anonymized group level matching procedure.

    Directory of Open Access Journals (Sweden)

    Jason J Jones

    Full Text Available Researchers often face the problem of needing to protect the privacy of subjects while also needing to integrate data that contains personal information from diverse data sources. The advent of computational social science and the enormous amount of data about people that is being collected makes protecting the privacy of research subjects ever more important. However, strict privacy procedures can hinder the process of joining diverse sources of data that contain information about specific individual behaviors. In this paper we present a procedure to keep information about specific individuals from being "leaked" or shared in either direction between two sources of data without need of a trusted third party. To achieve this goal, we randomly assign individuals to anonymous groups before combining the anonymized information between the two sources of data. We refer to this method as the Yahtzee procedure, and show that it performs as predicted by theoretical analysis when we apply it to data from Facebook and public voter records.

  18. Yahtzee: an anonymized group level matching procedure.

    Science.gov (United States)

    Jones, Jason J; Bond, Robert M; Fariss, Christopher J; Settle, Jaime E; Kramer, Adam D I; Marlow, Cameron; Fowler, James H

    2013-01-01

    Researchers often face the problem of needing to protect the privacy of subjects while also needing to integrate data that contains personal information from diverse data sources. The advent of computational social science and the enormous amount of data about people that is being collected makes protecting the privacy of research subjects ever more important. However, strict privacy procedures can hinder the process of joining diverse sources of data that contain information about specific individual behaviors. In this paper we present a procedure to keep information about specific individuals from being "leaked" or shared in either direction between two sources of data without need of a trusted third party. To achieve this goal, we randomly assign individuals to anonymous groups before combining the anonymized information between the two sources of data. We refer to this method as the Yahtzee procedure, and show that it performs as predicted by theoretical analysis when we apply it to data from Facebook and public voter records.
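
    A minimal sketch of the central idea, random assignment of individuals to anonymous groups followed by exchange of group-level aggregates only, is given below. The seed-sharing convention, group count, and attribute names are illustrative assumptions, not the authors' exact protocol.

```python
import random
from collections import defaultdict

def yahtzee_groups(ids, n_groups, seed):
    """Randomly assign identifiers to anonymous groups. Both data holders
    run this with the same shared seed, then exchange only group-level
    aggregates, so no individual-level information crosses over."""
    rng = random.Random(seed)
    return {i: rng.randrange(n_groups) for i in sorted(ids)}

def group_counts(assignment, flags):
    """Aggregate a per-individual binary attribute (e.g. 'voted') by group."""
    counts = defaultdict(lambda: [0, 0])        # group -> [n, n_positive]
    for i, g in assignment.items():
        counts[g][0] += 1
        counts[g][1] += flags.get(i, 0)
    return dict(counts)

# Hypothetical example: 10 individuals, one side holds a behavior flag.
ids = [f"person{k}" for k in range(10)]
flags = {i: k % 2 for k, i in enumerate(ids)}
assignment = yahtzee_groups(ids, n_groups=3, seed=2013)
print(group_counts(assignment, flags))
```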

  19. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  20. Nonlinear Eddy Viscosity Models applied to Wind Turbine Wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Sørensen, Niels N.; Réthoré, Pierre-Elouan

    2013-01-01

    The linear k−ε eddy viscosity model and modified versions of two existing nonlinear eddy viscosity models are applied to single wind turbine wake simulations using a Reynolds Averaged Navier-Stokes code. Results are compared with field wake measurements. The nonlinear models give better results...

  1. Statistical aspects of autoregressive-moving average models in the assessment of radon mitigation

    International Nuclear Information System (INIS)

    Dunn, J.E.; Henschel, D.B.

    1989-01-01

    Radon values, as reflected by hourly scintillation counts, seem dominated by major, pseudo-periodic, random fluctuations. This methodological paper reports a moderate degree of success in modeling these data using relatively simple autoregressive-moving average models to assess the effectiveness of radon mitigation techniques in existing housing. While accounting for the natural correlation of successive observations, familiar summary statistics such as steady state estimates, standard errors, confidence limits, and tests of hypothesis are produced. The Box-Jenkins approach is used throughout. In particular, intervention analysis provides an objective means of assessing the effectiveness of an active mitigation measure, such as a fan off/on cycle. Occasionally, failure to declare a significant intervention has suggested a means of remedial action in the data collection procedure
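
    An intervention analysis of this kind can be sketched with an ARMA fit that includes a step regressor marking the mitigation change. The data below are simulated (AR(1) noise plus a step when a fan turns on at a hypothetical hour), and statsmodels is used only as one convenient implementation, not necessarily the authors' software.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulated hourly radon counts: AR(1) noise around a level that drops
# by 40 counts when a mitigation fan switches on at hour 500.
rng = np.random.default_rng(9)
n, fan_on = 1000, 500
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7*noise[t-1] + rng.normal(0, 5.0)
step = (np.arange(n) >= fan_on).astype(float)   # intervention indicator
counts = 100.0 - 40.0*step + noise

fit = ARIMA(counts, exog=step, order=(1, 0, 0)).fit()
print(fit.params)    # exog coefficient estimates the mitigation effect (~ -40)
print(fit.pvalues)   # objective test of whether the intervention is significant
```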

  2. Determination of averaged axisymmetric flow surfaces according to results obtained by numerical simulation of flow in turbomachinery

    Directory of Open Access Journals (Sweden)

    Bogdanović-Jovanović Jasmina B.

    2012-01-01

    Full Text Available Given the increasing need for energy saving worldwide, the design process of turbomachinery, as an essential part of thermal and hydro-energy systems, moves in the direction of enlarging efficiency. Therefore, the optimization of turbomachinery design strongly affects the energy efficiency of the entire system. In the design process of turbomachinery blade profiling, the model of axisymmetric fluid flow is commonly used in technical practice, even though this model suits only profile cascades with an infinite number of infinitely thin blades. The actual flow in turbomachinery profile cascades is not axisymmetric, but it can be fictively reduced to an axisymmetric flow by averaging the flow parameters in the blade passages according to the circular coordinate. Using numerical simulations of flow in turbomachinery runners, operating parameters can be determined preliminarily. Furthermore, using the numerically obtained flow parameters in the blade passages, averaged axisymmetric flow surfaces in blade profile cascades can also be determined. The method of determination of averaged flow parameters and averaged meridian streamlines is presented in this paper, using the integral continuity equation for averaged flow parameters. With the results thus obtained, every designer is able to compare the obtained averaged flow surfaces with axisymmetric flow surfaces, as well as the specific work of elementary stages, which are used in the procedure of blade design. Numerical simulations of flow in an exemplary axial flow pump, used as a part of a thermal power plant cooling system, were performed using Ansys CFX. [Project of the Ministry of Science of the Republic of Serbia, no. TR33040: Revitalization of existing and designing of new micro and mini hydropower plants (from 100 kW to 1000 kW) in the territory of South and Southeast Serbia]
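
    The circumferential averaging itself reduces to collapsing the circular coordinate of the simulated field, either by a plain arithmetic mean or by a mass-flow-weighted mean in the spirit of the integral continuity equation. A sketch on an invented grid (array shapes and values are hypothetical):

```python
import numpy as np

def circumferential_average(q, theta_axis=-1):
    """Plain arithmetic average of a flow quantity q(r, z, theta) over the
    circular coordinate, producing a fictive axisymmetric field q_avg(r, z)."""
    return q.mean(axis=theta_axis)

def mass_averaged(q, w, theta_axis=-1):
    """Mass-flow-weighted circumferential average using axial velocity w,
    in the spirit of the integral continuity equation."""
    return (q*w).sum(axis=theta_axis)/w.sum(axis=theta_axis)

# Hypothetical CFD sample: 30 radial x 40 axial x 64 circumferential nodes.
rng = np.random.default_rng(4)
w = 5.0 + rng.normal(0, 0.5, (30, 40, 64))                   # axial velocity
p = (101325 + 200*np.sin(np.linspace(0, 2*np.pi, 64))        # pressure field
     + rng.normal(0, 10, (30, 40, 64)))
print(circumferential_average(p).shape, mass_averaged(p, w).shape)
```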

  3. Improvements on computerized procedure system of advanced power reactor 1400 MWe

    International Nuclear Information System (INIS)

    Seong, Nokyu; Jung, Yeonsub; Sung, Chanho; Kang, Sungkon

    2017-01-01

    Plant procedures are instructions that help operators in monitoring, decision making, and controlling Nuclear Power Plants (NPPs). While plant procedures have conventionally been paper-based, computer-based procedures are being implemented in many nuclear power plants to reduce the drawbacks of paper-based procedures. The Computerized Procedure System (CPS) designed by Korea Hydro and Nuclear Power Central Research Institute (KHNP CRI) is one of the human-system interfaces (HSIs) in the digitalized Main Control Room (MCR) of APR1400 (Advanced Power Reactor 1400 MWe). Currently, CPS is being applied to nuclear power plants under construction in Korea and to Barakah NPP units 1, 2, 3 and 4 of the United Arab Emirates. The CPS has many advantages for performing procedures in a fully digitalized MCR. First, CPS presents the procedure flow to operators as a logic diagram. The operator can easily follow the procedure flow from a previous instruction to the next instruction and can also see the relation between a parent instruction and child instructions, such as AND, OR and SEQUENCE logics. Second, CPS has three logic-based functions: procedure entry condition monitoring logic, continuously applied step (CAS) re-execution monitoring logic, and auto evaluation logic on instructions. For example, when the reactor trips, CPS provides a popup message to open the standard post-trip actions procedure, by evaluating the entry condition logic that the procedure writer defined in the writing process. Third, CPS can directly display the task information related to instructions, such as valves, pumps, process parameters, etc., and the operator can call up the system display related to procedure execution. If an operator clicks the system display link, the related system display pops up on the right-side monitor of the CPS display. Lastly, CPS supports the synchronization of the procedure among the operators. This synchronization function helps operators achieve the goal of the procedure and improve the situation

  4. A framework for the evaluation of new interventional procedures.

    Science.gov (United States)

    Lourenco, Tania; Grant, Adrian M; Burr, Jennifer M; Vale, Luke

    2012-03-01

    The introduction of new interventional procedures is less regulated than that of other health technologies such as pharmaceuticals. Decisions are often taken on evidence of efficacy and short-term safety from small-scale, usually observational studies. This reflects the particular challenges of evaluating interventional procedures - the extra facets of skill and training and the difficulty of defining a 'new' technology. Currently, there is no framework to evaluate new interventional procedures before they become available in clinical practice, as opposed to new pharmaceuticals. This paper proposes a framework to guide the evaluation of a new interventional procedure. A framework was developed consisting of a four-stage progressive evaluation for a new interventional procedure: Stage 1: Development; Stage 2: Efficacy and short-term safety; Stage 3: Effectiveness and cost-effectiveness; and Stage 4: Implementation. The framework also suggests the types of studies or data collection methods that can be used to satisfy each stage. This paper takes a first step toward a framework for generating evidence on new interventional procedures. The difficulties and limitations of applying such a framework are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Model averaging in the analysis of leukemia mortality among Japanese A-bomb survivors

    International Nuclear Information System (INIS)

    Richardson, David B.; Cole, Stephen R.

    2012-01-01

    Epidemiological studies often include numerous covariates, with a variety of possible approaches to control for confounding of the association of primary interest, as well as a variety of possible models for the exposure-response association of interest. Walsh and Kaiser (Radiat Environ Biophys 50:21-35, 2011) advocate a weighted averaging of the models, where the weights are a function of overall model goodness of fit and degrees of freedom. They apply this method to analyses of radiation-leukemia mortality associations among Japanese A-bomb survivors. We caution against such an approach, noting that the proposed model averaging approach prioritizes the inclusion of covariates that are strong predictors of the outcome but may be irrelevant as confounders of the association of interest, and penalizes adjustment for covariates that are confounders of the association of interest but may contribute little to overall model goodness of fit. We offer a simple illustration of how this approach can lead to biased results. The proposed model averaging approach may also be suboptimal as a way to handle competing model forms for an exposure-response association of interest, given adjustment for the same set of confounders; alternative approaches, such as hierarchical regression, may provide a more useful way to stabilize risk estimates in this setting. (orig.)

  6. The Pajarito Site operating procedures for the Los Alamos Critical Experiments Facility

    International Nuclear Information System (INIS)

    Malenfant, R.E.

    1991-12-01

    Operating procedures consistent with DOE Order 5480.6, and the American National Standard Safety Guide for the Performance of Critical Experiments are defined for the Los Alamos Critical Experiments Facility (LACEF) of the Los Alamos National Laboratory. These operating procedures supersede and update those previously published in 1983 and apply to any criticality experiment performed at the facility. 11 refs

  7. Administrative Procedure Act and mass procedures (illustrated by the nuclear licensing procedure)

    International Nuclear Information System (INIS)

    Naumann, R.

    1977-01-01

    The report deals with the administrative procedure law of 25 May 1976 of the Federal Government, especially with its implications for the administrative procedures for licensing nuclear power plants, as far as so-called mass procedures are concerned. (UN) [de

  8. Maze Procedures for Atrial Fibrillation, From History to Practice.

    Science.gov (United States)

    Kik, Charles; Bogers, Ad J J C

    2011-10-01

    Atrial fibrillation may result in significant symptoms, (systemic) thrombo-embolism, and tachycardia-induced cardiomyopathy with cardiac failure, and consequently be associated with significant morbidity and mortality. Nowadays symptomatic atrial fibrillation can be treated with catheter-based ablation, surgical ablation, or hybrid approaches. In this setting a fairly large number of surgical approaches and procedures have been described and are being practised. It should be clear that the Cox-maze procedure resulted from evidence and experience built up in successive steps, whereas some present surgical approaches and techniques are based only on technical feasibility with limited experience, rather than on a consistent methodology. Some of the issues still under debate are whether the maze procedure can be limited to the left atrium or even to isolation of the pulmonary veins or whether bi-atrial procedures are indicated, whether or not cardiopulmonary bypass should be applied, and which route of exposure facilitates an optimal result. In addition, maze procedures are not guided by electrophysiological mapping; at least in theory, not all lesions of the maze procedure are necessary in all patients. A history of, and aspects of current practice in, the surgical treatment of atrial fibrillation is presented.

  9. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
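    For contrast with the geographic variant, a minimal sketch of the standard pairwise gossip baseline the paper improves upon (illustrative only; the proposed algorithm additionally uses geographic routing and resampling):

        import random

        def gossip_average(values, neighbors, rounds=10000):
            # Standard pairwise gossip: repeatedly pick a random node and one of its
            # neighbors and replace both values by their mean. Since each exchange
            # preserves the sum, every node converges to the global average.
            x = list(values)
            for _ in range(rounds):
                i = random.randrange(len(x))
                j = random.choice(neighbors[i])
                x[i] = x[j] = (x[i] + x[j]) / 2.0
            return x

        # Ring of 8 nodes; all values converge toward the true average 3.5
        nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
        print(gossip_average([float(i) for i in range(8)], nbrs))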

  10. Procedure for obtaining visas for Switzerland and France

    CERN Multimedia

    Relations with the Host States Service

    2011-01-01

    SIGNATURE RIGHTS - In accordance with the Status Agreements with CERN, Switzerland and France facilitate the entry of members of the Organization’s personnel on to their territories.  Where relevant, detailed procedures for obtaining visas apply. Within the framework of those procedures, only the following individuals are authorised to initiate the note verbale procedure as well as to sign the Official Invitation Letters and the Conventions d’accueil.  Kirsti ASPOLA (PH – CMO) Oliver BRÜNING (BE – ABP) Michelle CONNOR (PH – AGS) Patrick FASSNACHT (PH-ADO) David FOSTER (IT – DI) Nathalie GRÜB (PH – AGS) Tjitske KEHRER (DG-DI) Tadeusz KURTYKA (DG – PRJ) Cécile NOELS (DG – PRJ) Maria QUINTAS (HR – SPS) Kate RICHARDSON (PH-AGS) Jeanne ROSTANT (PH – AGS) José SALICIO-DIEZ (PH – AGS) Ulla TIHINEN (PH – A...

  11. Values of average daily gain of swine posted to commercial hybrids on pork in youth phase depending on the type

    Directory of Open Access Journals (Sweden)

    Diana Marin

    2013-10-01

    Full Text Available Values of average daily gain of weight are calculated as the ratio of total weight gain to the total number of days of feeding. In the case of the four intensively raised commercial hybrids, the applied test showed no statistically significant differences in average daily gain among the hybrids, although the lowest values of this index were recorded in hybrid B (with Large White as the terminal boar).
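    As a worked instance of the definition above (the numbers are illustrative, not data from the study):

        # Average daily gain (ADG) = total weight gain / total days on feed
        start_kg, end_kg, days = 30.0, 102.0, 90   # hypothetical feeding period
        adg = (end_kg - start_kg) / days
        print(f"ADG = {adg:.3f} kg/day")           # 0.800 kg/day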

  12. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    Energy Technology Data Exchange (ETDEWEB)

    Ciorsac, Alecu, E-mail: aleciorsac@yahoo.co [Politehnica University of Timisoara, Department of Physical Education and Sport, 2 P-ta Victoriei, 300006, Timisoara (Romania); Craciun, Dana, E-mail: craciundana@gmail.co [Teacher Training Department, West University of Timisoara, 4 Boulevard V. Pirvan, Timisoara, 300223 (Romania); Ostafe, Vasile, E-mail: vostafe@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania); Isvoran, Adriana, E-mail: aisvoran@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania)

    2011-04-15

    Research highlights: We focus our study on the glycolytic enzymes. We reveal correlations of hydrophobicity and flexibility along their chains. We also reveal fractal aspects of the glycolytic enzymes' structures and surfaces. The glycolytic enzyme sequences are not random. Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were treated as spatial series and were analyzed by spectral analysis, detrended fluctuation analysis, and Hurst coefficient calculation. The results agree that there are both short-range and long-range correlations of hydrophobicity and average flexibility within the investigated sequences, the short-range correlations being stronger and indicating that local interactions are the most important for protein folding. This correlation is also reflected in the fractal nature of the structures of the investigated proteins.
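    As a sketch of the kind of analysis described, here is a naive rescaled-range (R/S) estimate of the Hurst coefficient for a numeric sequence such as per-residue hydrophobicity; this is a generic illustration, not the authors' exact pipeline:

        import numpy as np

        def hurst_rs(x):
            # Naive rescaled-range (R/S) Hurst estimate: slope of log(R/S) vs log(window)
            x = np.asarray(x, dtype=float)
            n = len(x)
            log_rs, log_n = [], []
            for size in [n // k for k in (1, 2, 4, 8) if n // k >= 8]:
                rs = []
                for start in range(0, n - size + 1, size):
                    seg = x[start:start + size]
                    dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
                    r, s = dev.max() - dev.min(), seg.std()
                    if s > 0:
                        rs.append(r / s)
                if rs:
                    log_rs.append(np.log(np.mean(rs)))
                    log_n.append(np.log(size))
            return np.polyfit(log_n, log_rs, 1)[0]      # slope ~ H

        # Uncorrelated noise should give H near 0.5; correlated sequences deviate from it
        print(hurst_rs(np.random.randn(1024)))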

  13. Fingertip replantation using the subdermal pocket procedure.

    Science.gov (United States)

    Lin, Tsan-Shiun; Jeng, Seng-Feng; Chiang, Yuan-Cheng

    2004-01-01

    Restoration of finger length and function are the goals of replantation after fingertip amputation. Methods include microsurgical replantation and nonmicrosurgical replantation, such as composite graft techniques. To increase the survival rates for composite grafts, the subcutaneous pocket procedure has been used as a salvage procedure. The subdermal pocket procedure, which is a modification of the subcutaneous pocket procedure, was used for replantation of 17 fingertips in 16 consecutive patients. Eight fingertips experienced guillotine injuries and the other nine fingertips experienced crush injuries. Revascularization of one digital artery without available venous outflow was performed for six fingers, and composite graft techniques were used for the other 11 fingers. The success rate was 16 of 17 cases. The difference in success rates for guillotine versus crush injuries was statistically significant. Comparison of patients with arterial anastomoses and patients without arterial anastomoses also indicated a statistically significant difference. Thirteen fingertips survived completely. One finger, demonstrating complete loss and early termination of the pocketing procedure, was amputated on the eighth postoperative day. Two fingers were partially lost because of severe crushing injuries. One finger demonstrated partial loss of more than one quarter of the fingertip, which required secondary revision, because the patient was a heavy smoker. The pocketing period was 8 +/- 1 days (mean +/- SD, n = 6) for the fingers revascularized with one digital arterial anastomosis and 13.3 +/- 1.9 days (n = 10) for the fingers successfully replanted with composite graft techniques. The mean active range of motion of the interphalangeal joint of the three thumbs was 65 +/- 5 degrees, and that of the distal interphalangeal joint of the other 11 fingers was 51 +/- 11 degrees. The static two-point discrimination result was 6.4 +/- 1.0 mm (n = 14) after an average of 11 +/- 5 months

  14. Experimental modelling of fragmentation applied to volcanic explosions

    Science.gov (United States)

    Haug, Øystein Thordén; Galland, Olivier; Gisler, Galen R.

    2013-12-01

    Explosions during volcanic eruptions cause fragmentation of magma and host rock, resulting in fragments with sizes ranging from boulders to fine ash. The products can be described by fragment size distributions (FSD), which commonly follow power laws with exponent D. The processes that lead to power-law distributions and the physical parameters that control D remain unknown. We developed a quantitative experimental procedure to study the physics of the fragmentation process through time. The apparatus consists of a Hele-Shaw cell containing a layer of cohesive silica flour that is fragmented by a rapid injection of pressurized air. The evolving fragmentation of the flour is monitored with a high-speed camera, and the images are analysed to obtain the evolution of the number of fragments (N), their average size (A), and the FSD. Using the results from our image-analysis procedure, we find transient empirical laws for N, A and the exponent D of the power-law FSD as functions of the initial air pressure. We show that our experimental procedure is a promising tool for unravelling the complex physics of fragmentation during phreatomagmatic and phreatic eruptions.
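    A standard way to estimate the exponent D of a power-law fragment size distribution is a maximum-likelihood fit above a lower cutoff; a minimal sketch with synthetic data (illustrative only, not the authors' image-analysis procedure):

        import numpy as np

        def powerlaw_exponent(sizes, s_min):
            # Continuous maximum-likelihood estimate for p(s) ~ s^(-D), s >= s_min:
            # D = 1 + n / sum(log(s / s_min))
            s = np.asarray([v for v in sizes if v >= s_min], dtype=float)
            return 1.0 + len(s) / np.sum(np.log(s / s_min))

        # Synthetic fragments drawn from p(s) ~ s^-2.5 by inverse-CDF sampling
        rng = np.random.default_rng(0)
        u = rng.random(5000)
        sizes = (1.0 - u) ** (-1.0 / 1.5)           # D = 2.5, so D - 1 = 1.5
        print(powerlaw_exponent(sizes, s_min=1.0))  # close to 2.5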

  15. Self-Reported Emergency Medicine Residency Applicant Attitudes Towards a Procedural Cadaver Laboratory Curriculum

    Directory of Open Access Journals (Sweden)

    Hoffman, Lance

    2008-08-01

    Full Text Available OBJECTIVE: Residency applicants consider a variety of factors when ranking emergency medicine (EM) programs for their NRMP match list. A human cadaver emergency procedure lab curriculum is uncommon. We hypothesized that the presence of this curriculum would positively impact the ranking of an EM residency program. METHODS: The EM residency at Nebraska Medical Center is an urban, university-based program with a PGY I-III format. Residency applicants interviewing for a position in the PGY I class of 2006 were surveyed by three weekly electronic mailings. The survey was distributed in March 2006 after the final NRMP match results were released. The survey explored learner preferences and the methodological commonality of models of emergency procedural training, as well as the impact of a procedural cadaver lab curriculum on residency ranking. ANOVA of ranks was used to compare responses to ranking questions. RESULTS: Of the 73 potential subjects, 54 (74%) completed the survey. Respondents ranked methods of procedural instruction from 1 (most preferred or most common technique) to 4 (least preferred or least common technique). Response averages and 95% confidence intervals for the preferred means of learning a new procedure are as follows: textbook (3.69; 3.51-3.87), mannequin (2.83; 2.64-3.02), human cadaver (1.93; 1.72-2.14), and living patient (1.56; 1.33-1.79). Response averages for the commonality of means used to teach a new procedure are as follows: human cadaver (3.63; 3.46-3.80), mannequin (2.70; 2.50-2.90), living patient (2.09; 1.85-2.33), and textbook (1.57; 1.32-1.82). When asked if the University of Nebraska Medical Center residency ranked higher in the individual's match list because of its procedural cadaver lab, 14.8% strongly disagreed, 14.8% disagreed, 40.7% were neutral, 14.8% agreed, and 14.8% strongly agreed. CONCLUSION: We conclude that, although cadaveric procedural training is viewed by senior medical student learners as a desirable means

  16. How to successfully implement a robotic pediatric surgery program: lessons learned after 96 procedures.

    Science.gov (United States)

    de Lambert, Guénolée; Fourcade, Laurent; Centi, Joachim; Fredon, Fabien; Braik, Karim; Szwarc, Caroline; Longis, Bernard; Lardy, Hubert

    2013-06-01

    Both our teams were the first to implement pediatric robotic surgery in France. The aim of this study was to define the key points we brought to light so that other pediatric teams wanting to set up a robotic surgery program can benefit. We reviewed the medical records of all children who underwent robotic surgery between Nov 2007 and June 2011 in both departments, including patient data, installation and changes, operative time, hospital stay, intraoperative complications, and postoperative outcome. The department's internal organization, the organization within the hospital complex, and cost were evaluated. A total of 96 procedures were evaluated. There were 38 girls and 56 boys, with an average age at surgery of 7.6 years (range, 0.7-18 years) and an average weight of 26 kg (range, 6-77 kg). Thirty-six patients had general surgery, 57 patients urologic surgery, and 1 thoracic surgery. Overall average operative time was 189 min (range, 70-550 min), and average hospital stay was 6.4 days (range, 2-24 days). Three procedures were converted. Median follow-up was 18 months (range, 0.5-43 months). A robotic surgical procedure had an extra cost of €1934 compared to conventional open surgery. Our experience was similar to the findings described in the literature for feasibility, security, and patient outcomes; we had an overall operative success rate of 97%. Three main actors are concerned in the implementation of a robotic pediatric surgery program: surgeons and anesthetists, nurses, and the administration. The surgeon is at the starting point, with motivation for minimally invasive surgery without laparoscopic constraints. We found that it was possible to implement a long-lasting robotic surgery program with comparable quality of care.

  17. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  18. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  19. Solution and study of nodal neutron transport equation applying the LTS{sub N}-DiagExp method

    Energy Technology Data Exchange (ETDEWEB)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta [Pontificia Univ. Catolica do Rio Grande do Sul, Porto Alegre, RS (Brazil). Faculdade de Matematica]. E-mail: eliete@pucrs.br; rpp@mat.pucrs.br; Vilhena, Marco Tullio de [Pontificia Univ. Catolica do Rio Grande do Sul, Porto Alegre, RS (Brazil). Instituto de Matematica]. E-mail: vilhena@mat.ufrgs.br; Barros, Ricardo Carvalho de [Universidade do Estado, Nova Friburgo, RJ (Brazil). Instituto Politecnico]. E-mail: ricardo@iprj.uerj.br

    2003-07-01

    In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the S{sub N} equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTS{sub N} method, first applying the Laplace transform to the set of nodal S{sub N} equations and then obtaining the solution by symbolic computation. We include the LTS{sub N} method by diagonalization to solve the nodal neutron transport equation and then outline the convergence of these nodal-LTS{sub N} approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  20. Applied Behavior Analysis Is a Science And, Therefore, Progressive

    Science.gov (United States)

    Leaf, Justin B.; Leaf, Ronald; McEachin, John; Taubman, Mitchell; Ala'i-Rosales, Shahla; Ross, Robert K.; Smith, Tristram; Weiss, Mary Jane

    2016-01-01

    Applied behavior analysis (ABA) is a science and, therefore, involves progressive approaches and outcomes. In this commentary we argue that the spirit and the method of science should be maintained in order to avoid reductionist procedures, stifled innovation, and rote, unresponsive protocols that become increasingly removed from meaningful…

  1. Reliability analysis applied to structural tests

    Science.gov (United States)

    Diamond, P.; Payne, A. O.

    1972-01-01

    The application of reliability theory to predict, from structural fatigue test data, the risk of failure of a structure under service conditions because its load-carrying capability is progressively reduced by the extension of a fatigue crack, is considered. The procedure is applicable to both safe-life and fail-safe structures and, for a prescribed safety level, it will enable an inspection procedure to be planned or, if inspection is not feasible, it will evaluate the life to replacement. The theory has been further developed to cope with the case of structures with initial cracks, such as can occur in modern high-strength materials which are susceptible to the formation of small flaws during the production process. The method has been applied to a structure of high-strength steel and the results are compared with those obtained by the current life estimation procedures. This has shown that the conventional methods can be unconservative in certain cases, depending on the characteristics of the structure and the design operating conditions. The suitability of the probabilistic approach to the interpretation of the results from full-scale fatigue testing of aircraft structures is discussed and the assumptions involved are examined.

  2. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Throneburg, E. B.; Jones, J. M. [AREVA NP Inc., 7207 IBM Drive, Charlotte, NC 28262 (United States)

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  3. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    International Nuclear Information System (INIS)

    Throneburg, E. B.; Jones, J. M.

    2006-01-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  4. Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.

    Science.gov (United States)

    Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David

    2012-01-01

    Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  5. Office-Based Deep Sedation for Pediatric Ophthalmologic Procedures Using a Sedation Service Model

    Directory of Open Access Journals (Sweden)

    Kirk Lalwani

    2012-01-01

    Full Text Available Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62–100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  6. Flight Control of Biomimetic Air Vehicles Using Vibrational Control and Averaging

    Science.gov (United States)

    Tahmasian, Sevak; Woolsey, Craig A.

    2017-08-01

    A combination of vibrational inputs and state feedback is applied to control the flight of a biomimetic air vehicle. First, a control strategy is developed for longitudinal flight, using a quasi-steady aerodynamic model and neglecting wing inertial effects. Vertical and forward motion is controlled by modulating the wings' stroke and feather angles, respectively. Stabilizing control parameter values are determined using the time-averaged dynamic model. Simulations of a system resembling a hawkmoth show that the proposed controller can overcome modeling error associated with the wing inertia and small parameter uncertainties when following a prescribed trajectory. After introducing the approach through an application to longitudinal flight, the control strategy is extended to address flight in three-dimensional space.

  7. A comparison of vowel normalization procedures for language variation research

    Science.gov (United States)

    Adank, Patti; Smits, Roel; van Hout, Roeland

    2004-11-01

    An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
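    A widely used procedure of the better-performing kind (vowel-extrinsic, formant-intrinsic) is Lobanov z-score normalization, which standardizes each formant across all of a talker's vowel tokens; a minimal sketch with hypothetical data:

        import numpy as np

        def lobanov(formants):
            # Lobanov normalization: z-score each formant column across all of a
            # talker's vowel tokens (vowel-extrinsic, formant-intrinsic).
            f = np.asarray(formants, dtype=float)   # shape: (tokens, formants)
            return (f - f.mean(axis=0)) / f.std(axis=0)

        # Hypothetical F1/F2 values (Hz) for one talker's vowel tokens
        tokens = [[300, 2300], [700, 1200], [500, 1700], [350, 2100]]
        print(lobanov(tokens))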

  8. Using a standards committee to design practical procedure system improvements

    International Nuclear Information System (INIS)

    Grider, D.A.; Plung, D.

    1993-01-01

    In the post-Three Mile Island (TMI) environment, numerous reports have been issued on how to improve the quality of procedures used at government and commercial nuclear facilities. The studies tend to be long on what is wrong with existing procedures and short on practical directions on how to fix those faults. Few of these studies have been conducted by practitioners with full-time procedure-managing or procedure-writing experience. None of these studies goes into detail on how to improve the procedure system itself. Over the last 10 yr, various nuclear facilities within the US Department of Energy (DOE) have carried out individual programs to develop procedures that meet post-TMI standards. However, ∼2 yr ago, DOE formed a Procedures Standards Committee to advise DOE in developing a set of post-TMI guidelines that could be consistently applied throughout all DOE nuclear facilities. The committee has not only achieved its original mission by producing a series of integrated guidance documents but has also evolved a systems approach to procedures management that sets new standards for procedure quality and efficiency. As members of this committee, the authors describe what has made the group's approach so successful. The lessons learned may be translatable to a wide range of government and commercial industry procedure programs.

  9. Improving L2 Reading Comprehension through Emotionalized Dynamic Assessment Procedures.

    Science.gov (United States)

    Abdolrezapour, Parisa

    2017-06-01

    The paper reports a study of an emotionally loaded dynamic assessment procedure used with Iranian EFL learners. It focuses on the effect of using emotional intelligence characteristics (based on Goleman's framework) as a tool for motivating learners while performing reading tasks. The study, with 50 intermediate learners aged 12-15, used three modalities: a control group, which was taught under the institute's normal procedures; a comparison group, which received dynamic assessment (DA); and an experimental group, which received emotionalized dynamic assessment (EDA) procedures, in the form of an intervention focusing on characteristics of Goleman's emotional intelligence framework with the express purpose of inducing learners to work with their emotions. Results showed that applying EDA procedures to reading assessment tasks made a difference in learners' level of performance in comparison to those who went through pure DA procedures, who in turn performed significantly better than those who did not receive DA in any form.

  10. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  11. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can badly influence the estimation of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect anomalies in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, called trend consistency (TC), to take into account the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor due to a long and continuous range of missing data, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
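    A minimal sketch of consistency-weighted averaging of redundant readings in the spirit of PSA follows; the band logic is simplified, Wd stands in for the Euclidean-distance weighting proposed above, and the readings are hypothetical (this is not the authors' full algorithm):

        import numpy as np

        def consistency_weighted_average(readings, band=0.5):
            # Weight each redundant reading by (a) how many readings fall inside
            # its consistency band (a simplified C) and (b) the inverse of its
            # total distance to the other readings (a simplified Wd).
            r = np.asarray(readings, dtype=float)
            c = np.array([np.sum(np.abs(r - v) <= band) for v in r])  # band consistency
            d = np.array([np.sum(np.abs(r - v)) for v in r])          # total distance
            wd = 1.0 / (d + 1e-9)                                     # distance-based weight
            w = c * wd
            return np.sum(w * r) / np.sum(w)

        # Four redundant transmitters; the 9.1 outlier receives little weight
        print(consistency_weighted_average([10.0, 10.2, 9.9, 9.1]))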

  12. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  13. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  14. Meta-analysis in applied ecology.

    Science.gov (United States)

    Stewart, Gavin

    2010-02-23

    This overview examines research synthesis in applied ecology and conservation. Vote counting and pooling unweighted averages are widespread despite the superiority of syntheses based on weighted combination of effects. Such analyses allow exploration of methodological uncertainty in addition to consistency of effects across species, space and time, but exploring heterogeneity remains controversial. Meta-analyses are required to generalize in ecology, and to inform evidence-based decision-making, but the more sophisticated statistical techniques and registers of research used in other disciplines must be employed in ecology to fully realize their benefits.
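    The weighted combination of effects referred to here is typically inverse-variance weighting; a minimal fixed-effect sketch with hypothetical effect sizes:

        import numpy as np

        def fixed_effect_meta(effects, variances):
            # Fixed-effect meta-analysis: weight each study by the inverse of its
            # within-study variance; return the pooled effect and its standard error.
            e = np.asarray(effects, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)
            pooled = np.sum(w * e) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            return pooled, se

        # Hypothetical study effect sizes and within-study variances
        print(fixed_effect_meta([0.30, 0.10, 0.45], [0.02, 0.05, 0.04]))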

  15. Analysis of risk in computerized tomography and other diagnostic radiology procedures

    International Nuclear Information System (INIS)

    Mossman, K.L.

    1982-01-01

    Medical practice entails continuous risks to the patient, taken in good faith by the physician for the benefit of the patient. The risk of radiation-induced cancer death approximates 10⁻⁴ per cGy (rad). Assuming an average whole-body dose of 0.1 cGy for many diagnostic X-ray procedures, the probability of radiation-induced cancer death is about 10⁻⁵. The purpose of this paper is to compare the risks of common diagnostic X-ray procedures, including computerized tomography (CT), with the risks of smoking or automobile travel. Such comparisons should be constructive in putting radiation in perspective and facilitating the explanation of risk/benefit to patients
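    The arithmetic behind the quoted probability, written out with the values stated in the abstract:

        P \approx 10^{-4}\ \mathrm{cGy}^{-1} \times 0.1\ \mathrm{cGy} = 10^{-5}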

  16. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  17. Measurement uncertainties of long-term 222Rn averages at environmental levels using alpha track detectors

    International Nuclear Information System (INIS)

    Nelson, R.A.

    1987-01-01

    More than 250 replicate measurements of outdoor Rn concentration integrated over quarterly periods were made to estimate the random component of the measurement uncertainty of Track Etch detectors (type F) under outdoor conditions. The measurements were performed around three U mill tailings piles to provide a range of environmental concentrations. The measurement uncertainty was typically greater than could be accounted for by Poisson counting statistics. Average coefficients of variation of the order of 20% for all measured concentrations were found. It is concluded that alpha track detectors can be successfully used to determine annual average outdoor Rn concentrations through the use of careful quality control procedures. These include rapid deployment and collection of detectors to minimize unintended Rn exposure, careful packaging and shipping to and from the manufacturer, use of direct sunlight shields for all detectors and careful and secure mounting of all detectors in as similar a manner as possible. The use of multiple (at least duplicate) detectors at each monitoring location and an exposure period of no less than one quarter are suggested

  18. Applications of a mapping procedure to nuclear and nuclear structure problems

    International Nuclear Information System (INIS)

    Catara, F.; Sambataro, M.

    1993-01-01

    A mapping procedure that has been applied both for deriving boson images of fermion operators and for constructing nucleon images of quark operators within the framework of the nonrelativistic quark model is illustrated

  19. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process that ensures the separation of the different reaction stages with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.
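    A generic sketch of the pseudocomponent idea follows: fit a sum of peaks, one per reaction stage, to a thermogravimetric rate (DTG) curve by least squares. Gaussian peak shapes and the synthetic data are illustrative assumptions, not the authors' optimization scheme:

        import numpy as np
        from scipy.optimize import curve_fit

        def dtg_model(T, *p):
            # Sum of Gaussian-shaped mass-loss-rate peaks, one per pseudocomponent.
            # p = (amp1, center1, width1, amp2, center2, width2, ...)
            out = np.zeros_like(T)
            for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
                out += a * np.exp(-0.5 * ((T - c) / w) ** 2)
            return out

        # Synthetic two-stage DTG curve standing in for measured data
        T = np.linspace(400, 900, 200)
        data = dtg_model(T, 1.0, 550, 30, 0.6, 720, 40) + 0.01 * np.random.randn(T.size)

        # Recover each stage's amplitude, center, and width from the lumped curve
        popt, _ = curve_fit(dtg_model, T, data, p0=[1, 560, 25, 0.5, 700, 50])
        print(popt)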

  20. Dental Procedures in Primary Health Care of the Brazilian National Health System

    Directory of Open Access Journals (Sweden)

    Suellen R. Mendes

    2017-12-01

    Full Text Available The aim of this study was to examine the procedures of primary dental health care performed by oral health teams (OHTs) adhering to the second cycle of the 'National Programme for Improving Access and Quality of Primary Care' (PMAQ-AB) in Brazil. A cross-sectional descriptive analysis was performed across 23 dental procedures comprising preventive, restorative/prosthetic, surgical, endodontic, and oral cancer monitoring categories. The descriptive analysis shows that most of the oral health teams carry out basic dental procedures; however, most of the time they do not keep adequate records of suspected cases of oral cancer, diagnostic tests, or follow-ups, and do not perform dental prosthetic procedures. The data also showed disparities in the average number of procedures performed in each Brazilian geographical region in 2013-2014, ranging from 13.9 in the northern to 16.5 in the southern and south-eastern regions, reinforcing the great social disparities between them. Brazilian regions with the highest volume of dental need deliver the lowest number of dental procedures. The need to tackle inequalities and further shape the supply of appropriate primary health care (PHC) is evident.