WorldWideScience

Sample records for average procedures applied

  1. Highly efficient sparse-matrix inversion techniques and average procedures applied to collisional-radiative codes

    International Nuclear Information System (INIS)

    The behavior of non-local thermal equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, and X-ray sources. In steady-state cases the proper description of these plasmas may require the solution of thousands of linear rate equations. A possible simplification for this numerical task lies in some form of statistical averaging, such as averaging over configurations or superconfigurations. However, to assess the validity of such an averaging procedure and to handle cases where isolated lines play an important role, it is necessary to treat detailed level systems. This involves matrices with potentially billions of elements, which are rather sparse but still involve thousands of diagonals above and below the main one. We propose here a numerical algorithm based on the LU decomposition for such linear systems. It is shown that this method is orders of magnitude faster than traditional Gauss elimination. Moreover, it exhibits no convergence or accuracy issues of the kind encountered with methods based on conjugate gradients or minimization. Among the cases treated at the last NLTE-kinetics-code meeting, krypton and tungsten plasmas are considered. Furthermore, to assess the validity of configuration averaging, several criteria are discussed. While a criterion based on detailed balance is relevant in cases not too far from LTE, it is found to be insufficient in general. An alternate criterion based on inspection of the influence of an arbitrary configuration temperature is proposed and tested successfully. (authors)
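
    As a rough illustration of the kind of sparse direct solve advocated in this record (a minimal sketch, not the authors' code: the matrix size, sparsity pattern and source term below are invented for the example), a steady-state rate system A n = b whose matrix has a main diagonal plus a few far-off diagonals can be factorized once with a sparse LU and then solved cheaply:

        # Hypothetical sketch: sparse LU solve of a banded-but-sparse rate system.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        rng = np.random.default_rng(0)
        N = 20000                          # number of detailed levels (invented)

        # Main diagonal plus a few far-off diagonals, mimicking a rate matrix
        # that is sparse yet has entries far above/below the main diagonal.
        offsets = [0, 1, -1, 150, -150, 4000, -4000]
        diags = [-10.0 * np.ones(N) if k == 0 else rng.random(N - abs(k))
                 for k in offsets]
        A = sp.diags(diags, offsets, format="csc")

        b = np.zeros(N)
        b[0] = 1.0                         # stand-in normalization/source term

        lu = spla.splu(A)                  # sparse LU with fill-reducing ordering
        n = lu.solve(b)                    # level populations
        print(np.allclose(A @ n, b))       # residual check -> True

    The fill-reducing ordering of the sparse factorization is what keeps the cost far below that of dense Gauss elimination when the band is wide but sparsely populated.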

  2. Highly efficient sparse-matrix inversion techniques and average procedures applied to collisional-radiative codes

    Science.gov (United States)

    Poirier, M.; de Gaufridy de Dortan, F.

    2009-12-01

    The behavior of non-local thermal equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, and X-ray sources. In steady-state cases the proper description of these plasmas may require the solution of thousands of linear rate equations. A possible simplification for this numerical task lies in some form of statistical averaging, such as the averaging over configurations or superconfigurations. However, to assess the validity of such an averaging procedure and to handle cases where isolated lines play an important role, it will be necessary to treat detailed level systems. This involves matrices with potentially billions of elements, which are rather sparse but still involve thousands of diagonals above and below the main one. We propose here a numerical algorithm based on the LU decomposition for such linear systems. It will be shown that this method is orders of magnitude faster than the traditional Gauss elimination. Moreover, it is found that there are no convergence or accuracy issues of the kind encountered with methods based on conjugate gradients or minimization. Among cases treated at the last NLTE-kinetics-code meeting, krypton and tungsten plasmas are considered. Furthermore, to assess the validity of configuration averaging, several criteria are discussed. While a criterion based on detailed balance is relevant in cases not too far from LTE, it is found to be insufficient in general. An alternate criterion based on the inspection of the influence of an arbitrary configuration temperature is proposed and tested successfully.

  3. Highly efficient sparse-matrix inversion techniques and average procedures applied to collisional-radiative codes

    CERN Document Server

    Poirier, M

    2009-01-01

    The behavior of non-local thermal-equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, or X-ray sources. The proper description of these media in stationary cases requires solving linear systems of thousands or more rate equations. A possible simplification for this arduous numerical task may lie in some type of statistical average, such as configuration or superconfiguration average. However, to assess the validity of this procedure and to handle cases where isolated lines play an important role, it may be important to deal with detailed level systems. This involves matrices with sometimes billions of elements, which are rather sparse but still involve thousands of diagonals. We propose here a numerical algorithm based on the LU decomposition for such linear systems. This method turns out to be orders of magnitude faster than the traditional Gauss elimination. And at variance with ...

  4. An averaging procedure for applying the Revised Universal Soil Loss Equation (RUSLE) to disturbed mountain watersheds

    OpenAIRE

    González Bonorino, G.; Osterkamp, W. R.; Colombo Piñol, Ferrán

    2002-01-01

    Disturbed lands in mountain watersheds may be a significant source of sediment. A systematic rating of their potential for erosion would be useful in soil conservation planning. RUSLE is a successful erosion-prediction technique, well tested on gentle slopes of agricultural lands. In view of its success, attempts have been made to apply RUSLE to areas of complex topography by substituting upstream contributing area for the linear-flow model embodied in the RUSLE L-factor. This substitution le...

  5. Averaging procedure in variable-G cosmologies

    CERN Document Server

    Cardone, Vincenzo F

    2008-01-01

    Previous work in the literature had built a formalism for spatially averaged equations for the scale factor, giving rise to an averaged Raychaudhuri equation and averaged Hamiltonian constraint, which involve a backreaction source term. The present paper extends these equations to include models with variable Newton parameter and variable cosmological term, motivated by the non-perturbative renormalization program for quantum gravity based upon the Einstein-Hilbert action. The coupling between backreaction and spatially averaged three-dimensional scalar curvature is found to survive, and all equations involving contributions of a variable Newton parameter are worked out in detail. Interestingly, under suitable assumptions, an approximate solution can be found where the universe tends to a FLRW model, while keeping track of the original inhomogeneities through two effective fluids.
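
    For orientation, the constant-$G$ form of the spatially averaged equations referred to here (Buchert's dust equations; the paper's contribution is their generalization to variable $G$ and variable $\Lambda$) reads, for an averaging domain $D$ with effective scale factor $a_D$,

        3 \frac{\ddot a_D}{a_D} = -4\pi G \langle \rho \rangle_D + Q_D + \Lambda ,
        \qquad
        3 \left( \frac{\dot a_D}{a_D} \right)^2 = 8\pi G \langle \rho \rangle_D - \frac{1}{2} \langle \mathcal{R} \rangle_D - \frac{1}{2} Q_D + \Lambda ,

    where $\langle \mathcal{R} \rangle_D$ is the averaged spatial scalar curvature and the kinematical backreaction $Q_D = \tfrac{2}{3}\big(\langle \theta^2 \rangle_D - \langle \theta \rangle_D^2\big) - 2 \langle \sigma^2 \rangle_D$ encodes the fluctuations of the expansion rate $\theta$ and the shear $\sigma$.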

  6. Ensemble averaging applied to the flow of a multiphase mixture

    International Nuclear Information System (INIS)

    Ensemble averaging theorems are used to derive a two-fluid model describing the flow of a dilute fluid-solid mixture. The model is valid for mixtures containing particles that are small compared to the length scales describing variations in ensemble-averaged field quantities, such as fluid or particle phase density, pressure or velocity. For the case where the mixture is pseudo-homogeneous, the equations obtained reproduce the Einstein viscosity correction.
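
    For reference, the Einstein viscosity correction recovered in the pseudo-homogeneous limit is the classical dilute-suspension result

        \mu_{\mathrm{eff}} = \mu_f \left( 1 + \tfrac{5}{2}\, \phi \right) ,

    valid to first order in the solid volume fraction $\phi \ll 1$, where $\mu_f$ is the viscosity of the suspending fluid.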

  7. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
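
    The clustering step can be pictured with a toy example (this is ordinary k-means on crudely normalized waveforms, not the authors' CISA distance; the synthetic "P-wave" shapes and all parameters are invented):

        # Toy sketch: k-means clustering of waveform shapes after removing
        # amplitude offset and scale, so distances roughly compare shape only.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 200)

        def wave(center, width):           # synthetic P-wave-like bump
            return np.exp(-0.5 * ((t - center) / width) ** 2)

        # Two shape families (narrow/wide) with timing jitter.
        widths = rng.choice([0.05, 0.12], size=100)
        X = np.array([wave(0.5 + 0.02 * rng.standard_normal(), w) for w in widths])

        Xn = X - X.mean(axis=1, keepdims=True)           # remove offset
        Xn /= np.linalg.norm(Xn, axis=1, keepdims=True)  # remove scale

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xn)
        print(np.bincount(labels))          # two clusters of ~family sizes

    A true CISA implementation would additionally estimate the affine time parameters jointly, which is exactly what makes the shape distance robust to jitter and dilation/compression.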

  8. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Science.gov (United States)

    Boudaoud, S.; Rix, H.; Meste, O.; Heneghan, C.; O'Brien, C.

    2007-12-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.

  9. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    Heneghan C

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.

  10. Time averaging procedure for calculating the mass and energy transfer rates in adiabatic two phase flow

    International Nuclear Information System (INIS)

    To take advantage of semi-implicit computer models for solving the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases the correlations normally used for the source terms - not time averaged - fail when using the theoretical time step that arises from the linear stability analysis applied to the right-hand side. Such a time averaging procedure is developed here with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.)
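
    A minimal sketch of the idea in symbols (generic notation, not the paper's): instead of evaluating the instantaneous interfacial mass transfer rate $\Gamma$ at the start of the step, one uses its average over the step, capped by the mass exchange needed to restore equilibrium,

        \bar{\Gamma} = \frac{1}{\Delta t} \int_{t}^{t+\Delta t} \Gamma(t')\, dt' ,
        \qquad
        |\bar{\Gamma}|\, \Delta t \le \Delta m_{\mathrm{eq}} ,

    where $\Delta m_{\mathrm{eq}}$ is the mass that must be exchanged to bring the mixture from its current non-equilibrium state to thermodynamic equilibrium.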

  11. Effects of measurement procedure and equipment on average room acoustic measurements

    DEFF Research Database (Denmark)

    Gade, Anders Christian; Bradley, J S; Siebein, G W

    1993-01-01

    This paper reports the results of a measurement tour of nine U.S. concert halls. Three measurement teams, from the University of Florida, the National Research Council of Canada, and the Technical University of Denmark, made parallel sets of measurements using their own equipment and procedures. In some of the halls, measurements were repeated using the procedures of the other teams to make it possible to separate the effects of different equipment and different procedures. The paper will present position-averaged results from the three teams and will discuss reasons for the differences.

  12. Nevada Applied Ecology Group procedures handbook for environmental transuranics

    International Nuclear Information System (INIS)

    The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and other biological material. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications more complex than the routine standard sampling procedures utilized.

  13. Nevada Applied Ecology Group procedures handbook for environmental transuranics

    International Nuclear Information System (INIS)

    The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and others. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications perhaps more complex than the routine standard sampling procedures utilized.

  14. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Risk assessment for indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because indoor radon levels can vary considerably over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, owing to the duration of the measurement and the restriction on the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test and, second, to use a ventilation model to assess numerically the air renewal of a building, the indoor air quality throughout the year and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values. These results are encouraging and suggest that a procedure with a short measurement time can be used to characterize the long-term radon potential of dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied on thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, e.g. for real estate transactions.

  15. On the peeling procedure applied to a Poisson point process

    OpenAIRE

    Davydov, Y.; Nagaev, A.; Philippe, A.

    2010-01-01

    In this paper we focus on the asymptotic properties of the sequence of convex hulls which arise as a result of a peeling procedure applied to the convex hull generated by a Poisson point process. Processes of the considered type are tightly connected with empirical point processes and stable random vectors. Results are given about the limit shape of the convex hulls in the case of a discrete spectral measure. We give some numerical experiments to illustrate the peeling proce...

  16. A loudness calculation procedure applied to shaped sonic booms

    Science.gov (United States)

    Shepherd, Kevin P.; Sullivan, Brenda M.

    1991-01-01

    Described here is a procedure that can be used to calculate the loudness of sonic booms. The procedure is applied to a wide range of sonic booms, both classical N-waves and a variety of other shapes of booms. The loudness of N-waves is controlled by overpressure and the associated rise time. The loudness of shaped booms is highly dependent on the characteristics of the initial shock. A comparison of the calculated loudness values indicates that shaped booms may have significantly reduced loudness relative to N-waves having the same peak overpressure. This result implies that a supersonic transport designed to yield minimized sonic booms may be substantially more acceptable than an unconstrained design.

  17. The method of averaging applied to pharmacokinetic/pharmacodynamic indirect response models.

    Science.gov (United States)

    Dunne, Adrian; de Winter, Willem; Hsu, Chyi-Hung; Mariam, Shiferaw; Neyens, Martine; Pinheiro, José; Woot de Trixhe, Xavier

    2015-08-01

    The computational effort required to fit the pharmacodynamic (PD) part of a pharmacokinetic/pharmacodynamic (PK/PD) model can be considerable if the differential equations describing the model are solved numerically. This burden can be greatly reduced by applying the method of averaging (MAv) in the appropriate circumstances. The MAv gives an approximate solution, which is expected to be a good approximation when the PK profile is periodic (i.e. repeats its values at regular intervals) and the rate of change of the PD response is such that it is approximately constant over a single period of the PK profile. This paper explains the basis of the MAv by means of a simple mathematical derivation. The NONMEM® implementation of the MAv using the abbreviated FORTRAN function FUNCA is described and explained. The application of the MAv is illustrated by means of an example involving changes in glycated hemoglobin (HbA1c%) following administration of canagliflozin, a selective sodium glucose co-transporter 2 inhibitor. The PK/PD model applied to these data is fitted with NONMEM® using both the MAv and the standard method using a numerical differential equation solver (NDES). Both methods give virtually identical results, but the NDES method takes almost 8 h to run both the estimation and covariance steps, whilst the MAv produces the same results in less than 30 s. An outline of the NONMEM® control stream and the FORTRAN code for the FUNCA function is provided in the appendices. PMID:26142076
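
    As a hedged sketch of the approximation (a generic inhibitory indirect-response model; the actual canagliflozin/HbA1c% model in the paper may differ), consider a PD response $R$ driven by a PK profile $C(t)$ that is periodic with dosing interval $\tau$,

        \frac{dR}{dt} = k_{\mathrm{in}} \left( 1 - \frac{I_{\max}\, C(t)}{IC_{50} + C(t)} \right) - k_{\mathrm{out}}\, R .

    The method of averaging replaces the rapidly varying drug effect by its average over one period,

        \frac{dR}{dt} \approx \frac{k_{\mathrm{in}}}{\tau} \int_{0}^{\tau} \left( 1 - \frac{I_{\max}\, C(s)}{IC_{50} + C(s)} \right) ds \; - \; k_{\mathrm{out}}\, R ,

    which is a good approximation precisely when $R$ changes little over a single dosing interval, as stated above.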

  18. One dimensional averaged (K-ε) turbulence model applied to channel, pipe and rod bundle flows

    International Nuclear Information System (INIS)

    In order to calculate accurately the flows that can occur in the wide and complex geometry of a nuclear reactor, using reasonable means that do not compute every part of the whole circuit with fine 3D meshes, one possible way is to use simultaneously several numerical tools dedicated to the different parts of the circuit. As an example, the lower plenum could be calculated with a 3D CFD tool, while the rod bundle part could be calculated with a porous 3D code and the pipes with a one-dimensional code. This approach requires the coupling and the consistency of all these tools. As part of this project, work is in progress to develop a one-dimensional averaged (k-ε) turbulence model that is consistent with the standard 3D (k-ε) model. This consistency is required for one-way or two-way coupling between 1D and 3D calculations. The development of a turbulence model suitable for both pipes and obstructed media such as rod bundles requires a porous-media formulation of the equations governing the turbulent flow. The volume-averaging theory (VAT) is thus applied to the standard (k-ε) turbulence model equations. It gives a new system of equations for the transport of the volume-averaged turbulence kinetic energy and its dissipation rate. These new equations differ from the original ones; they involve additional terms which come from the averaging process for the porous model and which correspond to internal production/dissipation mechanisms. Currently, there is no general well-developed closure expression for these additional terms, although Nakayama et al. proposed a closure method for the ones appearing in the averaged (k-ε) transport equations in the case of regular morphologies. These unknown terms due to the porous matrix are modeled collectively by these authors in order to obtain correlations adapted to classical porous media. Following these ideas, we propose here a closure expression adapted to channel, pipe
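
    As a rough illustration of what volume averaging does to the turbulence equations (the notation and the lumped closure term below are generic, not the authors'), the transport equation for the volume-averaged turbulent kinetic energy $\langle k \rangle$ in a medium of porosity $\phi$ takes the form

        \frac{\partial (\phi \langle k \rangle)}{\partial t}
        + \nabla \cdot \left( \phi\, \langle \mathbf{u} \rangle\, \langle k \rangle \right)
        = \nabla \cdot \left[ \phi \left( \nu + \frac{\nu_t}{\sigma_k} \right) \nabla \langle k \rangle \right]
        + \phi \left( P_k - \varepsilon \right) + S_k ,

    where $P_k$ is the resolved-scale production, $\varepsilon$ the averaged dissipation rate, and $S_k$ lumps the additional production/dissipation terms generated by the averaging over the porous matrix; it is for terms of this kind that the closure expressions discussed above are needed.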

  19. Applying computer-based procedures in nuclear power plants

    International Nuclear Information System (INIS)

    Plant operation procedures are used to guide operators in coping with normal, abnormal or emergency situations in a process control system. Historically, plant procedures have been paper-based (PBP); with the digitalisation trend in these complex systems, computer-based procedures (CBPs) are being developed to support procedure use. This work briefly presents the research on CBPs at the Human-System Interface Laboratory (LABIHS). The emergency operation procedure EOP-0 of the LABIHS NPP simulator was implemented in the ImPRO CBP system. The ImPRO system was chosen for testing because it is available for download on the Internet. A preliminary operation test using the implemented procedure in the CBP system was performed and the results were compared to operation through PBP use. (author)

  20. Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks

    Directory of Open Access Journals (Sweden)

    Shen-Chun Wu

    2003-01-01

    This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.

  1. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  2. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
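
    For intuition, here is a minimal sketch of a nonlinear-Krylov-style fixed-point accelerator in the spirit of the method cited above (generic Anderson-type acceleration with invented names; it is not the authors' implementation and includes neither their pressure-drop residual nor subspace retention across timesteps):

        # Hedged sketch: Anderson-type acceleration of a fixed-point iteration
        # x = G(x), using a window of m previous residuals.
        import numpy as np

        def accelerate(G, x0, m=5, tol=1e-10, maxit=100):
            x = x0.copy()
            F, X = [], []                      # residual / iterate histories
            for _ in range(maxit):
                f = G(x) - x                   # fixed-point residual
                if np.linalg.norm(f) < tol:
                    break
                F.append(f); X.append(x.copy())
                if len(F) > m:
                    F.pop(0); X.pop(0)
                if len(F) == 1:
                    x = x + f                  # plain Picard step to start
                else:
                    dF = np.array([F[k+1] - F[k] for k in range(len(F)-1)]).T
                    dX = np.array([X[k+1] - X[k] for k in range(len(X)-1)]).T
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = x + f - (dX + dF) @ gamma   # Anderson-type update
            return x

        # Example: solve x = cos(x) componentwise; converges to ~0.739085.
        print(accelerate(np.cos, np.zeros(3)))

    The appeal in a coupling context is the one described above: each acceleration step reuses stored residual information, so a well-converged step can cost as little as one residual evaluation.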

  3. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple

  4. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  5. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  6. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  7. State-averaged Monte Carlo configuration interaction applied to electronically excited states

    CERN Document Server

    Coe, J P

    2014-01-01

    We introduce state-averaging into the method of Monte Carlo configuration interaction (SA-MCCI) to allow the stable and efficient calculation of excited states. We show that excited potential curves for H$_{3}$, including a crossing with the ground state, can be accurately reproduced using a small fraction of the FCI space. A recently introduced error measure for potential curves [J. P. Coe and M. J. Paterson, J. Chem. Phys., 137, 204108 (2012)] is shown to also be a fair approach when considering potential curves for multiple states. We demonstrate that potential curves for LiF using SA-MCCI agree well with the FCI results and the avoided crossing occurs correctly. The seam of conical intersections for CH$_{2}$ found by Yarkony [J. Chem. Phys., 104, 2932 (1996)] is used as a test for SA-MCCI and we compare potential curves from SA-MCCI with FCI results for this system for the first three triplet states. We then demonstrate the improvement from using SA-MCCI on the dipole of the $2$ $^{1}A_{1}$ state of carbo...

  8. 34 CFR 370.43 - What requirement applies to the use of mediation procedures?

    Science.gov (United States)

    2010-07-01

    34 Education 2 2010-07-01. What requirement applies to the use of mediation procedures? (a) Each designated agency shall implement procedures designed to ensure that, to the maximum extent possible, good faith negotiations and mediation procedures...

  9. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

    Carvedilol is a nonselective beta blocker/alpha-1 blocker which is used for treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes variables that have particular importance in the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism or the presence of a specific polymorphism in an isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is Nonlinear Mixed Effects Modeling (NONMEM). The analytical methods used in the data collection period are of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results and monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, as it was necessary to perform certain modifications and validation of the method with the aim of using the obtained results for the purpose of a population pharmacokinetic analysis. The validation process is the logical final phase of analytical procedure development that establishes the applicability of the procedure itself. The goal of validation is to ensure the consistency of the method and the accuracy of results, and to confirm the selection of the analytical method for a given sample.

  10. Applying Backus averaging for deriving seismic anisotropy of a long-wavelength equivalent medium from well-log data

    International Nuclear Information System (INIS)

    Backus averaging is often applied to well logs in order to estimate the elastic properties of a stack of thin layers at a longer wavelength to be used in seismic data analysis. Schoenberg and Muir extended Backus's method to layers with arbitrary anisotropy, but simple analytical equations are available for isotropic and transversely isotropic layers only. There is growing exploration interest in shale reservoirs, which exhibit lower symmetry seismic anisotropy if the reservoir is fractured. Here, simple analytical equations for Backus averaging are presented for symmetries up to monoclinic, of which averaging equations for orthorhombic and monoclinic layers are new. After Backus averaging, a stack of isotropic layers results in a vertical transverse isotropy (VTI), a stack of VTI layers results in a VTI, a stack of horizontal transverse isotropy layers results in an orthorhombic, a stack of orthorhombic layers results in an orthorhombic and a stack of monoclinic layers results in a monoclinic. Workflows for Backus averaging are presented along with numerical examples using well-log data from the Haynesville shale play in East Texas, USA. (paper)
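
    The isotropic-to-VTI case can be written down compactly; here is a hedged sketch (the standard Backus (1962) relations for isotropic layers, with invented example numbers) computing the five elastic constants of the long-wavelength equivalent medium:

        # Hedged sketch: Backus average of thin isotropic layers -> VTI constants.
        import numpy as np

        def backus_vti(lam, mu, h):
            """lam, mu: Lame parameters per layer; h: layer thicknesses."""
            w = h / np.sum(h)                       # thickness weights
            avg = lambda x: np.sum(w * x)           # weighted arithmetic mean
            c33 = 1.0 / avg(1.0 / (lam + 2.0 * mu))
            c13 = avg(lam / (lam + 2.0 * mu)) * c33
            c11 = (avg(4.0 * mu * (lam + mu) / (lam + 2.0 * mu))
                   + avg(lam / (lam + 2.0 * mu)) ** 2 * c33)
            c44 = 1.0 / avg(1.0 / mu)               # harmonic mean of mu
            c66 = avg(mu)                           # arithmetic mean of mu
            return c11, c13, c33, c44, c66

        # Example: two alternating layers (moduli in GPa, equal thicknesses).
        lam = np.array([10.0, 25.0]); mu = np.array([8.0, 20.0])
        print(backus_vti(lam, mu, np.array([1.0, 1.0])))

    The anisotropy of the equivalent medium (e.g. c44 differing from c66) appears even though every constituent layer is isotropic, which is the effect exploited when calibrating seismic models from well logs.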

  11. Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process

    Science.gov (United States)

    Motley, Albert E., III

    2000-01-01

    One of the key elements of successful project management is the establishment of the "right set of requirements", requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.

  12. 21 CFR 1315.22 - Procedure for applying for individual manufacturing quotas.

    Science.gov (United States)

    2010-04-01

    1315.22 Section 1315.22, Food and Drugs, DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF... Individual Manufacturing Quotas. § 1315.22 Procedure for applying for individual manufacturing quotas. Any... desires to manufacture a quantity of the chemical must apply on DEA Form 189 for a manufacturing quota...

  13. A Flexible Boundary Procedure for Hyperbolic Problems: Multiple Penalty Terms Applied in a Domain

    OpenAIRE

    Nordström, Jan; Abbas, Qaisar; Erickson, Brittany A.; Frenander, Hannes

    2014-01-01

    A new weak boundary procedure for hyperbolic problems is presented. We consider high order finite difference operators of summation-by-parts form with weak boundary conditions and generalize that technique. The new boundary procedure is applied near boundaries in an extended domain where data is known. We show how to raise the order of accuracy of the scheme, how to modify the spectrum of the resulting operator and how to construct non-reflecting properties at the boundaries. The new boundary...

  14. A Flexible Far Field Boundary Procedure for Hyperbolic Problems: Multiple Penalty Terms Applied in a Domain

    OpenAIRE

    Nordström, Jan; Abbas, Qaisar; Erickson, Brittany A.; Frenander, Hannes

    2013-01-01

    A new weak boundary procedure for hyperbolic problems is presented. We consider high order finite difference operators of summation-by-parts form with weak boundary conditions and generalize that technique. The new boundary procedure is applied at far field boundaries in an extended domain where data is known. We show how to raise the order of accuracy of the scheme, how to modify the spectrum of the resulting operator and how to construct non-reflecting properties at the boundaries. The new ...

  15. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT

    International Nuclear Information System (INIS)

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described, including a description of the high-voltage and front-end electronics associated with the chambers. Every procedure is described in detail and a list of the more common problems and possible solutions is given. This document can be considered a chamber-test handbook for beginners. (Author) 3 refs.

  16. Influence of the surface averaging procedure of the current density in assessing compliance with the ICNIRP low-frequency basic restrictions by means of numerical techniques

    Science.gov (United States)

    Zoppetti, N.; Andreuccetti, D.

    2009-08-01

    Although the calculation of the surface average of the low-frequency current density distribution over a cross-section of 1 cm2 is required by the ICNIRP guidelines, no reference averaging algorithm is indicated, either in the ICNIRP guidelines themselves or in Directive 2004/40/EC, which is based on them. The lack of a general standard algorithm that fulfils the ICNIRP guidelines' requirements is particularly critical in the perspective of the endorsement of Directive 2004/40/EC, since compliance with normative limits must refer to well-defined procedures. In this paper, two case studies are considered, in which the calculation of the surface average is performed using both a simplified approach widely used in the literature and an original averaging procedure. This analysis, aimed at quantifying the expected differences and singling out their sources, shows that the choice of the averaging algorithm represents an important source of uncertainty in the application of the guideline requirements.
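
    A toy version of the computation makes the ambiguity concrete (this is one plausible algorithm, invented for illustration, not the ICNIRP reference algorithm, since none is prescribed): slide a disc of area 1 cm2 across a gridded current-density map and report the maximum of the disc-averaged values.

        # Toy sketch: disc-averaged |J| over a 1 cm^2 window on a 1 mm grid.
        import numpy as np
        from scipy.ndimage import convolve

        dx = 0.001                              # grid step: 1 mm
        R = np.sqrt(1e-4 / np.pi)               # 1 cm^2 disc -> radius ~5.64 mm
        r = int(np.ceil(R / dx))
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        disc = ((xx ** 2 + yy ** 2) * dx ** 2 <= R ** 2).astype(float)

        J = np.random.default_rng(3).random((200, 200))  # |J| map (invented)

        J_avg = convolve(J, disc / disc.sum(), mode="nearest")
        print(J.max(), J_avg.max())             # averaging lowers the maximum

    Different choices (disc versus square window, boundary handling, whether the window is constrained to tissue) change the reported maximum, which is exactly the source of uncertainty the paper quantifies.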

  17. Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom

    Science.gov (United States)

    Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy

    2016-01-01

    The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. Particularly, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…

  18. 21 CFR 1303.22 - Procedure for applying for individual manufacturing quotas.

    Science.gov (United States)

    2010-04-01

    1303.22 Section 1303.22, Food and Drugs, DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE, QUOTAS, Individual Manufacturing Quotas. § 1303.22 Procedure for applying for individual manufacturing quotas. Any person who is registered to manufacture any basic class of controlled substance...

  19. Applying the conventional moving average filter for estimation of low radiation doses using EPR spectroscopy: Benefits and drawbacks

    Energy Technology Data Exchange (ETDEWEB)

    Maghraby, Ahmed M., E-mail: maghrabism@yahoo.com [National Institute of Standards (NIS), Radiation Dosimetry Department, Ministry of Scientific Research, Tersa Street, P.O. Box 136, Giza, Haram 12211 (Egypt); Physics Department, Faculty of Science and Humanities, Salman Bin AbdulAziz University, Alkharj (Saudi Arabia)

    2014-02-11

    Alanine/EPR is the most common dosimetry system for high radiation doses because of its high stability and wide linear response; however, the use of alanine in most medical applications still requires special, sophisticated methodologies and techniques in order to extend the alanine detection limit to low radiation doses. One of these techniques is digital processing of the acquired alanine spectra, enhancing the useful components of the spectra while suppressing useless features. The impact of the simple moving average (MA) filter on alanine EPR spectra has been studied in terms of peak-to-peak height, peak-to-peak line width, and associated uncertainty. Three variants of the filter were investigated: the upward MA, central MA, and downward MA filters; the effect of each on the peak position was studied for different values of the filter width. It was found that the MA filter always leads to a reduction in signal intensity and an increase in the line width of the central peak of the alanine spectrum. The peak position also changes in the cases of the upward MA and downward MA filters, while no significant changes were observed in the case of the central MA. Uncertainties associated with the averaging process were evaluated and plotted versus the filter width, resulting in a linear relationship. The filter width should be selected carefully in order to avoid probable distortion of the processed spectra while gaining less noisy spectra with smaller associated uncertainties.
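
    The three filter variants can be reproduced in a few lines (an illustrative sketch; the mapping of "upward"/"downward" to trailing/leading windows is our assumption, and the synthetic derivative-like line below merely mimics an EPR spectrum):

        # Illustrative sketch: upward/central/downward moving-average filters.
        import numpy as np

        def moving_average(y, width, mode="central"):
            kernel = np.ones(width) / width
            full = np.convolve(y, kernel, mode="full")
            start = {"central": (width - 1) // 2,   # window centred on point
                     "upward": width - 1,           # current + preceding points
                     "downward": 0}[mode]           # current + following points
            return full[start:start + len(y)]

        x = np.linspace(-5.0, 5.0, 1000)
        rng = np.random.default_rng(2)
        spectrum = np.gradient(np.exp(-x ** 2)) + 0.02 * rng.standard_normal(1000)

        smoothed = moving_average(spectrum, width=15, mode="central")
        print(spectrum.std(), smoothed.std())   # smoothing shrinks the noise

    Consistent with the record above, the trailing and leading variants shift the apparent peak position by roughly half the window, while the central variant leaves it in place at the cost of the same peak-height reduction and line broadening.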

  20. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    structure evaluation by assessing the local identifiability characteristics of the parameters. Moreover, such a procedure should be generic to make sure it can be applied independently of the structure of the model. We hereby apply a numerical identifiability approach which is based on the work of Walter and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring. In contrast, the practical identifiability analysis revealed that high values of the forward rate parameter Vf led to identifiability problems. These problems were even more pronounced at higher substrate concentrations, which illustrates the importance of a proper experimental design to avoid...

  1. Containment integrity and leak testing. Procedures applied and experiences gained in European countries

    International Nuclear Information System (INIS)

    Containment systems are the ultimate safety barrier for preventing the escape of gaseous, liquid and solid radioactive materials produced in normal operation, not retained in process systems, and for keeping back radioactive materials released by system malfunction or equipment failure. A primary element of the containment shell is therefore its leak-tight design. The report describes the present containment concepts mostly used in European countries. The leak-testing procedures applied and the experiences gained in their application are also discussed. The report refers more particularly to pre-operational testing, periodic testing and extrapolation methods of leak rates measured at test conditions to expected leak rates at calculated accident conditions. The actual problems in periodic containment leak rate testing are critically reviewed. In the appendix to the report a summary is given of the regulations and specifications applied in different member countries

  2. The Safety Assessment of OPR-1000 for Station Blackout Applying Combined Deterministic and Probabilistic Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dong Gu; Ahn, Seung-Hoon; Cho, Dae-Hyung [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    This is termed station blackout (SBO). However, it does not generally include the loss of available AC power to safety buses fed by station batteries through inverters or by alternate AC sources. Historically, risk analysis results have indicated that SBO was a significant contributor to overall core damage frequency. In this study, the safety assessment of the OPR-1000 nuclear power plant for the SBO accident, which is a typical beyond-design-basis accident (BDBA) and an important contributor to overall plant risk, is performed by applying the combined deterministic and probabilistic procedure (CDPP). In addition, discussions are made on reevaluating the SBO risk at OPR-1000 by eliminating excessive conservatism in the existing PSA. The reference analysis showed that the CDF and CCDP did not meet the acceptable risk, confirming that the SBO risk should be reevaluated. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system meets the acceptable risk against SBO. In addition, it was demonstrated that the proposed CDPP is applicable to the safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.

  3. Validation procedures of software applied in nuclear instruments. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    The IAEA has supported the availability of well-functioning nuclear instruments in Member States for more than three decades. Some older or aged instruments are still being used and are still in good working condition. However, those instruments may not always meet modern software requirements of the end-user. Therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet end-user demands. New advanced software is applied not only in new instrumentation, but often also in new and improved applications of modernized and/or refurbished instruments in many Member States, for which in a few cases the IAEA also provided support. Modern software applied in nuclear instrumentation plays a key role in its safe operation and in the execution of commands in a user-friendly manner. Correct data handling and transfer have to be ensured. Additional features such as data visualization and interfacing with a PC for control and data storage are often included. To finalize the task, where new instrumentation that is not commercially available is used, or where aged instruments are modernized/refurbished, the applied software has to be verified and validated. A Technical Meeting on 'Validation Procedures of Software Applied in Nuclear Instruments' was organized in Vienna, 20-23 November 2006, to discuss the verification and validation process of software applied to the operation and use of nuclear instruments. The presentations at the technical meeting included valuable information, which has been compiled and summarized in this publication and should be useful for technical staff in Member States when modernizing/refurbishing nuclear instruments. Twenty-two experts in the field of modernization/refurbishment of nuclear instruments, as well as users of the applied software, presented their latest results. Discussion sessions followed the presentations. This publication is the outcome of the deliberations during the meeting.

  4. Robust solution procedure for the discrete energy-averaged model on the calculation of 3D hysteretic magnetization and magnetostriction of iron–gallium alloys

    International Nuclear Information System (INIS)

    A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data. - Highlights: • The discrete energy-averaged model for Galfenol is reformulated. • An analytical solution for 3D magnetostriction and magnetization is developed from eigenvalue decomposition. • Improved robustness is achieved. • An efficient optimization routine is developed to identify parameters from averaged hysteresis curves. • The effectiveness of the model is demonstrated against experimental data

  5. Quality control procedures applied to nuclear instruments. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    Quality control (QC) test procedures for nuclear instrumentation are important for assuring proper and safe operation of the instruments, especially with regard to equipment related to radiological safety, human health and national safety. Correct measurement of radiation parameters must be ensured, i.e., accurate measurement of the number of radioactive events and counting times and, in some cases, accurate measurement of the radiation energy and occurrence time of the nuclear events. There are several kinds of testing applied to nuclear instruments, for example type-testing done by suppliers, acceptance testing made by the end users, QC tests after repair, and quality assurance/quality control tests made by end users. In many cases all of these tests are based on practical guidelines or on the specialists' own experience, and the available standards on this topic also need to be adapted to specific instruments. The IAEA has provided nuclear instruments and supported the operational maintenance efforts of the Member States. Although nuclear instrumentation is continuously upgraded, some older or aged instruments are still in use and in good working condition. Some of these instruments may not, however, meet modern requirements for the end user; therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet end-user demands. As a result, new instrumentation which is not commercially available, or modernized/refurbished instruments, need to be tested or verified with QC procedures to meet national or international certification requirements. A technical meeting on QC procedures applied to nuclear instruments was organized in Vienna from 23 to 24 August 2007. Existing and required QC test procedures necessary for the verification of operation and measurement of the main characteristics of nuclear instruments were the focus of discussion at this meeting. Presentations made at the technical meeting provided

  6. Evaluation of the BCR sequential extraction procedure applied for two unpolluted Spanish soils

    International Nuclear Information System (INIS)

    The BCR sequential extraction procedure has been applied to five samples from two unpolluted soils in southern Spain. Total concentrations of the different elements were calculated as the sum of the three BCR fractions plus the residue, which was measured for each sample. In addition, a total analysis based on INAA or total-digestion techniques was performed on the same samples. BCR and total analysis agreed closely for As, Pb and Cd. For Cu, Co, Cr and Zn, the comparison of the results did not provide definitive conclusions concerning the capability of BCR to measure total concentrations. On the other hand, in these cases, a certain correlation was found between the concentrations measured and some soil characteristics, especially the clay, organic-matter and CaCO3 contents. BCR proved incapable of providing accurate measurements for Ni.
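
    The comparison logic of the study is simple enough to sketch: the BCR "pseudo-total" is the sum of the three extracted fractions plus the measured residue, and recovery is judged against the independent total analysis. All numbers below are hypothetical.

```python
# Minimal sketch (hypothetical numbers) of the BCR recovery check: the
# pseudo-total is the sum of the three extraction fractions plus the
# residue, compared against an independent total analysis (e.g. INAA).
fractions = {"F1_exchangeable": 2.1, "F2_reducible": 5.4, "F3_oxidizable": 3.2}  # mg/kg
residue = 14.8                # mg/kg, measured on the extraction residue
total_independent = 26.0      # mg/kg, from INAA or total digestion

bcr_total = sum(fractions.values()) + residue
recovery_pct = 100.0 * bcr_total / total_independent
print(f"BCR sum: {bcr_total:.1f} mg/kg, recovery: {recovery_pct:.0f}%")
# Recovery close to 100% (as found for As, Pb, Cd) supports the procedure;
# large deviations (as for Ni) flag elements it cannot quantify reliably.
```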

  7. A diagnostic procedure for applying the social-ecological systems framework in diverse cases

    Directory of Open Access Journals (Sweden)

    Jochen Hinkel

    2015-03-01

    The framework for analyzing the sustainability of social-ecological systems (SES framework) of Elinor Ostrom is a multitier collection of concepts and variables that have proven relevant for understanding outcomes in diverse SESs. The first tier of this framework includes the concepts of resource system (RS) and resource units (RU), which are further characterized through lower-tier variables such as clarity of system boundaries and mobility. The long-term goal of framework development is to derive conclusions about which combinations of variables explain outcomes across diverse types of SESs. This will only be possible if the concepts and variables of the framework can be made operational unambiguously for the different types of SESs, which, however, remains a challenge. Reasons for this are that case studies examine types of RS other than those for which the framework was developed, or consider RSs from which different actors obtain different kinds of RUs. We explore these difficulties and relate them to antecedent work on common-pool resources and public goods. We propose a diagnostic procedure that resolves some of these difficulties by establishing a sequence of questions that facilitate the step-wise and unambiguous application of the SES framework to a given case. The questions relate to the actors benefiting from the SES, the collective goods involved in the generation of those benefits, and the action situations in which the collective goods are provided and appropriated. We illustrate the diagnostic procedure with four case studies in the context of irrigated agriculture in New Mexico, common property meadows in the Swiss Alps, recreational fishery in Germany, and energy regions in Austria. We conclude that the current SES framework has limitations when applied to complex, multiuse SESs, because it does not sufficiently capture the actor interdependencies introduced through RS and RU characteristics and dynamics.

  8. A study on issues and solutions about logic functions applied to computerized procedure system in APR1400

    International Nuclear Information System (INIS)

    Plant procedures are instructions to guide operators in monitoring, decision making, and controlling nuclear power plants (NPPs). While plant procedures have historically been paper-based, computer-based procedures are being developed to overcome the drawbacks of paper-based procedures. The computerized procedure system (CPS) is used to integrate the operational instructions, plant process information and computerized operator support functions in the advanced main control room (MCR) of the APR1400. The logic functions applied to the CPS fall into three categories: Procedure Entry Condition (PEC), Continuously Applied Step (CAS) re-execution condition, and instruction evaluation. Automation using logic functions can reduce the operators' burden of monitoring, decision making and controlling NPPs, but it can also lower situation awareness and weaken the operator's place in the loop. These automation issues caused by logic functions must be resolved before the computerized procedure is applied. This paper introduces issues raised by system designers, plant operators, and human factors engineers, and, to close these issues, suggests solutions drawn from system design, the procedure writer's guideline, and operator training.
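
    As a rough illustration of what such logic functions look like, the toy sketch below evaluates a hypothetical Procedure Entry Condition and a CAS re-execution condition against live plant parameters. Every tag name and setpoint is invented for illustration, and results are framed as operator prompts rather than automatic actions, reflecting the situation-awareness concern raised above.

```python
# Toy sketch of CPS-style logic functions; all tag names and setpoints are
# hypothetical, not APR1400 values.
def pec_reactor_trip(params: dict) -> bool:
    """Procedure Entry Condition for a (hypothetical) post-trip procedure."""
    return params["reactor_power_pct"] < 5.0 and params["rod_bottom_signal"]

def cas_reexecute(params: dict) -> bool:
    """Continuously Applied Step: re-execute when pressure leaves its band."""
    return not (95.0 <= params["pzr_pressure_bar"] <= 162.0)

live = {"reactor_power_pct": 2.4, "rod_bottom_signal": True,
        "pzr_pressure_bar": 80.0}
if pec_reactor_trip(live):
    print("PEC satisfied: offer procedure entry to the operator")
if cas_reexecute(live):
    print("CAS condition met: flag step for re-execution")
# Keeping the operator in the loop means such results are presented as
# prompts for confirmation, not executed silently.
```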

  9. Possibilities for applying gamma-spectrometry software ANGLE in isotope hydrology analytical procedures

    International Nuclear Information System (INIS)

    The ANGLE software for gamma-spectrometry (semiconductor detector gamma-efficiency calculations, in particular) has been in use in its various forms for 15 years in numerous gamma-spectrometry laboratories around the world. ANGLE is basically a semi-empirical model for efficiency calculations, which combines the advantages of both the absolute and the relative approach, while attempting to minimize their drawbacks. The physical model behind it is the concept of the effective solid angle, which is calculated from input data on the geometrical and physical characteristics of (1) the source (including the container vessel), (2) the detector and (3) the counting arrangement (including intercepting layers between the latter two). It was shown earlier that only the simultaneous differential treatment of gamma-attenuation, geometry and detector response, as in ANGLE, is essentially justified for this type of calculation. The program can be applied to practically all counting situations encountered in laboratory practice: point, disc, cylindrical or Marinelli sources and any matrix composition. No standards are required, but a so-called 'reference efficiency curve' should be obtained (once and for all) by measuring a set of calibrated point sources. In summary, ANGLE is characterized by (1) a very broad application range, (2) satisfactory accuracy (of the order of a few percent), (3) easy data manipulation (under Windows), (4) short computation times, (5) flexibility with respect to input parameters and (6) suitability for didactical purposes. Possibilities for applying ANGLE in isotope hydrology analytical procedures (e.g. where radioactive tracers are involved) are discussed. Tracer techniques are of particular interest for water resources management in Mediterranean karstic regions, typically abundant in precipitation in winter but scarce in ground water in summer, as is the case on the east coast of the Adriatic, including the coastal part of Montenegro. (author)
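
    A heavily simplified sketch of the effective-solid-angle idea follows. ANGLE itself performs a differential treatment that folds in attenuation and detector response; the sketch keeps only the geometric solid angle of a disc-shaped detector face seen from an on-axis point source, and uses it to transfer a reference efficiency to a new counting distance. Geometry and efficiency values are assumed.

```python
import numpy as np

# Simplified illustration of efficiency transfer via solid angles. This
# keeps only the geometric factor; the real effective solid angle weights
# the integration by attenuation and detector response.
def disc_solid_angle(distance_cm, radius_cm):
    """Solid angle (sr) subtended by a disc seen from an on-axis point."""
    return 2.0 * np.pi * (1.0 - distance_cm / np.hypot(distance_cm, radius_cm))

eps_ref = 0.012                        # reference efficiency at d_ref (assumed)
d_ref, d_new, r_det = 10.0, 5.0, 3.0   # cm, hypothetical geometry
eps_new = eps_ref * disc_solid_angle(d_new, r_det) / disc_solid_angle(d_ref, r_det)
print(f"transferred efficiency: {eps_new:.4f}")
```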

  10. GRUKON - A package of applied computer programs system input and operating procedures of functional modules

    International Nuclear Information System (INIS)

    This manual describes a software package for the production of multigroup neutron cross-sections from evaluated nuclear data files. It presents the information necessary to run the functional modules of the program, including the operating procedures, the data input, the macro-command language and the assignment of the system's procedures. The report also presents the methodology used in coding the individual modules: the rules, the syntax and the procedural method. An example of the application of the data processing module is also given. (author)

  11. 20 CFR 667.650 - What procedures apply to the appeals of the Governor's imposition of sanctions for substantial...

    Science.gov (United States)

    2010-04-01

    ... Governor's imposition of sanctions for substantial violations or performance failures by a local area? 667... State Appeals Processes § 667.650 What procedures apply to the appeals of the Governor's imposition of... been found in substantial violation of WIA title I, and has received notice from the Governor...

  12. Neutron resonance averaging

    International Nuclear Information System (INIS)

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging applied to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  13. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  14. Procedures of creep-fatigue life evaluation applied to inelastic design analysis

    International Nuclear Information System (INIS)

    The application of inelastic analysis to structural design is one of the major concerns in breeder reactor development. Against this background, investigations of inelastic constitutive equations have intensified, and various new constitutive models have been proposed. Although ASME Code Case N-47 provides general guidelines for damage evaluation, they do not seem adequate in the following respects: (i) underestimation of fatigue damage under non-proportional loading, and (ii) overestimation of creep damage for compressive stress. To address the former problem, the authors propose a procedure for uniaxialization of the history of multi-axial strain components. The latter problem can be improved by using the new creep rupture criterion proposed by ORNL. These procedures were coded as a post-processor and linked to the MARC program together with the new constitutive model described above. This paper introduces the features of the procedures. (orig.)
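
    The damage bookkeeping such evaluations build on can be sketched as a linear summation of fatigue and creep fractions; design codes add interaction limits on top of this. All cycle counts and allowable values below are hypothetical.

```python
# Sketch of the linear creep-fatigue damage summation underlying such
# evaluations (hypothetical values): fatigue damage Df = sum(n_i / Nf_i),
# creep damage Dc = sum(dt_j / Tr_j).
fatigue_blocks = [(200, 1.0e4), (50, 2.0e3)]        # (applied cycles, allowable cycles)
creep_intervals = [(5.0e3, 8.0e4), (1.2e3, 3.0e4)]  # (hold time h, rupture time h)

Df = sum(n / Nf for n, Nf in fatigue_blocks)
Dc = sum(dt / Tr for dt, Tr in creep_intervals)
print(f"Df = {Df:.3f}, Dc = {Dc:.3f}, total = {Df + Dc:.3f}")
# The paper's two criticisms map directly onto these sums: non-proportional
# loading makes the cycle counting behind Df too small, while counting
# compressive-stress periods in Dc makes the creep term too large.
```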

  15. Calculation of the information content of retrieval procedures applied to mass spectral data bases

    NARCIS (Netherlands)

    Marlen, G. van; Dijkstra, Auke; Klooster, H.A. van 't

    1979-01-01

    A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity o
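
    Although the record is truncated, the quantity it describes can be sketched under a common set of assumptions: if a mass position is set in a fraction p of the reference spectra and positions are treated as independent, each contributes the binary entropy H(p), and the information content is the sum over positions. The reference file below is synthetic.

```python
import numpy as np

# Sketch of one common information measure for binary-coded spectra: a
# position set in a fraction p of reference spectra carries
# H(p) = -p*log2(p) - (1-p)*log2(1-p) bits; assuming independence, the
# per-spectrum information is the sum over positions. (Illustrative only;
# the paper derives its estimate from the reference file statistics.)
rng = np.random.default_rng(0)
reference = rng.random((500, 128)) < 0.15   # 500 spectra, 128 binary positions

p = reference.mean(axis=0)                  # occurrence frequency per position
p = np.clip(p, 1e-12, 1 - 1e-12)            # avoid log(0)
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
print(f"estimated information content: {H.sum():.1f} bits per spectrum")
```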

  16. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
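
    A minimal sketch of the DA-SVD baseline described above, with assumed shapes and parameters: the tissue curve is the convolution of the AIF with CBF times the residue function, so discretizing the convolution gives a lower-triangular system that a truncated SVD pseudo-inverse can solve; CBF is then read off as the peak of the recovered curve.

```python
import numpy as np

# Minimal DA-SVD sketch: C_voi(t) = CBF * (AIF (*) R)(t). Discretizing the
# convolution gives a lower-triangular system A x = c with x(t) = CBF*R(t),
# solved with a truncated SVD pseudo-inverse. Parameters are assumed.
dt, n = 1.0, 60
t = np.arange(n) * dt
aif = (t ** 3) * np.exp(-t / 1.5)          # gamma-variate-like AIF (arbitrary units)
R_true = np.exp(-t / 4.0)                  # exponential residue function
cbf_true = 0.01
c_voi = cbf_true * dt * np.convolve(aif, R_true)[:n]
c_voi += np.random.default_rng(1).normal(0, 1e-4, n)   # measurement noise

A = dt * np.array([[aif[i - j] if i >= j else 0.0
                    for j in range(n)] for i in range(n)])
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.2 * s[0], 1.0 / s, 0.0)         # 20% truncation threshold
x = Vt.T @ (s_inv * (U.T @ c_voi))                     # x(t) = CBF * R(t)
print(f"estimated CBF: {x.max():.4f} (true {cbf_true})")
```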

  17. Sequential procedure for the design of checklists applied to patient safety

    Directory of Open Access Journals (Sweden)

    Pardal-Refoyo JL

    2014-07-01

    Introduction: Checklists are cognitive aids that guide the performance of complex tasks under stress or fatigue, reduce errors of omission and help identify critical incidents. There is a lack of specific methodological guidance for their development. Objective: The aim of the study was to design a structured process for developing checklists applied to patient safety (PS). Material and methods: Systematic review. Ten papers were selected: five related to the structure of checklists, three related to PS research methods (root cause analysis, RCA, and failure mode and effects analysis, FMEA), one related to the construction of indicators and one to consensus methods. Results: A 15-step sequential process was designed to support the development of checklists applied to PS, collecting elements proposed in the reviewed literature. Conclusions: The development of checklists applied to a particular PS process should follow a sequential model which includes the literature review, the RCA and FMEA methods, and consensus methods.

  18. Quaternion Averaging

    Science.gov (United States)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
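
    The eigenvector formulation this Note arrives at is compact enough to sketch: form the weighted sum of quaternion outer products and take the eigenvector belonging to the largest eigenvalue. A convenient side effect is insensitivity to the q/-q sign ambiguity, since outer products are sign-invariant. The star-tracker quaternions below are made up.

```python
import numpy as np

# Weighted quaternion averaging via an eigenvalue problem: with unit
# quaternions q_i (4-vectors) and weights w_i, form
# M = sum_i w_i * outer(q_i, q_i); the optimal average is the eigenvector
# of M with the largest eigenvalue. The outer product makes the result
# insensitive to the q/-q sign ambiguity.
def average_quaternion(quats, weights=None):
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]               # eigenvector of the largest eigenvalue

# Two nearly aligned star-tracker quaternions, one with flipped sign:
q1 = np.array([0.0, 0.0, 0.0, 1.0])
q2 = -np.array([0.0, 0.0, np.sin(0.01), np.cos(0.01)])
print(average_quaternion([q1, q2]))  # close to q1 (up to overall sign)
```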

  19. The Effect of Applying Direct Observation of Procedural Skills (DOPS) on Nursing Students’ Clinical Skills: A Randomized Clinical Trial

    OpenAIRE

    Hengameh, Habibi; Afsaneh, Raiesifar; Morteza, Khaghanizade; Hosein, Mahmudi; Marjan, Seyed Mazhari; Ebadi, Abbas

    2015-01-01

    Background: Remarkable advances in educational measurement have demonstrated the need to implement modern and appropriate methods of clinical evaluation. This study was carried out to compare the effect of applying direct observation of procedural skills (DOPS) with the routine evaluation method on the clinical skills of nursing students. Methods: This randomized clinical trial was conducted on students of the Nursing Army College, Tehran, Iran. After obtaining approval from the Ethics Committee of the Baqiyatall...

  20. A procedure for applying gradient pulses in imaging experiments using a nuclear magnetic resonance machine

    International Nuclear Information System (INIS)

    A computerized method for applying gradient pulses in NMR imaging is proposed, especially for medical diagnosis, although application to industrial nondestructive testing is feasible. The method overcomes drawbacks associated with the difficulties involved in programming gradient coils, which often preclude the use of oblique images to the detriment of the quality of diagnosis.
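
    The underlying difficulty can be made concrete with a hedged sketch: an oblique slice requires the logical gradients (readout, phase, slice) to be re-expressed on the physical coils through a rotation matrix, so every oblique orientation mixes all three coil waveforms, exactly the kind of bookkeeping a computerized method can take over. The axis convention and amplitudes below are assumed.

```python
import numpy as np

# Why oblique slices are awkward to program by hand: the logical gradients
# (readout, phase, slice) must be re-expressed on the physical coils
# (x, y, z) through a rotation matrix, so every oblique orientation mixes
# all three coil waveforms. Axis convention and values are assumed.
def oblique_gradients(G_logical, angle_deg):
    """Rotate logical gradients about x by angle -> physical coil amplitudes."""
    a = np.radians(angle_deg)
    R = np.array([[1, 0, 0],
                  [0, np.cos(a), -np.sin(a)],
                  [0, np.sin(a),  np.cos(a)]])
    return R @ G_logical

G_log = np.array([10.0, 0.0, 5.0])     # mT/m on (readout, phase, slice), assumed
print(oblique_gradients(G_log, 30.0))  # multiple physical coils now active
```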

  1. Spatial Data Quality Control Procedure applied to the Okavango Basin Information System

    Science.gov (United States)

    Butchart-Kuhlmann, Daniel

    2014-05-01

    Spatial data is a powerful form of information, capable of providing information of great interest and tremendous use to a variety of users. However, much like other data representing the 'real world', its precision and accuracy must be high for the results of data analysis to be deemed reliable and thus applicable to real-world projects and undertakings. The spatial data quality control (QC) procedure presented here was developed as the topic of a Master's thesis, in the sphere of, and using data from, the Okavango Basin Information System (OBIS), itself a part of The Future Okavango (TFO) project. The aim of the QC procedure was to form the basis of a method for determining the quality of spatial data relevant to hydrological, solute and erosion transport modelling using the Jena Adaptable Modelling System (JAMS). As such, the quality of all data present in OBIS classified under the topics of elevation, geoscientific information, or inland waters was evaluated. Now that the initial data quality has been evaluated, efforts are underway to correct the errors found, thus improving the quality of the dataset.

  2. PhysioSoft – An Approach in Applying Computer Technology in Biofeedback Procedures

    OpenAIRE

    Havelka, Mladen; HAVELKA, Juraj; Delimar, Marko

    2009-01-01

    The paper presents a description of an original biofeedback computer program called PhysioSoft. It was designed on the basis of experience in the development of biofeedback techniques by an interdisciplinary team of experts from the Department of Health Psychology of the University of Applied Health Studies, the Faculty of Electrical Engineering and Computing, University of Zagreb, and »Mens Sana«, a private biofeedback practice in Zagreb. The interest in the possibility of producing direct and voluntar...

  3. Current LC-MS methods and procedures applied to the identification of new steroid metabolites.

    Science.gov (United States)

    Marcos, Josep; Pozo, Oscar J

    2016-09-01

    The study of the metabolism of steroids has a long history, from the first characterizations of the major metabolites of steroidal hormones in the pre-chromatographic era to the latest discoveries of new forms of excretion. The introduction of mass spectrometers coupled to gas chromatography at the end of the 1960s represented a major breakthrough for the elucidation of new metabolites. In the last two decades, this technique has been complemented by the use of liquid chromatography-mass spectrometry (LC-MS). In addition to becoming fundamental in clinical steroid determinations due to its excellent specificity, throughput and sensitivity, LC-MS has emerged as an exceptional tool for the discovery of new steroid metabolites. The aim of the present review is to provide an overview of the current LC-MS procedures used in the quest for novel metabolic products of steroidal hormones and exogenous steroids. Several aspects regarding LC separations are first outlined, followed by a description of the key processes that take place in the mass spectrometric analysis, i.e. the ionization of the steroids in the source and the fragmentation of the selected precursor ions in the collision cell. The different analyzers and approaches employed are described, together with representative examples of each. Special emphasis is placed on triple quadrupole analyzers (LC-MS/MS), since they are the most commonly employed. Examples of the use of precursor ion scan, neutral loss scan and theoretical selected reaction monitoring strategies are also explained. PMID:26709140

  4. HIGH QUALITY ENVIRONMENTAL PRINCIPLES APPLIED TO THE ARCHITECTONIC DESIGN SELECTION PROCEDURE: THE NUTRE LAB CASE

    Directory of Open Access Journals (Sweden)

    Claudia Barroso Krause

    2012-06-01

    The need to produce more sustainable buildings has been influencing design decisions all over the world. This makes it imperative, in Brazil, to develop strategies and methods to aid decision making during the design process, focused on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design for a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of a reference guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria) and the assistance of the members of the jury in the judging phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® certification (Haute Qualité Environnementale) and the method suggested by the ADDENDA® architecture office. The success of this experience points to the possibility of future application in similar cases.

  5. Implementation of procedures for kilovoltage evaluation applied to dental X ray system

    International Nuclear Information System (INIS)

    In this work, measurements were made to evaluate the accuracy and precision of the voltage applied to an X-ray tube, as well as its variation with distance. A dental X-ray system with a nominal voltage of 70 kV was used, together with a portable digital kV meter calibrated by the IEE/USP. The measured kV results showed a variation of 9.7% in accuracy and 1.6% in precision. The results obtained for the distance variation showed a deviation of only 0.6% in the kVp values obtained. The results are in accordance with the minimum values recommended by Portaria Federal 453 of the Ministério da Saúde. (author)
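
    Definitions of the two figures of merit vary between protocols, so treat the following as one common convention rather than the authors' exact procedure: accuracy as the relative deviation of the mean reading from the nominal setting, and precision as the coefficient of variation. The meter readings are hypothetical.

```python
import numpy as np

# One common convention (an assumption, not necessarily the authors'):
# accuracy = relative deviation of the mean from the nominal setting,
# precision = coefficient of variation of repeated readings.
nominal_kv = 70.0
readings_kv = np.array([63.1, 63.5, 62.8, 63.3, 62.9])  # hypothetical readings

accuracy_pct = 100.0 * (readings_kv.mean() - nominal_kv) / nominal_kv
precision_pct = 100.0 * readings_kv.std(ddof=1) / readings_kv.mean()
print(f"accuracy: {accuracy_pct:+.1f}%  precision: {precision_pct:.1f}%")
```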

  6. Porous chitosan scaffold cross-linked by chemical and natural procedure applied to investigate cell regeneration

    International Nuclear Information System (INIS)

    Highlights: ► Polymeric scaffolds, made from chitosan-based films fixed by a chemical (citrate) or a natural (genipin) method, were developed. ► Nano-indentation with a constant harmonic frequency was applied to the porous scaffolds to explore their surface mechanics. ► The relationship between surface mechanical properties and the cell-surface interactions of scaffold materials was demonstrated. ► Porous scaffolds cross-linked by genipin showed adequate cell affinity, non-toxicity, and suitable mechanical properties. - Abstract: Porous chitosan scaffolds are used for tissue engineering and drug delivery, but chitosan is limited as a scaffold material by its mechanical weakness, which restrains cell adhesion on the surface. In this study, a chemical reagent (citrate) and a natural reagent (genipin) are used as cross-linkers for the formation of chitosan-based films. A nanoindentation technique with a continuous stiffness measurement system is applied to the porous scaffold surface to examine its characteristic modulus and nanohardness. The characteristic modulus of a genipin-cross-linked chitosan surface is ≈2.325 GPa, which is significantly higher than that of an uncross-linked one (≈1.292 GPa). The cell-scaffold surface interaction is assessed. The cell morphology and the results of an MTS assay of 3T3 fibroblast cells on a genipin-cross-linked chitosan surface indicate that the enhancement of mechanical properties induced cell adhesion and proliferation on the modified porous scaffold surface. The pore size and mechanical properties of porous chitosan film can be tuned for specific applications such as tissue regeneration.

  7. Average Interest

    OpenAIRE

    George Chacko; Sanjiv Ranjan Das

    1997-01-01

    We develop analytic pricing models for options on averages by means of a state-space expansion method. These models augment the class of Asian options to markets where the underlying traded variable follows a mean-reverting process. The approach builds from the digital Asian option on the average and enables pricing of standard Asian calls and puts, caps and floors, as well as other exotica. The models may be used (i) to hedge long period interest rate risk cheaply, (ii) to hedge event risk (...
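
    As a numerical cross-check of such analytic models (not the state-space expansion itself), an option on the arithmetic average of a mean-reverting rate can be priced by plain Monte Carlo; all dynamics parameters below are illustrative.

```python
import numpy as np

# Monte Carlo sketch (illustrative parameters): a call on the arithmetic
# average of a mean-reverting (Vasicek-type) rate, payoff max(avg - K, 0),
# discounted at a flat rate. This cross-checks, not replaces, analytic
# average-rate pricing models.
rng = np.random.default_rng(42)
kappa, theta, sigma = 1.5, 0.05, 0.02   # mean-reversion speed, level, volatility
r0, K, T, disc = 0.05, 0.05, 1.0, 0.05
n_steps, n_paths = 252, 100_000
dt = T / n_steps

r = np.full(n_paths, r0)
avg = np.zeros(n_paths)
for _ in range(n_steps):
    r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    avg += r / n_steps                   # running arithmetic average

payoff = np.maximum(avg - K, 0.0)
price = np.exp(-disc * T) * payoff.mean()
print(f"average-rate call price: {price:.5f}")
```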

  8. Applying radiation safety standards in diagnostic radiology and interventional procedures using x rays

    International Nuclear Information System (INIS)

    The International Basic Safety Standards for Protection against Ionizing Radiation and for the Safety of Radiation Sources (BSS) cover the application of ionizing radiation for all practices and interventions and are, therefore, basic and general in nature. Users of radiation sources have to apply those basic requirements to their own particular practices. That requires a degree of 'interpretation' by the user, which can result in varying levels of regulatory compliance and inconsistencies between applications of the BSS to similar practices. In this context, the Preamble of the BSS states that: 'The [regulatory body] may need to provide guidance on how certain regulatory requirements are to be fulfilled for various practices, for example in regulatory guideline documents.' In order to guide the user to achieve a good standard of protection and to achieve a consistent national approach to licensing and inspection, some countries have developed practice specific regulatory guidance, while others have practice specific regulations. National regulatory guidance is tailored to a country's own legislation and regulations for obvious reasons. This can lead to problems if the guidance is used in other States without appropriate modification to take local requirements into account. There would appear, therefore, to be scope for producing internationally harmonized guidance, while bearing in mind that the ultimate responsibility for the regulatory documents rests with the State. Some regions have taken the initiative of preparing guidance to facilitate the regional harmonization of regulatory control of certain common practices (e.g. radiology). In particular, it is felt that States participating in the IAEA's technical cooperation Model Project on Upgrading Radiation and Waste Safety Infrastructure would benefit significantly from the availability of practice specific guidance. Member States could then more readily develop their own guidance tailored to their own

  9. Offshore wind farm siting procedures applied offshore of Block Island, Rhode Island

    Science.gov (United States)

    O'Reilly, Christopher M.

    land. The REZ area is chosen as a test site for the algorithm, and an optimal layout for the 5 turbines is found and discussed. Similarly, the FAA tool is applied to the Block Island airport, demonstrating the complexity of the FAA exclusionary area and defining the limits of the exclusionary areas. The FAA regulation model is a geometric model in which all major (FAA) regulations within RI and the RI topography are embedded. The user specifies the dimensions of the proposed turbines and an airport of interest, and a map of exclusionary zones specific to the turbine height and the rules applying to the airport is generated. The model is validated for the entire state of Rhode Island. The micro-siting model finds the optimum placement of each turbine for a given number of turbines within an area. It includes the aerodynamic constraints (loss in wind speed within the wake of a turbine) associated with the deployment of arrays of turbines and the cable interconnection cost. It is combined with the technical, ecological, and social constraints used in the RIOSAMP macro-siting tool to provide a comprehensive micro-siting tool. In the optimization algorithm, a simple wake model and a turbine-clustering algorithm are combined with the WIFSI in an objective function; the objective function is optimized with a genetic algorithm (GA).
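
    The aerodynamic constraint mentioned above can be sketched with the classic Jensen (top-hat) wake model, chosen here for familiarity rather than because it is the model the thesis embeds: the fractional speed deficit decays as the wake expands linearly downstream.

```python
import numpy as np

# Jensen (top-hat) wake model: behind a turbine, the wind-speed deficit is
# (1 - sqrt(1 - Ct)) * (D / (D + 2*k*x))^2, where D is the rotor diameter,
# Ct the thrust coefficient, k the wake decay constant and x the downstream
# distance. All parameter values are illustrative.
def jensen_deficit(x_m, rotor_d_m=100.0, ct=0.8, k=0.05):
    """Fractional wind-speed deficit a distance x_m directly downstream."""
    return (1.0 - np.sqrt(1.0 - ct)) * (rotor_d_m / (rotor_d_m + 2.0 * k * x_m)) ** 2

u0 = 9.0                                   # free-stream wind speed, m/s (assumed)
for x in (200.0, 500.0, 1000.0):
    print(f"{x:6.0f} m downstream: {u0 * (1 - jensen_deficit(x)):.2f} m/s")
# A micro-siting optimizer (e.g. the GA above) sums such deficits over all
# upstream turbines when evaluating a candidate layout's energy yield.
```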

  10. New interpretive procedure for whole rock U-Pb systems applied to the Vredefort crustal profile

    International Nuclear Information System (INIS)

    Granulite-grade Precambrian gneisses have usually undergone at least one period of strong U depletion. Whole rock U-Pb isotope studies can determine the time(s) of this severe depletion, and this work attempts to place such studies on a more rigorous footing. Two-stage U-Pb systems can be described in terms of one major, episodic differentiation into rocks with varying U/Pb ratios, while three-stage systems can be described by two such distinct episodes. Most of the Precambrian granulites that have been isotopically analyzed have histories too complex to be described as two-stage systems. However, it is demonstrated here that U-Pb data on whole rock suites can yield the complete U-Pb chemical history of a three-stage system (in terms of U/Pb ratios). For a suite of granulites, present-day 207Pb/204Pb and 206Pb/204Pb ratios and element concentration data allow these ratios to be calculated at a number of specific past times and plotted as an array. The degree of scatter in each of these 'past arrays' is graphed as a function of time. The point of least scatter denotes the age of the end of stage 2 in the history of the system. The array slope and the dating of the end of stage 2 also permit the beginning of stage 2 to be calculated. All other parameters in the system (U and Pb concentrations, Pb isotopic ratios) can then be determined for each individual rock throughout its history. The new interpretive method also distinguishes sensitively among various kinds of uranium fractionation which may have operated during the differentiation episodes. It is applied here to uranium-depleted granulites in the deeper part of the Vredefort crustal profile. The times of the two fractionating episodes are calculated at approximately 3860 and 2760 m.y., respectively.
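
    The 'past array' scan lends itself to a compact sketch on synthetic data: project present-day ratios back in time with the standard decay equations and find the time at which the suite collapses toward a line. The decay constants are the standard values; the sample suite is constructed to have been collinear 2.76 Gyr ago, so the scan's scatter minimum falls at the gridpoint nearest that age.

```python
import numpy as np

# 'Past array' scan on synthetic data: back-calculate present-day ratios to
# time t with the standard decay equations and find the t of least scatter.
L238, L235 = 1.55125e-10, 9.8485e-10        # decay constants, 1/yr

def back_calculate(pb6_now, pb7_now, mu_now, t):
    """206Pb/204Pb and 207Pb/204Pb at time t (years ago) from present values."""
    pb6 = pb6_now - mu_now * (np.exp(L238 * t) - 1.0)
    pb7 = pb7_now - (mu_now / 137.88) * (np.exp(L235 * t) - 1.0)  # 238U/235U = 137.88
    return pb6, pb7

# Synthetic suite constructed to be collinear 2.76 Gyr ago:
mu = np.array([8.0, 12.0, 3.0, 20.0])       # present-day 238U/204Pb
pb6_past = np.array([13.5, 14.0, 14.5, 15.0])
pb7_past = 14.0 + 0.6 * (pb6_past - 13.5)
t_true = 2.76e9
pb6_now = pb6_past + mu * (np.exp(L238 * t_true) - 1.0)
pb7_now = pb7_past + (mu / 137.88) * (np.exp(L235 * t_true) - 1.0)

for t in np.linspace(2.0e9, 3.5e9, 7):
    x, y = back_calculate(pb6_now, pb7_now, mu, t)
    resid = y - np.polyval(np.polyfit(x, y, 1), x)  # scatter about best-fit line
    print(f"t = {t/1e9:.2f} Gyr: rms scatter = {np.sqrt((resid**2).mean()):.5f}")
# The minimum appears at the gridpoint nearest 2.76 Gyr, dating the end of
# stage 2 for this synthetic suite.
```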

  11. Radiochromic film for dosimetric measurements in radiation shielding composites synthesized for application in high-dose radiology procedures

    Energy Technology Data Exchange (ETDEWEB)

    Fontainha, C. C. P. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Baptista N, A. T.; Faria, L. O., E-mail: crissia@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    Full text: Medical radiology offers great benefits to patients. However, although specific high-dose procedures such as fluoroscopy, interventional radiology and computed tomography (CT) make up a small percentage of imaging procedures, they contribute significantly to the population dose, and patients may suffer tissue damage. The probability of deterministic effects depends on the type of procedure performed, the exposure time, and the amount of dose applied to the irradiated area. Calibrated radiochromic films can identify the size and distribution of the radiation fields and measure dose intensities. Radiochromic films are sensitive to doses ranging from 0.1 to 20 cGy and have the same response for X-ray effective energies ranging from 20 to 100 keV. New radiation-attenuating materials have been widely investigated, resulting in reduced entrance skin dose. In this work, Bi₂O₃ and ZrO₂:8% Y₂O₃ composites were obtained by mixing them into a P(VDF-TrFe) copolymer matrix by the casting method and then characterized by FTIR. Dosimetric measurements were obtained with XR-QA2 Gafchromic radiochromic films. In this setup, one radiochromic film is directly exposed to the X-ray beam and another measures the attenuated beam; both were exposed to an absorbed dose of 10 mGy with an RQR5 beam quality (70 kV X-ray beam). Under the same conditions, the irradiated XR-QA2 films were stored and scanned in order to obtain more reliable results. The attenuation factors, evaluated with the XR-QA2 radiochromic films, indicate that both composites are good candidates for use as patient radiation shielding in high-dose medical procedures. (Author)

  12. 40 CFR Appendix B to Part 76 - Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1, Boilers

    Science.gov (United States)

    2010-07-01

    ... the cost in constant dollars of low NOX burner technology applied to Group 1, Phase I boilers. The... separated overfire air as applied to tangentially fired boilers, in lieu of low NOX burner technology for.... Average Capital Cost for Low NOX Burner Technology Applied to Group 1 Boilers The Administrator will...

  13. The safety assessment of OPR-1000 nuclear power plant for station blackout accident applying the combined deterministic and probabilistic procedure

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dong Gu, E-mail: littlewing@kins.re.kr [Korea Institute of Nuclear Safety, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2014-08-15

    Highlights: • The combined deterministic and probabilistic procedure (CDPP) was proposed for the safety assessment of BDBAs. • The safety assessment of the OPR-1000 nuclear power plant for an SBO accident is performed by applying the CDPP. • By estimating the offsite power restoration time appropriately, the SBO risk is reevaluated. • It is concluded that the CDPP is applicable to the safety assessment of BDBAs without significant erosion of the safety margin. - Abstract: Station blackout (SBO) is a typical beyond design basis accident (BDBA) and a significant contributor to overall plant risk. The risk analysis of SBO can be an important basis for rulemaking, accident mitigation strategies, etc. Recently, studies on integrated deterministic and probabilistic approaches to nuclear safety in nuclear power plants have been carried out, and among them the combined deterministic and probabilistic procedure (CDPP) was proposed for the safety assessment of BDBAs. In the CDPP, the conditional exceedance probability obtained by the best estimate plus uncertainty method acts as a go-between for the deterministic and probabilistic safety assessments, resulting in more reliable values of core damage frequency and conditional core damage probability. In this study, the safety assessment of the OPR-1000 nuclear power plant for an SBO accident was performed by applying the CDPP. It was confirmed that the SBO risk should be reevaluated by eliminating excessive conservatism in the existing probabilistic safety assessment to meet the targeted core damage frequency and conditional core damage probability. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system lies within the acceptable risk range for SBO. In addition, it is concluded that the CDPP is applicable to the safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.

  14. The safety assessment of OPR-1000 nuclear power plant for station blackout accident applying the combined deterministic and probabilistic procedure

    International Nuclear Information System (INIS)

    Highlights: • The combined deterministic and probabilistic procedure (CDPP) was proposed for the safety assessment of BDBAs. • The safety assessment of the OPR-1000 nuclear power plant for an SBO accident is performed by applying the CDPP. • By estimating the offsite power restoration time appropriately, the SBO risk is reevaluated. • It is concluded that the CDPP is applicable to the safety assessment of BDBAs without significant erosion of the safety margin. - Abstract: Station blackout (SBO) is a typical beyond design basis accident (BDBA) and a significant contributor to overall plant risk. The risk analysis of SBO can be an important basis for rulemaking, accident mitigation strategies, etc. Recently, studies on integrated deterministic and probabilistic approaches to nuclear safety in nuclear power plants have been carried out, and among them the combined deterministic and probabilistic procedure (CDPP) was proposed for the safety assessment of BDBAs. In the CDPP, the conditional exceedance probability obtained by the best estimate plus uncertainty method acts as a go-between for the deterministic and probabilistic safety assessments, resulting in more reliable values of core damage frequency and conditional core damage probability. In this study, the safety assessment of the OPR-1000 nuclear power plant for an SBO accident was performed by applying the CDPP. It was confirmed that the SBO risk should be reevaluated by eliminating excessive conservatism in the existing probabilistic safety assessment to meet the targeted core damage frequency and conditional core damage probability. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system lies within the acceptable risk range for SBO. In addition, it is concluded that the CDPP is applicable to the safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin
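
    The headline quantities can be illustrated with a deliberately toy model (every number assumed): the SBO core damage frequency is the initiating frequency times the conditional core damage probability, here driven by failing to restore offsite power within the coping time under an exponential restoration model.

```python
import numpy as np

# Toy illustration only (all numbers hypothetical, not OPR-1000 data): the
# SBO core damage frequency is the SBO initiating-event frequency times the
# conditional core damage probability (CCDP), here driven by failing to
# restore offsite power before the coping time runs out.
f_sbo = 2.0e-5            # SBO frequency per reactor-year (assumed)
coping_time_h = 8.0       # battery/turbine-driven coping time (assumed)
tau_restore_h = 2.5       # mean offsite power restoration time (assumed)

# Exponential restoration model: P(restoration time > t) = exp(-t/tau)
ccdp = np.exp(-coping_time_h / tau_restore_h)
cdf = f_sbo * ccdp
print(f"CCDP = {ccdp:.2e}, CDF = {cdf:.2e} /reactor-year")
# Overstating tau (excess conservatism) inflates the CCDP; the study's point
# is that a realistic restoration estimate moves the CDF into the target range.
```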

  15. Methods and procedures to apply probabilistic safety assessment (PSA) techniques to the cobalt therapy process. Cuban experience

    International Nuclear Information System (INIS)

    This paper presents the results of the Probabilistic Safety Analysis (PSA) of the cobalt therapy process, which was performed as part of the International Atomic Energy Agency's Coordinated Research Project (CRP) to Investigate Appropriate Methods and Procedures to Apply Probabilistic Safety Assessment (PSA) Techniques to Large Radiation Sources. The primary methodological tools used in the analysis were Failure Modes and Effects Analysis (FMEA), Event Trees and Fault Trees. These tools were used to evaluate occupational, public and medical exposures during cobalt therapy treatment. The emphasis of the study was on the radiological protection of patients. During the course of the PSA, several findings were analysed concerning the cobalt treatment process. Regarding the undesired event probabilities, the lowest exposure probabilities correspond to public exposures during the treatment process (Z21), at around 10⁻¹⁰ per year, while worker exposures (Z11) are around 10⁻⁴ per year. Regarding the patient, the Z33 probabilities (undesired dose to normal tissue) and Z34 probabilities (unirradiated portion of the target volume) prevail. Patient accidental exposures are also classified in terms of the extent to which the error is likely to affect individual treatments, individual patients, or all the patients treated on a specific unit. Sensitivity analyses were performed to determine the influence of certain tasks or critical stages on the results. In conclusion, the study establishes that PSA techniques may effectively and reasonably determine the risk associated with the cobalt therapy treatment process, though there are some weaknesses in their methodological application to this kind of study, requiring further research. These weaknesses are due to the fact that traditional PSA has mainly been applied to complex hardware systems designed to operate with a high level of automation, whilst cobalt therapy treatment is a relatively simple hardware system with a
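
    A minimal event-tree quantification in the spirit of the analysis: each accident sequence probability is an initiating-event frequency multiplied by the failure probabilities of the barriers along its branch. The initiators, barriers and values below are hypothetical, not the study's Z-coded results.

```python
# Minimal event-tree quantification sketch (all frequencies and branch
# probabilities are hypothetical): each sequence probability is the
# initiating-event frequency times the product of the barrier failure
# probabilities along its branch.
initiating_events = {
    "wrong_treatment_time_entry": 1.0e-2,   # per treatment course
    "source_stuck_on_exit": 5.0e-4,
}
barriers = {
    "wrong_treatment_time_entry": [("independent_chart_check", 0.1),
                                   ("in_vivo_dosimetry_missed", 0.3)],
    "source_stuck_on_exit": [("area_monitor_ignored", 0.05)],
}

for ie, freq in initiating_events.items():
    p_seq = freq
    for name, p_fail in barriers[ie]:
        p_seq *= p_fail                     # all barriers must fail in sequence
    print(f"{ie}: {p_seq:.2e} per treatment course")
```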

  16. Creating an Implicit Measure of Cognition More Suited to Applied Research: A Test of the Mixed Trial-Implicit Relational Assessment Procedure (MT-IRAP)

    Science.gov (United States)

    Levin, Michael E.; Hayes, Steven C.; Waltz, Thomas

    2010-01-01

    The Implicit Relational Assessment Procedure (IRAP) is a promising tool for measuring implicit cognitions in applied research. However, the need for training and block effects can limit its capacity to assess effects with individual stimuli and participants, both of which are important for applied research. We developed a modified IRAP, the Mixed…

  17. Virtual Averaging: Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    OpenAIRE

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images with voxel resampling and added amplitude deviation with 15-time repetitions. Signal-to...

  18. Great improvements in quality and safety: a structured approach in procedure automation applied to Severe Accident Management guidelines

    International Nuclear Information System (INIS)

    Many efforts are made to develop methodologies and tools that can bring great improvements in quality and safety, which are especially important in critical fields like Severe Accident Management. This paper presents a study to verify the feasibility of applying a structured approach to procedure automation, such as the one offered by the DIAM package, to a sample of SAMG procedures, with the final goal of assessing the possibility of automating SAMG procedure management. This study has been carried out using, as input, generic SAMGs recommended by a reactor vendor and, as a computerized tool, DIAM. DIAM is a methodology and tool for representing procedures that offers many automatic features during the editing phase, allowing procedures to be generated that are always aligned with the standard. For the operation phase, it offers operator guidance support and allows procedure automation when the level of detail to which procedures are represented is appropriate and DIAM is properly connected to the field or to a simulator. In this feasibility study, the possibility of using structured approaches to represent and use SAMGs has been verified. The two guidelines that were the object of the study have been formally restructured and reformatted, and then implemented according to the DIAM methodology. They can now be used either with the guidance of DIAM Mate (the operator guide module of DIAM) or in a traditional way using the output of DIAM Doc (contained in a separate document). After a description of the DIAM (Development and Implementation of Accident Management procedures) methodology and graphical representation, this paper illustrates how DIAM has been successfully used in this sample application, describing the analyses and work required for the transition from the original version of the SAMGs to the restructured one. The paper contains only some meaningful samples of the original SAMGs and of their new format, as much as needed to understand the methodology and to perform comparisons. The main results are

  19. Comparison of the radiochemical separation procedures of plutonium applied for its determination in environmental samples using alpha spectrometry

    International Nuclear Information System (INIS)

    Alpha spectrometry of the plutonium isotopes can be performed only after plutonium has been thoroughly separated from the other components of the matrix, and numerous procedures have therefore been elaborated and tested. This communication compares determinations of the plutonium content in soil, bones, eggshells and reference materials obtained by alpha spectrometry combined with two different separation procedures. The samples were mineralized in concentrated HCl or HF prior to plutonium electrodeposition or coprecipitation with NdF₃. Several other details of the procedures were also tested in various variants. The quality of the spectra is discussed in terms of these pre-treatment methods.

  20. Virtual reality, augmented reality, and robotics applied to digestive operative procedures: from in vivo animal preclinical studies to clinical use

    Science.gov (United States)

    Soler, Luc; Marescaux, Jacques

    2006-04-01

    Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics belong to the most revolutionary ones. Our work aims at setting up new techniques for the detection, 3D delineation and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems that make tumor resection or treatment easier through the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners, so that they can share the same 3D reconstructed patient and interact on the same patient, virtually before the intervention and for real during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of the anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility of improving the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and verify an optimal, error-free procedure on the virtual patient clone, which will then be replayed on the real patient by the robot under surgeon control. This medical dream is about to become reality.

  1. A Unique Procedure to Identify Cell Surface Markers Through a Spherical Self-Organizing Map Applied to DNA Microarray Analysis

    Science.gov (United States)

    Sugii, Yuh; Kasai, Tomonari; Ikeda, Masashi; Vaidyanath, Arun; Kumon, Kazuki; Mizutani, Akifumi; Seno, Akimasa; Tokutaka, Heizo; Kudoh, Takayuki; Seno, Masaharu

    2016-01-01

    To identify cell-specific markers, we designed a DNA microarray platform with oligonucleotide probes for human membrane-anchored proteins. Human glioma cell lines were analyzed using the microarray and compared with normal and fetal brain tissues. For the microarray analysis, we employed a spherical self-organizing map, a clustering method suitable for converting multidimensional data into two-dimensional data and displaying the relationships on a spherical surface. Based on the gene expression profile, the cell surface characteristics were successfully mirrored onto the spherical surface, thereby distinguishing normal brain tissue from the disease model based on the strength of gene expression. The clustered glioma-specific genes were further analyzed by a polymerase chain reaction procedure and immunocytochemical staining of glioma cells. Our platform and the following procedure were successfully demonstrated to categorize the genes coding for cell surface proteins that are specific to glioma cells. Our assessment demonstrates that a spherical self-organizing map is a valuable tool for distinguishing cell surface markers and can be employed in marker discovery studies for the treatment of cancer. PMID:26966393
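
    For readers unfamiliar with the method, a compact self-organizing map training loop is sketched below, in its planar form for brevity; the spherical variant used in the paper changes the node lattice and its distance measure, not the update rule. Data sizes are arbitrary.

```python
import numpy as np

# Compact self-organizing map (planar lattice for brevity; a spherical SOM
# swaps the lattice and its distance measure, not the update rule).
rng = np.random.default_rng(0)
data = rng.normal(size=(300, 16))            # e.g. 300 genes x 16 expression values
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2)
weights = rng.normal(size=(64, 16))          # one prototype per map node

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)              # decaying learning rate
    radius = 3.0 * (1 - epoch / 30) + 0.5    # decaying neighborhood radius
    for x in data[rng.permutation(len(data))]:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # lattice distances
        h = np.exp(-d2 / (2 * radius ** 2))                 # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)          # pull toward sample
print(weights.shape)                         # trained 8x8 map of 16-D prototypes
```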

  2. Parametric analysis applied to perforating procedures of oil wells; Analise parametrica aplicada a procedimentos de canhoneio de pocos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Baioco, Juliana Souza; Seckler, Carolina dos Santos; Silva, Karinna Freitas da; Jacob, Breno Pinheiro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Metodos Computacionais e Sistemas Offshore; Silvestre, Jose Roberto; Soares, Antonio Claudio; Freitas, Sergio Murilo Santos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas

    2008-07-01

    The perforation process is an important step in well construction. It provides contact between the reservoir rock and the well, allowing oil production. The procedure consists of using explosive charges to bore a hole through the casing and into the rock, so that the reservoir fluid can flow to the well. Therefore, the right choice of both the gun and the charge type is extremely important, since many factors, such as shot density, penetration depth and hole diameter, influence the process and affect productivity. The objective of this paper is to present the results of a parametric study evaluating the influence of parameters related to the explosive charges on well productivity, since there are many types of charges with different properties, which give specific characteristics to the perforated area. For that purpose, a commercial program is used which allows the simulation of the flow problem, along with a finite element mesh generator with a pre-processor and a program that enables the construction of reservoir, well and perforation models. It can be observed that the penetration depth has a bigger influence than the hole diameter, making it an important factor when choosing the charge to be used in the project. (author)

  3. TOXICITY CHARACTERISTIC LEACHING PROCEDURE APPLIED TO RADIOACTIVE SALTSTONE CONTAINING TETRAPHENYLBORATE: DEVELOPMENT OF A MODIFIED ZERO-HEADSPACE EXTRACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Crapse, K.; Cozzi, A.; Crawford, C.; Jurgensen, A.

    2006-09-30

    In order to assess the effect of extended curing times at elevated temperatures on saltstone containing Tank 48H waste, saltstone samples prepared as a part of a separate study were analyzed for benzene using a modification of the United States Environmental Protection Agency (USEPA) method 1311 Toxicity Characteristic Leaching Procedure (TCLP). To carry out TCLP for volatile organic analytes (VOA), such as benzene, in the Savannah River National Laboratory (SRNL) shielded cells (SC), a modified TCLP Zero-Headspace Extractor (ZHE) was developed. The modified method was demonstrated to be acceptable in a side-by-side comparison with an EPA-recommended ZHE using nonradioactive saltstone containing tetraphenylborate (TPB). TCLP results for all saltstone samples tested containing TPB (both simulant and actual Tank 48H waste) were below the regulatory limit for benzene (0.5 mg/L). In general, higher curing temperatures corresponded to higher concentrations of benzene in TCLP extract. The TCLP performed on the simulant samples cured under the most extreme conditions (3000 mg/L TPB in salt and cured at 95 °C for at least 144 days) resulted in benzene values that were greater than half the regulatory limit. Taking into account that benzene in TCLP extract was measured on the same order of magnitude as the regulatory limit, that these experimental conditions may not be representative of actual curing profiles found in the saltstone vault and that there is significant uncertainty associated with the precision of the method, it is recommended that to increase confidence in TCLP results for benzene, the maximum curing temperature of saltstone be less than 95 °C. At this time, no further benzene TCLP testing is warranted. Additional verification would be recommended, however, should future processing strategies result in significant changes to salt waste composition in saltstone, as factors beyond the scope of this limited study may influence the decomposition of TPB in saltstone.

  4. On T-matrix averaging

    International Nuclear Information System (INIS)

    The T-matrix averaging procedure advocated by Burke, Berrington and Sukumar [1981, J. Phys. B. At. Mol. Phys. 14, 289] is demonstrated to hold in a class of soluble models for two different L2 basis expansions. The convergence rates as the bases are extended to completeness are determined. (author)

  5. Applying 'Technology Assessment' and 'Evidence Based Medicine' theory to interventional radiology. Part 1: Suggestions for the phased evaluation of new procedures

    International Nuclear Information System (INIS)

    AIM: To compare and contrast interventional radiology (IR) clinical and research practices with the technology assessment and evidence-based medicine (EBM) paradigms and to make suggestions for the phased evaluation of new IR procedures. MATERIALS AND METHODS: Course literature of the Association of University Radiologists' 'Basic Technology Assessment for Radiologists' course and the McMaster University Health Information Research Unit's 'How to Teach Evidence-Based Medicine 1999' course was used to identify major publications in each discipline. A computer search was performed to find other relevant literature. A model of the traditional development of IR procedures was developed, and suggestions for the phased evaluation of IR procedures were derived. RESULTS: As in diagnostic radiology, several levels of progressively stronger IR study design can be described and related to EBM 'levels of evidence'. These range from case reports and case series through case-control and cohort studies to randomized controlled trials (RCTs). The major weakness in the existing IR literature is the predominance of small, uncontrolled case series. Randomized controlled trials are likely to provide the best possible evidence of effectiveness, but they are expensive and randomization is sometimes unethical or impractical. Case-control and cohort studies have been under-utilized. Evidence-based medicine indices of benefit and harm have not yet been applied in IR and may have clinical advantages over traditional statistical methods. A literature search (10 years) using the MeSH terms 'radiology, interventional' and 'efficacy' yielded 30 papers. Combining 'radiology, interventional' and 'evidence-based medicine' yielded no papers. Comparative searches substituting the term 'diagnostic imaging' for 'radiology, interventional' yielded 4883 and 62 papers, respectively. CONCLUSION: Principles of technology assessment and EBM can be applied to the investigation of new IR procedures. A tool is needed to

  6. Quantization Procedures

    International Nuclear Information System (INIS)

    We present in this work a review of the conventional quantization procedure, of the one proposed by I.E. Segal, and of a new quantization procedure, similar to the latter, for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures are equivalent, but for nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs

  7. Averaging anisotropic cosmologies

    International Nuclear Information System (INIS)

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of anisotropic pressure-free models. Adopting the Buchert scheme, we recast the averaged scalar equations in Bianchi-type form and close the standard system by introducing a propagation formula for the average shear magnitude. We then investigate the evolution of anisotropic average vacuum models and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. The presence of nonzero average shear in our equations also allows us to examine the constraints that a phase of backreaction-driven accelerated expansion might put on the anisotropy of the averaged domain. We close by assessing the status of these and other attempts to define and calculate 'average' spacetime behaviour in general relativity.
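
    For orientation, the kernel of the Buchert scheme referred to above can be stated compactly. In the irrotational dust case the averaged Raychaudhuri equation and the kinematical backreaction term take the standard textbook form (quoted here for orientation, not from this particular paper):

        3 \frac{\ddot{a}_D}{a_D} + 4\pi G \langle \varrho \rangle_D = Q_D,
        \qquad
        Q_D = \frac{2}{3}\left( \langle \theta^2 \rangle_D - \langle \theta \rangle_D^2 \right) - 2 \langle \sigma^2 \rangle_D,

    where $a_D$ is the volume scale factor of the averaging domain $D$, $\theta$ the expansion scalar and $\sigma^2$ the shear magnitude; the work above closes this system by adding a propagation formula for the average shear.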

  8. Average-energy games

    OpenAIRE

    Bouyer, Patricia; Markey, Nicolas; Randour, Mickael; Larsen, Kim G.; Laursen, Simon

    2015-01-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this ...

  9. On the way towards a generalized entropy maximization procedure

    OpenAIRE

    Bagci, G. Baris; Tirnakli, Ugur

    2008-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results asymptotically in the exponential stationary distribution for q in the interval [0,1], in contrast to the stationary distribution of the inverse power law ob...

  10. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
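
    The simple barycentric estimate discussed in this record is easy to state in code. Below is a minimal numpy sketch (the function name and the sign-alignment step are illustrative, not taken from the paper); the Riemannian alternative the paper advocates would instead iterate a Karcher mean in the tangent space of the rotation group:

        import numpy as np

        def quaternion_barycenter(quats):
            # Euclidean barycenter of unit quaternions, projected back onto
            # the unit sphere -- the "common approach" the paper analyzes.
            quats = np.asarray(quats, dtype=float)
            ref = quats[0]
            # Align signs so antipodal representations q and -q do not cancel.
            aligned = np.where((quats @ ref)[:, None] < 0.0, -quats, quats)
            mean = aligned.mean(axis=0)
            return mean / np.linalg.norm(mean)

    As the paper notes, this projected barycenter is a first-order approximation to the Riemannian mean, with the subsequent corrections inherent in the least squares estimation.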

  11. Average Angular Velocity

    OpenAIRE

    Van Essen, H.

    2004-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to th...

  12. On the Averaging Principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and interchangeability is O(\epsilon^2) equivalent to the outcome of the corresponding homogeneous model, where \epsilon is the level of heterogeneity. We then use this averaging pr...

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...

  14. Averaged extreme regression quantile

    OpenAIRE

    Jureckova, Jana

    2015-01-01

    Various events in nature, economics and other areas force us to combine the study of extremes with regression and other methods. A useful tool for reducing the role of nuisance regression, while we are interested in the shape or tails of the basic distribution, is provided by the averaged regression quantile, and in particular by the averaged extreme regression quantile. Both are weighted means of regression quantile components, with weights depending on the regressors. Our primary interest is ...

  15. Gauge-Invariant Average of Einstein Equations for finite Volumes

    CERN Document Server

    Smirnov, Juri

    2014-01-01

    For the study of cosmological backreaction an averaging procedure is required. In this work a covariant and gauge-invariant averaging formalism for finite volumes will be developed. This averaging will be applied to the scalar parts of Einstein's equations. For this purpose dust as a physical laboratory will be coupled to the gravitating system. The goal is to study the deviation from the homogeneous universe and the impact of this deviation on the dynamics of our universe. Fields of physical observers are included in the studied system and used to construct a reference frame to perform the averaging without a formal gauge fixing. The derived equations resolve the question of whether backreaction is gauge dependent.

  16. Correlations between the most developed (G7) countries. A moving average window size optimisation

    CERN Document Server

    Miskiewicz, J; Miskiewicz, Janusz; Ausloos, Marcel

    2005-01-01

    Different distance matrices are defined and applied to look for correlations between the gross domestic product of the G7 countries. Results are illustrated through displays obtained from various graph methods. Significant similarities between results are obtained. A procedure for choosing the best distance function is proposed, taking into account the size of the window in which correlations are averaged.
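
    The record does not spell out its distance functions, but a common construction in this literature (an assumption here, not a quotation from the paper) maps a moving-window correlation coefficient c onto the metric d = sqrt(2(1 - c)). A numpy sketch:

        import numpy as np

        def windowed_correlation_distance(x, y, window):
            # Distance between two GDP time series from the Pearson
            # correlation computed in a moving window of a given size.
            n = len(x) - window + 1
            d = np.empty(n)
            for t in range(n):
                c = np.corrcoef(x[t:t + window], y[t:t + window])[0, 1]
                d[t] = np.sqrt(2.0 * (1.0 - c))
            return d

    Optimising the window size then amounts to scanning the window length over a range of values and applying a selection criterion of the kind the paper proposes.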

  17. Correlations Between the Most Developed (G7) Countries. A Moving Average Window Size Optimisation

    International Nuclear Information System (INIS)

    Different distance matrices are defined and applied to look for correlations between the gross domestic product of G7 countries. Results are illustrated through displays obtained from various graph methods. Significant similarities between results are obtained. A procedure for choosing the best distance function is proposed taking into account the size of the window in which correlations are averaged. (author)

  18. Averaging anisotropic cosmologies

    CERN Document Server

    Barrow, J D; Barrow, John D.; Tsagas, Christos G.

    2006-01-01

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of pressure-free Bianchi-type models. Adopting the Buchert averaging scheme, we identify the kinematic backreaction effects by focussing on spacetimes with zero or isotropic spatial curvature. This allows us to close the system of the standard scalar formulae with a propagation equation for the shear magnitude. We find no change in the already known conditions for accelerated expansion. The backreaction terms are expressed as algebraic relations between the mean-square fluctuations of the models' irreducible kinematical variables. Based on these we investigate the early evolution of averaged vacuum Bianchi type $I$ universes and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. We also discuss the possibility of accelerated expansion due to ...

  19. Average Angular Velocity

    CERN Document Server

    Essén, H

    2003-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to three parts: center of mass, rotational, plus the remaining internal energy relative to an optimally translating and rotating frame.

  20. On sparsity averaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2013-01-01

    Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.

  1. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT; Procedimientos de Control de Calildad de las Camaras de Muones del Experimento CMS Construidas en el CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Fouz, M. C.; Puerta Pelayo, J.

    2004-07-01

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the more common problems and possible solutions is given. This document can be considered as a chamber test handbook for beginners. (Author) 3 refs.

  2. Extending Research on the Interspersal Procedure to Perceptions of Continuous Reading Assignments: Applied and Theoretical Implications of a Failure To Replicate.

    Science.gov (United States)

    Martin, Jill J.; Skinner, Christopher H.; Neddenriep, Christine E.

    2001-01-01

    Investigates the impact of the interspersal procedure on students' reading tasks. Students read out loud two passages, with one containing additional interspersed brief, easy paragraphs. Students selected the passage that would require the least effort and the least time to read. Results suggest that the interspersal procedure may improve…

  3. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong to...

  4. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.

  5. The averaging principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of \emph{differentiability} and \emph{interchangeability} is $O(\epsilon^2)$ equivalent to the outcome of the corresponding homogeneous model, where $\epsilon$ is the level of heterogeneity. We then us...

  6. A novel approach for the averaging of magnetocardiographically recorded heart beats

    Energy Technology Data Exchange (ETDEWEB)

    DiPietroPaolo, D [Advanced Technologies Biomagnetics, Pescara (Italy); Mueller, H-P [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany); Erne, S N [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany)

    2005-05-21

    Performing signal averaging in an efficient and correct way is indispensable, since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. One of the most common procedures for performing the signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as well as in electrocardiography (ECG), is by means of spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. In order to obtain accurate signal detection, temporal alignment methods and objective classification criteria are developed. A processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals, in order to reach the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recordings, body-surface-potential maps (BSPM), and magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
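
    The alignment-plus-summation core of such a temporal averaging scheme can be sketched in a few lines of Python; the hierarchical clustering and classification stages that constitute the actual contribution of the paper are omitted here, and the names are illustrative:

        import numpy as np

        def align_and_average(beats, template):
            # Shift each beat to the lag maximising its cross-correlation
            # with a template, then average; for stationary noise the SNR
            # grows roughly as sqrt(number of beats).
            aligned = []
            for beat in beats:
                xc = np.correlate(beat - beat.mean(),
                                  template - template.mean(), mode="full")
                shift = int(xc.argmax()) - (len(template) - 1)
                aligned.append(np.roll(beat, -shift))  # wrap-around at edges
            return np.mean(aligned, axis=0)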

  7. Robust Averaging Level Control

    OpenAIRE

    Rosander, Peter; Isaksson, Alf; Löfberg, Johan; Forsman, Krister

    2011-01-01

    Frequent inlet flow changes typically cause problems for averaging level controllers. For a frequently changing inlet flow the upsets do not occur when the system is in steady state and the tank level is at its set-point. For this reason the tuning of the level controller gets quite complicated, since not only the size of the upsets but also the time in between them, relative to the hold-up of the tank, has to be considered. One way to obtain optimal flow filtering while directly accounting for futur...

  8. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

    For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the “harmful” event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU) – which, though similar to anti-frustrationism, has some important differences in practice. Current “positive” forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, it seems that a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  9. Basics of averaging of the Maxwell equations

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2011-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of the type of material studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; if a particular model is not consistent with them, it cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for metamaterials, which is rather close to the case of compound materials but should include the magnetic response of the inclusi...

  10. Some applications of stochastic averaging method for quasi Hamiltonian systems in physics

    Institute of Scientific and Technical Information of China (English)

    DENG MaoLin; ZHU WeiQiu

    2009-01-01

    Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.

  11. Some applications of stochastic averaging method for quasi Hamiltonian systems in physics

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.

  12. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  13. Average nuclear surface properties

    International Nuclear Information System (INIS)

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  14. When the article 475-j of Civil Procedure Code penalty of 10% is applied? note about the Superior Court of Justice precedent number 517

    Directory of Open Access Journals (Sweden)

    Felipe Scalabrin

    2015-06-01

    This article aims to clarify doubts about Article 475-J of the Civil Procedure Code and its moment of incidence, mainly in view of the approval of precedent No. 517 of the Superior Court of Justice.

  15. Microscale Procedure for Inorganic Qualitative Analysis with Emphasis on Writing Equations: Chemical Fingerprinting Applied to the "n"-bottle Problem of Matching Samples with Their Formulas

    Science.gov (United States)

    Sattsangi, Prem D.

    2014-01-01

    A laboratory method for teaching inorganic qualitative analysis and chemical equations is described. The experiment has been designed to focus attention on cations and anions that react to form products. This leads to a logical approach to understand and write chemical equations. The procedure uses 3 mL plastic micropipettes to store and deliver…

  16. Office of Inspector General report on Naval Petroleum Reserve Number 1, independent accountant`s report on applying agreed-upon procedures

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    On October 6, 1997, the Department of Energy (DOE) announced it had agreed to sell all of the Government's interest in Naval Petroleum Reserve Number 1 (NPR-1) to Occidental Petroleum Corporation for $3.65 billion. This report presents the results of the independent certified public accountants' agreed-upon procedures work on the Preliminary Settlement Statement of the Purchase and Sale Agreement between DOE and Occidental. To fulfill their responsibilities, the Office of Inspector General contracted with the independent public accounting firm of KPMG Peat Marwick LLP to conduct the work for them, subject to their review. The work was done in accordance with the Statements on Standards for Attestation Engagements issued by the American Institute of Certified Public Accountants. As such, the independent certified public accountants performed only work that was agreed upon by DOE and Occidental. This report is intended solely for the use of DOE and Occidental and should not be used by those who have not agreed to the procedures and taken responsibility for the sufficiency of the procedures for their purposes. However, this report is a matter of public record, and its distribution is not limited. The independent certified public accountants identified over 20 adjustments to the Preliminary Settlement Statement that would result in a $10.8 million increase in the sale price.

  17. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single...

  18. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  19. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
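
    A minimal sketch of the refinement idea (Metropolis Monte Carlo with a harmonic pseudo-energy pulling a starting Cα trace toward the averaged coordinates; the clash and local-geometry terms of the actual method are omitted, and all names are illustrative):

        import numpy as np

        def refine_toward_average(start, target, k=1.0, step=0.05,
                                  n_steps=20000, seed=0):
            # Pseudo-energy E = k * sum_i |x_i - t_i|^2; single-atom
            # Gaussian trial moves, accepted with Metropolis probability.
            rng = np.random.default_rng(seed)
            x = np.array(start, dtype=float)      # (N, 3) starting coordinates
            t = np.asarray(target, dtype=float)   # (N, 3) averaged structure
            for _ in range(n_steps):
                i = rng.integers(len(x))
                trial = x[i] + step * rng.normal(size=3)
                d_e = k * (np.sum((trial - t[i]) ** 2)
                           - np.sum((x[i] - t[i]) ** 2))
                if d_e <= 0.0 or rng.random() < np.exp(-d_e):
                    x[i] = trial  # accept the move
            return x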

  20. Simple non-empirical procedure for spin-component-scaled MP2 methods applied to the calculation of dissociation energy curve of noncovalently-interacting systems

    CERN Document Server

    Grabowski, I; Della Sala, F

    2013-01-01

    We present a simple and non-empirical method to determine optimal scaling coefficients, within the (spin-component)-scaled MP2 approach, for calculating intermolecular potential energies of noncovalently-interacting systems. The method is based on an observed proportionality between (spin-component) MP2 and CCSD(T) energies over a wide range of intermolecular distances, and allows one to compute with high accuracy a large portion of the dissociation curve at the cost of a single CCSD(T) calculation. The accuracy of the present procedure is assessed for a series of noncovalently-interacting test systems: the obtained results reproduce CCSD(T) quality in all cases and definitely outperform conventional MP2, CCSD and SCS-MP2 results. The difficult case of the beryllium dimer is also considered.

  1. Applying 'Evidence-Based Medicine' Theory to Interventional Radiology.Part 2: A Spreadsheet for Swift Assessment of Procedural Benefit and Harm

    International Nuclear Information System (INIS)

    AIM: To design a spreadsheet program to rapidly analyse interventional radiology (IR) data, produced in local research or reported in the literature, using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. MATERIALS AND METHODS: Microsoft Excel™ was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described. These include the assessment of the adequacy of sample size in the detection of possible procedural complications. A contingency (2 x 2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are: benefit -- relative risk reduction, absolute risk reduction, number needed to treat (NNT); harm -- relative risk, relative odds, number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH in their application to individual patients. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. CONCLUSION: A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures. MacEneaney, P.M. and Malone, D.E.
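
    The quantities such a spreadsheet computes follow directly from the 2 x 2 outcome table. A Python sketch using the standard textbook EBM definitions (the formulas are not copied from the spreadsheet itself, and the names are illustrative; NNH follows analogously from the absolute risk increase):

        import math

        def ebm_indices(t_events, t_total, c_events, c_total, z=1.96):
            eer = t_events / t_total   # experimental event rate
            cer = c_events / c_total   # control event rate
            arr = cer - eer            # absolute risk reduction
            rrr = arr / cer            # relative risk reduction
            nnt = 1.0 / arr            # number needed to treat
            rr = eer / cer             # relative risk
            odds = (t_events * (c_total - c_events)) / \
                   (c_events * (t_total - t_events))   # relative odds
            # Normal-approximation 95% confidence interval for the ARR.
            se = math.sqrt(eer * (1 - eer) / t_total
                           + cer * (1 - cer) / c_total)
            return {"ARR": arr, "RRR": rrr, "NNT": nnt, "RR": rr,
                    "OR": odds, "ARR_CI": (arr - z * se, arr + z * se)}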

  2. Basics of averaging of the Maxwell equations for bulk materials

    OpenAIRE

    Chipouline, A.; Simovski, C.; Tretyakov, S.

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some b...

  3. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., when the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated with a non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.

  4. Average Shape of Transport-Limited Aggregates

    Science.gov (United States)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  5. Multigrid solution for the compressible Euler equations by an implicit characteristic-flux-averaging

    Science.gov (United States)

    Kanarachos, A.; Vournas, I.

    A formulation of an implicit characteristic-flux-averaging method for the compressible Euler equations, combined with the multigrid method, is presented. The method is based on a correction scheme and an implicit Godunov-type finite volume scheme, and is applied to two-dimensional cases. Its principal feature is an averaging procedure based on the eigenvalue analysis of the Euler equations, by means of which the fluxes are evaluated at the finite volume faces. The performance of the method is demonstrated for different flow problems around RAE-2922 and NACA-0012 airfoils and for an internal flow over a circular arc.

  6. A procedure for detailed 3-D analysis applied to temperature rises in multi-layer high-level waste repositories in a salt dome

    International Nuclear Information System (INIS)

    For detailed 3-D thermal analysis of high-level waste repositories a computer program TASTE (Three-dimensional Analysis of Salt dome Temperatures) is under development, based on an analytical model of a continuous time-dependent point source in an infinite solid of homogeneous isotropic material with temperature-independent properties. The program is based on the assumption that the high-level waste will be disposed of in a number of boreholes placed in a square, rectangular or hexagonal pattern in one or more burial layers. Heat generation, borehole pitch and length, burial layer area, relative distance between the layers, loading sequence and loading tempo can be varied arbitrarily. Preliminary versions of the program were applied to establish the influence of the following variables relevant to the temperature rise distribution in a high-level waste burial area in a salt dome: (1) Disposal borehole patterns: it was established that the influence on the temperature is very limited. Hence quite some flexibility is allowed with respect to the disposal pattern to be chosen. (2) Loading tempi: it was established that the maxima of the temperature rises are hardly influenced by differences in loading tempi. For a multi-layer burial configuration an underlying burial area may be judged to give no problems for the disposal operations in an overlying burial area, even with a very slow loading rate. (3) Leaving certain borehole positions unused: not utilizing certain borehole positions has a very positive effect on the reduction of local temperature rises. (author)

  7. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  8. Probability density function transformation using seeded localized averaging

    International Nuclear Information System (INIS)

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and the symmetry; (2) for unimodal symmetric distributions, SLA reduces the variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with a Gaussian PDF, the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
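
    SLA itself operates within dynamic, seeded windows, but the variance-reduction property proved above is easy to illustrate with a toy simulation (the peak position and width below are arbitrary, not from the paper):

        import numpy as np

        # Averaging two pulse-heights drawn from a Gaussian peak reduces the
        # standard deviation by sqrt(2), narrowing the FWHM; the resulting
        # PDF is no longer exactly Gaussian in the general SLA case.
        rng = np.random.default_rng(0)
        pulses = rng.normal(loc=662.0, scale=30.0, size=(100_000, 2))
        print(pulses[:, 0].std())          # ~30
        print(pulses.mean(axis=1).std())   # ~30/sqrt(2) ≈ 21.2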

  9. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends, relevant to solar coronal loops, is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over shorter times. These modified boundary contributions correspond to the existence, also, of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and in one-dimensional geometry predicts solitons and shocks in different limits.

  10. A proxy-variable search procedure

    OpenAIRE

    Jaqueson K. Galimberti

    2009-01-01

    This paper proposes a proxy-variable search procedure, based on a sensitivity analysis framework, aiming to provide a useful tool for the applied researcher whenever he faces measurement or proxy-variable uncertainties. Extending from the sensitivity analysis literature it proposes two main methodological innovations. The first relates to the usage of a proxies grouping process to obtain averaged coefficient estimators for theoretical explanatory variables that have more than one possible mea...

  11. Average Range and Network Synchronizability

    International Nuclear Information System (INIS)

    The influence of structural properties of a network on the network synchronizability is studied by introducing a new concept of average range of edges. For both small-world and scale-free networks, the effect of average range on the synchronizability of networks with bounded or unbounded synchronization regions is illustrated through numerical simulations. The relations between average range, range distribution, average distance, and maximum betweenness are also explored, revealing the effects of these factors on the network synchronizability of the small-world and scale-free networks, respectively. (general)

  12. Physical Theories with Average Symmetry

    CERN Document Server

    Alamino, Roberto C

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.

  13. On the average pairing energy in nuclei

    International Nuclear Information System (INIS)

    The macroscopic-microscopic method is applied to calculate the nuclear energies, especially the microscopic shell and pairing corrections. The single-particle levels are obtained with the Yukawa folded mean-field potential. The macroscopic energy is evaluated using the Lublin-Strasbourg Drop model. The shell corrections are obtained using the Strutinsky method with smoothing in nucleon number space. The average nuclear pairing energy is also determined by folding the BCS sums in nucleon number space. The average pairing energy dependence on the nuclear elongation is investigated. (author)

  14. "Pricing Average Options on Commodities"

    OpenAIRE

    Kenichiro Shiraya; Akihiko Takahashi

    2010-01-01

    This paper proposes a new approximation formula for pricing average options on commodities under a stochastic volatility environment. In particular, it derives an option pricing formula under Heston and an extended lambda-SABR stochastic volatility models (which includes an extended SABR model as a special case). Moreover, numerical examples support the accuracy of the proposed average option pricing formula.

  15. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
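
    The mixture structure of the BMA predictive density is simple to sketch. The snippet below uses Gaussian components for readability; this is a simplifying assumption, since for wind speed Sloughter et al. use gamma components, and in practice the weights and spread parameters are fitted by an EM algorithm over a training period:

        import numpy as np
        from scipy.stats import norm

        def bma_pdf(x, member_means, weights, sigmas):
            # Weighted mixture of the ensemble members' component PDFs;
            # the weights act as posterior model probabilities.
            x = np.asarray(x, dtype=float)
            pdf = np.zeros_like(x)
            for m, w, s in zip(member_means, weights, sigmas):
                pdf += w * norm.pdf(x, loc=m, scale=s)
            return pdf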

  16. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  17. Thomson scattering in the average-atom approximation

    OpenAIRE

    Johnson, W. R.; Nilsen, J.; Cheng, K. T.

    2012-01-01

    The average-atom model is applied to study Thomson scattering of x-rays from warm-dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave-functions and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Appli...

  18. UNEMPLOYMENT BENEFIT, MINIMUM WAGE AND AVERAGE SALARY EARNINGS IN ROMANIA

    OpenAIRE

    2012-01-01

    The existence of a long-run equilibrium between average salary earnings and labour market public institutions, such as the unemployment benefit and the minimum wage, is checked using the ARDL bounds testing procedure. The results point out that long-run causality runs from average salary earnings to labour market public institutions and not vice versa. The short-run dynamics are depicted as well.

  19. Handbook of Applied Analysis

    CERN Document Server

    Papageorgiou, Nikolaos S

    2009-01-01

    Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.

  20. Average Light Intensity Inside a Photobioreactor

    Directory of Open Access Journals (Sweden)

    Herby Jean

    2011-01-01

    For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside a photobioreactor filled with algae. Under preset conditions, along with estimated parameter values, we applied the Lambert-Beer law to formulate an equation for how much light intensity escapes the photobioreactor and to determine the average light intensity present inside the reactor.
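
    The record does not reproduce its final equation, but the depth average of the Lambert-Beer law follows from a single integration. A sketch assuming an effective attenuation coefficient k (absorption plus scattering by the suspension, an assumption here) and light entering one face of a reactor of depth L:

        import numpy as np

        def average_intensity(i0, k, depth):
            # Average of I(z) = I0 * exp(-k z) over 0 <= z <= depth:
            #   I_avg = I0 * (1 - exp(-k * depth)) / (k * depth)
            return i0 * (1.0 - np.exp(-k * depth)) / (k * depth)

        print(average_intensity(i0=2000.0, k=50.0, depth=0.05))  # ≈ 734.3

    The example values of i0, k and depth are hypothetical; only the functional form is fixed by the law.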

  1. Power convergence of Abel averages

    OpenAIRE

    Kozitsky, Yuri; Shoikhet, David; Zemanek, Jaroslav

    2012-01-01

    Necessary and sufficient conditions are presented for the Abel averages of discrete and strongly continuous semigroups, $T^k$ and $T_t$, to be power convergent in the operator norm in a complex Banach space. These results also cover the case where $T$ is unbounded and the corresponding Abel average is defined by means of the resolvent of $T$. They complement the classical results by Michael Lin establishing sufficient conditions for the corresponding convergence for a bounded $T$.
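
    For the discrete case, the Abel average is the resolvent-type sum A_r = (1 - r) * sum_{k>=0} r^k T^k. An illustrative, truncated numpy evaluation for a square matrix (accuracy requires r times the spectral radius of T to be well below 1; names illustrative):

        import numpy as np

        def abel_average(T, r, kmax=2000):
            # Truncated Abel average of the discrete semigroup (T^k); the
            # strongly continuous case replaces the sum by a resolvent.
            A = np.zeros_like(T, dtype=float)
            P = np.eye(T.shape[0])
            for k in range(kmax):
                A += (r ** k) * P
                P = P @ T
            return (1.0 - r) * A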

  2. High-average-power lasers

    International Nuclear Information System (INIS)

    The goals of the High-Average-Power Laser Program at LLNL are to develop a broad technology base for solid state lasers and to demonstrate high-average-power laser operation with greater efficiency and higher beam quality than has been possible with current technology. Major activities are the zig-zag laser testbed and the gas-cooled-slab laser testbed. This section describes these activities, as well as material development, nonlinear optics, laser materials, and applications.

  3. Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration

    Science.gov (United States)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei

    2016-09-01

    In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlations in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of the PM2.5 daily average concentration in China during uptrends and downtrends. In addition, shuffling and phase randomization procedures are applied to detect the sources of multifractality. The results show the existence of asymmetric correlations, and that the asymmetric correlations are multifractal. Further, the multifractal scaling behavior in the Chinese PM2.5 series is caused not only by long-range correlations but also by the fat-tailed distribution, with the latter being the major source of multifractality.
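
    The symmetric building block of MFDMA is short enough to sketch; the asymmetric variant proposed above additionally splits the residuals according to the sign of the local trend before averaging (the sketch below is a simplified, backward-moving-average version, not the authors' code):

        import numpy as np

        def dma_fluctuation(x, n, q=2.0):
            # q-th order fluctuation for one window size n: build the
            # profile, detrend it with its own moving average, and
            # aggregate the absolute residuals with moment q.
            y = np.cumsum(x - np.mean(x))                  # profile
            trend = np.convolve(y, np.ones(n) / n, mode="valid")
            resid = y[n - 1:] - trend                      # backward MA
            return np.mean(np.abs(resid) ** q) ** (1.0 / q)

    Repeating this over a range of window sizes n and moments q, and fitting log F_q(n) against log n, yields the generalized Hurst exponents h(q).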

  4. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increase by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  5. Sparsity Averaging for Compressive Imaging

    CERN Document Server

    Carrillo, Rafael E; Van De Ville, Dimitri; Thiran, Jean-Philippe; Wiaux, Yves

    2012-01-01

    We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.

  6. On generalized averaged Gaussian formulas

    Science.gov (United States)

    Spalevic, Miodrag M.

    2007-09-01

    We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas, which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions $w(x) \equiv w^{(\alpha,\beta)}(x) = (1-x)^\alpha (1+x)^\beta$ ($\alpha, \beta > -1$), we give a necessary and sufficient condition on the parameters $\alpha$ and $\beta$ such that the optimal averaged Gaussian quadrature formulas are internal.

  7. Average neutron detection efficiency for DEMON detectors

    International Nuclear Information System (INIS)

    The neutron detection efficiency of a DEMON detector, averaged over the whole volume, was calculated using GEANT and applied to determine neutron multiplicities in an intermediate-energy heavy ion reaction. When a neutron source is set at a distance of about 1 m from the front surface of the detector, the average efficiency, ϵ_av, is found to be significantly lower (20-30%) than the efficiency measured at the center of the detector, ϵ_0. In the GEANT simulation the ratio R = ϵ_av/ϵ_0 was calculated as a function of neutron energy. The experimental central efficiency multiplied by R was then used to determine the average efficiency. The results were applied to a study of the 64Zn+112Sn reaction at 40 A MeV which employed 16 DEMON detectors. The neutron multiplicity was extracted using a moving source fit. The derived multiplicities compare well with those determined using the neutron ball in the NIMROD detector array in a separate experiment. Both are in good agreement with multiplicities predicted by a transport model calculation using an antisymmetrized molecular dynamics (AMD) code.
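
    The geometric part of such a volume average is straightforward to emulate; the sketch below samples points uniformly in a cylindrical cell and averages a user-supplied, position-dependent efficiency (a hypothetical stand-in for the full GEANT neutron transport used in the study):

        import numpy as np

        def average_over_volume(eff, radius, length, n=200_000, seed=0):
            # Uniform Monte Carlo average of eff(r, z) over a cylinder.
            rng = np.random.default_rng(seed)
            r = radius * np.sqrt(rng.random(n))  # uniform over cross-section
            z = length * rng.random(n)           # uniform along the axis
            return float(np.mean(eff(r, z)))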

  8. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...

  9. From cellular doses to average lung dose

    International Nuclear Information System (INIS)

    Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions. (authors)

  10. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ2 distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
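
    A minimal sketch (not from the paper) of the recursion S_k = (1 - α)S_{k-1} + α P_k applied to periodograms of white noise; the segment length, α, and segment count are arbitrary illustrative values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_seg, seg_len, alpha = 200, 256, 0.05    # alpha ~ 1 / time constant

    S = np.zeros(seg_len // 2 + 1)            # running PSD estimate
    for _ in range(n_seg):
        x = rng.standard_normal(seg_len)      # white noise -> flat true PSD
        P = np.abs(np.fft.rfft(x)) ** 2 / seg_len   # periodogram of segment
        S = (1.0 - alpha) * S + alpha * P     # exponential averaging update
    print(S.mean(), S.std())  # mean ~ 1 (flat PSD); spread shrinks as alpha -> 0
    ```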

  11. Stochastic Approximation with Averaging Innovation

    CERN Document Server

    Laruelle, Sophie

    2010-01-01

    The aim of the paper is to establish a convergence theorem for multi-dimensional stochastic approximation in a setting with innovations satisfying some averaging properties and to study some applications. The averaging assumptions allow us to unify the framework where the innovations are generated (to solve problems from Numerical Probability) and the one with exogenous innovations (market data, output of a "device", e.g. an Euler scheme) with stationary or ergodic properties. We propose several fields of applications with random innovations or quasi-random numbers. In particular we provide in both settings a rule to tune the step of the algorithm. Finally we illustrate our results on five examples, notably in Finance.

  12. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with average spectral powers of over 50 and 1 mW/nm, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  13. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain the correlation in this study. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

  14. Michel Parameters averages and interpretation

    International Nuclear Information System (INIS)

    The new measurements of Michel parameters in τ decays are combined into world averages. From these measurements model-independent limits on non-standard-model couplings are derived and interpretations in the framework of specific models are given. A lower limit of 2.5 tan β GeV on the mass of a charged Higgs boson in models with two Higgs doublets can be set, as well as a 229 GeV limit on the mass of a right-handed W boson in left-right symmetric models (95% C.L.).

  15. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  16. Basics of averaging of the Maxwell equations for bulk materials

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of what type of material is studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; a model that does not respect them cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for bulk MM, which is rather close to the case of compound materials but should include the magnetic response of the inclusions an...

  17. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.

  18. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In the more than half a century since the publication of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of the mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...

  19. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    Science.gov (United States)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
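
    A minimal sketch of the area- and mass-averaging arithmetic on hypothetical rake data (work averaging, which also requires temperature information, is omitted); all numbers are placeholders, not measurements from the facility.

    ```python
    import numpy as np

    # Hypothetical rake readings: total pressure plus per-probe weights
    p_t   = np.array([201e3, 205e3, 203e3, 198e3, 200e3])  # total pressure [Pa]
    area  = np.array([1.0, 1.2, 1.4, 1.2, 1.0])            # annulus area per probe
    mflux = np.array([0.8, 1.1, 1.3, 1.0, 0.7])            # mass flow per probe

    area_avg = np.sum(p_t * area) / np.sum(area)    # area-weighted average
    mass_avg = np.sum(p_t * mflux) / np.sum(mflux)  # mass-weighted average
    print(f"area average: {area_avg:.0f} Pa, mass average: {mass_avg:.0f} Pa")
    ```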

  20. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
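
    A minimal sketch of the kind of iterative reweighting this suggests: each experiment's error is re-derived from the current average rather than from its own measured value. The data, the relative-error model, and the iteration count are illustrative assumptions.

    ```python
    import numpy as np

    # Measurements whose reported error scales with the measured value,
    # sigma_i = c_i * x_i (a "sliding" relative error).
    x = np.array([10.2, 9.6, 10.9, 9.9])
    c = np.array([0.05, 0.04, 0.06, 0.05])   # relative-error coefficients

    mu = x.mean()                            # starting point
    for _ in range(20):                      # iterate to self-consistency
        sigma = c * mu                       # errors from the average, not
        w = 1.0 / sigma ** 2                 # from each measured value,
        mu = np.sum(w * x) / np.sum(w)       # which removes the weighting bias
    print(mu)
    ```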

  1. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed, however they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
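
    For contrast with the FTDA, here is a minimal sketch of conventional time domain averaging under the idealized assumption that the period is known exactly and divides the record length (so no PCE arises); the signal model is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    period, n_periods = 128, 200
    t = np.arange(period * n_periods)

    # Periodic gear-mesh-like signal buried in heavy noise
    clean = np.sin(2 * np.pi * t / period) + 0.5 * np.sin(6 * np.pi * t / period)
    x = clean + 2.0 * rng.standard_normal(t.size)

    # Conventional TDA: cut the record into whole periods and average them
    tda = x.reshape(n_periods, period).mean(axis=0)

    noise_out = (tda - clean[:period]).std()
    print(f"noise before: 2.0, after TDA: {noise_out:.3f} (2/sqrt(200) ~ 0.141)")
    ```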

  2. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    Science.gov (United States)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
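
    The transformation between the two modes is commonly written in this literature (with v the pore-water velocity and D the dispersion coefficient; quoted here as the standard relation, not verbatim from the article) as

    $$ c_f(x,t) \;=\; c_r(x,t) \;-\; \frac{D}{v}\,\frac{\partial c_r(x,t)}{\partial x}, $$

    where c_r is the volume-averaged (resident) concentration and c_f the flux-averaged concentration; both satisfy a convection-dispersion equation of the same mathematical form.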

  3. Averaging along Uniform Random Integers

    CERN Document Server

    Janvresse, Élise

    2011-01-01

    Motivated by giving a meaning to "The probability that a random integer has initial digit d", we define a URI-set as a random set E of natural integers such that each n>0 belongs to E with probability 1/n, independently of other integers. This enables us to introduce two notions of densities on natural numbers: The URI-density, obtained by averaging along the elements of E, and the local URI-density, which we get by considering the k-th element of E and letting k go to infinity. We prove that the elements of E satisfy Benford's law, both in the sense of URI-density and in the sense of local URI-density. Moreover, if b_1 and b_2 are two multiplicatively independent integers, then the mantissae of a natural number in base b_1 and in base b_2 are independent. Connections of URI-density and local URI-density with other well-known notions of densities are established: Both are stronger than the natural density, and URI-density is equivalent to log-density. We also give a stochastic interpretation, in terms of URI-...
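
    A small Monte Carlo check of the Benford claim (not from the paper): draw many independent URI sets up to a cutoff N and tally the leading digits of their elements.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, n_sets = 100_000, 1000
    n_vals = np.arange(1, N + 1)
    lead = np.array([int(str(n)[0]) for n in n_vals])  # leading digit of n
    p = 1.0 / n_vals                                   # membership probability

    counts = np.zeros(9)
    for _ in range(n_sets):
        members = rng.random(N) < p                    # one URI set
        np.add.at(counts, lead[members] - 1, 1)

    print(np.round(counts / counts.sum(), 3))          # empirical frequencies
    print(np.round(np.log10(1 + 1 / np.arange(1, 10)), 3))  # Benford's law
    ```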

  4. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that...

  5. Applied Economics

    OpenAIRE

    Nicita, Alessandro

    2008-01-01

    Price responses are usually estimated for the average household. However, different households are unlikely to respond in a similar way to movements in prices. Consequently, relying on averages may be misleading when examining the behaviour of a particular group of households such as the poor. This article uses six household surveys collected in Mexico between 1989 and 2000 to derive price responses for 10 product groups and for five household income levels. The estimated price elasticitie...

  6. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  7. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    The application of an atomic rearrangement model in which only the three shells K, L and M are considered, to compute the counting efficiency for electron-capture nuclides, requires an accurate average energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  8. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
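
    A minimal Box-Jenkins sketch using statsmodels; the synthetic series and the (p, d, q) order are illustrative stand-ins for the identification/estimation/diagnosis cycle, not the study's data or model.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    # Synthetic GPA-like series: slow drift plus AR(1) noise
    n = 120
    noise = np.zeros(n)
    for t in range(1, n):
        noise[t] = 0.6 * noise[t - 1] + 0.1 * rng.standard_normal()
    gpa = 2.8 + 0.002 * np.arange(n) + noise

    fit = ARIMA(gpa, order=(1, 1, 1)).fit()  # estimation; identification would
    print(fit.summary())                     # guide the (p, d, q) choice, and
    print(fit.forecast(steps=4))             # residual diagnostics close the loop
    ```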

  9. Thermodynamic properties of average-atom interatomic potentials for alloys

    Science.gov (United States)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine if the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well-predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average atom and true random alloy are found in the zero temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.

  10. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  11. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (<3 μVrms, 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
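
    A minimal simulation of the hardware-averaging idea (independent amplifier noise averages down by up to 1/√N; noise from the common source resistance would not, which is the caveat noted above); all levels are illustrative, not the paper's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_samples, v_noise = 200_000, 2.0e-6   # 2 uVrms baseline per amplifier
    signal = 1.0e-6 * np.sin(2 * np.pi * np.arange(n_samples) / 500)

    for N in (1, 2, 4, 8):
        # N parallel amplifiers see the same input but add independent noise
        outs = signal + v_noise * rng.standard_normal((N, n_samples))
        resid = (outs.mean(axis=0) - signal).std()
        print(f"N={N}: amplifier noise {resid * 1e6:.2f} uVrms")
    ```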

  12. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214

  13. 78 FR 10262 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2013-02-13

    Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment. AGENCY: Surface Transportation Board, DOT. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY: ... average change in railroad productivity for the 2007-2011 (5-year) averaging period. ...

  14. Applied Electromagnetics

    International Nuclear Information System (INIS)

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  15. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
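
    A minimal sketch of the Hargreaves estimate of atmospheric evaporative demand mentioned above; Ra (extraterrestrial radiation) must be supplied in equivalent-evaporation units, and the example inputs are invented.

    ```python
    def hargreaves_et0(tmax_c, tmin_c, ra_mm_day):
        """Hargreaves reference evapotranspiration estimate in mm/day;
        0.0023 is the standard Hargreaves coefficient."""
        tmean = 0.5 * (tmax_c + tmin_c)
        return 0.0023 * ra_mm_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

    # Example: a warm month with Ra ~ 15 mm/day equivalent
    print(hargreaves_et0(tmax_c=28.0, tmin_c=14.0, ra_mm_day=15.0))  # ~5 mm/day
    ```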

  16. Averaging Methods for Design of Spacecraft Hysteresis Damper

    Directory of Open Access Journals (Sweden)

    Ricardo Gama

    2013-01-01

    This work deals with averaging methods for the dynamics of attitude stabilization systems. The operation of passive gravity-gradient attitude stabilization systems involving hysteresis rods is described by discontinuous differential equations. We apply recently developed averaging techniques for discontinuous systems in order to simplify the analysis and to perform parameter optimization. The results obtained using this analytic method are compared with those of numerical optimization.

  17. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

    Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec

  18. Whipple Procedure

    Science.gov (United States)

    Also called a pancreaticoduodenectomy, which is generally ... The operation was first described by Dr. Allen O. Whipple of New York Memorial Hospital (now called Memorial ...)

  19. A space-averaged model of branched structures

    CERN Document Server

    Lopez, Diego; Michelin, Sébastien

    2014-01-01

    Many biological systems and artificial structures are ramified, and present a high geometric complexity. In this work, we propose a space-averaged model of branched systems for conservation laws. From a one-dimensional description of the system, we show that the space-averaged problem is also one-dimensional, represented by characteristic curves, defined as streamlines of the space-averaged branch directions. The geometric complexity is then captured firstly by the characteristic curves, and secondly by an additional forcing term in the equations. This model is then applied to mass balance in a pipe network and momentum balance in a tree under wind loading.

  20. Applied Stratigraphy

    Science.gov (United States)

    Lucas, Spencer G.

    Stratigraphy is a cornerstone of the Earth sciences. The study of layered rocks, especially their age determination and correlation, which are integral parts of stratigraphy, are key to fields as diverse as geoarchaeology and tectonics. In the Anglophile history of geology, in the early 1800s, the untutored English surveyor William Smith was the first practical stratigrapher, constructing a geological map of England based on his own applied stratigraphy. Smith has, thus, been seen as the first “industrial stratigrapher,” and practical applications of stratigraphy have since been essential to most of the extractive industries from mining to petroleum. Indeed, gasoline is in your automobile because of a tremendous use of applied stratigraphy in oil exploration, especially during the latter half of the twentieth century. Applied stratigraphy, thus, is a subject of broad interest to Earth scientists.

  1. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

    Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat

  2. Applied mineralogy

    Energy Technology Data Exchange (ETDEWEB)

    Park, W.C.; Hausen, D.M.; Hagni, R.D. (eds.)

    1985-01-01

    A conference on applied mineralogy was held, and papers were presented under the following headings: methodology (including image analysis); ore genesis; exploration; beneficiation (including precious metals); process mineralogy at low and high temperatures; and medical science applications. Two papers have been abstracted separately.

  3. Averages of Values of L-Series

    OpenAIRE

    Alkan, Emre; Ono, Ken

    2013-01-01

    We obtain an exact formula for the average of values of L-series over two independent odd characters. The average of any positive moment of values at s = 1 is then expressed in terms of finite cotangent sums subject to congruence conditions. As consequences, bounds on such cotangent sums, limit points for the average of first moment of L-series at s = 1 and the average size of positive moments of character sums related to the class number are deduced.

  4. Dynamic speckle texture processing using averaged dimensions

    Science.gov (United States)

    Rabal, Héctor; Arizaga, Ricardo; Cap, Nelly; Trivi, Marcelo; Mavilio Nuñez, Adriana; Fernandez Limia, Margarita

    2006-08-01

    Dynamic speckle or biospeckle is a phenomenon generated by laser light scattering in biological tissues. It is also present in some industrial processes where the surfaces exhibit some kind of activity. There are several methods to characterize the activity of a dynamic speckle pattern. For quantitative measurements, the Inertia Moment of the co-occurrence matrix of the temporal history of the speckle pattern (THSP) is usually used. In this work we propose the use of average dimensions (AD) for quantitative classification of textures of THSP images corresponding to different stages of the sample. The AD method was tested in an experiment on the drying of paint, a non-biological phenomenon that we usually use as an initial dynamic speckle test. We chose this phenomenon because its activity can be followed in a relatively simple way by gravimetric measurements and because its behaviour is rather predictable. Also, the AD was applied to numerically simulated THSP images and the performance was compared with another quantitative method. Experiments with biological samples are currently under development.
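
    For context, a minimal sketch of the baseline Inertia Moment measure on a THSP image (the AD method itself is not reproduced here); the normalization convention and the test data are illustrative assumptions.

    ```python
    import numpy as np

    def inertia_moment(thsp, levels=256):
        """Inertia moment of the co-occurrence matrix of a THSP image
        (rows = pixels, columns = time); normalization conventions vary,
        here all counts are normalized together for brevity."""
        i = thsp[:, :-1].ravel().astype(int)   # intensity at time t
        j = thsp[:, 1:].ravel().astype(int)    # intensity at time t + 1
        M = np.zeros((levels, levels))
        np.add.at(M, (i, j), 1.0)
        M /= M.sum()
        ii, jj = np.indices(M.shape)
        return np.sum(M * (ii - jj) ** 2)

    rng = np.random.default_rng(6)
    active = rng.integers(0, 256, size=(64, 500))            # fast decorrelation
    static = np.repeat(rng.integers(0, 256, size=(64, 1)), 500, axis=1)
    print(inertia_moment(active), inertia_moment(static))    # large vs 0
    ```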

  5. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
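
    In generic notation (ξ the selected generalized coordinate, F_ξ the instantaneous force defined above), the central relation can be summarized as

    $$ \frac{dA}{d\xi} = -\left\langle F_\xi \right\rangle_{\xi}, \qquad \Delta A = -\int_{\xi_0}^{\xi_1} \left\langle F_\xi \right\rangle_{\xi}\, d\xi , $$

    i.e. the derivative of the free energy is minus the average instantaneous force at fixed ξ; this is our paraphrase of the abstract, not a formula quoted from it.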

  6. Bayesian Model Averaging in the Instrumental Variable Regression Model

    OpenAIRE

    Gary Koop; Robert Leon Gonzalez; Rodney Strachan

    2011-01-01

    This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very fl...

  7. Finding large average submatrices in high dimensional data

    OpenAIRE

    Shabalin, Andrey A.; Weigman, Victor J.; Perou, Charles M.; Nobel, Andrew B

    2009-01-01

    The search for sample-variable associations is an important problem in the exploratory analysis of high dimensional data. Biclustering methods search for sample-variable associations in the form of distinguished submatrices of the data matrix. (The rows and columns of a submatrix need not be contiguous.) In this paper we propose and evaluate a statistically motivated biclustering procedure (LAS) that finds large average submatrices within a given real-valued data matrix. ...
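
    A minimal greedy relative of such a search (a simplified stand-in, not the LAS algorithm itself, which uses a significance-based score): alternately pick the rows and columns with the largest sums over the current selection.

    ```python
    import numpy as np

    def greedy_large_avg(X, k_rows, k_cols, iters=50, seed=0):
        """Greedy search for a k_rows x k_cols submatrix with large average;
        the selected rows and columns need not be contiguous."""
        rng = np.random.default_rng(seed)
        cols = rng.choice(X.shape[1], k_cols, replace=False)
        for _ in range(iters):
            rows = np.argsort(X[:, cols].sum(axis=1))[-k_rows:]
            cols = np.argsort(X[rows, :].sum(axis=0))[-k_cols:]
        return rows, cols, X[np.ix_(rows, cols)].mean()

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 100))
    X[:20, :10] += 1.5                       # planted high-average bicluster
    rows, cols, avg = greedy_large_avg(X, 20, 10)
    print(sorted(rows)[:5], sorted(cols)[:5], round(avg, 2))
    ```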

  8. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  9. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    A method previously used for the determination of the average neutron flux within bulky samples has been applied to measure the hydrogen content of different samples. An analytical function is given to describe the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and thermal neutron reflection methods are compared.

  10. The Animals (Scientific Procedures) (Procedure for Representations) Rules 1986

    OpenAIRE

    Her Majesty's Stationary Office

    1986-01-01

    These Rules, made under section 12 of the Animals (Scientific Procedures) Act 1986, govern the procedure for representations to a legally qualified person appointed by the Secretary of State by a person who has applied for or holds a personal or project licence or a certificate of designation of a scientific procedure, breeding or supplying establishment under that Act, where the Secretary of State proposes to refuse such a licence or certificate or to vary or revoke it otherwise than at the re...

  11. Environmental procedures

    International Nuclear Information System (INIS)

    The European Bank has pledged in its Agreement to place environmental management at the forefront of its operations to promote sustainable economic development in central and eastern Europe. The Bank's environmental policy is set out in the document titled, Environmental Management: The Bank's Policy Approach. This document, Environmental Procedures, presents the procedures which the European Bank has adopted to implement this policy approach with respect to its operations. The environmental procedures aim to: ensure that throughout the project approval process, those in positions of responsibility for approving projects are aware of the environmental implications of the project, and can take these into account when making decisions; avoid potential liabilities that could undermine the success of a project for its sponsors and the Bank; ensure that environmental costs are estimated along with other costs and liabilities; and identify opportunities for environmental enhancement associated with projects. The review of environmental aspects of projects is conducted by many Bank staff members throughout the project's life. This document defines the responsibilities of the people and groups involved in implementing the environmental procedures. Annexes contain Environmental Management: The Bank's Policy Approach, examples of environmental documentation for the project file and other ancillary information

  12. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  13. Applied optics

    International Nuclear Information System (INIS)

    The 1988 progress report of the Applied Optics laboratory of the Polytechnic School, France, is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The following domains are included in the research program: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are carried out. The published papers and the congress communications are listed.

  14. A Favré averaged transition prediction model for hypersonic flows

    Institute of Scientific and Technical Information of China (English)

    LEE; ChunHian

    2010-01-01

    Transition prediction is crucial for the aerothermodynamic and thermal protection system design of hypersonic vehicles. The compressible form of the laminar kinetic energy equation is derived based on the Favré-average formalism in the present paper. A closure of the equation is deduced and simplified under certain hypotheses and scaling analysis. A laminar-to-turbulent transition prediction procedure is proposed for high Mach number flows based on the modeled Favré-averaged laminar kinetic energy equation, in conjunction with the Favré-averaged Navier-Stokes equations. The proposed model, with and without the associated explicit compressibility terms, is then applied to simulate flows over flared cones with a free-stream Mach number of 5.91, and the onset locations of boundary layer transition under different wall conditions are estimated. The computed onset locations are compared with those obtained by a model based on a compressibility correction deduced from the reference-temperature concept, together with experimental data. It is revealed that the present model gives a more favorable transition prediction for hypersonic flows.

  15. Decision-making Procedures

    DEFF Research Database (Denmark)

    Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander Christopher

    2009-01-01

    We define procedures as mechanisms that influence the probabilities of reaching different endnodes. We show that for such procedural games a sequential psychological equilibrium always exists. Applying this approach within a principal-agent context, we show that the way less attractive jobs are allocated is crucial for the effort exerted by agents. This prediction is tested in a field experiment, where some subjects had to type in data, whereas others had to verify the data inserted by the typists. The controllers' wage was 50% higher than that of the typists. In one treatment the less attractive typists...

  16. Behavioral implications of shortlisting procedures

    OpenAIRE

    Tyson, Christopher J.

    2012-01-01

    We consider two-stage "shortlisting procedures" in which the menu of alternatives is first pruned by some process or criterion and then a binary relation is maximized. Given a particular first-stage process, our main result supplies a necessary and sufficient condition for choice data to be consistent with a procedure in the designated class. This result applies to any class of procedures with a certain lattice structure, including the cases of "consideration filters," "satisficing with salie...

  17. Inversion of the circular averages transform using the Funk transform

    International Nuclear Information System (INIS)

    The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. Circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR) when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We used hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering
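
    In one common convention (our notation, not quoted from the paper), for a function f on the half-plane y ≥ 0 the circular averages transform reads

    $$ (\mathcal{C}f)(x_0, r) \;=\; \int_{0}^{\pi} f\bigl(x_0 + r\cos\theta,\; r\sin\theta\bigr)\, r\, d\theta , $$

    i.e. the integral of f over the semicircle of radius r centered at the boundary point (x_0, 0); the inversion problem is to recover f from these integrals.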

  18. Developing Competency in Payroll Procedures

    Science.gov (United States)

    Jackson, Allen L.

    1975-01-01

    The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)

  19. Radiochemical procedures

    International Nuclear Information System (INIS)

    Modern counting instrumentation has largely obviated the need for separation processes in radiochemical analysis, but problems in low-level radioactivity measurement, environmental-type analyses, and special situations have in recent years caused a renaissance of the need for separation techniques. Most of the radiochemical procedures, based on the classic work of the Manhattan Project chemists of the 1940's, were published in the National Nuclear Energy Series (NNES). Improvements such as new solvent extraction and ion exchange separations have been added to these methods throughout the years. Recently the Los Alamos group has reissued its collected Radiochemical Procedures, containing a short summary and review of basic inorganic chemistry - 'Chemistry of the Elements on the Basis of Electronic Configuration'. (A.L.)

  20. Average-cost based robust structural control

    Science.gov (United States)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
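
    A minimal numeric sketch of a parameter-averaged quadratic (H2-norm) cost, using the standard Gramian characterization of the H2 norm; the oscillator model, parameter range, and sampling are illustrative assumptions, not the paper's synthesis procedure.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def h2_cost_sq(A, B, C):
        """Squared H2 norm: solve A P + P A^T + B B^T = 0,
        then ||H||_2^2 = trace(C P C^T)."""
        P = solve_continuous_lyapunov(A, -B @ B.T)
        return np.trace(C @ P @ C.T)

    # Lightly damped oscillator with uncertain natural frequency w;
    # average the quadratic cost over sampled parameter values.
    rng = np.random.default_rng(7)
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    costs = [h2_cost_sq(np.array([[0.0, 1.0], [-w ** 2, -0.1]]), B, C)
             for w in rng.uniform(0.8, 1.2, size=200)]
    print("average H2^2 cost over the parameter set:", np.mean(costs))
    ```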

  1. Coherent ensemble averaging techniques for impedance cardiography

    OpenAIRE

    Hurwitz, Barry E.; Shyu, Liang-Yu; Reddy, Sridhar P; Schneiderman, Neil; Nagel, Joachim H.

    1990-01-01

    EKG-synchronized ensemble averaging of the impedance cardiogram tends to blur or suppress signal events due to signal jitter or event latency variability. Although ensemble averaging provides some improvement in the stability of the signal and the signal-to-noise ratio under conditions of nonperiodic influences of respiration and motion, coherent averaging techniques were developed to determine whether further enhancement of the impedance cardiogram could be obtained. Physiological signals were o...
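
    A minimal contrast between trigger-locked ensemble averaging and a coherent (cross-correlation-aligned) average on synthetic jittered beats; the waveform, jitter range, and noise level are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    L, n_beats = 400, 60
    template = np.exp(-0.5 * ((np.arange(L) - 200) / 15.0) ** 2)  # unit-peak wave

    # Beats with random latency jitter relative to the trigger, plus noise
    beats = np.array([np.roll(template, rng.integers(-20, 21))
                      + 0.5 * rng.standard_normal(L) for _ in range(n_beats)])

    plain = beats.mean(axis=0)               # jitter blurs and lowers the peak

    # Coherent averaging: align each beat to the template by the lag that
    # maximizes the cross-correlation, then average
    aligned = []
    for b in beats:
        lag = np.correlate(b, template, mode="full").argmax() - (L - 1)
        aligned.append(np.roll(b, -lag))
    coherent = np.array(aligned).mean(axis=0)

    print("peak of plain average:   ", round(float(plain.max()), 3))
    print("peak of coherent average:", round(float(coherent.max()), 3))  # ~1
    ```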

  2. MEASUREMENT AND MODELLING AVERAGE PHOTOSYNTHESIS OF MAIZE

    OpenAIRE

    ZS LÕKE

    2005-01-01

    The photosynthesis of fully developed maize was investigated at the Agrometeorological Research Station Keszthely in 2000. We used LI-6400 measurement equipment to locate measurement points where the intensity of photosynthesis is closest to the average, so that the average photosynthetic activity characterizing the crop could later be obtained with only one measurement. To check the average photosynthesis of maize we also used Goudriaan's simulation model (CMSM) to calculate values on cloudless sampl...

  3. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  4. Applied geodesy

    International Nuclear Information System (INIS)

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc

  5. Applied mathematics

    International Nuclear Information System (INIS)

    The 1988 progress report of the Applied Mathematics center (Polytechnic School, France), is presented. The research fields of the Center are the scientific calculus, the probabilities and statistics and the video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of the physics and mechanics fundamental models, the numerical solution of complex models related to the industrial problems, the stochastic calculus and the brownian movement, the stochastic partial differential equations, the identification of the adaptive filtering parameters, the discrete element systems, statistics, the stochastic control and the development, the image synthesis techniques for education and research programs. The published papers, the congress communications and the thesis are listed

  6. Applying industrial engineering practices to radiology.

    Science.gov (United States)

    Rosen, Len

    2004-01-01

    Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging department and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first implemented industrial engineering methodology to medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than 500,000 dollars of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than 140,000 dollars. The medical imaging department in this hospital is only now beginning to apply what it has learned to

  7. A note on generalized averaged Gaussian formulas

    Science.gov (United States)

    Spalevic, Miodrag

    2007-11-01

    We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss–Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss–Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.

  8. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
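
    A minimal transfer-matrix Monte Carlo companion to the analytical treatment (slab parameters invented): it contrasts the average of the transmission probability with the exponential of the averaged logarithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def interface(n1, n2):
        """Transfer matrix of a dielectric interface at normal incidence."""
        r, t = (n1 - n2) / (n1 + n2), 2 * n1 / (n1 + n2)
        return np.array([[1, r], [r, 1]], dtype=complex) / t

    def propagate(n, k0, d):
        """Free propagation over distance d in a medium of index n."""
        ph = n * k0 * d
        return np.array([[np.exp(-1j * ph), 0], [0, np.exp(1j * ph)]])

    def stack_T(gaps, n_slab=1.5, d_slab=1.0, k0=2 * np.pi):
        """Transmission of identical slabs separated by the given gap widths."""
        M = np.eye(2, dtype=complex)
        for g in gaps:
            M = M @ interface(1, n_slab) @ propagate(n_slab, k0, d_slab) \
                  @ interface(n_slab, 1) @ propagate(1, k0, g)
        return 1.0 / abs(M[0, 0]) ** 2

    T = np.array([stack_T(rng.uniform(0.0, 2.0, 10)) for _ in range(5000)])
    print("average T:        ", T.mean())
    print("exp(average ln T):", np.exp(np.log(T).mean()))  # smaller, by Jensen
    ```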

  9. Labour Turnover Costs and Average Labour Demand

    OpenAIRE

    Bertola, Giuseppe

    1991-01-01

    The effect of labour turnover costs on average employment in a partial equilibrium model of labour demand, depends on the form of the revenue function, on the rates of discount and labour attrition, and on the relative size of hiring and firing costs. If discount and attrition rates are strictly positive, firing costs may well increase average employment even when hiring costs reduce it.

  10. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the applicable emission limitation in § 76.5, 76.6, or 76.7,...

  11. The Hubble rate in averaged cosmology

    International Nuclear Information System (INIS)

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions

  12. Average luminosity distance in inhomogeneous universes

    CERN Document Server

    Kostov, Valentin

    2010-01-01

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), and is thus more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein–de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A form...

  13. Applying radiation

    International Nuclear Information System (INIS)

    The invention discloses a method and apparatus for applying radiation by producing X-rays of a selected spectrum and intensity and directing them to a desired location. Radiant energy is directed from a laser onto a target to produce such X-rays at the target, which is so positioned adjacent to the desired location as to emit the X-rays toward the desired location; or such X-rays are produced in a region away from the desired location, and are channeled to the desired location. The radiant energy directing means may be shaped (as with bends; adjustable, if desired) to circumvent any obstruction between the laser and the target. Similarly, the X-ray channeling means may be shaped (as with fixed or adjustable bends) to circumvent any obstruction between the region where the X-rays are produced and the desired location. For producing a radiograph in a living organism the X-rays are provided in a short pulse to avoid any blurring of the radiograph from movement of or in the organism. For altering tissue in a living organism the selected spectrum and intensity are such as to affect substantially the tissue in a preselected volume without injuring nearby tissue. Typically, the selected spectrum comprises the range of about 0.1 to 100 keV, and the intensity is selected to provide about 100 to 1000 rads at the desired location. The X-rays may be produced by stimulated emission thereof, typically in a single direction

  14. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    Science.gov (United States)

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  15. Time averaging of instantaneous quantities in HYDRA

    Energy Technology Data Exchange (ETDEWEB)

    McCallen, R.C.

    1996-09-01

    For turbulent flow the evaluation of direct numerical simulations (DNS), where all scales are resolved, and large-eddy simulations (LES), where only large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison to experimental results. The incompressible Navier-Stokes flow code HYDRA has been modified for calculation of time-averaged quantities for both DNS and LES. This report describes how time averages of instantaneous quantities are generated during program execution (i.e., while generating the instantaneous quantities, instead of as a postprocessing operation). The calculations are performed during program execution to avoid storing values at each time step and thus to reduce storage requirements. The methods used in calculating the time-averaged velocities, the turbulent intensities ⟨u′²⟩, ⟨v′²⟩, and ⟨w′²⟩, and the turbulent shear ⟨u′v′⟩ are outlined; the brackets ⟨ ⟩ here denote a time average. The described averaging methods were implemented in the HYDRA code for three-dimensional problem solutions. Also presented is a method for taking the time averages for a number of consecutive intervals and calculating the time average for the sum of the intervals. This method could be used for code restarts or further postprocessing of the time averages from consecutive intervals. This method was not used in the HYDRA implementation, but is included here for completeness. In HYDRA, the running sums needed for time averaging are simply written to the restart dump.
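
    A minimal sketch of such in-execution averaging (hypothetical code, not from HYDRA): keep running sums of u and u·u at every time step, so that ⟨u⟩ and ⟨u′²⟩ = ⟨u²⟩ − ⟨u⟩² are available at any point without storing the full history; a shear component would accumulate u·v in the same way.

      import numpy as np

      class RunningAverage:
          """Accumulate time averages of a field while the run proceeds."""
          def __init__(self, shape):
              self.n = 0
              self.sum_u = np.zeros(shape)
              self.sum_uu = np.zeros(shape)

          def update(self, u):              # call once per time step
              self.n += 1
              self.sum_u += u
              self.sum_uu += u * u

          def mean(self):                   # <u>
              return self.sum_u / self.n

          def fluctuation(self):            # <u'^2> = <u^2> - <u>^2
              m = self.mean()
              return self.sum_uu / self.n - m * m

      # only the running sums (and n) need to go into a restart dump
      stats = RunningAverage((32, 32, 32))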

  16. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    Science.gov (United States)

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178

  17. Clarifying the relationship between average excesses and average effects of allele substitutions

    Directory of Open Access Journals (Sweden)

    José M. Álvarez-Castro

    2012-03-01

    Full Text Available Fisher’s concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance.

  18. From moving averages to anomalous diffusion: a Rényi-entropy approach

    International Nuclear Information System (INIS)

    Moving averages, also termed convolution filters, are widely applied in science and engineering at large. As moving averages transform inputs to outputs by convolution, they induce correlation. In effect, moving averages are perhaps the most fundamental and ubiquitous mechanism of transforming uncorrelated inputs to correlated outputs. In this paper we study the correlation structure of general moving averages, unveil the Rényi-entropy meaning of a moving-average's overall correlation, address the maximization of this overall correlation, and apply this overall correlation to the dispersion-measurement and to the classification of regular and anomalous diffusion transport processes. (fast track communication)
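
    A small illustration of the basic mechanism (my example, not the paper's): convolving an uncorrelated input with a moving-average kernel yields a correlated output, visible in the empirical autocorrelation.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal(100_000)                    # uncorrelated input
      y = np.convolve(x, np.ones(8) / 8.0, mode="valid")  # moving average

      def autocorr(z, lag):
          z = z - z.mean()
          return np.dot(z[:-lag], z[lag:]) / np.dot(z, z)

      # lags inside the 8-sample window are correlated, beyond it nearly zero
      print([round(autocorr(y, k), 3) for k in (1, 4, 8, 12)])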

  19. Self-averaging characteristics of spectral fluctuations

    OpenAIRE

    Braun, Petr; Haake, Fritz

    2014-01-01

    The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (like the CUE average) behavior is found f...

  20. Averaged Lemaître-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.

  1. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
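
    The averaging rule itself is elementary (a sketch of the statistic only, not of the optical protocol): the harmonic mean of a set of variances is dominated by the quietest members, which is why it outperforms the arithmetic mean when noise levels fluctuate.

      import numpy as np

      def harmonic_mean(variances):
          v = np.asarray(variances, dtype=float)
          return len(v) / np.sum(1.0 / v)

      quadrature_variances = [1.2, 0.4, 3.0, 0.6]   # fluctuating noise levels
      print(harmonic_mean(quadrature_variances))    # 0.75
      print(np.mean(quadrature_variances))          # 1.3, pulled up by outliers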

  2. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  3. Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations

    Science.gov (United States)

    Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.

    2011-03-01

    Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060nm spectral domain UHROCT system with 5μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

  4. Vibration monitor for rotating machines using average frequency technique

    International Nuclear Information System (INIS)

    A vibration monitoring technique has been developed which can be applied to continuous monitoring and to patrol checking of many kinds of rotating machines in nuclear power plants. In this method, the vibration condition of such equipment is represented in terms of two parameters, i.e. a vibration amplitude (RMS value) and an average frequency. The average frequency is defined as the square root of the second moment of the vibration frequency weighted by the power spectrum. The average frequency can be calculated by simple analogue circuits and does not require a spectrum analysis. Using these two parameters, not only the occurrence of abnormal vibration but also the type of vibration can be detected. (author)
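
    In symbols, the definition reads f_avg = sqrt( Σ f² S(f) / Σ S(f) ) for a power spectrum S(f). A hypothetical digital version (the instrument itself uses analogue circuits):

      import numpy as np

      def average_frequency(signal, fs):
          """Square root of the power-spectrum-weighted second frequency moment."""
          spec = np.abs(np.fft.rfft(signal))**2           # power spectrum S(f)
          f = np.fft.rfftfreq(len(signal), d=1.0 / fs)    # frequency axis
          return np.sqrt(np.sum(f**2 * spec) / np.sum(spec))

      fs = 10_000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)
      print(average_frequency(x, fs))   # between 120 and 900 Hz (~283 Hz)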

  5. 40 CFR 600.510-86 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy §...

  6. 40 CFR 86.1866-12 - CO2 fleet average credit programs.

    Science.gov (United States)

    2010-07-01

    ... the test procedure provisions specified in 40 CFR part 600 subpart B and using the calculation... 40 Protection of Environment 19 2010-07-01 2010-07-01 false CO2 fleet average credit programs. 86...-Duty Trucks, and Complete Otto-Cycle Heavy-Duty Vehicles § 86.1866-12 CO2 fleet average credit...

  7. Averaging analysis for discrete time and sampled data adaptive systems

    Science.gov (United States)

    Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.

    1986-01-01

    Earlier continuous-time averaging theorems are extended to the nonlinear discrete-time case. Theorems are given for the convergence analysis of discrete-time adaptive identification and control systems. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled-data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous-time systems are also studied.

  8. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  9. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  10. Average Vegetation Growth 1991 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1991 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  11. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  12. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  13. Average Vegetation Growth 1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1999 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  14. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  15. Average Vegetation Growth 2003 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2003 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  16. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
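
    As a taste of the book's subject matter, here is an ordered weighted average (OWA), one of the families it extends (a minimal sketch of mine): the weights act on the sorted inputs, so the same mechanism can produce the mean, the maximum, the minimum, or anything in between.

      import numpy as np

      def owa(values, weights):
          """Ordered weighted average: weights apply to inputs sorted descending."""
          v = np.sort(np.asarray(values, dtype=float))[::-1]
          w = np.asarray(weights, dtype=float)
          return float(np.dot(w / w.sum(), v))

      x = [0.3, 0.9, 0.5]
      print(owa(x, [1, 1, 1]))   # arithmetic mean: 0.567
      print(owa(x, [1, 0, 0]))   # maximum: 0.9
      print(owa(x, [0, 0, 1]))   # minimum: 0.3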

  17. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using...

  18. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  19. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets...

  20. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  1. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  2. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  3. Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Distributed consensus has emerged as one of the most important and primary problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is one of the central questions. In this work an analytical solution for the problem of the fastest distributed consensus averaging algorithm over a chain of rhombus networks is provided. The solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. The characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM of the network, is also determined inductively. Moreover t...
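
    The iteration whose convergence rate is being optimized is simple (a generic sketch; the optimal rhombus-chain weights from the paper are not reproduced): each node repeatedly replaces its value by a weighted average of its neighbours', and every value tends to the global mean.

      import numpy as np

      def consensus(W, x, steps=200):
          """Distributed averaging: iterate x <- W x for doubly stochastic W."""
          for _ in range(steps):
              x = W @ x
          return x

      # 4-node path graph with W = I - L/2 (an illustrative weight choice)
      W = np.array([[0.5, 0.5, 0.0, 0.0],
                    [0.5, 0.0, 0.5, 0.0],
                    [0.0, 0.5, 0.0, 0.5],
                    [0.0, 0.0, 0.5, 0.5]])
      x0 = np.array([4.0, 0.0, 2.0, 6.0])
      print(consensus(W, x0), x0.mean())   # every entry converges to 3.0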

  4. Resonance Averaged Photoionization Cross Sections for Astrophysical Models

    CERN Document Server

    Bautista, M A; Pradhan, A K

    1997-01-01

    We present ground state photoionization cross sections of atoms and ions averaged over resonance structures for photoionization modeling of astrophysical sources. The detailed cross sections calculated in the close-coupling approximation using the R-matrix method, with resonances delineated at thousands of energies, are taken from the Opacity Project database TOPbase and the Iron Project, including new data for the low ionization stages of iron Fe I--V. The resonance-averaged cross sections are obtained by convolving the detailed cross sections with a Gaussian distribution over the autoionizing resonances. This procedure is expected to minimize errors in the derived ionization rates that could result from small uncertainties in computed positions of resonances, while preserving the overall resonant contribution to the cross sections in the important near threshold regions. The detailed photoionization cross sections at low photon energies are complemented by new relativistic distorted-wave calculations for Z1...
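
    The smoothing step in isolation (an assumed form; the paper's prescription for the Gaussian width is not reproduced here): convolve the detailed cross section with a normalized Gaussian, so the resonant area is preserved while sensitivity to exact resonance positions is removed.

      import numpy as np

      def resonance_average(sigma, energies, width):
          """Gaussian smoothing of a detailed photoionization cross section."""
          de = energies[1] - energies[0]            # uniform energy grid assumed
          half = int(4 * width / de)
          x = np.arange(-half, half + 1) * de
          kernel = np.exp(-0.5 * (x / width)**2)
          kernel /= kernel.sum()                    # unit area: flux preserved
          return np.convolve(sigma, kernel, mode="same")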

  5. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We verify the model outcome with examples and with simulation results obtained using the NS2 simulator.
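
    An iteration in this spirit (a generic weighted max-min sketch, not the paper's exact model): flows demanding less than their weighted fair share are granted their demand, and the capacity they leave unused is re-divided among the remaining flows.

      def wfq_average_bandwidth(link_speed, weights, demands):
          """Weighted max-min allocation of link capacity among flows."""
          alloc = [0.0] * len(weights)
          active = set(range(len(weights)))
          capacity = float(link_speed)
          while active:
              wsum = sum(weights[i] for i in active)
              share = {i: capacity * weights[i] / wsum for i in active}
              satisfied = {i for i in active if demands[i] <= share[i]}
              if not satisfied:                 # all remaining flows are capped
                  for i in active:
                      alloc[i] = share[i]
                  break
              for i in satisfied:               # grant demand, free the rest
                  alloc[i] = demands[i]
                  capacity -= demands[i]
              active -= satisfied
          return alloc

      # 10 Mbit/s link, three flows with weights 1:2:2
      print(wfq_average_bandwidth(10.0, [1, 2, 2], [1.0, 6.0, 8.0]))  # [1.0, 4.5, 4.5]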

  6. Development of average wages in CR regions

    OpenAIRE

    Bejvlová, Jana

    2013-01-01

    The purpose of this study is to analyse trends in average gross monthly earnings of employees – individuals - in particular regions of the Czech Republic. The analysed time series begin in 2000 as the regions were decisively established on 1st January 2000. Moreover the self-governing competencies were introduced by the Act No. 129/2000 Coll., on Regions (Establishment of Regions). The researched period ends in 2010. Based on model construction of referential sets, the study predicts average ...

  7. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements......, making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie....
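
    The element-wise operation at the heart of TGA is easy to state (a sketch of the trimming idea only; the full method averages subspaces on the Grassmann manifold): discard the extremes at each pixel before averaging, so isolated outlier frames cannot drag the estimate.

      import numpy as np

      def trimmed_average(frames, trim=0.2):
          """Element-wise trimmed mean across a stack of image frames."""
          stack = np.sort(np.asarray(frames, dtype=float), axis=0)
          k = int(trim * stack.shape[0])       # drop k smallest and k largest
          return stack[k:stack.shape[0] - k].mean(axis=0)

      frames = np.random.default_rng(1).normal(size=(10, 4, 4))
      frames[0] += 100.0                       # one frame full of pixel outliers
      print(trimmed_average(frames).round(2))  # outlier frame has no influence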

  8. Hyperplane Arrangements with Large Average Diameter

    OpenAIRE

    Deza, Antoine; Xie, Feng

    2007-01-01

    The largest possible average diameter of a bounded cell of a simple hyperplane arrangement is conjectured to be not greater than the dimension. We prove that this conjecture holds in dimension 2, and is asymptotically tight in fixed dimension. We give the exact value of the largest possible average diameter for all simple arrangements in dimension 2, for arrangements having at most the dimension plus 2 hyperplanes, and for arrangements having 6 hyperplanes in dimension 3. In dimension 3, we g...

  9. The Hubble rate in averaged cosmology

    OpenAIRE

    Umeh, Obinna; Larena, Julien; Clarkson, Chris

    2010-01-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaitre-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate ...

  10. Averaging Problem in Cosmology and Macroscopic Gravity

    OpenAIRE

    Zalaletdinov, Roustam

    2007-01-01

    The Averaging problem in general relativity and cosmology is discussed. The approach of macroscopic gravity to resolve the problem is presented. An exact cosmological solution to the equations of macroscopic gravity is given and its properties are discussed. Contents: 1. Introduction to General Relativity 2. General Relativity -> Relativistic Cosmology 3. Introduction to Relativistic Cosmology 4. Relativistic Cosmology -> Mathematical Cosmology 5. Averaging Problem in Relativistic Cosmology 6...

  11. Method of averaging in Clifford algebras

    OpenAIRE

    Shirokov, D. S.

    2014-01-01

    In this paper we consider different operators acting on Clifford algebras. We consider the Reynolds operator of Salingaros' vee group. This operator "averages" an action of Salingaros' vee group on the Clifford algebra. We also consider the conjugate action on the Clifford algebra. We present a relation between these operators and projection operators onto fixed subspaces of Clifford algebras. Using the method of averaging we present solutions of a system of commutator equations.
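
    The general pattern, on a toy finite matrix group (hypothetical; not Salingaros' vee group itself): the Reynolds operator R(x) = (1/|G|) Σ_g g x g⁻¹ averages the conjugate action and projects onto the subspace fixed by it.

      import numpy as np

      def reynolds(x, group):
          """Average x over conjugation by every element of a finite group."""
          return sum(g @ x @ np.linalg.inv(g) for g in group) / len(group)

      # toy group of order 4: {I, -I, S, -S} with S the swap matrix
      I = np.eye(2)
      S = np.array([[0.0, 1.0], [1.0, 0.0]])
      group = [I, -I, S, -S]

      x = np.array([[1.0, 2.0], [3.0, 4.0]])
      r = reynolds(x, group)
      print(r)                                      # fixed under conjugation
      print(np.allclose(reynolds(r, group), r))     # R is a projection: True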

  12. Modeling and Instability of Average Current Control

    OpenAIRE

    Fang, Chung-Chieh

    2012-01-01

    Dynamics and stability of average current control of DC-DC converters are analyzed by sampled-data modeling. Orbital stability is studied and is found to be unrelated to the ripple size of the orbit. Compared with averaged modeling, sampled-data modeling is more accurate and systematic. An unstable range of the compensator pole is found by simulations, and is predicted by sampled-data modeling and harmonic balance modeling.

  13. Disk-averaged synthetic spectra of Mars

    OpenAIRE

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a f...

  14. Model averaging for semiparametric additive partial linear models

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals for the parameters of interest, we explore a focused information criterion for model selection among APLM after estimating the nonparametric functions by polynomial spline smoothing, and we introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting is avoided, which results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented as illustrations.

  15. Self-averaging characteristics of spectral fluctuations

    Science.gov (United States)

    Braun, Petr; Haake, Fritz

    2015-04-01

    The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second, a small imaginary part of the quasi-energy. Self-averaging universal (like the circular unitary ensemble (CUE) average) behavior is found for the smoothed correlator, apart from noise which shrinks like 1/√N as the dimension N of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as N → ∞ such that the noise is negligible. In between those windows (where the CUE-averaged correlator takes on values of the order 1/N²) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinction of CUE and GUE-type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.

  16. Original article Functioning of memory and attention processes in children with intelligence below average

    OpenAIRE

    Aneta Rita Borkowska; Anna Ozimek

    2014-01-01

    BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the au...

  17. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  18. Light-cone averages in a swiss-cheese universe

    CERN Document Server

    Marra, Valerio; Matarrese, Sabino

    2007-01-01

    We analyze a toy swiss-cheese cosmological model to study the averaging problem. In our model, the cheese is the EdS model and the holes are constructed from a LTB solution. We study the propagation of photons in the swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities. This is because of spherical symmetry. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the concordance model. Although the sole source in the swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we ...

  19. Taking measures to strengthen prosecutors' appearance in court to support the prosecution in criminal cases to which the summary procedure applies

    Institute of Scientific and Technical Information of China (English)

    肖红

    2012-01-01

    Applying the summary procedure to criminal cases in which the defendant pleads guilty is the inevitable choice for resolving the contradiction between heavy caseloads and limited staff in procuratorial organs in recent years; it is also a necessary requirement of maintaining judicial justice and protecting the legitimate rights and interests of the parties. This article analyzes the current situation of prosecutors appearing in court in criminal cases tried under the summary procedure, interprets the impact and challenges that the amendment of the Criminal Procedure Law brings to the work of grass-roots procuratorates, and explores new initiatives for implementing prosecutors' court appearances in support of the prosecution in such cases.

  20. Comparison of Mouse Brain DTI Maps Using K-space Average, Image-space Average, or No Average Approach

    OpenAIRE

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-01-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data was collected from five ...
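
    The distinction in miniature (a hypothetical 1-D sketch, not the study's reconstruction pipeline): k-avg sums complex k-space data before the inverse FFT, m-avg sums magnitude images after it, and phase drift between repetitions makes the two disagree.

      import numpy as np

      rng = np.random.default_rng(2)
      obj = np.zeros(64); obj[24:40] = 1.0          # simple 1-D "object"

      reps = []
      for _ in range(8):                            # repeated acquisitions
          k = np.fft.fft(obj) * np.exp(1j * rng.normal(0.0, 0.3))  # phase drift
          k += 0.5 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
          reps.append(k)

      k_avg = np.abs(np.fft.ifft(np.mean(reps, axis=0)))               # k-space average
      m_avg = np.mean([np.abs(np.fft.ifft(k)) for k in reps], axis=0)  # image average
      print(k_avg[30], m_avg[30])   # k-avg attenuates signal when phases disagree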

  1. Transfer-matrix scaling from disorder-averaged correlation lengths for diluted Ising systems

    Science.gov (United States)

    de Queiroz, S. L. A.; Stinchcombe, R. B.

    1994-10-01

    A transfer-matrix-scaling technique is developed for randomly diluted systems, and applied to the site-diluted Ising model on a square lattice in two dimensions. For each allowed disorder configuration between two adjacent columns, the contribution of the respective transfer matrix to the decay of correlations is considered only as far as the ratio of its two largest eigenvalues, allowing an economical calculation of a configuration-averaged correlation length. Standard phenomenological-renormalization procedures are then used to analyze aspects of the phase boundary which are difficult to assess accurately by alternative methods. For magnetic site concentration p close to pc, the extent of exponential behavior of the Tc×p curve is clearly seen for over two decades of variation of p-pc. Close to the pure-system limit, the exactly known reduced slope is reproduced to a very good approximation, though with nonmonotonic convergence. The averaged correlation lengths are inserted into the exponent-amplitude relationship predicted by conformal invariance to hold at criticality. The resulting exponent η remains near the pure value (1/4) for all intermediate concentrations until it crosses over to the percolation value at the threshold.
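
    The quantity being configuration-averaged, in a stripped-down form (assumed conventions; the actual transfer matrices are built column by column from each disorder configuration): a correlation length from the ratio of the two largest eigenvalues, ξ = 1/ln(λ₁/λ₂), with decay rates rather than lengths averaged over samples.

      import numpy as np

      def inverse_xi(T):
          """Decay rate 1/xi = ln(lambda_1 / lambda_2) of one transfer matrix."""
          lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
          return np.log(lam[0] / lam[1])

      def averaged_correlation_length(matrices):
          # averaging decay rates over disorder is one plausible convention
          return 1.0 / np.mean([inverse_xi(T) for T in matrices])

      rng = np.random.default_rng(3)
      samples = [rng.random((4, 4)) for _ in range(100)]   # stand-in matrices
      print(averaged_correlation_length(samples))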

  2. Volume average technique for turbulent flow simulation and its application to room airflow prediction

    Science.gov (United States)

    Huang, Xianmin

    Fluid motion turbulence is one of the most important transport phenomena occurring in engineering applications. Although turbulent flow is governed by a set of conservation equations for momentum, mass, and energy, a Direct Numerical Simulation (DNS) of the flow by solving these equations to include the finest scale motions is impossible due to the extremely large computer resources required. On the other hand, the Reynolds Averaged Modelling (RAM) method has many limitations which hinder its application to turbulent flows of practical significance. Room airflow, featuring the coexistence of laminar and turbulent regimes, is a typical example of a flow which is difficult to handle with the RAM method. A promising way to avoid the difficulty of the DNS method and the limitations of the RAM method is to use the Large Eddy Simulation (LES) method. In the present thesis, the drawbacks of previously developed techniques for the LES method, particularly those associated with the SGS modelling, are identified. Then a new so-called Volume Average Technique (VAT) for turbulent flow simulation is proposed. The main features of the VAT are as follows: (1) The volume averaging approach instead of the more common filtering approach is employed to define solvable scale fields, so that coarse-graining in the LES and space discretization of the numerical scheme are achieved in a single procedure. (2) All components of the SGS Reynolds stress and SGS turbulent heat flux are modelled dynamically using the newly proposed Functional Scale Similarity (FSS) SGS model. The model is superior to many previously developed SGS models in that it can be applied to highly inhomogeneous and/or anisotropic, weak or multi-regime turbulent flows using a relatively coarse grid. (3) The so-called SGS turbulent diffusion is identified and modelled as a separate mechanism to that of the SGS turbulent flux represented by the SGS Reynolds stress and SGS turbulent heat flux. The SGS turbulent diffusion is...

  3. 40 CFR 89.514 - Hearing procedures.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Hearing procedures. 89.514 Section 89.514 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... Auditing § 89.514 Hearing procedures. The procedures provided in § 86.1014-84 (i) to (s) apply for...

  4. 40 CFR 90.514 - Hearing procedures.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Hearing procedures. 90.514 Section 90.514 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... Auditing § 90.514 Hearing procedures. The procedures provided in § 86.1014-84 (i) to (s) apply for...

  5. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  6. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  7. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  8. Average Cycle Period in Asymmetrical Flashing Ratchet

    Institute of Scientific and Technical Information of China (English)

    WANG Hai-Yan; HE Hou-Sheng; BAO Jing-Dong

    2005-01-01

    The directed motion of a Brownian particle in a flashing potential with various transition probabilities and waiting times in one of two states is studied. An expression for the average cycle period is proposed and the steady current J of the particle is calculated via Langevin simulation. The results show that the optimal cycle period τm, at which J is maximal, is shifted to a smaller value when the transition probability λ from the potential-on to the potential-off state decreases; the maximal current appears when the average waiting time in the potential-on state is longer than in the potential-off state; and the direction of the current depends on the ratio of the average waiting times in the two states.

  9. Matrix averages relating to Ginibre ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Forrester, Peter J [Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia); Rains, Eric M [Department of Mathematics, California Institute of Technology, Pasadena, CA 91125 (United States)], E-mail: p.forrester@ms.unimelb.edu.au

    2009-09-25

    The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.

  10. High Average Power Yb:YAG Laser

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  11. Books Average Previous Decade of Economic Misery

    OpenAIRE

    R Alexander Bentley; Alberto Acerbi; Paul Ormerod; Vasileios Lampos

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is signific...

  12. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.

  13. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of ..., the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four...

  14. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  15. An improved moving average technical trading rule

    Science.gov (United States)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
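
    A schematic 'long only' implementation (my reading of the idea, not the paper's exact rule): enter when the price crosses above its moving average, then hold a stop level that ratchets up with the price and exit when the price falls below it.

      import numpy as np

      def crossover_with_trailing_stop(prices, window=20, stop_frac=0.05):
          """Long/flat positions from an MA cross-over plus a trailing stop."""
          ma = np.convolve(prices, np.ones(window) / window, mode="valid")
          pos, stop, positions = 0, -np.inf, []
          for p, m in zip(prices[window - 1:], ma):
              if pos == 0 and p > m:                     # cross-over 'buy' signal
                  pos, stop = 1, p * (1 - stop_frac)
              elif pos == 1:
                  stop = max(stop, p * (1 - stop_frac))  # dynamic threshold
                  if p < stop:                           # trailing-stop exit
                      pos = 0
              positions.append(pos)
          return np.array(positions)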

  16. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by the macroeconomic theory and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables to forecast the variables of interest (GDP, unemployment, and inflation for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period.  Bayesian Averaging of Classical Estimators is a method allowing for full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper authors describe the method of generating a family of econometric models and the procedure for selection of a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in the atheoretical modelling.
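
    The averaging step in miniature (a BIC-weighted stand-in; BACE's exact prior and weighting scheme are not reproduced): fit OLS models on subsets of survey indicators and combine their forecasts with weights tied to model fit.

      import itertools
      import numpy as np

      def averaged_forecast(X, y, x_new, max_vars=2):
          """Average OLS forecasts over regressor subsets, BIC-weighted."""
          n, k = X.shape
          forecasts, bics = [], []
          for size in range(1, max_vars + 1):
              for cols in itertools.combinations(range(k), size):
                  Z = X[:, cols]
                  beta = np.linalg.lstsq(Z, y, rcond=None)[0]
                  rss = np.sum((y - Z @ beta) ** 2)
                  bics.append(n * np.log(rss / n) + size * np.log(n))
                  forecasts.append(x_new[list(cols)] @ beta)
          w = np.exp(-0.5 * (np.asarray(bics) - min(bics)))
          return float(np.dot(w / w.sum(), forecasts))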

  17. 75 FR 16575 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2010-04-01

    ... Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board. ACTION: Adoption of a railroad cost recovery procedures productivity adjustment. SUMMARY... productivity adjustment, as measured by the average change in railroad productivity for the years 2004...

  18. Calibration procedure for zenith plummets

    Directory of Open Access Journals (Sweden)

    Jelena GUČEVIĆ

    2013-09-01

    Full Text Available Zenith plummets are used mainly in applied geodesy, in civil engineering surveying, for the materialization of the local vertical. The error of the vertical deflection of the instrument is transferred directly to the error of the structure being surveyed. That is why a proper calibration procedure for the zenith plummet is required. The metrological laboratory of the Faculty of Civil Engineering in Belgrade developed such a calibration procedure. Here we present a mathematical model of the calibration and some selected results.

  19. Calibration procedure for zenith plummets

    OpenAIRE

    Jelena GUČEVIĆ; Delčev, Siniša; Vukan OGRIZOVIĆ

    2013-01-01

    Zenith plummets are used mainly in applied geodesy, in civil engineering surveying, for the materialization of the local vertical. The error of the vertical deflection of the instrument is transferred directly to the error of the structure being surveyed. That is why a proper calibration procedure for the zenith plummet is required. The metrological laboratory of the Faculty of Civil Engineering in Belgrade developed such a calibration procedure. Here we present a mathematical model of the calibration and som...

  20. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  1. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  2. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequences.

  3. A Gaussian Average Property for Banach Spaces

    OpenAIRE

    Casazza, Peter G.; Nielsen, Niels Jorgen

    1996-01-01

    In this paper we investigate a Gaussian average property of Banach spaces. This property is weaker than the Gordon Lewis property but closely related to this and other unconditional structures. It is also shown that this property implies that certain Hilbert space valued operators defined on subspaces of the given space can be extended.

  4. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...

  5. A Functional Measurement Study on Averaging Numerosity

    Science.gov (United States)

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  6. Reformulation of Ensemble Averages via Coordinate Mapping.

    Science.gov (United States)

    Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A

    2016-04-12

    A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263
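
    The flavor of the variance reduction can be conveyed with a deliberately simplified sketch: a control-variate caricature on a Gaussian toy ensemble, not the authors' mapped-averaging formalism, which is considerably more general. The idea is the same, though: the part of the observable that the theory already captures is subtracted before sampling, so the simulation only measures the correction.

```python
import numpy as np

# Caricature of mapped averaging via a control-variate split (not the
# paper's formalism): write <A> = <B>_theory + <A - B>, where B is the
# piece of the observable whose ensemble average is known analytically.
# The simulation then measures only the correction, with far less noise.
rng = np.random.default_rng(0)
sigma = 1.0
x = rng.normal(0.0, sigma, size=10_000)   # exact samples of the reference ensemble

A = x**2 + 0.1 * x**4      # full observable
B = x**2                   # theoretically captured part; <B> = sigma**2 exactly

direct = A.mean()
mapped = sigma**2 + (A - B).mean()        # theory value + simulated correction

print(f"direct estimate: {direct:.4f} +/- {A.std()/np.sqrt(A.size):.4f}")
print(f"mapped estimate: {mapped:.4f} +/- {(A-B).std()/np.sqrt(A.size):.4f}")
```

    Because the noisy $x^2$ fluctuations cancel in the difference, the "mapped" estimator reaches the same accuracy with far fewer samples, mirroring the efficiency gains reported in the abstract.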

  7. A Measure of the Average Intercorrelation

    Science.gov (United States)

    Meyer, Edward P.

    1975-01-01

    Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)

  8. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).

  9. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  10. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...

  11. High average-power induction linacs

    International Nuclear Information System (INIS)

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  12. Error estimates on averages of correlated data

    International Nuclear Information System (INIS)

    We describe how the true statistical error on an average of correlated data can be obtained with ease and efficiency by a renormalization group method. The method is illustrated with numerical and analytical examples, having finite as well as infinite range correlations. (orig.)

  13. Average Equivalent Diameter of A Particulate Material

    OpenAIRE

    AL-MAGHRABI, Mohammed-Noor N. H.

    2010-01-01

    In the field of mineral processing, it is important to determine the size of a particle. A method of defining an average diameter for a collection of particles is presented. The theoretical basis developed for the purpose is verified by a specially designed experimental technique.  Key words: mineral processing, particle size, equivalent diameter

  14. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for...

  15. The Law of Aggregate Demand : Empirical Evidence From India Using Nonparametric Direct Average Derivative Estimation procedure

    OpenAIRE

    Chakrabarty, Manisha

    2001-01-01

    This paper attempts to provide empirical evidence of the positive definiteness of the mean income effect matrix, a sufficient condition for market demand to satisfy the law of demand derived by Härdle, Hildenbrand and Jerison [HHJ (1991)]. Increasing heterogeneity in spending of populations of households leads to this sufficient condition, which is falsifiable from cross-section data. Based on this framework we use the National Sample Survey (NSS) 50th round data (1993-1994) for the ...

  16. Software Release Procedure and Tools

    OpenAIRE

    Giammatteo, Gabriele; Frosini, Luca; Laskaris, Nikolas

    2015-01-01

    Deliverable D4.1 - "Software Release Procedures and Tools" aims to provide a detailed description of the procedures applied and tools used to manage releases of the gCube System within Work Package 4. gCube System is the software at the basis of all VREs applications, data management services and portals. Given the large size of the gCube system, its high degree of modularity and the number of developers involved in the implementation, a set of procedures that formalize and simplify the integ...

  17. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    ... not resistant towards outliers (which are known to be more the rule than the exception for empirical data). That is the reason for robust methods being a valuable tool for both semi-automated detection of outliers and model building. The approach adopted in this thesis can be split in two main parts: 1. applying a multivariate and multi-way data analytical framework in fields where less sophisticated data analysis methods are currently used, and 2. developing new, more robust alternatives to already existing multivariate tools. The first part of the study was realised by applying two- and three-way ... qualitative properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatter in excitation-emission (EEM) data, was developed. The results...

  18. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    International Nuclear Information System (INIS)

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak voltage $\bar{U}_P$, the average voltage $\bar{U}$, the effective voltage $U_{eff}$ or the maximum peak voltage $U_P$. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average $\bar{U}$ or the average peak $\bar{U}_P$ voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak ($k_{PPV,kVp}$) and the average ($k_{PPV,Uav}$) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from $\bar{U}_P$ and $\bar{U}$ measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
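
    A minimal sketch of the conversion step described above. The regression coefficients and function names below are placeholders for illustration; the paper derives the real coefficients from simulated voltage waveforms and they are not reproduced here.

```python
# Hypothetical illustration of converting a kV-meter reading to the
# practical peak voltage (PPV): reading -> calibration coefficient ->
# ripple- and voltage-dependent conversion factor.

def conversion_factor(tube_kv: float, ripple: float) -> float:
    """Conversion factor k_PPV for an average-peak-reading kV-meter.

    Assumed linear regression in tube voltage and ripple; a0, a1, a2
    are placeholder coefficients, not the published values.
    """
    a0, a1, a2 = 0.995, 1.5e-4, -4.0e-2
    return a0 + a1 * tube_kv + a2 * ripple

def practical_peak_voltage(reading_kv: float, calib_coeff: float,
                           tube_kv: float, ripple: float) -> float:
    """PPV = calibration coefficient * conversion factor * meter reading."""
    return calib_coeff * conversion_factor(tube_kv, ripple) * reading_kv

# Example: a meter reading 81.2 kV (average peak) on an 80 kV set with 5% ripple
print(practical_peak_voltage(81.2, calib_coeff=1.002, tube_kv=80.0, ripple=0.05))
```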

  19. Application of Network-averaged Teleseismic P-wave Spectra to Seismic Yield Estimation of Underground Nuclear Explosions

    Science.gov (United States)

    Murphy, J. R.; Barker, B. W.

    A set of procedures is described for estimating network-averaged teleseismic P-wave spectra for underground nuclear explosions and for analytically inverting these spectra to obtain estimates of mb/yield relations and individual yields for explosions at previously uncalibrated test sites. These procedures are then applied to the analyses of explosions at the former Soviet test sites at Shagan River, Degelen Mountain, Novaya Zemlya and Azgir, as well as at the French Sahara, U.S. Amchitka and Chinese Lop Nor test sites. It is demonstrated that the resulting seismic estimates of explosion yield and mb/yield relations are remarkably consistent with a variety of other available information for a number of these test sites. These results lead us to conclude that the network-averaged teleseismic P-wave spectra provide considerably more diagnostic information regarding the explosion seismic source than do the corresponding narrowband magnitude measures such as mb, Ms and mb(Lg), and, therefore, that they are to be preferred for applications to seismic yield estimation for explosions at previously uncalibrated test sites.

  20. Tsallis’ entropy maximization procedure revisited

    Science.gov (United States)

    Martínez, S.; Nicolás, F.; Pennini, F.; Plastino, A.

    2000-11-01

    The proper way of averaging is an important question with regards to Tsallis’ Thermostatistics. Three different procedures have been thus far employed in the pertinent literature. The third one, i.e., the Tsallis-Mendes-Plastino (TMP) (Physica A 261 (1998) 534) normalization procedure, exhibits clear advantages with respect to earlier ones. In this work, we advance a distinct (from the TMP-one) way of handling the Lagrange multipliers involved in the extremization process that leads to Tsallis’ statistical operator. It is seen that the new approach considerably simplifies the pertinent analysis without losing the beautiful properties of the Tsallis-Mendes-Plastino formalism.
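
    For context, the normalized q-expectation values at issue (the TMP normalization referred to above) are commonly written in the Tsallis literature as

    $$\langle A \rangle_q = \frac{\sum_i p_i^{\,q} A_i}{\sum_j p_j^{\,q}},$$

    whereas the two earlier procedures used the ordinary mean $\sum_i p_i A_i$ or the unnormalized q-average $\sum_i p_i^{\,q} A_i$. This notation is supplied here for orientation only and is not quoted from the record itself.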

  1. Tsallis' entropy maximization procedure revisited

    CERN Document Server

    Martínez, S; Pennini, F; Plastino, A

    2000-01-01

    The proper way of averaging is an important question with regards to Tsallis' Thermostatistics. Three different procedures have been thus far employed in the pertinent literature. The third one, i.e., the Tsallis-Mendes-Plastino (TMP) normalization procedure, exhibits clear advantages with respect to earlier ones. In this work, we advance a distinct (from the TMP-one) way of handling the Lagrange multipliers involved in the extremization process that leads to Tsallis' statistical operator. It is seen that the new approach considerably simplifies the pertinent analysis without losing the beautiful properties of the Tsallis-Mendes-Plastino formalism.

  2. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  3. Trends and the determination of effective doses for standard X-ray procedures

    International Nuclear Information System (INIS)

    Trends in the entrance skin exposures (air kerma) for standard x-ray imaging procedures are reported for the Province of Manitoba, Canada. Average annual data per procedure, using standard phantoms and standard ion chambers, have been recorded since 1981. For example, chest air kerma (backscatter included) has decreased from 0.14 to 0.09 mGy. Confounding factors may negate the gains unless facility quality control programs are maintained. The data were obtained for a quality assurance and regulatory compliance program. Quoting such data for risk evaluation purposes lacks rigor; hence a compartment model for organ apportioning, using organ absorbed doses and weighting factors, has been applied to determine the effective dose per procedure. The effective doses for the standard procedures are presented, including the value of 0.027 mSv (1999) calculated for the effective dose in PA chest imaging. (author)

  4. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...

  5. Extended Bidirectional Texture Function Moving Average Model

    Czech Academy of Sciences Publication Activity Database

    Havlíček, Michal

    Praha: České vysoké učení technické v Praze, 2015 - (Ambrož, P.; Masáková, Z.), s. 1-7 [Doktorandské dny 2015. Praha (CZ), 20.11.2015,27.11.2015] Institutional support: RVO:67985556 Keywords : Bidirectional texture function * moving average random field model Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2016/RO/havlicek-0455325.pdf

  6. Average Drift Analysis and Population Scalability

    OpenAIRE

    He, Jun; Yao, Xin

    2013-01-01

    This paper aims to study how the population size affects the computation time of evolutionary algorithms in a rigorous way. The computation time of an evolutionary algorithm can be measured by either the expected number of generations (hitting time) or the expected number of fitness evaluations (running time) to find an optimal solution. Population scalability is the ratio of the expected hitting time between a benchmark algorithm and an algorithm using a larger population size. Average drift...

  7. Average Regression-Adjusted Controlled Regenerative Estimates

    OpenAIRE

    Lewis, Peter A.W.; Ressler, Richard

    1991-01-01

    Proceedings of the 1991 Winter Simulation Conference, Barry L. Nelson, W. David Kelton, Gordon M. Clark (eds.) One often uses computer simulations of queueing systems to generate estimates of system characteristics along with estimates of their precision. Obtaining precise estimates, especially for high traffic intensities, can require large amounts of computer time. Average regression-adjusted controlled regenerative estimates result from combining the two techniques ...

  8. Time-dependent angularly averaged inverse transport

    OpenAIRE

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured al...

  9. A Visibility Graph Averaging Aggregation Operator

    OpenAIRE

    Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong

    2013-01-01

    The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator called the visibility graph averaging (VGA) aggregation operator is proposed, as sketched below. This proposed operator is based on the visibility graph, which can convert a time series into a graph. The weights are obtained according to the importance of the data in the visibility graph. Finally, the VGA operator is used in the analysis of the TAIEX database to illustrate that it is practical and compare...
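
    A rough sketch of the idea, assuming the standard natural visibility criterion and a degree-based notion of "importance"; the operator's precise weighting scheme in the paper may differ.

```python
import numpy as np

def visibility_graph(series):
    """Build the natural visibility graph of a time series.

    Nodes are sample indices; nodes a < b are linked when every sample
    between them lies strictly below the line of sight joining them.
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # height of the a-b sight line at position c
                sight = series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                if series[c] >= sight:
                    visible = False
                    break
            if visible:
                edges.add((a, b))
    return edges

def vga_weights(series):
    """Weights proportional to node degree in the visibility graph
    (one plausible notion of importance; an assumption, not the paper's)."""
    edges = visibility_graph(series)
    deg = np.zeros(len(series))
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg / deg.sum()

series = [3.0, 1.0, 4.0, 1.5, 5.0, 2.0]
w = vga_weights(series)
print("VGA aggregate:", float(np.dot(w, series)))
```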

  10. On Heroes and Average Moral Human Beings

    OpenAIRE

    Kirchgässner, Gebhard

    2001-01-01

    After discussing various approaches about heroic behaviour in the literature, we first give a definition and classification of moral behaviour, in distinction to intrinsically motivated and 'prudent' behaviour. Then, we present some arguments on the function of moral behaviour according to 'minimal' standards of the average individual in a modern democratic society, before we turn to heroic behaviour. We conclude with some remarks on methodological as well as social problems which arise or ma...

  11. Dollar-Cost Averaging: An Investigation

    OpenAIRE

    Fang, Wei

    2007-01-01

    Dollar-cost averaging (DCA) is a common and useful systematic investment strategy for mutual fund managers, private investors, financial analysts and retirement planners. The issue of the performance effectiveness of DCA is greatly controversial among academics and professionals. As a popularly recommended investment strategy, DCA is recognized as a risk reduction strategy; however, the advantage is claimed to come at the expense of generating higher returns. The dissertation is to intensively inves...
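
    To make the mechanics concrete, a small sketch with purely illustrative prices: fixed-dollar instalments buy more shares when prices are low, so the realized average cost per share is the harmonic mean of the purchase prices, never more than their simple mean.

```python
# Dollar-cost averaging with a fixed instalment: the same cash amount
# buys more shares at low prices, so the realized average cost per
# share equals the harmonic mean of the purchase prices.
prices = [10.0, 8.0, 12.5, 9.0]   # illustrative monthly prices
instalment = 100.0                # cash invested each month

shares = sum(instalment / p for p in prices)
avg_cost = instalment * len(prices) / shares
harmonic_mean = len(prices) / sum(1.0 / p for p in prices)

print(f"shares bought: {shares:.2f}")
print(f"average cost per share: {avg_cost:.3f} (harmonic mean: {harmonic_mean:.3f})")
print(f"simple mean of prices: {sum(prices)/len(prices):.3f}")
```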

  12. Modern average global sea-surface temperature

    Science.gov (United States)

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
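
    The averaging step described here is straightforward to reproduce. A sketch with NumPy, assuming the weekly images are stacked per calendar month with missing cells stored as NaN; the array shapes and the synthetic data are hypothetical stand-ins for the AVHRR MCSST files.

```python
import numpy as np

# Hypothetical stack: 10 years of January SST images for one region,
# shape (years, rows, cols), with np.nan marking cells without valid data.
rng = np.random.default_rng(0)
january_stack = rng.normal(20.0, 1.5, size=(10, 180, 360))
january_stack[rng.random(january_stack.shape) < 0.2] = np.nan  # simulated gaps

# Averaging the same calendar month across years suppresses interannual
# variability and fills many cells that lack data in any single year.
january_climatology = np.nanmean(january_stack, axis=0)

valid_single = np.isfinite(january_stack[0]).mean()
valid_avg = np.isfinite(january_climatology).mean()
print(f"valid cells, single year: {valid_single:.1%}; 10-year average: {valid_avg:.1%}")
```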

  13. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  14. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronograph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...

  15. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2013-12-15

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups, allowing the definition of two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using such an electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.
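
    The "standard exponential expression" referred to above is the collisionless retarding-field relation $I_e(V) = I_{es}\exp[e(V-V_p)/k_B T_e]$, so $T_e$ follows from the slope of $\ln I_e$ versus $V$. A sketch with synthetic data; all numbers are illustrative, not the paper's measurements.

```python
import numpy as np

# Electron-retarding region: I_e = I_es * exp((V - V_p) / T_e), T_e in eV.
# The slope of ln(I_e) vs V gives 1/T_e.
Te_true = 0.98                            # eV, illustrative
V = np.linspace(-4.0, -1.0, 40)           # probe bias relative to plasma potential (V)
I = 1e-3 * np.exp(V / Te_true)            # ideal retarding current (A)
I *= 1.0 + 0.02 * np.random.default_rng(1).normal(size=V.size)  # measurement noise

slope, _ = np.polyfit(V, np.log(I), 1)    # linear fit of ln(I) vs V
print(f"fitted T_e = {1.0/slope:.2f} eV")
```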

  16. Light-cone averages in a Swiss-cheese universe

    International Nuclear Information System (INIS)

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters $w_0$ and $w_a$ follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

  17. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  18. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  19. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to apply the mean-reverting Ornstein-Uhlenbeck process to describe the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city; the slow convergence of the price of the HDD call can be observed by taking 100,000 simulations. The methods of the research will provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
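
    A compact sketch of the pricing loop just described, assuming a daily Euler discretization of a mean-reverting Ornstein-Uhlenbeck temperature process. All parameter values below are illustrative placeholders; the paper fits its parameters to the historical Zhengzhou data, which are not reproduced here.

```python
import numpy as np

# Illustrative OU parameters; the paper estimates these from historical data.
kappa, theta, sigma = 0.3, 18.0, 2.0      # mean reversion, long-run mean (deg C), volatility
T_base, strike, tick = 18.0, 150.0, 20.0  # HDD base temperature, strike (in HDDs), $/HDD
n_days, n_paths = 30, 100_000

rng = np.random.default_rng(42)
temps = np.full(n_paths, theta)
hdd = np.zeros(n_paths)
for _ in range(n_days):
    # Euler step of dT = kappa*(theta - T) dt + sigma dW, with dt = 1 day
    temps += kappa * (theta - temps) + sigma * rng.standard_normal(n_paths)
    hdd += np.maximum(T_base - temps, 0.0)   # accumulate daily heating degree days

payoff = tick * np.maximum(hdd - strike, 0.0)
price = payoff.mean()
stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"HDD call price ~ {price:.2f} +/- {stderr:.2f} (Monte Carlo)")
```

    The standard error printed alongside the price makes the slow $1/\sqrt{N}$ convergence noted in the abstract directly visible as the number of paths grows.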

  20. GRID PRICING VERSUS AVERAGE PRICING FOR SLAUGHTER CATTLE: AN EMPIRICAL ANALYSIS

    OpenAIRE

    Scott W. Fausti; Qasmi, Bashir A.

    1999-01-01

    The paper compares weekly producer revenue under grid pricing and average dressed weight pricing methods for 2560 cattle over a period of 102 weeks. Regression analysis is applied to identify factors affecting the revenue differential.

  1. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  2. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

  3. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233-237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7. ...

  4. PROFILE OF HIRED FARMWORKERS, 1998 ANNUAL AVERAGES

    OpenAIRE

    Runyan, Jack L.

    2000-01-01

    An average of 875,000 persons 15 years of age and older did hired farmwork each week as their primary job in 1998. An additional 63,000 people did hired farmwork each week as their secondary job. Hired farmworkers were more likely than the typical U.S. wage and salary worker to be male, Hispanic, younger, less educated, never married, and not U.S. citizens. The West (42 percent) and South (31.4 percent) census regions accounted for almost three-fourths of the hired farmworkers. The rate of un...

  5. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  6. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  7. Time-dependent angularly averaged inverse transport

    CERN Document Server

    Bal, Guillaume

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

  8. High average power laser for EUV lithography

    Energy Technology Data Exchange (ETDEWEB)

    Kania, D.R.; Gaines, D.P.; Hermann, M.; Honig, J.; Hostetler, R.; Levesque, R.; Sommargren, G.E.; Spitzer, R.C.; Vernon, S.P.

    1995-01-19

    We have demonstrated the operation of a high average power, all solid state laser and target system for EUV lithography. The laser operates at 1.06 µm with a pulse repetition rate of 200 Hz. Each pulse contains up to 400 mJ of energy and is less than 10 ns in duration. The EUV conversion efficiency measured with the laser is independent of the laser repetition rate. Operating at 200 Hz, the laser has been used for lithography using a 3 bounce Kohler illuminator.

  9. Some averaging functions in image reduction

    Czech Academy of Sciences Publication Activity Database

    Paternain, D.; Bustince, H.; Fernández, J.; Beliakov, G.; Mesiar, Radko

    Berlin: Springer, 2010 - (García-Pedrajas, N.; Herrera, F.; Benítez, J.), s. 399-408. (Lecture Notes in Artificial Intelligence . 6098). ISBN 978-3-642-13032-8. ISSN 0302-9743. [IEA/AIE 2010. Cordoba (ES), 01.06.2010-04.06.2010] Institutional research plan: CEZ:AV0Z10750506 Keywords : image reduction * local reduction operators * aggregation functions Subject RIV: BA - General Mathematics http://library.utia.cas.cz/separaty/2010/E/mesiar-some averaging functions in image reduction.pdf

  10. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale, by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
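
    For reference, the two averages contrasted above can be written schematically (notation assumed here, not taken from the record): for a field $f$ sampled over $R$ realizations of a control volume containing $n^{(r)}$ particles,

    $$\langle f\rangle_{\mathrm{phasic}} = \frac{1}{R}\sum_{r=1}^{R} f^{(r)}, \qquad \langle f\rangle_{\mathrm{mass}} = \frac{\sum_{r=1}^{R} n^{(r)} f^{(r)}}{\sum_{r=1}^{R} n^{(r)}},$$

    so the two coincide exactly when $n^{(r)}$ is the same in every realization, which is the single-realization case discussed above, and differ as soon as the particle number fluctuates between realizations.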

  11. A Comparison Between Two Average Modelling Techniques of AC-AC Power Converters

    OpenAIRE

    Pawel Szczesniak

    2015-01-01

    In this paper, a comparative evaluation of two modelling tools for switching AC-AC power converters is presented. Both of them are based on average modelling techniques. The first approach is based on the circuit averaging technique and consists in topological manipulations applied to the converter's states. The second approach makes use of a state-space averaged model of the converter and is based on analytical manipulations using the different state representations of a converter. The two m...

  12. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical value of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  13. Computation of the Metric Average of 2D Sets with Piecewise Linear Boundaries

    OpenAIRE

    Kels, Shay; Dyn, Nira; Lipovetsky, Evgeny

    2010-01-01

    The metric average is a binary operation between sets in $\mathbb{R}^n$ which is used in the approximation of set-valued functions. We introduce an algorithm that applies tools of computational geometry to the computation of the metric average of 2D sets with piecewise linear boundaries.

  14. Weak Averaging of Semilinear Stochastic Differential Equations with Almost Periodic Coefficients

    OpenAIRE

    Kamenski, Mikhail; Mellah, Omar; Raynaud de Fitte, Paul

    2012-01-01

    An averaging result is proved for stochastic evolution equations with highly oscillating coefficients. This result applies in particular to equations with almost periodic coefficients. The convergence to the solution of the averaged equation is obtained in distribution, as in previous works by Khasminskii and Vrkoč.

  15. COMPLEX INNER PRODUCT AVERAGING METHOD FOR CALCULATING NORMAL FORM OF ODE

    Institute of Scientific and Technical Information of China (English)

    陈予恕; 孙洪军

    2001-01-01

    This paper puts forward a complex inner product averaging method for calculating the normal form of an ODE. Compared with the conventional averaging method, the theoretical analytical process has such simple forms that it is easily realized in a computer program. Results can be applied in both autonomous and non-autonomous systems. Finally, an example is solved to verify the method.

  16. Formulation of the Strutinsky's averaging method in Hartree-Fock-Roothaan framework: Open Shell Systems

    International Nuclear Information System (INIS)

    Strutinsky's shell-correction method is applied in the framework of the Hartree-Fock method developed by Roothaan (HFR) to atoms and ions with an open shell. Applying the Strutinsky energy averaging to the density matrix defines a self-consistently averaged HFR system as a solution of the variational problem. We derive explicit expressions for the shell corrections. 3 refs. (author)

  17. Rademacher averages on noncommutative symmetric spaces

    CERN Document Server

    Merdy, Christian Le

    2008-01-01

    Let E be a separable (or the dual of a separable) symmetric function space, let M be a semifinite von Neumann algebra and let E(M) be the associated noncommutative function space. Let $(\epsilon_k)_k$ be a Rademacher sequence, on some probability space $\Omega$. For finite sequences $(x_k)_k$ of E(M), we consider the Rademacher averages $\sum_k \epsilon_k\otimes x_k$ as elements of the noncommutative function space $E(L^\infty(\Omega)\otimes M)$ and study estimates for their norms $\Vert \sum_k \epsilon_k \otimes x_k\Vert_E$ calculated in that space. We establish general Khintchine type inequalities in this context. Then we show that if E is 2-concave, the latter norm is equivalent to the infimum of $\Vert (\sum y_k^*y_k)^{1/2}\Vert + \Vert (\sum z_k z_k^*)^{1/2}\Vert$ over all $y_k,z_k$ in E(M) such that $x_k=y_k+z_k$ for any k. Dual estimates are given when E is 2-convex and has a non-trivial upper Boyd index. We also study Rademacher averages for doubly indexed families of E(M).

  18. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
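
    For reference, the quantity being approximated is the spectrum-averaged beta energy in its standard definition (the record does not reproduce the method's working formulas): with $N(E)$ the beta spectrum and $E_{\mathrm{max}}$ its endpoint energy,

    $$\bar{E}_\beta = \frac{\int_0^{E_{\mathrm{max}}} E\, N(E)\, dE}{\int_0^{E_{\mathrm{max}}} N(E)\, dE}.$$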

  19. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    Science.gov (United States)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
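
    In equation form, the mass-density-weighted construction referred to above amounts to the following sketch in standard Favre-averaging notation (assumed, not transcribed from the paper): with $\langle\cdot\rangle$ the ensemble average, $\rho$ the mass density, and $P_{\mathrm{fg}}(\psi)=\delta(\psi-\phi)$ the fine-grained PDF of a variable $\phi$,

    $$\tilde{P}(\psi) = \frac{\langle \rho\, P_{\mathrm{fg}}(\psi)\rangle}{\langle\rho\rangle}, \qquad \tilde{\phi} = \int \psi\, \tilde{P}(\psi)\, d\psi = \frac{\langle\rho\,\phi\rangle}{\langle\rho\rangle},$$

    which is how the APDF exactly reproduces the mass-density-weighted, ensemble-averaged mean variables mentioned in the abstract.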

  20. Accurate determination of crystal structures based on averaged local bond order parameters

    OpenAIRE

    Lechner, Wolfgang; Dellago, Christoph

    2008-01-01

    Local bond order parameters based on spherical harmonics, also known as Steinhardt order parameters, are often used to determine crystal structures in molecular simulations. Here we propose a modification of this method in which the complex bond order vectors are averaged over the first neighbor shell of a given particle and the particle itself. As demonstrated using soft particle systems, this averaging procedure considerably improves the accuracy with which different crystal structures can ...
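
    The averaging step has a standard closed form, sketched here in the usual Steinhardt notation (conventions may differ in detail from the paper): with $q_{lm}(i)$ the local bond order vector of particle $i$ and $\tilde{N}_b(i)$ the set comprising $i$ and its first-shell neighbors,

    $$\bar{q}_{lm}(i) = \frac{1}{|\tilde{N}_b(i)|} \sum_{k\in \tilde{N}_b(i)} q_{lm}(k), \qquad \bar{q}_l(i) = \sqrt{\frac{4\pi}{2l+1}\sum_{m=-l}^{l}\left|\bar{q}_{lm}(i)\right|^2}.$$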

  1. The averaging of multi-dimensional Poisson brackets for systems having pseudo-phases

    OpenAIRE

    Maltsev, A. Ya.

    2014-01-01

    We consider features of the Hamiltonian formulation of the Whitham method in the presence of pseudo-phases. As we show, an analog of the procedure of averaging of the Poisson bracket with the reduced number of the first integrals can be suggested in this case. The averaged bracket gives a Poisson structure for the corresponding Whitham system having the form similar to the structures arising in the presence of ordinary phases.

  2. 75 FR 5170 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2010-02-01

    ... Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board, DOT. ACTION: Proposed Railroad Cost Recovery Procedures Productivity Adjustment. SUMMARY... of average change in railroad productivity for the 2004-2008 (5-year) averaging period. This is...

  3. Average resonance parameters evaluation for actinides

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    New evaluated ⟨Γ_n^0⟩ and ⟨D⟩ values for ²³⁸U, ²³⁷Np, ²⁴³Cm, ²⁴⁵Cm, ²⁴⁶Cm and ²⁴¹Am nuclei in the resolved resonance region are presented. The applied method, based on the idea that missing experimental resonances result in correlated changes of the reduced neutron width and level spacing distributions, is discussed. (author)

  4. Resonance averaged channel radiative neutron capture cross sections

    International Nuclear Information System (INIS)

    In order to apply Lane and Lynn's channel capture model in calculations with a realistic optical model potential, we have derived an approximate wave function for the entrance channel in the neutron-nucleus reaction, based on the intermediate interaction model. It is valid in the exterior region as well as the region near the nuclear surface, and is expressed in terms of the wave function and reactance matrix of the optical model and of the near-resonance parameters. With this formalism the averaged channel radiative neutron capture cross section in the resonance region is written as the sum of three terms. The first two terms correspond to the contributions of the optical model real and imaginary parts, respectively, and together can be regarded as the radiative capture of the shape elastic wave. The third term is a fluctuation term, corresponding to the radiative capture of the compound elastic wave in the exterior region. On applying this theory in the resonance region, we obtain an expression for the average valence radiative width similar to that of Lane and Mughabghab. We have investigated the magnitude and energy dependence of the three terms as a function of the neutron incident energy. Calculated results for 98Mo and 55Mn show that the averaged channel radiative capture cross section in the giant resonance region of the neutron strength function may account for a considerable fraction of the total (n, γ) cross section; at lower neutron energies a large part of this channel capture arises from the fluctuation term. We have also calculated the partial capture cross sections in 98Mo and 55Mn at 2.4 keV and 24 keV, respectively, and compared the 98Mo results with the experimental data. (orig.)

  5. Vibrationally averaged dipole moments of methane and benzene isotopologues

    Science.gov (United States)

    Arapiraca, A. F. C.; Mohallem, J. R.

    2016-04-01

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  6. Loss of lifetime due to radiation exposure-averaging problems.

    Science.gov (United States)

    Raicević, J J; Merkle, M; Ehrhardt, J; Ninković, M M

    1997-04-01

    A new method is presented for assessing the years of life lost (YLL) due to stochastic effects caused by exposure to ionizing radiation. The widely accepted method from the literature uses a ratio of means of two quantities, defining in fact the loss of life as a derived quantity. We start from the real stochastic nature of the quantity (YLL), which enables us to obtain its mean values in a consistent way, using the standard averaging procedures based on the corresponding joint probability density functions needed in this problem. Our method is mathematically different and produces lower values of average YLL. In this paper we also found certain similarities with the concept of loss of life expectancy among exposure induced deaths (LLE-EID), which is accepted in the recently published UNSCEAR report, where the same quantity is defined as years of life lost per radiation induced case (YLC). Using the same data base, the YLL and the LLE-EID are calculated and compared for the simplest exposure case: the discrete exposure at age a. It is found that LLE-EID overestimates the YLL, and that the magnitude of this overestimation reaches more than 15%, depending on the effect under consideration. PMID:9119679

  7. A Moving Average Bidirectional Texture Function Model

    Czech Academy of Sciences Publication Activity Database

    Havlíček, Michal; Haindl, Michal

    Vol. II. Heidelberg: Springer, 2013 - (Wilson, R.; Bors, A.; Hancock, E.; Smith, W.), s. 338-345. (Lecture Notes in Computer Science. 8048). ISBN 978-3-642-40245-6. ISSN 0302-9743. [International Conference on Computer Analysis of Images and Patterns (CAIP 2013) /15./. York (GB), 27.08.2013-29.08.2013] R&D Projects: GA ČR GA102/08/0593; GA ČR GAP103/11/0335 Institutional support: RVO:67985556 Keywords : BTF * texture analysis * texture synthesis * data compression Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/havlicek-a moving average bidirectional texture function model.pdf

  8. Averaging lifetimes for B hadron species

    International Nuclear Information System (INIS)

    The measurement of the lifetimes of the individual B species is of great interest. Many of these measurements are well below the 10% level of precision. However, in order to reach the precision necessary to test the current theoretical predictions, the results from different experiments need to be averaged together. Therefore, the relevant systematic uncertainties of each measurement need to be well defined in order to understand the correlations between the results from different experiments. In this paper we discuss the dominant sources of systematic errors which lead to correlations between the different measurements. We point out problems connected with the conventional approach of combining lifetime data and discuss methods which overcome these problems. (orig.)

  9. The Lang-Trotter Conjecture on Average

    OpenAIRE

    Baier, Stephan

    2006-01-01

    For an elliptic curve $E$ over $\ratq$ and an integer $r$ let $\pi_E^r(x)$ be the number of primes $p\le x$ of good reduction such that the trace of the Frobenius morphism of $E/\fie_p$ equals $r$. We consider the quantity $\pi_E^r(x)$ on average over certain sets of elliptic curves. More particularly, we establish the following: if $A,B>x^{1/2+\epsilon}$ and $AB>x^{3/2+\epsilon}$, then the arithmetic mean of $\pi_E^r(x)$ over all elliptic curves $E$ : $y^2=x^3+ax+b$ with $a,b\in \intz$, $|a...
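
    For a single curve the quantity $\pi_E^r(x)$ can be checked by brute force; the sketch below counts Frobenius traces via the standard character sum #E(F_p) = p + 1 + sum_x chi(x^3 + ax + b). The curve, r and x are arbitrary choices.

```python
# Count primes p <= x of good reduction with Frobenius trace
# a_p = p + 1 - #E(F_p) equal to r, for E: y^2 = x^3 + ax + b.

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def pi_E_r(a, b, x, r):
    hits = 0
    for p in primes_up_to(x):
        if p == 2 or (4 * a**3 + 27 * b**2) % p == 0:
            continue                      # skip p = 2 and primes of bad reduction
        # #E(F_p) = p + 1 + sum_x chi(x^3 + ax + b)  =>  a_p = -sum_x chi(...)
        a_p = -sum(legendre(x0 * x0 * x0 + a * x0 + b, p) for x0 in range(p))
        hits += (a_p == r)
    return hits

print(pi_E_r(a=-1, b=1, x=3000, r=2))     # how often a_p = 2 for p <= 3000
```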

  10. Electromagnetic modes induced by averaged geodesic curvature

    International Nuclear Information System (INIS)

    Full text: Kinetic theory of geodesic acoustic and related modes is developed with emphasis on the electromagnetic effects due to electron parallel motion, higher order dispersion and drift effects. In general, dispersion of GAM is determined by the ion sound Larmor radius, ion Larmor radius, and electron inertia. The relative contribution of these effects depends on the particular regime and mode localization. It is shown that there exists a new type of electromagnetic (Alfven) mode induced by averaged geodesic curvature. It is also shown that the extended MHD (Grad hydrodynamics) exactly recovers the fluid limit of the kinetic dispersion relation for geodesic acoustic modes (GAMs). The coupling of modes of different polarization is investigated within the extended MHD and kinetic models. The role of drift effects, in particular the electron temperature gradient, on GAMs and related modes is investigated. (author)

  11. Average transverse momentum quantities approaching the lightfront

    CERN Document Server

    Boer, Daniel

    2014-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.

  12. Average prime-pair counting formula

    Science.gov (United States)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking $r>0$, let $\pi_{2r}(x)$ denote the number of prime pairs $(p,\,p+2r)$ with $p\le x$. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that $\pi_{2r}(x)\sim 2C_{2r}\,{\rm li}_2(x)$ with an explicit constant $C_{2r}>0$. There seems to be no good conjecture for the remainders $\omega_{2r}(x)=\pi_{2r}(x)-2C_{2r}\,{\rm li}_2(x)$ that corresponds to Riemann's formula for $\pi(x)-{\rm li}(x)$. However, there is a heuristic approximate formula for averages of the remainders $\omega_{2r}(x)$ which is supported by numerical results.
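
    The conjectured asymptotic is easy to probe numerically: count the pairs directly and compare with $2C_{2r}\,{\rm li}_2(x)$, where $C_{2r}$ is built from the twin prime constant. A sketch with a deliberately naive quadrature (the constant is truncated):

```python
import math

C2 = 0.6601618158   # twin prime constant, truncated

def sieve(n):
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return s

def pi_2r(x, r):
    s = sieve(x + 2 * r)
    return sum(1 for p in range(2, x + 1) if s[p] and s[p + 2 * r])

def C_2r(r):
    # C_{2r} = C_2 * prod over odd primes p | r of (p-1)/(p-2)
    c = C2
    for p in range(3, r + 1, 2):
        if r % p == 0 and all(p % q for q in range(3, int(p**0.5) + 1, 2)):
            c *= (p - 1) / (p - 2)
    return c

def li2(x, steps=200_000):
    # midpoint rule for the integral from 2 to x of dt / (ln t)^2
    h = (x - 2) / steps
    return h * sum(1.0 / math.log(2 + (k + 0.5) * h) ** 2 for k in range(steps))

x, r = 10**6, 1
print(f"pi_2r(x) counted : {pi_2r(x, r)}")
print(f"Hardy-Littlewood : {2 * C_2r(r) * li2(x):.1f}")
```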

  13. Medical decision making for patients with Parkinson disease under Average Cost Criterion.

    Science.gov (United States)

    Goulionis, John E; Vozikis, Athanassios

    2009-01-01

    Parkinson's disease (PD) is one of the most common disabling neurological disorders and results in a substantial burden for patients, their families and society as a whole, in terms of increased health resource use and poor quality of life. For all stages of PD, medication therapy is the preferred medical treatment. The failure of medical regimens to prevent disease progression and long-term side effects has led to a resurgence of interest in surgical procedures. Partially observable Markov decision processes (POMDPs) are a powerful and appropriate technique for decision making. In this paper we applied the POMDP model as a supportive tool for clinical decisions in the treatment of patients with Parkinson's disease. The aim of the model was to determine the critical threshold level at which to perform surgery in order to minimize the total costs over a patient's lifetime (where the costs incorporate duration of life, quality of life, and monetary units). Under some reasonable conditions reflecting the practical meaning of the deterioration, and based on the various diagnostic observations, we find an optimal average cost policy for patients with PD with three deterioration levels. PMID:19549341

  14. Development of an Advanced Flow Meter using the Averaging Bi-directional Flow Tube

    International Nuclear Information System (INIS)

    An advanced flow meter using the concept of an averaging bi-directional flow tube was developed. To find the characteristics of the flow meter and derive the theory of measurement in single- and two-phase flow conditions, basic tests were performed using flow meters with diameters of 27, 80 and 200 mm. CFD (computational fluid dynamics) calculations were also performed to find the effects of temperature and pressure, and to optimize the design of a prototype flow meter. Following this procedure, prototype flow meters with diameters of 200 and 500 mm were designed and manufactured. The meter is intended for use in the region in which the calibration constant is unchanged. The stress analysis showed that the proposed flow meter of H-beam shape is inherently strong against the bending force induced by the flow. A flow computer was developed to calculate the flow rate from the measured pressure difference. In this study, a performance test using this prototype flow meter was carried out. The developed flow meter can be applied over a wide range of pressures and temperatures. The basic tests showed that the linearity of the proposed flow meter is ± 0.5 % of full scale and the flow turndown ratio is 1:20 where the Reynolds number is larger than 10,000
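
    The conversion carried out by the flow computer is, at its core, the usual differential-pressure relation between flow rate and measured pressure drop. A minimal sketch, with an assumed calibration constant and fluid properties rather than those of the actual prototypes:

```python
import math

# Q = K * A * sqrt(2 * dP / rho): differential-pressure flow relation.
# K, the pipe size and the fluid state below are assumed values.

def volumetric_flow(dp_pa, rho, diameter_m, k_cal=0.6):
    area = math.pi * (diameter_m / 2.0) ** 2            # tube cross-section, m^2
    return k_cal * area * math.sqrt(2.0 * dp_pa / rho)  # m^3/s

# Example: 200 mm tube, water near room temperature, 5 kPa differential.
q = volumetric_flow(dp_pa=5_000.0, rho=998.0, diameter_m=0.200)
print(f"Q = {q * 3600:.1f} m^3/h")
```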

  15. Procedural justice and intragroup status: Knowing where we stand in a group enhances reactions to procedures

    OpenAIRE

    Prooijen, J.-W. van; Bos, K. van den; Wilke, H.A.M.

    2005-01-01

    The current research investigates the role of relative intragroup status as a moderator of people's reactions to procedural justice. Based on a review of the procedural justice literature, the authors argue that information about intragroup status influences people's reactions to variations in procedural justice. In correspondence with predictions, two experiments show that reactions of people who have been informed about their intragroup status position (either low, average, or high) are influ...

  16. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  17. Average path length for Sierpinski pentagon

    CERN Document Server

    Peng, Junhao

    2011-01-01

    In this paper, we investigate the diameter and average path length (APL) of the Sierpinski pentagon based on its recursive construction and self-similar structure. We find that the diameter of the Sierpinski pentagon is just the shortest path length between two nodes of generation 0. Deriving and solving the linear homogeneous recurrence relation the diameter satisfies, we obtain a rigorous solution for the diameter. We also obtain an approximate solution for the APL of the Sierpinski pentagon; both diameter and APL grow approximately as a power-law function of network order $N(t)$, with exponent equal to $\frac{\ln(1+\sqrt{3})}{\ln(5)}$. Although the solution for APL is approximate, it is trusted because we have calculated all items of APL accurately except for the compensation ($\Delta_{t}$) of total distances between non-adjacent branches ($\Lambda_t^{1,3}$), which is obtained approximately by least-squares curve fitting. The compensation ($\Delta_{t}$) is only a small part of total distances between non-adjacent branches ($\Lambda_t^{1...

  18. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers of tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power ~ 1 kW output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  19. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  20. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the constraint that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the resulting nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments with three land surface models show that the performance of BMA-BFGS is similar to that of the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
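
    A loose reconstruction of the optimization idea (not the authors' code): maximize the BMA mixture log-likelihood over non-negative weights with a limited-memory quasi-Newton solver, without forcing the weights to sum to one during the search, and normalize afterwards. Data here are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
T, K = 500, 3
truth = rng.normal(0.3, 0.08, size=T)                 # 'observed' series
F = truth[:, None] + rng.normal([0.02, -0.01, 0.05],
                                [0.03, 0.05, 0.02], size=(T, K))  # K member forecasts

def neg_loglik(theta):
    w, sig = theta[:K], theta[K:]
    dens = norm.pdf((truth[:, None] - F) / sig) / sig  # N(y_t | f_kt, sig_k^2)
    return -np.sum(np.log(dens @ w + 1e-300))          # mixture log-likelihood

theta0 = np.concatenate([np.full(K, 1.0 / K), np.full(K, 0.05)])
res = minimize(neg_loglik, theta0, method="L-BFGS-B",
               bounds=[(1e-6, None)] * (2 * K))        # weights only kept positive

w = res.x[:K] / res.x[:K].sum()                        # normalize at the end
print("BMA weights:", np.round(w, 3), "sigmas:", np.round(res.x[K:], 3))
```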

  1. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    Science.gov (United States)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and `splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^2n (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of
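
    The conditional-averaging step itself is compact to express. Below is a sketch with synthetic stand-in data (the stress model, threshold choice and array sizes are all invented); snapshots are averaged only when the instantaneous surface stress at a reference point exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, nx, nz = 2000, 64, 32
u = rng.normal(8.0, 1.0, size=(n_t, nx, nz))     # velocity snapshots (toy stand-in)
tau = 1.2e-3 * 0.5 * u[:, nx // 2, 0] ** 2       # toy surface stress at a point

tau_thresh = np.percentile(tau, 95)              # 'erosion event' threshold
events = tau > tau_thresh

u_cond = u[events].mean(axis=0)                  # stress-conditioned ensemble mean
u_mean = u.mean(axis=0)                          # unconditional time average

print(f"{events.sum()} events; conditional excess at the reference point: "
      f"{u_cond[nx // 2, 0] - u_mean[nx // 2, 0]:.3f} m/s")
```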

  2. Safety analysis procedures for PHWR

    Energy Technology Data Exchange (ETDEWEB)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements, such as the public dose, the integrity of fuel and fuel channel, the integrity of containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for safety analyses of CANDU reactors in Korea. In this regard, the development of safety analysis procedures for CANDU reactors is being conducted not only to establish the safety analysis system, but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures of the deterministic safety analyses were developed. The general safety procedures cover the specification of the initiating event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general procedures of the safety analyses are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, 4.

  3. 18 CFR 301.4 - Exchange Period Average System Cost determination.

    Science.gov (United States)

    2010-04-01

    ... Distribution Plant, Bonneville will escalate the Base Period average per-MWh cost of Distribution Plant forward... Distribution plant values in the Base Period, and then multiplying the Base Period ratio times the forecasted value for Production, Transmission, and Distribution plant. (13) Bonneville will issue procedural...

  4. A New CFAR Detector Based on Automatic Censoring Cell Averaging and Cell Averaging

    Directory of Open Access Journals (Sweden)

    Yuhua Qin

    2013-06-01

    Full Text Available In order to improve the interference immunity of the detector, a new CFAR detector (ACGCA-CFAR) based on automatic censoring cell averaging (ACCA) and cell averaging (CA) is presented in this paper. It takes the greater of the ACCA and CA local estimates as the noise power estimate. Under the Swerling II assumption, analytic expressions for the detection performance in a homogeneous background are derived. In contrast to other detectors, the ACGCA-CFAR detector has higher detection performance in both homogeneous and nonhomogeneous backgrounds, while the sample sorting time of ACGCA is only a quarter of that of OS and ACCA.
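
    A sketch of the greatest-of logic described above (window sizes, censoring depth and the threshold factor are illustrative, not taken from the paper):

```python
import numpy as np

def acgca_detect(x, cut, n_ref=16, n_guard=2, censor=4, alpha=5.0):
    """Declare a target in cell x[cut] using max(CA, ACCA) noise estimation."""
    lo = x[max(0, cut - n_guard - n_ref): cut - n_guard]   # leading reference cells
    hi = x[cut + n_guard + 1: cut + n_guard + 1 + n_ref]   # trailing reference cells
    ref = np.concatenate([lo, hi])

    ca = ref.mean()                          # plain cell averaging
    acca = np.sort(ref)[:-censor].mean()     # censor the largest cells first
    return x[cut] > alpha * max(ca, acca)    # greatest-of combination

rng = np.random.default_rng(3)
cells = rng.exponential(1.0, size=200)       # square-law noise samples
cells[100] += 40.0                           # injected target
print(acgca_detect(cells, cut=100), acgca_detect(cells, cut=50))
```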

  5. Global Average Brightness Temperature for April 2003

    Science.gov (United States)

    2003-01-01

    [Figure 1 removed for brevity; see original site.] This image shows average temperatures in April 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  6. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    Science.gov (United States)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
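
    The effect of the bias can be illustrated with the high-SNR approximation mentioned above: for Rician magnitude data, E[M^2] = A^2 + 2*sigma^2, so a bias-reduced amplitude is sqrt(M^2 - 2*sigma^2). A sketch with synthetic data and sigma assumed known:

```python
import numpy as np

rng = np.random.default_rng(4)
b = np.array([0., 200., 400., 600., 800., 1000.])    # b-values, s/mm^2
true_adc, s0, sigma = 1.0e-3, 100.0, 8.0

signal = s0 * np.exp(-b * true_adc)
magnitude = np.abs(signal + rng.normal(0, sigma, b.size)
                   + 1j * rng.normal(0, sigma, b.size))   # Rician sample

# High-SNR bias correction: E[M^2] = A^2 + 2 sigma^2
corrected = np.sqrt(np.clip(magnitude**2 - 2 * sigma**2, 1e-6, None))

# Least-squares fit of log S = log S0 - b * ADC
adc_naive = -np.polyfit(b, np.log(magnitude), 1)[0]
adc_hs    = -np.polyfit(b, np.log(corrected), 1)[0]
print(f"naive {adc_naive:.2e}, corrected {adc_hs:.2e}, true {true_adc:.2e}")
```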

  7. A Study of Applying Virtual Team to the Communication Strategy and Procedure for Collaboration in a Digital Media Design Project

    Directory of Open Access Journals (Sweden)

    Wei-Ru Chen

    2003-12-01

    Full Text Available The development of information communication technology enables a digital media design project to apply a virtual team approach to its communication strategy and procedure for collaboration, integrating dispersed professionals from different fields to accomplish a design task together. This study discusses the needs for building a virtual team in digital media design and how such a team works. The researchers explored 4 cases to examine how design teams carry out design activities through virtual collaboration, summarizing the communication strategies and collaboration procedures of virtual teams and analyzing the strengths and weaknesses of virtual collaboration for digital media design teams. The findings of this study show that the concept of the virtual team applied to digital media design is valid and effective. However, successful virtual teamwork requires the following conditions: 1. well-defined team targets and a sound member structure; 2. proper communication tools and design information; and 3. a well-organized procedure for collaboration.

  8. Risk-Sensitive and Average Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Karviná: Silesian University in Opava, School of Business Administration in Karviná, 2012 - (Ramík, J.; Stavárek, D.), s. 799-804 ISBN 978-80-7248-779-0. [30th International Conference Mathematical Methods in Economics 2012. Karviná (CZ), 11.09.2012-13.09.2012] R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Institutional support: RVO:67985556 Keywords : dynamic programming * stochastic models * risk analysis and management Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/E/Sladky-risk-sensitive and average optimality in markov decision processes .pdf

  9. Spatial Games Based on Pursuing the Highest Average Payoff

    Institute of Scientific and Technical Information of China (English)

    YANG Han-Xin; WANG Bing-Hong; WANG Wen-Xu; RONG Zhi-Hai

    2008-01-01

    We propose a strategy updating mechanism based on pursuing the highest average payoff to investigate the prisoner's dilemma game and the snowdrift game. We apply the new rule to investigate cooperative behaviours on regular, small-world and scale-free networks, and find that spatial structure can maintain cooperation in the prisoner's dilemma game. In the snowdrift game, spatial structure can inhibit or promote cooperative behaviour, depending on the payoff parameter. We further study cooperative behaviour on scale-free networks in detail. Interestingly, non-monotonic behaviour is observed on scale-free networks, where middle-degree individuals have the lowest cooperation level. We also find that large-degree individuals change their strategies more frequently in both games.
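
    The update rule is straightforward to state in code. A sketch on a ring lattice (the payoff values and neighborhood are common textbook choices, not necessarily the paper's settings): each player adopts the strategy of whichever neighbor, or itself, earned the highest average payoff per game in the last round.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T_pay = 200, 1.3                       # players; temptation to defect
strat = rng.integers(0, 2, size=N)        # 1 = cooperate, 0 = defect

def payoff(s1, s2):
    if s1 and s2:       return 1.0        # reward R
    if s1 and not s2:   return 0.0        # sucker S
    if not s1 and s2:   return T_pay      # temptation T
    return 0.1                            # punishment P  (T > R > P > S)

for _ in range(200):
    avg = np.array([np.mean([payoff(strat[i], strat[j])
                             for j in ((i - 1) % N, (i + 1) % N)])
                    for i in range(N)])   # average payoff per game, each player
    strat = np.array([strat[max(((i - 1) % N, i, (i + 1) % N),
                                key=lambda j: avg[j])]
                      for i in range(N)]) # imitate the highest average payoff

print(f"cooperation level after 200 steps: {strat.mean():.2f}")
```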

  10. COMPTEL Time-Averaged All-Sky Point Source Analysis

    CERN Document Server

    Collmar, W; Strong, A W; Blömen, H; Hermsen, W; McConnell, M; Ryan, J; Bennett, K

    1999-01-01

    We use all COMPTEL data from the beginning of the CGRO mission (April '91) up to the end of CGRO Cycle 6 (November '97) to carry out all-sky point source analyses in the four standard COMPTEL energy bands for different time periods. We apply our standard maximum-likelihood method to generate all-sky significance and flux maps for point sources by subtracting off the diffuse emission components via model fitting. In addition, fluxes of known sources have been determined for individual CGRO Phases/Cycles to generate light curves with a time resolution of the order of one year. The goal of the analysis is to derive quantitative results -- significances, fluxes, light curves -- for our brightest and most significant sources such as 3C 273, and to search for additional new COMPTEL sources, showing up in time-averaged maps only.

  11. Local versus average field failure criterion in amorphous polymers

    International Nuclear Information System (INIS)

    There is extensive work developing laws that predict yielding in amorphous polymers, ranging from the pioneering experimental work of Sternstein et al (1968 Appl. Polym. Symp. 7 175–99) to the novel molecular dynamics simulations of Jaramillo et al (2012 Phys. Rev. B 85 024114). While atomistic models render damage criteria in terms of local values of the stress and strain fields, experiments provide yield conditions in terms of the average values of these fields. Unfortunately, it is not possible to compare these results directly, owing to the differences in time and length scales. Here, we use a micromechanical phase-field damage model with parameters calculated from atomistic simulations to connect atomistic-scale and macroscopic-scale experiments. The phase-field damage model is used to study failure in composite materials. We find that the yield criterion should be described in terms of local stress and strain fields and cannot be extended directly from applied stress field values to determine yield conditions. (paper)

  12. Hearing Office Average Processing Time Ranking Report, April 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  13. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  14. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  15. Multifractal detrending moving-average cross-correlation analysis

    Science.gov (United States)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems and exhibit long-term power-law cross correlations. Multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on detrended fluctuation analysis (the MFXDFA method). We develop in this work a class of MFDCCA algorithms based on detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions for their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q < 0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the
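
    The heart of a DMA-based cross-correlation analysis is detrending the two cumulative profiles by their own moving averages; a bare-bones q = 2 sketch (not the full multifractal algorithm, using a centered odd-length window):

```python
import numpy as np

def dma_fluctuation(x, y, n):
    """q = 2 cross-correlation fluctuation at window size n (centered DMA)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    kernel = np.ones(n) / n
    Xs = np.convolve(X, kernel, mode="same")   # centered moving average
    Ys = np.convolve(Y, kernel, mode="same")
    eps = (X - Xs)[n:-n] * (Y - Ys)[n:-n]      # drop edge-affected samples
    return np.sqrt(np.abs(eps).mean())

rng = np.random.default_rng(6)
z = rng.normal(size=5000)
x = z + 0.3 * rng.normal(size=5000)            # two cross-correlated noises
y = z + 0.3 * rng.normal(size=5000)

scales = np.array([9, 17, 33, 65, 129, 257])
F = np.array([dma_fluctuation(x, y, n) for n in scales])
h_xy2 = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated h_xy(2) ~ {h_xy2:.2f}  (white noise expects ~0.5)")
```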

  16. Procedures for analyzing the effectiveness of siren systems for alerting the public

    Energy Technology Data Exchange (ETDEWEB)

    Keast, D.N.; Towers, D.A.; Anderson, G.S.; Kenoyer, J.L.; Desrosiers, A.E.

    1982-09-01

    NUREG-0654, Revision 1 (Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants), Appendix 3, discusses requirements of the licensees to implement a prompt notification system within the 10-mile emergency planning zone (EPZ) surrounding a nuclear facility. Sirens are being installed for use as part of or as the entire notification system by many licensees. This report describes a procedure for predicting siren system effectiveness under defined conditions within the EPZs. The procedure requires a good topographical map and knowledge of the meteorology, demographics, and human activity patterns within the EPZ. The procedure is intended to be applied to systems of sirens and to obtain average results for a large number (30 or more) of listener locations.

  17. Procedures for analyzing the effectiveness of siren systems for alerting the public

    International Nuclear Information System (INIS)

    NUREG-0654, Revision 1 (Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants), Appendix 3, discusses requirements of the licensees to implement a prompt notification system within the 10-mile emergency planning zone (EPZ) surrounding a nuclear facility. Sirens are being installed for use as part of or as the entire notification system by many licensees. This report describes a procedure for predicting siren system effectiveness under defined conditions within the EPZs. The procedure requires a good topographical map and knowledge of the meteorology, demographics, and human activity patterns within the EPZ. The procedure is intended to be applied to systems of sirens and to obtain average results for a large number (30 or more) of listener locations

  18. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    Science.gov (United States)

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  19. Average Fe K-alpha emission from distant AGN

    CERN Document Server

    Corral, A; Carrera, F J; Barcons, X; Mateos, S; Ebrero, J; Krumpe, M; Schwope, A; Tedds, J A; Watson, M G

    2008-01-01

    One of the most important parameters in the XRB (X-ray background) synthesis models is the average efficiency of accretion onto SMBH (super-massive black holes). This can be inferred from the shape of broad relativistic Fe lines seen in X-ray spectra of AGN (active galactic nuclei). Several studies have tried to measure the mean Fe emission properties of AGN at different depths with very different results. We compute the mean Fe emission from a large and representative sample of AGN X-ray spectra up to redshift ~ 3.5. We developed a method of computing the rest-frame X-ray average spectrum and applied it to a large sample (more than 600 objects) of type 1 AGN from two complementary medium sensitivity surveys based on XMM-Newton data, the AXIS and XWAS samples. This method makes use of medium-to-low quality spectra without needing to fit complex models to the individual spectra but with computing a mean spectrum for the whole sample. Extensive quality tests were performed by comparing real to simulated data, a...

  20. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n-n0)ln(n-n0) + b(n-n0) + c, where a, b and c are constants depending on K and n0 is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the ⟨ACN⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length ne(K) at which a statistical ensemble of configurations with given knot type K-upon cutting, equilibration and reclosure to a new knot type K'-does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.

  1. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to distinguish Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms calls for classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop an LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author)

  2. New procedure for departure formalities

    CERN Multimedia

    HR & GS Departments

    2011-01-01

    As part of the process of simplifying procedures and rationalising administrative processes, the HR and GS Departments have introduced new personalised departure formalities on EDH. These new formalities have applied to students leaving CERN since last year and from 17 October 2011 this procedure will be extended to the following categories of CERN personnel: Staff members, Fellows and Associates. It is planned to extend this electronic procedure to the users in due course. What purpose do departure formalities serve? The departure formalities are designed to ensure that members of the personnel contact all the relevant services in order to return any necessary items (equipment, cards, keys, dosimeter, electronic equipment, books, etc.) and are aware of all the benefits to which they are entitled on termination of their contract. The new departure formalities on EDH have the advantage of tailoring the list of services that each member of the personnel must visit to suit his individual contractual and p...

  3. Tortuosity and the Averaging of Microvelocity Fields in Poroelasticity.

    Science.gov (United States)

    Souzanchi, M F; Cardoso, L; Cowin, S C

    2013-03-01

    The relationship between the macro- and microvelocity fields in a poroelastic representative volume element (RVE) has not been fully investigated. This relationship is considered to be a function of the tortuosity: a quantitative measure of the deviation of the pore fluid streamlines from straight (not tortuous) paths in fluid-saturated porous media. There are different expressions for tortuosity based on the deviation from straight pores, harmonic wave excitation, or a kinetic energy loss analysis. The objective of the work presented is to determine the best expression for the tortuosity of a multiply interconnected open pore architecture in anisotropic porous media. The procedures for averaging the pore microvelocity over the RVE of poroelastic media by Coussy and by Biot were reviewed as part of this study, and the significant connection between these two procedures was established. Success was achieved in identifying the Coussy approach, based on kinetic energy loss in the pore fluid, as the most attractive expression for the tortuosity of porous media, based on pore fluid viscosity, porosity, and the pore architecture. The fabric tensor, a 3D measure of the architecture of the pore structure, was introduced in the expression of the tortuosity tensor for anisotropic porous media. Practical considerations for the measurement of the key parameters in the models of Coussy and Biot are discussed. In this study, we used cancellous bone as an example of interconnected pores and as a motivator for this study, but the results achieved are much more general and have a far broader application than just to cancellous bone. PMID:24891725

  4. An adjunctive minor surgical procedure for increased rate of retraction

    Directory of Open Access Journals (Sweden)

    Prabhakar Krishnan

    2013-01-01

    Full Text Available Introduction: Orthodontic treatment is based on the principle that if prolonged pressure is applied to a tooth, tooth movement will occur as the bone around the tooth remodels. In this study, osteotomy of the buccal alveolar plate and undermining of the interseptal bone were performed at premolar extraction sites, and the rates of en-masse retraction and canine retraction were evaluated. Materials and Methods: Patients between the ages of 18 and 25 years requiring retraction of anterior teeth were selected for the study. Osteotomy with undermining of the interseptal bone at the extraction site was performed in all four quadrants. Results: The average retraction in the maxillary arch was 0.98 mm/quadrant in 3 weeks, i.e., a total retraction of 5.89 mm over a span of 9 weeks. The average retraction in the mandibular arch was 0.96 mm/quadrant in 3 weeks, i.e., a total retraction of 5.75 mm over a span of 9 weeks. Conclusion: This method of achieving faster en-masse retraction immediately after extraction definitely reduced the initial retraction time. We recommend that such a procedure be carried out with appropriate anchorage conservation methods.

  5. Comparison of conventional averaged and rapid averaged, autoregressive-based extracted auditory evoked potentials for monitoring the hypnotic level during propofol induction

    DEFF Research Database (Denmark)

    Litvan, Héctor; Jensen, Erik W; Galan, Josefina; Lund, Jeppe; Rodriguez, Bernardo E; Henneberg, Steen W; Caminal, Pere; Villar Landeira, Juan M

    2002-01-01

    The extraction of the middle latency auditory evoked potentials (MLAEP) is usually done by moving time averaging (MTA) over many sweeps (often 250-1,000), which could produce a delay of more than 1 min. This problem was addressed by applying an autoregressive model with exogenous input (ARX) that...

  6. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    Science.gov (United States)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have on the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 is presented and shows efficiencies of 90% and higher for processes with no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  7. Correlation of average scaling coefficient with asymmetric parameter and average power index with quadrupole deformation parameter

    International Nuclear Information System (INIS)

    The nuclear structure of even-even nuclei in the ground state band and other excited bands with non-zero band head is collectively built. The level energy in the medium mass region deviates below the ideal rotor energy formula $E_I = AI(I+1)$. The average scaling coefficient with asymmetric parameter and bAV rise for Er-Os nuclei when N increases from 88 to 104

  8. Forecasting Equity Premium: Global Historical Average versus Local Historical Average and Constraints

    OpenAIRE

    Tae-Hwy Lee; Yundong Tu; Aman Ullah

    2014-01-01

    The equity premium, return on equity minus return on risk-free asset, is expected to be positive. We consider imposing such positivity constraint in local historical average (LHA) in nonparametric kernel regression framework. It is also extended to the semiparametric single index model when multiple predictors are used. We construct the constrained LHA estimator via an indicator function which operates as `model-selection' between the unconstrained LHA and the bound of the constraint (zero fo...

  9. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...

  10. Applied large eddy simulation.

    Science.gov (United States)

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail comes at substantial cost, which has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects was examined. It was to an extent concluded that for LES to make the most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. In addition, highly industry-sector-specific model parametrizations will be required, with clear thought on the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain, it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS, but with much greater fidelity. PMID:19531503

  11. Loss of reliable temporal structure in event-related averaging of naturalistic stimuli

    Science.gov (United States)

    Ben-Yakov, Aya; Honey, Christopher, J.; Lerner, Yulia; Hasson, Uri

    2014-01-01

    To separate neural signals from noise, brain responses measured in neuroimaging are routinely averaged across space and time. However, such procedures may obscure some properties of neural activity. Recently, multi-voxel pattern analysis methods have demonstrated that patterns of activity across voxels contain valuable information that is concealed by spatial averaging. Here we show that temporal patterns of neural activity contain information that can discriminate different stimuli, even within brain regions that show no net activation to that stimulus class. Furthermore, we find that in many brain regions, responses to natural stimuli are highly context dependent. In such cases, prototypical event-related responses do not even exist for individual stimuli, so that averaging responses to the same stimulus within different contexts may worsen the effective signal-to-noise. As a result, analysis of the temporal structures of single events can reveal aspects of neural dynamics which cannot be detected using standard event-related averaging methods. PMID:22813575

  12. Deliberations on the selection of a sterilization dose for product of low average bioburden

    International Nuclear Information System (INIS)

    ISO/TC 198 Working Group 2 is charged with the revision of ISO Standard 11137:1995. In reviewing dose setting Method 1, it has attempted to rationalize the selection of the lower limit of average bioburden for product to which Method 1 can be applied. Provisionally, this limit has been set at 1.0. The Working Group has also considered three proposals for handling sterilization dose selection for product with an average bioburden less than 1.0

  13. Deliberations on the selection of a sterilization dose for product of low average bioburden

    Energy Technology Data Exchange (ETDEWEB)

    Tallentire, A. E-mail: tallent@adl.u-net.com

    2004-10-01

    ISO/TC 198 Working Group 2 is charged with the revision of ISO Standard 11137:1995. In reviewing dose setting Method 1, it has attempted to rationalize the selection of the lower limit of average bioburden for product to which Method 1 can be applied. Provisionally, this limit has been set at 1.0. The Working Group has also considered three proposals for handling sterilization dose selection for product with an average bioburden less than 1.0.

  14. Deliberations on the selection of a sterilization dose for product of low average bioburden

    Science.gov (United States)

    Tallentire, A.

    2004-09-01

    ISO/TC 198 Working Group 2 is charged with the revision of ISO Standard 11137:1995. In reviewing dose setting Method 1, it has attempted to rationalize the selection of the lower limit of average bioburden for product to which Method 1 can be applied. Provisionally, this limit has been set at 1.0. The Working Group has also considered three proposals for handling sterilization dose selection for product with an average bioburden less than 1.0.

  15. Procedures in diagnostic radiology

    International Nuclear Information System (INIS)

    A book has been written on the procedures used in diagnostic radiology covering the subject areas of the urinary tract, gastrointestinal and biliary tracts, vascular radiology, cerebral angiography and arthrography. The explanation of each procedure follows a common layout which includes indications, equipment, technique and complications. The book is intended to be a reference book for radiology trainees learning to do practical procedures for the first time and also for practising radiologists not habitually performing certain diagnostic procedures. (UK)

  16. Implementation of procedures to NPP Krsko INTRANEK

    International Nuclear Information System (INIS)

    Part of the NEK documentation has already been made available on the NEK Intranet, such as the USAR, Technical Specifications, QA Plan, as well as some frequently used series of drawings. At present, the process of making all procedures available (hereinafter INTRANEK procedures) is in progress. The purpose of this project is the presentation of 1600 procedures with an average size of 30 pages, which amounts to more than 48000 pages altogether. ADOBE PDF (Portable Document Format) has been chosen as the most suitable format for the presentation of procedures on INTRANEK. The PDF format meets the following criteria: the appearance of a document page is always the same as the original and cannot be changed without control. In addition, full-text search is available, as well as easy jumps from procedure to procedure. Some changes to the working process on internal procedures had to be made before the project start, determining the responsibilities of individual users in the process. A workflow which enables easy daily maintenance has been prepared, the rules for both procedure numbering and folder contents/names have been set, and the server selected. The project was managed and implemented with the extensive use of computer-aided management, document distribution and control, databases, electronic mail and Intranet tools. The results of the practical implementation of NEK procedures and our experience with INTRANEK are presented in this paper. (author)

  17. Impulse Parameters Determination using the Moving Average Method for Digitally Measured Lightning Impulse Voltage Record

    Science.gov (United States)

    Sato, Shuji; Harada, Tatsuya; Kaneko, Kohei; Tsubouchi, Yoshifumi

    This paper clarifies an important fact: the time parameters evaluated from the mean curve obtained after applying a moving average are identical to the ones deduced from the original lightning impulse voltage. Digital recording devices are always associated with random thermal noise caused by the semiconductor components used in the recorder's pre-amplifier. This fact makes it possible to remove the random noise superposed on the measured impulse waveform by means of the moving average method. Using this finding, even jittery, noisy records can be averaged to a smooth curve from which one can easily distill identical time parameters.
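
    The observation is easy to reproduce: smooth a noisy double-exponential impulse with a centered moving average and evaluate an IEC-style front time from the 30%/90% level crossings before and after smoothing (waveform constants and the noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 50e-6, 5000)                          # 50 us record
clean = 1.05 * (np.exp(-t / 68e-6) - np.exp(-t / 0.4e-6))  # double exponential
noisy = clean + rng.normal(0.0, 0.01, t.size)              # recorder noise

window = 51                                                # odd-length window
mean_curve = np.convolve(noisy, np.ones(window) / window, mode="same")

def level_crossing(v, t, level):
    return t[np.argmax(v >= level * v.max())]              # first sample above level

for name, v in (("noisy", noisy), ("smoothed", mean_curve)):
    t30, t90 = level_crossing(v, t, 0.3), level_crossing(v, t, 0.9)
    print(f"{name:9s} front time T1 = {1.67 * (t90 - t30) * 1e6:.3f} us")
```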

  18. Radiation exposure to staff and patients during two endocrinological procedures

    International Nuclear Information System (INIS)

    The purpose of the present work is to obtain information about the exposure of patients and staff during percutaneous nephrolithotripsy (PCNL) and ureteroscopy with intracorporeal lithotripsy (URS), and to search for correlations between these parameters. The collected data for each procedure consist of the total air kerma-area product, PKA, the cumulative dose, CD, the fluoroscopy time, FT, and the number of images acquired, as well as clinical patient data. Average, minimum, maximum and median values were calculated for 38 patients. Mean values, with medians in parentheses, were as follows: 355 (383) cGy cm2 (PKA for PCNL); 433 (286) cGy cm2 (PKA for URS); 42 (37) mGy (CD for PCNL); 12 (7) mGy (CD for URS); 3.5 (3.0) min (FT for PCNL); 1.4 (1.3) min (FT for URS). The typical operator doses for PCNL and URS were assessed to be 66.1 μSv and 34.3 μSv, respectively, while the maximum doses for the same types of procedures were 152.6 μSv and 124.1 μSv. Good correlation was observed between the staff dose and PKA for both procedures, while the correlation of staff dose with CD and FT was found to be weak. Under the principles of radiation protection and a normal clinical workload, there is no possibility of exceeding the new annual eye-lens dose limit of 20 mSv per year averaged over 5 years. The correlation of PKA with FT and CD was also explored, and no significant interconnection was observed. (authors)

  19. Application effect of predictive nursing procedures in preventing syncope among ophthalmology outpatient surgery patients

    Institute of Scientific and Technical Information of China (English)

    岳利莹; 马冬萍; 姜培英

    2014-01-01

    Objective: To investigate the application effect of predictive nursing procedures in preventing syncope among ophthalmology outpatient surgery patients. Methods: 222 patients who underwent surgery at our hospital's ophthalmology outpatient department between November 2011 and November 2012 were randomly selected and divided into a nursing group and a control group. Patients in the nursing group received predictive nursing procedures, while patients in the control group received routine nursing procedures. The two groups were compared with respect to mental state before surgery, blood pressure and psychological changes during surgery, and the occurrence of syncope after surgery. Results: The blood pressure values, psychological condition and syncope incidence of the nursing group were better than those of the control group, and the differences were statistically significant. Conclusion: Predictive nursing intervention for ophthalmology outpatient surgery patients can improve their state of mind, ease their nervousness about the surgery, and effectively reduce the incidence of syncope.

  20. Crew procedures development techniques

    Science.gov (United States)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.

    1975-01-01

    The study developed requirements for, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.

  1. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated a lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results on such indicators of the visual attention process as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  2. Assessment of Average Tracer Concentration Approach for Flow Rate Measurement and Field Calibration

    Directory of Open Access Journals (Sweden)

    P. Sidauruk

    2015-12-01

    Full Text Available The tracer method is one of the methods available for open channel flow rate measurement, such as in irrigation canals. The average tracer concentration approach is an instantaneous injection method based on the average tracer concentration value at the sampling point. If the procedures are correct and the scientific considerations are justified, the tracer method gives relatively high measurement accuracy. The accuracy of the average tracer concentration approach has been assessed both in the laboratory and in the field. The results of accuracy tests of open channel flow conducted at the Center for Application of Isotopes and Radiation Laboratory-BATAN showed that the accuracy of the average concentration approach was higher than 90% compared to the true value (volumetric flow rate). The accuracy of the average tracer concentration approach was also assessed during the application of the method to measure the flow rate of the Mrican irrigation canals, as an effort to perform field calibration of the existing weirs. Both the average tracer concentration approach and the weirs predict the trend of the flow correctly. However, it was observed that flow discrepancies between the weir measurements and the average tracer concentration approach predictions were as high as 27%. The discrepancies might be due to the degraded performance of the weirs caused by previous floods and the high sediment content of the flow.
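
    For readers unfamiliar with dilution gauging, the sketch below shows the arithmetic behind the approach, under stated assumptions: for an instantaneous (slug) injection of tracer mass M, the flow rate is Q = M / ∫c(t)dt, and the average-concentration form replaces the integral with the mean concentration times the passage duration. All numbers are invented for illustration.

```python
import numpy as np

def flow_rate_slug_injection(mass_injected, t, c):
    """Dilution gauging for instantaneous injection: Q = M / integral(c dt).

    mass_injected : tracer mass M [mg]
    t             : sampling times [s]
    c             : tracer concentration above background [mg/m^3]
    """
    return mass_injected / np.trapz(c, t)

def flow_rate_average_concentration(mass_injected, c_avg, duration):
    """Equivalent form using the average concentration over the passage
    time T at the sampling point: Q = M / (c_avg * T)."""
    return mass_injected / (c_avg * duration)

# Illustrative numbers (assumed, not from the paper): 500 mg of tracer,
# a 200 s tracer cloud sampled every 2 s.
t = np.arange(0.0, 200.0, 2.0)
c = 5.0 * np.exp(-((t - 60.0) / 25.0) ** 2)   # mg/m^3, Gaussian-like cloud
Q1 = flow_rate_slug_injection(500.0, t, c)
Q2 = flow_rate_average_concentration(500.0, c.mean(), t[-1] - t[0])
print(f"Q = {Q1:.3f} m^3/s (integral), {Q2:.3f} m^3/s (average concentration)")
```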

  3. Increasing compliance with medical procedures: application of the high-probability request procedure to a toddler.

    OpenAIRE

    McComas, J J; Wacker, D P; Cooper, L J

    1998-01-01

    The effects of high-probability (high-p) requests on compliance with low-probability (low-p) responses have received increased attention from applied investigators. This study examined the effects of a high-p procedure on a toddler's compliance with medical procedures. Compliance to low-p requests occurred more frequently following compliance to high-p requests, suggesting that this procedure may be useful across different topographies of compliance.

  4. Averaging Tesseral Effects: Closed Form Relegation versus Expansions of Elliptic Motion

    Directory of Open Access Journals (Sweden)

    Martin Lara

    2013-01-01

    Full Text Available Longitude-dependent terms of the geopotential cause nonnegligible short-period effects in orbit propagation of artificial satellites. Hence, accurate analytical and semianalytical theories must cope with tesseral harmonics. Modern algorithms for dealing analytically with them allow for closed form relegation. Nevertheless, current procedures for the relegation of tesseral effects from subsynchronous orbits are unavoidably related to orbit eccentricity, a key fact that is not sufficiently emphasized and that constrains the application of this technique to small and moderate eccentricities. Comparisons with averaging procedures based on classical expansions of elliptic motion are carried out, and the pros and cons of each approach are discussed.

  5. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  6. Estimation of the average correlation coefficient for stratified bivariate data.

    Science.gov (United States)

    Rubenstein, L M; Davis, C S

    1999-03-15

    If the relationship between two ordered categorical variables X and Y is influenced by a third categorical variable with K levels, the Cochran-Mantel-Haenszel (CMH) correlation statistic QC is a useful stratum-adjusted summary statistic for testing the null hypothesis of no association between X and Y. Although motivated by and developed for the case of K I x J contingency tables, the correlation statistic QC is also applicable when X and Y are continuous variables. In this paper we derive a corresponding estimator of the average correlation coefficient for K I x J tables. We also study two estimates of the variance of the average correlation coefficient. The first is a restricted variance based on the variances of the observed cell frequencies under the null hypothesis of no association. The second is an unrestricted variance based on an asymptotic variance derived by Brown and Benedetti. The estimator of the average correlation coefficient works well in tables with balanced and unbalanced margins, for equal and unequal stratum-specific sample sizes, when correlation coefficients are constant over strata, and when correlation coefficients vary across strata. When the correlation coefficients are zero, close to zero, or the cell frequencies are small, the confidence intervals based on the restricted variance are preferred. For larger correlations and larger cell frequencies, the unrestricted confidence intervals give superior performance. We also apply the CMH statistic and proposed estimators to continuous non-normal data sampled from bivariate gamma distributions. We compare our methods to statistics for data sampled from normal distributions. The size and power of the CMH and normal theory statistics are comparable. When the stratum-specific sample sizes are small and the distributions are skewed, the proposed estimator is superior to the normal theory estimator. When the correlation coefficient is zero or close to zero, the restricted confidence intervals are preferred.
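
    The estimator derived in the paper is built from the CMH correlation statistic QC; as a rough illustration of the underlying notion of an average correlation over K strata, the sketch below computes a simple sample-size-weighted average of per-stratum Pearson correlations. This is not the authors' estimator, only a baseline to make the idea concrete.

```python
import numpy as np

def average_correlation(strata):
    """Sample-size-weighted average of per-stratum Pearson correlations.

    strata : list of (x, y) array pairs, one pair per stratum (K strata).
    Weighting by n_k - 1 is one simple convention; the estimator in the
    paper is based on the CMH correlation statistic instead.
    """
    weights, corrs = [], []
    for x, y in strata:
        x, y = np.asarray(x, float), np.asarray(y, float)
        corrs.append(np.corrcoef(x, y)[0, 1])
        weights.append(len(x) - 1)
    return np.average(corrs, weights=weights)

# Illustrative data: K = 3 strata with unequal sizes and correlations.
rng = np.random.default_rng(1)
strata = []
for n, rho in [(20, 0.3), (50, 0.5), (15, 0.4)]:
    x = rng.normal(size=n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
    strata.append((x, y))
print(f"average correlation ~ {average_correlation(strata):.3f}")
```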

  7. Procedure generation and verification

    International Nuclear Information System (INIS)

    The Department of Energy has used Artificial Intelligence (''AI'') concepts to develop two powerful new computer-based techniques to enhance safety in nuclear applications. The Procedure Generation System and the Procedure Verification System can be adapted to other commercial applications, such as a manufacturing plant. The Procedure Generation System can create a procedure to deal with an off-normal condition, so that the operator can then take correct actions on the system in minimal time. The Verification System evaluates the logic of the Procedure Generator's conclusions. This evaluation uses logic techniques totally independent of the Procedure Generator. The rapid, accurate generation and verification of corrective procedures can greatly reduce the human error possible in a complex, high-stress situation.

  8. Procedure system principles

    International Nuclear Information System (INIS)

    Procedures establish how an organization performs specific tasks to accomplish its basic goals and objectives. In 1990, the US Department of Energy (DOE) started the process of defining requirements to be used for procedures throughout the DOE nuclear complex. In response to the customer needs of the complex, the department's initial steps in this process were to develop two documents: a writer's guide for technical procedures and the principles of procedure writing upon which it was based. The US Nuclear Regulatory Commission and Institute for Nuclear Power Operations literature addresses the control and use of procedures. There was, however, no regulatory guidance on developing and managing a procedure system. In 1991, an effort was undertaken to provide such guidance for DOE nuclear facilities. A companion paper by Grider and Plung describes how a performance-tested advisory group developed this guidance. This paper describes the procedure system principles that were developed through this process.

  9. On the Individual Expectations of Non-Average Investors

    OpenAIRE

    Lucia Del Chicca; Gerhard Larcher

    2011-01-01

    An “average investor” is an investor who has “average risk aversion”, “average expectations” on the market returns and should invest in the “market portfolio” (this is, according to the Capital Asset Pricing Model, the best possible portfolio for such an investor). He is compared with a “non-average investor”: in our setting, this is an investor who has the same “average risk aversion” but invests in other investment strategies, for example options. Such a “non-average investor” must cons...

  10. Interval and fuzzy Average Internal Rate of Return for investment appraisal

    OpenAIRE

    Maria Letizia Guerra; Carlo Alberto Magni; Luciano Stefanini

    2012-01-01

    In investment appraisal, uncertainty can be managed through intervals or fuzzy numbers because the arithmetical properties and the extension principle are well established and can be successfully applied in a rigorous way. We apply interval and fuzzy numbers to the Average Internal Rate of Return (AIRR), recently introduced for overcoming the problems of the traditional Internal Rate of Return (IRR). In the setting of interval and fuzzy arithmetic, we establish relations between the interim c...

  11. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  12. Cost averaging techniques for robust control of flexible structural systems

    Science.gov (United States)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  13. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Keywords: Surface runoff * Average runoff * Surface waters * United States

  14. The SU(N) Wilson Loop Average in 2 Dimensions

    OpenAIRE

    Karjalainen, Esa

    1993-01-01

    We solve explicitly a closed, linear loop equation for the SU(2) Wilson loop average on a two-dimensional plane and generalize the solution to the case of the SU(N) Wilson loop average with an arbitrary closed contour. Furthermore, the flat space solution is generalized to any two-dimensional manifold for the SU(2) Wilson loop average and to any two-dimensional manifold of genus 0 for the SU(N) Wilson loop average.

  15. Resolution improvement by 3D particle averaging in localization microscopy

    International Nuclear Information System (INIS)

    Inspired by recent developments in localization microscopy that applied averaging of identical particles in 2D for increasing the resolution even further, we discuss considerations for alignment (registration) methods for particles in general and for 3D in particular. We detail that traditional techniques for particle registration from cryo electron microscopy based on cross-correlation are not suitable, as the underlying image formation process is fundamentally different. We argue that only localizations, i.e. a set of coordinates with associated uncertainties, are recorded and not a continuous intensity distribution. We present a method that takes this fact into account and that is inspired by the field of statistical pattern recognition. In particular we suggest to use an adapted version of the Bhattacharyya distance as a merit function for registration. We evaluate the method in simulations and demonstrate it on 3D super-resolution data of Alexa 647 labelled Nup133 protein in the nuclear pore complex of Hela cells. From the simulations we find suggestions that for successful registration the localization uncertainty must be smaller than the distance between labeling sites on a particle. These suggestions are supported by theoretical considerations concerning the attainable resolution in localization microscopy and its scaling behavior as a function of labeling density and localization precision. (paper)
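
    The authors' adapted version of the Bhattacharyya distance is not reproduced here; the sketch below assembles a registration merit from the standard Bhattacharyya distance between Gaussians, treating each localization as a Gaussian centered at its coordinates with covariance set by its uncertainty. All function names and numbers are illustrative.

```python
import numpy as np

def bhattacharyya_gaussians(mu1, cov1, mu2, cov2):
    """Standard Bhattacharyya distance between two Gaussian distributions."""
    cov = 0.5 * (cov1 + cov2)
    diff = np.asarray(mu1, float) - np.asarray(mu2, float)
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def registration_merit(locs_a, sig_a, locs_b, sig_b):
    """Sum of pairwise Bhattacharyya coefficients between two particles;
    larger means better overlap, so maximize over trial transformations."""
    merit = 0.0
    for mu1, s1 in zip(locs_a, sig_a):
        for mu2, s2 in zip(locs_b, sig_b):
            d = bhattacharyya_gaussians(mu1, np.diag(s1**2),
                                        mu2, np.diag(s2**2))
            merit += np.exp(-d)   # Bhattacharyya coefficient
    return merit

# Two localizations 10 nm apart with 15 nm and 20 nm uncertainties:
mu1, mu2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
s1, s2 = np.full(3, 15.0), np.full(3, 20.0)
print(f"merit = {registration_merit([mu1], [s1], [mu2], [s2]):.4f}")
```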

  16. Analytic continuation by averaging Padé approximants

    Science.gov (United States)

    Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grånäs, Oscar; Eriksson, Olle; Di Marco, Igor

    2016-02-01

    The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
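
    A minimal sketch of the averaging idea, assuming input data on the Matsubara axis: build Thiele continued-fraction Padé interpolants (the standard Vidberg-Serene recursion) for several choices of the number of fitted input points and average the resulting continuations. The grid of point counts and the broadening eta are assumptions, and the independent variation of the number of Padé coefficients described above is omitted.

```python
import numpy as np

def thiele_coefficients(z, u):
    """Continued-fraction Pade coefficients through the points (z_i, u_i)
    (Vidberg-Serene recursion, standard in analytic continuation)."""
    z = np.asarray(z, complex)
    g = np.asarray(u, complex).copy()
    a = np.empty(len(z), complex)
    a[0] = g[0]
    for p in range(1, len(z)):
        g[p:] = (a[p - 1] - g[p:]) / ((z[p:] - z[p - 1]) * g[p:])
        a[p] = g[p]
    return a

def pade_eval(a, z_in, w):
    """Evaluate C(w) = a0 / (1 + a1 (w - z0) / (1 + a2 (w - z1) / ...))
    by backward recurrence."""
    out = np.ones_like(w, dtype=complex)
    for p in range(len(a) - 1, 0, -1):
        out = 1.0 + a[p] * (w - z_in[p - 1]) / out
    return a[0] / out

def averaged_continuation(iw, giw, w_real, eta=0.01, counts=range(40, 61, 4)):
    """Average several Pade continuations obtained by varying the number
    of fitted input points, in the spirit of the averaging proposed above."""
    w = np.asarray(w_real) + 1j * eta
    conts = []
    for n in counts:
        a = thiele_coefficients(iw[:n], giw[:n])
        conts.append(pade_eval(a, np.asarray(iw[:n], complex), w))
    return np.mean(conts, axis=0)

# Example: continue G(iw_n) = 1/(iw_n + 0.5) to just above the real axis.
iw = 1j * np.pi * (2 * np.arange(64) + 1) / 20.0   # fermionic grid, beta = 20
giw = 1.0 / (iw + 0.5)
G = averaged_continuation(iw, giw, np.linspace(-2.0, 2.0, 5))
print(np.round(G, 3))   # compare with 1/(w + i*eta + 0.5)
```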

  17. Model characteristics of average skill boxers’ competition functioning

    Directory of Open Access Journals (Sweden)

    Martsiv V.P.

    2015-08-01

    Full Text Available Purpose: analysis of the competition functioning of average skill boxers. Material: 28 fights of student boxers were analyzed. The following coefficients were determined: effectiveness of punches and reliability of defense. The fights were conducted by the formula: 3 rounds of 3 minutes each. Results: model characteristics of boxers for the stage of specialized basic training have been worked out. Correlations between indicators of specialized and general exercises have been determined. It has been established that the sportsmanship of boxers manifests as an increase of punch density in a fight. It has also been found that an increase of the coefficient of punch effectiveness results in an expansion of the arsenal of technical-tactical actions. The importance of considering standard specialized loads has been confirmed. Conclusions: we have recommended means to be applied in the training process at this stage of training. On the basis of our previous research we have made recommendations on the complex assessment of student sportsmen's skillfulness. Besides, we have shown approaches to the improvement of different sides of sportsmen's fitness.

  18. 40 CFR 401.13 - Test procedures for measurement.

    Science.gov (United States)

    2010-07-01

    ... AND STANDARDS, GENERAL PROVISIONS, § 401.13 Test procedures for measurement. The test procedures for measurement which are prescribed at part 136 of this chapter shall apply to expressions of pollutant...

  19. On the apparent saturation of the average disorientation angle with plastic deformation

    DEFF Research Database (Denmark)

    Pantleon, W.

    2005-01-01

    Contradictory results have been concluded from transmission electron microscopy studies of disorientations evolving during plastic deformation: whereas saturation of the average disorientation angles across all dislocation boundaries was reported, a continued increase was observed if different boundary types were distinguished. The apparent saturation is traced to a nontrivial artifact of the evaluation procedure.

  20. Near-elastic vibro-impact analysis by discontinuous transformations and averaging

    OpenAIRE

    Thomsen, Jon Juel; Fidlin, Alexander

    2008-01-01

    We show how near-elastic vibro-impact problems, linear or nonlinear in-between impacts, can be conveniently analyzed by a discontinuity-reducing transformation of variables combined with an extended averaging procedure. A general technique for this is presented and illustrated by calculating transient or stationary motions for different harmonic oscillators with stops or clearances, and for self-excited friction oscillators with stops or clearances. First- and second-order analytical predictions ...

  1. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

    A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure, which can potentially be useful for spectral analysis. Furthermore, another variant of spectral averaging, over coupling constants for these operators, is presented.

  2. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    Averaging, Banking, and Trading for Certification, § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families...

  3. Sample Size Bias in Judgments of Perceptual Averages

    Science.gov (United States)

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  4. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  5. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  6. Evaluation of the average ion approximation for a tokamak plasma

    International Nuclear Information System (INIS)

    The average ion approximation, sometimes used to calculate atomic processes in plasmas, is assessed by computing deviations in various rates over a set of conditions representative of tokamak edge plasmas. Conditions are identified under which the rates are primarily a function of the average ion charge and the plasma parameters, as assumed in the average ion approximation. (Author) 19 refs., tab., 5 figs

  7. Optimization of CT procedures

    International Nuclear Information System (INIS)

    Full text: In recent years computed tomography (CT) has become a powerful diagnostic method. Technological advances in CT have allowed improvement of image quality, but have set a number of challenges for all professionals working in the field of diagnostic imaging. CT capabilities expand, and thus increase the need for better training and qualification of the staff and engineers responsible for the optimal functioning of the CT system. Despite the variety of technical innovations for dose reduction, obtaining images with good diagnostic quality is often associated with an increased dose. There is a lack of consensus in radiological practice about the use of contrast media and the image quality requirements. A common opinion is that the manufacturer's settings are optimal. All this leads to large variations of doses for the same examinations and shows the need to optimize procedures. Learning objectives: optimization is not a single act but a process involving all the experts conducting the study, and the presence of a qualified medical physicist as part of the team is important. It is the team's responsibility to choose which procedures have to be optimized. In most cases, the choice is between the most commonly performed procedures and those where the diagnostic value of the image is suspect. Special attention should be paid to examinations of children, in order to exclude the use of protocols for adults, which leads to unnecessarily high doses. Each member of the team must be aware of the relationship between patient dose and image quality. The choice of how to conduct the study depends on the diagnostic purpose. In some examinations, such as CT urography, obtaining a high quality image at the cost of a higher dose does not improve the diagnostic outcome. On the other hand, optimization is not necessarily associated with dose reduction. In cases where the images are not of adequate diagnostic quality, it is necessary to use a CT protocol with a higher dose. Several international documents show the

  8. Civil Procedure In Denmark

    DEFF Research Database (Denmark)

    Werlauff, Erik

    The book contains an up-to-date survey of Danish civil procedure after the profound Danish procedural reforms in 2007. It deals with questions concerning the competence and function of Danish courts, commencement and preparation of civil cases, questions of evidence and burden of proof, international procedural questions, including relations to the Brussels I Regulation and Denmark's participation in this Regulation via a parallel convention with the EU countries, the impact on Danish civil procedure of the convention on human rights, preparation and pronouncement of judgment and verdict, and questions of appeal... The book is partly based on scientific activities conducted by the author, partly on the author's experience as a member, through a number of years, of the Danish Standing Committee on Procedural Law (Retsplejeraadet), which on a continuous basis evaluates the need for civil procedural reforms in Denmark, and finally also based...

  9. Essays in Applied Microeconomics

    Science.gov (United States)

    Severnini, Edson Roberto

    This dissertation consists of three studies analyzing causes and consequences of location decisions by economic agents in the U.S. In Chapter 1, I address the longstanding question of the extent to which the geographic clustering of economic activity may be attributable to agglomeration spillovers as opposed to natural advantages. I present evidence on this question using data on the long-run effects of large scale hydroelectric dams built in the U.S. over the 20th century, obtained through a unique comparison between counties with or without dams but with similar hydropower potential. Until mid-century, the availability of cheap local power from hydroelectric dams conveyed an important advantage that attracted industry and population. By the 1950s, however, these advantages were attenuated by improvements in the efficiency of thermal power generation and the advent of high tension transmission lines. Using a novel combination of synthetic control methods and event-study techniques, I show that, on average, dams built before 1950 had substantial short run effects on local population and employment growth, whereas those built after 1950 had no such effects. Moreover, the impact of pre-1950 dams persisted and continued to grow after the advantages of cheap local hydroelectricity were attenuated, suggesting the presence of important agglomeration spillovers. Over a 50 year horizon, I estimate that at least one half of the long run effect of pre-1950 dams is due to spillovers. The estimated short and long run effects are highly robust to alternative procedures for selecting synthetic controls, to controls for confounding factors such as proximity to transportation networks, and to alternative sample restrictions, such as dropping dams built by the Tennessee Valley Authority or removing control counties with environmental regulations. I also find small local agglomeration effects from smaller dam projects, and small spillovers to nearby locations from large dams. Lastly

  10. Applied iterative methods

    CERN Document Server

    Hageman, Louis A

    2004-01-01

    This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a "red/black" block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and comp

  11. Automating Supplier Selection Procedures

    OpenAIRE

    Davidrajuh, Reggie

    2001-01-01

    This dissertation describes a methodology, tools, and implementation techniques of automating supplier selection procedures of a small and medium-sized agile virtual enterprise. Firstly, a modeling approach is devised that can be used to model the supplier selection procedures of an enterprise. This modeling approach divides the supplier selection procedures broadly into three stages, the pre-selection, selection, and post-selection stages. Secondly, a methodology is presented for automating ...

  12. Pyroshock prediction procedures

    Science.gov (United States)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  13. Electron-ion collisions in the average-configuration distorted-wave approximation

    International Nuclear Information System (INIS)

    Explicit expressions for the electron-impact excitation, ionization, and resonant-recombination cross sections are derived in the average-configuration distorted-wave approximation. Calculations using these expressions are applied to several types of phenomena in electron-ion scattering where comparison with other theoretical methods and experimental measurements can be made. 24 refs., 5 figs

  14. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    International Nuclear Information System (INIS)

    Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  15. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)

    2012-02-15

    Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  16. Average optimization of the approximate solution of operator equations and its application

    Institute of Scientific and Technical Information of China (English)

    WANG; xinghua(王兴华); MA; Wan(马万)

    2002-01-01

    In this paper, a definition of the optimization of operator equations in the average case setting is given, and a general result (Theorem 1) about the relevant optimization problem is obtained. This result is applied to the optimization of the approximate solution of some classes of integral equations.

  17. Exponentially Weighted Moving Average Chart as a Suitable Tool for Nuchal Translucency Quality Review

    Czech Academy of Sciences Publication Activity Database

    Hynek, M.; Smetanová, D.; Stejskal, D.; Zvárová, Jana

    2014-01-01

    Roč. 34, č. 4 (2014), s. 367-376. ISSN 0197-3851 Institutional support: RVO:67985807 Keywords : nuchal translucency * exponentially weighted moving average model * statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.268, year: 2014
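
    A minimal sketch of an EWMA control chart of the kind referred to in the title. The smoothing constant, the control-limit width, and the use of the data themselves to estimate the target and spread are illustrative assumptions; in nuchal translucency quality review these would come from a reference population.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Exponentially weighted moving average control chart.

    z_t = lam * x_t + (1 - lam) * z_{t-1}, with time-varying limits
    mu0 +/- L * sigma * sqrt(lam/(2-lam) * (1 - (1-lam)^(2t))).
    """
    x = np.asarray(x, float)
    mu0, sigma = x.mean(), x.std(ddof=1)   # assumption: estimated in-sample
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1 - lam) * mu0
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    t = np.arange(1, len(x) + 1)
    hw = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - hw, mu0 + hw

# Flag measurements drifting away from the target:
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(0.8, 1.0, 40)])
z, lcl, ucl = ewma_chart(x)
print("first out-of-control index:", int(np.argmax((z < lcl) | (z > ucl))))
```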

  18. Surface Representation of Polycrystal Physical Properties: All Crystal Classes, Simple Average Approximation

    OpenAIRE

    Raymond, O.; Fuentes, L. (Lidia); Gómez, J. I.

    1996-01-01

    Algorithms for polycrystal physical properties estimation are presented. Bunge's spherical harmonics treatment of surface representations, under simple average approximation, is applied. Specific formulae for so-called longitudinal magnitudes are given. Physical properties associated to tensors of second-, third- and fourth-rank are considered. All crystal and sample symmetries are covered.

  19. A Comparison Between Two Average Modelling Techniques of AC-AC Power Converters

    Directory of Open Access Journals (Sweden)

    Pawel Szczesniak

    2015-03-01

    Full Text Available In this paper, a comparative evaluation of two modelling tools for switching AC-AC power converters is presented. Both are based on average modelling techniques. The first approach is based on the circuit averaging technique and consists of topological manipulations applied to the converter's switching states. The second approach makes use of a state-space averaged model of the converter and is based on analytical manipulations using different state representations of the converter. The two modelling techniques are applied to the same AC-AC converter, called a matrix-reactance frequency converter, based on a buck-boost topology. The techniques are compared on the basis of their speed, the quantity of calculations and transformations involved, and their limitations.
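
    To make the state-space averaging idea concrete, here is a minimal sketch for a plain DC buck-boost converter rather than the matrix-reactance frequency converter studied in the paper: the two switching-state models are blended with the duty cycle d to form the averaged model, whose DC operating point then follows by linear algebra. Component values are assumptions.

```python
import numpy as np

# State-space averaging: with switch position 1 active for a fraction d of
# each period and position 2 for (1 - d), the averaged model is
#   dx/dt = (d*A1 + (1-d)*A2) x + (d*B1 + (1-d)*B2) u.
L_ind, C, R, d = 1e-3, 100e-6, 10.0, 0.6   # illustrative component values

# x = [inductor current, capacitor voltage], u = [input voltage]
A1 = np.array([[0.0, 0.0], [0.0, -1.0 / (R * C)]])                # switch on
B1 = np.array([[1.0 / L_ind], [0.0]])
A2 = np.array([[0.0, -1.0 / L_ind], [1.0 / C, -1.0 / (R * C)]])   # switch off
B2 = np.array([[0.0], [0.0]])

A_avg = d * A1 + (1 - d) * A2
B_avg = d * B1 + (1 - d) * B2

# DC operating point of the averaged model: x_ss = -A_avg^{-1} B_avg u
u = np.array([12.0])
x_ss = -np.linalg.solve(A_avg, B_avg @ u)
print(f"averaged steady state: iL = {x_ss[0]:.2f} A, vC = {x_ss[1]:.2f} V")
```

    For these values the averaged model reproduces the textbook buck-boost gain d/(1-d): a 12 V input at d = 0.6 gives an 18 V output.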

  20. On averaging force curves over heterogeneous surfaces in atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, I., E-mail: sokolov@clarkson.edu [Department of Physics, Clarkson University, Potsdam, NY 13699 (United States); Nanoengineering and Biotechnology Laboratories Center (NABLAB), Clarkson University, NY 13699 (United States); Kalaparthi, V.; Kreshchuk, M.; Dokukin, M.E. [Department of Physics, Clarkson University, Potsdam, NY 13699 (United States)

    2012-10-15

    Highlights: ► ... (AFM). ► Averaging of force curves collected over a heterogeneous surface is analyzed. ► We show that averaging on the force and distance axes is not equivalent. ► Possible artifacts due to the averaging procedures are analyzed. ► An example of an AFM study of human cervical epithelial cells is presented.

  1. Evaluation of annual average equivalent dose of workers for nuclear medicine facilities in the Northeast Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Lira, Renata F.; Silva Neto, Jose Almeida; Antonio Filho, Joao, E-mail: jaf@ufpe.br [Universidade Federal de Pernambuco (UFPE/DEN), Departamento de Energia Nuclear, Recife, PE (Brazil); Santos, Luiz A.P., E-mail: lasantos@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2011-07-01

    Nuclear Medicine (NM) is a radiation technique normally used for therapeutic treatment or diagnosis. In this technique a small quantity of radioactive material combined with drugs is used to obtain diagnostic images. Any activity involving ionizing radiation should be justified and its working procedures must be optimized. The purpose of this paper is to show the importance of optimization of radiation protection systems and to determine an optimal dose for occupational workers in nuclear medicine. Such an optimization aims to avoid any possible contamination or accidents, and to reduce the costs of protection. The optimization of a service which manipulates ionizing radiation can be done using different techniques; among others, we can mention the technique of expanded cost-benefit analysis. The data collection was divided into the annual average equivalent dose and the average equivalent dose over the period. The database for this study was a survey of the doses received by 87 occupational workers at 10 nuclear medicine facilities in northeast Brazil over a period of 13 years (1979-1991). The results show that the average equivalent dose over the period, H, was 2.39 mSv. Since 1992 the analysis has been in progress, and it shows that the annual average equivalent dose could be reduced even further if working procedures are followed correctly. (author)

  2. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Science.gov (United States)

    2010-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... § 62.15210 How do I convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation...

  3. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Science.gov (United States)

    2010-07-01

    § 60.1755 How do I convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation in § 60.1935... calculate the 4-hour or 24-hour daily block averages (as applicable) for concentrations of carbon monoxide....

  4. LEVERAGE EFFECT FORECAST FOR THE YEAR 2014 THROUGH THE MOVING AVERAGE METHOD

    Directory of Open Access Journals (Sweden)

    HADA TEODOR

    2015-03-01

    Full Text Available For the proper development of various financial and economic activities, it is very important that objectives be achievable; this can be supported by forecasting a phenomenon in order to know the range in which its value is likely to lie. This paper is structured in three parts. The first part highlights the theoretical aspects of using the moving average method to determine the prognosis of a given phenomenon. The second part presents in detail the steps to follow within the moving average method. The phenomenon analyzed in this study is the leverage effect. Each stage of the procedure is analyzed, which will lead in the end to a more accurate prognosis of the leverage effect. At the end of the paper, the findings from practically using the moving average method to establish the forecast, and their subsequent interpretations, are presented.
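
    A minimal sketch of a one-step moving average forecast as described above; the series values and the window length k are invented for illustration.

```python
import numpy as np

def moving_average_forecast(series, k=3):
    """One-step-ahead forecast as the mean of the last k observations
    (the simple moving average method; k is a modelling choice)."""
    series = np.asarray(series, float)
    return series[-k:].mean()

# Illustrative leverage-effect series (values are assumptions, not the
# paper's data): forecast the next period from the last three.
leverage = [1.8, 2.1, 1.9, 2.4, 2.2, 2.6]
print(f"2014 forecast: {moving_average_forecast(leverage, k=3):.2f}")
```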

  5. Hermitization and the Poisson bracket-commutator correspondence as a consequence of averaging

    Science.gov (United States)

    Pesci, Adriana I.; Goldstein, Raymond E.; Uys, Hermann

    2006-01-01

    Here we present a study of the solutions and mathematical structure of the momentum-averaged Liouville (or Collisionless Boltzmann) equation in Fourier space. We show that the averaging procedure leads to a formalism identical to that of the density matrix of quantum mechanics. This mathematical mapping leaves the averages of all quantities unaltered and provides a unique way to construct the Hermitian version of a given operator. This seems to be the only method that resolves the ambiguity of Hermitization of operators that contain products of non-commuting variables. We also present a systematic perturbation scheme to evaluate correctly the classical solutions from the quantum ones and a formal proof of the approximate correspondence between the Poisson brackets and commutators.

  6. Hermitization and the Poisson bracket-commutator correspondence as a consequence of averaging

    International Nuclear Information System (INIS)

    Here we present a study of the solutions and mathematical structure of the momentum-averaged Liouville (or Collisionless Boltzmann) equation in Fourier space. We show that the averaging procedure leads to a formalism identical to that of the density matrix of quantum mechanics. This mathematical mapping leaves the averages of all quantities unaltered and provides a unique way to construct the Hermitian version of a given operator. This seems to be the only method that resolves the ambiguity of Hermitization of operators that contain products of non-commuting variables. We also present a systematic perturbation scheme to evaluate correctly the classical solutions from the quantum ones and a formal proof of the approximate correspondence between the Poisson brackets and commutators

  7. Hermitization and the Poisson bracket-commutator correspondence as a consequence of averaging

    Energy Technology Data Exchange (ETDEWEB)

    Pesci, Adriana I [Department of Physics, University of Arizona, Tucson, AZ 85721 (United States); Goldstein, Raymond E [Department of Physics, University of Arizona, Tucson, AZ 85721 (United States); Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721 (United States); Uys, Hermann [Department of Physics, University of Arizona, Tucson, AZ 85721 (United States)

    2006-01-27

    Here we present a study of the solutions and mathematical structure of the momentum-averaged Liouville (or Collisionless Boltzmann) equation in Fourier space. We show that the averaging procedure leads to a formalism identical to that of the density matrix of quantum mechanics. This mathematical mapping leaves the averages of all quantities unaltered and provides a unique way to construct the Hermitian version of a given operator. This seems to be the only method that resolves the ambiguity of Hermitization of operators that contain products of non-commuting variables. We also present a systematic perturbation scheme to evaluate correctly the classical solutions from the quantum ones and a formal proof of the approximate correspondence between the Poisson brackets and commutators.

  8. Optimal Weights of Certain Branches of an Arbitrary Connected Network for Fastest Distributed Consensus Averaging Problem

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Solving the fastest distributed consensus averaging problem over networks with different topologies has been an active area of research for a number of years. The main purpose of distributed consensus averaging is to compute the average of the initial values via a distributed algorithm in which the nodes only communicate with their neighbors. In previous works, full knowledge about the network's topology was required for finding the optimal weights and the convergence rate of the network; here, for the first time, the optimal weights are determined analytically for the edges of certain types of branches, namely the path branch, lollipop branch, semi-complete branch and ladder branch, independently of the rest of the network. The solution procedure consists of stratification of the associated connectivity graph of the branch and semidefinite programming (SDP), in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by slackness c...
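
    For orientation, the sketch below runs the basic distributed consensus averaging iteration x(t+1) = Wx(t) on a 5-node path branch, using Metropolis weights as a standard, generally suboptimal, weight choice; the paper's analytical determination of the optimal weights is not reproduced here.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, average-preserving weight matrix from an adjacency
    matrix using the Metropolis rule (a standard heuristic choice)."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Path branch on 5 nodes: each node repeatedly averages with its neighbors.
adj = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
W = metropolis_weights(adj)
x = np.array([3.0, -1.0, 4.0, 0.0, 9.0])       # initial values, mean 3.0
for _ in range(200):
    x = W @ x                                   # local exchanges only
print(x)                                        # all entries close to 3.0
```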

  9. Average neutron energy measurement at an accelerator facility, a practical health physics problem

    International Nuclear Information System (INIS)

    Surveys designed to estimate the average energy of neutrons escaping from shielded enclosures were made at the Clinton P. Anderson Meson Physics Facility (LAMPF). Information obtained from these data proved to be of value in establishing response factors for the personnel monitoring dosimeters (NTA film). The value of measuring average neutron energies was demonstrated about one year ago, when the monthly film badge report indicated significant neutron exposures to personnel of an experimental group at LAMPF. Neutron radiation-survey data, as well as recorded data from area monitoring stations, indicated much lower personnel neutron doses. Subsequent average neutron energy measurements at the location of interest revealed a well-defined region where average neutron energies were considerably greater than previously measured in other occupied areas of the accelerator facility. A discussion is given of the rationale for the decision to alter, for these experimenters, the NTA film response factors from those normally applied to the LAMPF film badges.

  10. Vibrational resonance: a study with high-order word-series averaging

    CERN Document Server

    Murua, Ander

    2016-01-01

    We study a model problem describing vibrational resonance by means of a high-order averaging technique based on so-called word series. With the technique applied here, the tasks of constructing the averaged system and the associated change of variables are divided into two parts. It is first necessary to build recursively a set of so-called word basis functions and, after that, all the required manipulations involve only scalar coefficients that are computed by means of simple recursions. As distinct from the situation with other approaches, with word series, high-order averaged systems may be derived without having to compute the associated change of variables. In the system considered here, the construction of high-order averaged systems makes it possible to obtain very precise approximations to the true dynamics.

  11. A Framework for Control System Design Subject to Average Data-Rate Constraints

    DEFF Research Database (Denmark)

    Silva, Eduardo; Derpich, Milan; Østergaard, Jan

    2011-01-01

    This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be deployed in the feedback path. For this situation, and by focusing on a class of source-coding schemes built around entropy coded dithered quantizers, we develop a framework to deal with average data-rate constraints in a tractable manner that combines ideas from both information and control theories. As an illustration of the uses of our framework, we apply it to study the interplay between stability and average data-rates in the considered architecture. It is shown that the proposed class of coding schemes can achieve mean square stability at average data-rates that are, at most, 1.254 bits per...

  12. Level Crossing Rate and Average Fade Duration of EGC Systems with Cochannel Interference in Rayleigh Fading

    CERN Document Server

    Hadzi-Velkov, Zoran

    2009-01-01

    Both the first-order signal statistics (e.g. the outage probability) and the second-order signal statistics (e.g. the average level crossing rate, LCR, and the average fade duration, AFD) are important design criteria and performance measures for the wireless communication systems, including the equal gain combining (EGC) systems in presence of the cochannel interference (CCI). Although the analytical expressions for the outage probability of the coherent EGC systems exposed to CCI and various fading channels are already known, the respective ones for the average LCR and the AFD are not available in the literature. This paper presents such analytical expressions for the Rayleigh fading channel, which are obtained by utilizing a novel analytical approach that does not require the explicit expression for the joint PDF of the instantaneous output signal-to-interference ratio (SIR) and its time derivative. Applying the characteristic function method and the Beaulieu series, we determined the average LCR and the A...
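
    For orientation, the classical second-order statistics of a single Rayleigh fading branch (no combining, no cochannel interference) are easy to state and compute; the paper's expressions for EGC with CCI generalize these. The threshold level and Doppler frequency below are illustrative.

```python
import numpy as np

def lcr_rayleigh(rho, f_d):
    """Level crossing rate of a single Rayleigh fading branch:
    N(rho) = sqrt(2*pi) * f_d * rho * exp(-rho^2), with rho the threshold
    normalized to the RMS level and f_d the maximum Doppler frequency."""
    return np.sqrt(2 * np.pi) * f_d * rho * np.exp(-rho**2)

def afd_rayleigh(rho, f_d):
    """Average fade duration (outage probability divided by the LCR):
    T(rho) = (exp(rho^2) - 1) / (rho * f_d * sqrt(2*pi))."""
    return (np.exp(rho**2) - 1.0) / (rho * f_d * np.sqrt(2 * np.pi))

rho, f_d = 10 ** (-10.0 / 20.0), 50.0   # 10 dB below RMS, 50 Hz Doppler
print(f"LCR = {lcr_rayleigh(rho, f_d):.2f} crossings/s, "
      f"AFD = {afd_rayleigh(rho, f_d) * 1e3:.2f} ms")
```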

  13. The effect of three-dimensional fields on bounce averaged particle drifts in a tokamak

    International Nuclear Information System (INIS)

    The impact of applied 3D magnetic fields on the bounce-averaged precessional drifts in a tokamak plasma is calculated. Local 3D MHD equilibrium theory is used to construct solutions to the equilibrium equations in the vicinity of a magnetic surface for a large aspect ratio circular tokamak perturbed by applied 3D fields. Due to modulations of the local shear caused by near-resonant Pfirsch-Schlüter currents, relatively weak applied 3D fields can have a large effect on trapped particle precessional drifts.

  14. 77 FR 7237 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2012-02-10

    ... Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY: In a... change in railroad productivity for the 2006-2010 (5-year) averaging period. This represents a...

  15. 49 CFR 531.6 - Measurement and calculation procedures.

    Science.gov (United States)

    2010-10-01

    § 531.6 Measurement and calculation procedures. (a) The average fuel economy of all... the Act and set forth in 40 CFR part 600. (b) A manufacturer that is eligible to elect a model year in...

  16. Apply the Communicative Approach in Listening Class

    Institute of Scientific and Technical Information of China (English)

    Wang; changxue; Su; na

    2014-01-01

    Speaking and listening are the two obstacles in the process of our learning, and they are also the most important abilities that we should possess. The communicative approach aims at developing learners' communicative competence; thus, applying the communicative approach in listening class is an effective way to improve the English teaching procedure.

  17. Apply the Communicative Approach in Listening Class

    Institute of Scientific and Technical Information of China (English)

    Wang changxue; Su na

    2014-01-01

    Speaking and listening are the two obstacles in the process of our learning, and they are also the most important abilities that we should possess. The communicative approach aims at developing learners' communicative competence; thus, applying the communicative approach in listening class is an effective way to improve the English teaching procedure.

  18. A Primer on Disseminating Applied Quantitative Research

    Science.gov (United States)

    Bell, Bethany A.; DiStefano, Christine; Morgan, Grant B.

    2010-01-01

    Transparency and replication are essential features of scientific inquiry, yet scientific communications of applied quantitative research are often lacking in much-needed procedural information. In an effort to promote researchers dissemination of their quantitative studies in a cohesive, detailed, and informative manner, the authors delineate…

  19. 40 CFR 60.266 - Test methods and procedures.

    Science.gov (United States)

    2010-07-01

    ... effluent gas from exhaust stream “i”, dscm/hr (dscf/hr). P = average furnace power input, MW. K = conversion... determine the average furnace power input (P) during each run. (4) Method 9 and the procedures in § 60.11... the owner or operator shall not allow gaseous diluents to be added to the effluent gas...

  20. Procedural Learning and Dyslexia

    Science.gov (United States)

    Nicolson, R. I.; Fawcett, A. J.; Brookes, R. L.; Needle, J.

    2010-01-01

    Three major "neural systems", specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed ("Trends Neurosci.", 30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief…

  1. Coombs' Type Response Procedures.

    Science.gov (United States)

    Koehler, Roger A.

    This paper provides substantial evidence in favor of the continued use of conventional objective testing procedures in lieu of either the Coombs' cross-out technique or the Dressel and Schmid free-choice response procedure. From the studies presented in this paper, the tendency is for the cross-out and the free choice methods to yield a decrement…

  2. Average value of correlated time series, with applications in dendroclimatology and hydrometeorology

    Energy Technology Data Exchange (ETDEWEB)

    Wigley, T.M.L.; Briffa, K.R.; Jones, P.D.

    1984-02-01

    In a number of areas of applied climatology, time series are either averaged to enhance a common underlying signal or combined to produce area averages. How well, then, does the average of a finite number (N) of time series represent the population average, and how well will a subset of series represent the N-series average? We have answered these questions by deriving formulas for 1) the correlation coefficient between the average of N time series and the average of n such series (where n is an arbitrary subset of N) and 2) the correlation between the N-series average and the population. We refer to these mean correlations as the subsample signal strength (SSS) and the expressed population signal (EPS). They may be expressed in terms of the mean interseries correlation coefficient $\bar{r}$ as $\mathrm{SSS} = \bar{R}_{n,N}^{2} \approx \frac{n\,(1+(N-1)\bar{r})}{N\,(1+(n-1)\bar{r})}$ and $\mathrm{EPS} = \bar{R}_{N}^{2} \approx \frac{N\bar{r}}{1+(N-1)\bar{r}}$. Similar formulas are given relating these mean correlations to the fractional common variance which arises as a parameter in analysis of variance. These results are applied to determine the increased uncertainty in a tree-ring chronology which results when the number of cores used to produce the chronology is reduced. Such uncertainty will accrue to any climate reconstruction equation that is calibrated using the most recent part of the chronology. The method presented can be used to define the useful length of tree-ring chronologies for climate reconstruction work.
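
    The two formulas translate directly into code; the minimal sketch below evaluates them for an assumed chronology (the values of N, n and the mean interseries correlation are invented for illustration).

```python
def sss(n, N, r_bar):
    """Subsample signal strength: how well the average of n series
    represents the average of all N series."""
    return n * (1 + (N - 1) * r_bar) / (N * (1 + (n - 1) * r_bar))

def eps(N, r_bar):
    """Expressed population signal of an N-series average."""
    return N * r_bar / (1 + (N - 1) * r_bar)

# E.g. a chronology built from N = 20 cores with mean interseries
# correlation 0.4, reduced to n = 5 cores in its early part:
print(f"SSS = {sss(5, 20, 0.4):.3f}, EPS = {eps(20, 0.4):.3f}")
```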

  3. Reforming Russian Civil Procedure

    Directory of Open Access Journals (Sweden)

    Dmitry Maleshin

    2016-03-01

    Full Text Available The II Annual Symposium of the journal Herald of Civil Procedure ‘2015: The Civil Procedure 2.0: Reform and Current State’ took place on October 9, 2015, at the Faculty of Law of Kazan (Volga region) Federal University. The Symposium is now an established tradition for the University. In 2015 it brought together in Kazan eminent scholars of civil procedure from cities across the whole of Russia: Moscow, St. Petersburg, Saratov, Ekaterinburg, Omsk, Samara, Nizhnekamsk and others. This large-scale event attracted the attention not only of Russian scholars, but also of legal scholars from abroad: Elisabetta Silvestri (Professor, University of Pavia, Italy), William B. Simons (Professor, University of Tartu, Estonia), Jaroslav Turlukovsky (Professor, Warsaw University, Poland), Stuart H. Schultz (Practising Attorney, USA), Irina Izarova (Associate Professor, Taras Shevchenko National University of Kyiv, Ukraine). The opening ceremony of the Symposium began with greetings to all participants and best wishes for productive discussions. Participants were welcomed with remarks by Marat Khairullin, Deputy Chair of the Supreme Court of the Republic of Tatarstan, Radik Ilyasov, Head of the Federal Bailiff Service of the Republic of Tatarstan, and Ildar Tarkhanov, Academic Supervisor at the Faculty of Law. They expressed their appreciation for the great value of the journal Herald of Civil Procedure in the growth of the science of civil procedure and enforcement procedure, and for its contributions to the development of the judicial system of the Russian Federation. In addition to hearing prepared reports and discussing viewpoints on current issues of civil and arbitration procedure, participants attended presentations by representatives from procedural law periodicals in the frame of the Symposium. The Editor-in-Chief of Herald of Civil Procedure, Damir Valeev, and the Commercial Director of the Statut Publishing House (Moscow), Kirill Samoilov, presented new

  4. The average concentrations of 226Ra and 210Pb in foodstuff cultivated in the Pocos de Caldas plateau

    International Nuclear Information System (INIS)

    The average concentrations of 226Ra and 210Pb in vegetables cultivated in the Pocos de Caldas plateau, mainly potatoes, carrots, beans and corn, were determined, and the average soil-to-foodstuff transfer factors for both radionuclides were estimated. The total 226Ra and 210Pb content in the soil was determined by gamma spectrometry. The exchangeable fraction was obtained by the classical radon emanation procedure, and the 210Pb was isolated by a radiochemical procedure and determined by radiometry of the beta emissions of its daughter 210Bi with a Geiger-Müller counter. (M.A.C.)

  5. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
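
    The two standard average-sample types are easy to illustrate. A small sketch, assuming synthetic 1-min data as a stand-in for the continuous field (all values hypothetical):

    ```python
    import numpy as np

    # Synthetic 1-min data as a proxy for continuous geomagnetic variation:
    # 24 h of minute values (1440 samples), units nT, purely illustrative.
    rng = np.random.default_rng(0)
    t = np.arange(1440)
    b = 20 * np.sin(2 * np.pi * t / 720) + rng.normal(0, 2, t.size)

    # Hourly "spot" samples: the instantaneous value at the top of each hour.
    spot = b[::60]

    # Hourly 1-h "boxcar" averages: simple mean of the 60 minute-values per hour.
    boxcar = b.reshape(24, 60).mean(axis=1)

    # Spot values preserve the amplitude range but alias high frequencies;
    # boxcar averages suppress aliasing at the price of amplitude distortion.
    print(f"amplitude range, spot:   {spot.max() - spot.min():.1f} nT")
    print(f"amplitude range, boxcar: {boxcar.max() - boxcar.min():.1f} nT")
    ```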

  6. Advances in Applied Mechanics

    OpenAIRE

    2014-01-01

    Advances in Applied Mechanics draws together recent significant advances in various topics in applied mechanics. Published since 1948, Advances in Applied Mechanics aims to provide authoritative review articles on topics in the mechanical sciences, primarily of interest to scientists and engineers working in the various branches of mechanics, but also of interest to the many who use the results of investigations in mechanics in various application areas, such as aerospace, chemical, civil, en...

  7. Perspectives on Applied Ethics

    OpenAIRE

    2007-01-01

    Applied ethics is a growing, interdisciplinary field dealing with ethical problems in different areas of society. It includes for instance social and political ethics, computer ethics, medical ethics, bioethics, environmental ethics, business ethics, and it also relates to different forms of professional ethics. From the perspective of ethics, applied ethics is a specialisation in one area of ethics. From the perspective of social practice applying ethics is to focus on ethical aspects and ...

  8. Applied Neuroscience Laboratory Complex

    Data.gov (United States)

    Federal Laboratory Consortium — Located at WPAFB, Ohio, the Applied Neuroscience lab researches and develops technologies to optimize Airmen's individual and team performance across all AF domains....

  9. Toddler test or procedure preparation

    Science.gov (United States)

    Preparing toddler for test/procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... about the procedure to 5 or 10 minutes. Toddlers have a short attention span. Any preparation should ...

  10. On the incentive effects of damage averaging in tort law

    OpenAIRE

    Tim Friehe

    2007-01-01

    It has been generally accepted for unilateral-care models that care incentives are not affected by the use of either accurate damages or average damages if injurers lack knowledge of the precise damage level they might cause. This paper shows that in bilateral-care models with heterogeneous victims, consequences of averages as damage measure are critically dependent on the weighing of respective harm levels. Importantly, we establish that there is an average measure which allows the attainmen...

  11. Iterative Correction of Measurement with Averaging of Dithered Samples

    Directory of Open Access Journals (Sweden)

    Miroslav Kamensky

    2008-01-01

    Full Text Available Self-calibration techniques can eliminate measurement errors caused by drift over time and component aging. Averaging is also necessary for ADC performance enhancement. In the paper, an iterative measurement error correction method is presented in combination with averaging. Dither theory for Gaussian noise is used to show how averaging can improve the ADC characteristic. The experimental improvement in ENOB value is more than 1.5 bits.
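
    As a rough illustration of the underlying idea (not the authors' exact procedure), the following sketch quantizes a constant input with and without Gaussian dither and shows that averaging dithered samples recovers sub-LSB resolution; all numbers are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    LSB = 1.0            # quantization step of the idealized ADC
    x_true = 0.3 * LSB   # constant input lying between code levels
    N = 1000             # number of averaged samples

    def quantize(x):
        return LSB * np.round(x / LSB)  # ideal mid-tread quantizer

    # Without dither: every sample quantizes to the same code, so averaging
    # cannot recover the sub-LSB information.
    plain = quantize(np.full(N, x_true)).mean()

    # With Gaussian dither (sigma about 0.5 LSB): the quantization error is
    # randomized, and averaging converges toward the true value.
    dithered = quantize(x_true + rng.normal(0, 0.5 * LSB, N)).mean()

    print(f"true {x_true:.3f}, undithered avg {plain:.3f}, dithered avg {dithered:.3f}")
    ```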

  12. Average-Consensus Algorithms in a Deterministic Framework

    OpenAIRE

    Topley, Kevin; Krishnamurthy, Vikram

    2011-01-01

    We consider the average-consensus problem in a multi-node network of finite size. Communication between nodes is modeled by a sequence of directed signals with arbitrary communication delays. Four distributed algorithms that achieve average-consensus are proposed. Necessary and sufficient communication conditions are given for each algorithm to achieve average-consensus. Resource costs for each algorithm are derived based on the number of scalar values that are required for communication and ...
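
    The record is truncated, but the basic delay-free, fixed-topology version of average-consensus is easy to sketch; the paper's algorithms handle directed signals with arbitrary delays, so the symmetric, doubly stochastic weight matrix below is a simplifying assumption:

    ```python
    import numpy as np

    # Node values to be averaged and a doubly stochastic weight matrix for a
    # hypothetical 4-node ring network (each node mixes with its neighbours).
    x = np.array([1.0, 5.0, 3.0, 7.0])
    W = np.array([[0.5 , 0.25, 0.0 , 0.25],
                  [0.25, 0.5 , 0.25, 0.0 ],
                  [0.0 , 0.25, 0.5 , 0.25],
                  [0.25, 0.0 , 0.25, 0.5 ]])

    # Repeated local weighted mixing drives every node to the global average.
    for _ in range(100):
        x = W @ x

    print(x, "target:", 4.0)  # all entries converge to mean([1, 5, 3, 7]) = 4
    ```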

  13. Orbit-averaged Guiding-center Fokker-Planck Operator

    CERN Document Server

    Brizard, A J; Decker, J; Duthoit, F -X

    2009-01-01

    A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant $\overline{\psi}$, the minimum-B pitch-angle coordinate $\xi_{0}$, and the momentum magnitude $p$.

  14. 31 CFR 205.23 - What requirements apply to estimates?

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury ... What requirements apply to estimates? ... Treasury-State Agreement § 205.23 What requirements apply to estimates? The following requirements apply when we and a State negotiate a mutually agreed upon funds transfer procedure based on an estimate...

  15. Costing imaging procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bretland, P.M.

    1988-01-01

    A method of costing using a computer system with 512 Kb of memory has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. A cost spectrum for 1984-85 was established. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours, differing from some published figures, particularly those in the Koerner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the in-patient day. The total cost of the imaging service to the district was approximately 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the unit cost decrease (and vice versa).

  16. Evaluation of the occupational dose in hemodynamic procedures

    International Nuclear Information System (INIS)

    The purpose of this study was to evaluate the dose received by health professionals in a hemodynamic service. It was necessary to know the profile of these professionals, to survey the occupational external doses recorded during the years 2000 to 2009, and to evaluate the distribution of the effective dose resulting from the special procedures guided by fluoroscopy. A self-applied questionnaire was used to delineate the profile of the health professionals, taking into account variables such as gender, age, individual monitoring time, number of jobs and tasks performed in the sector. In addition, the external individual monitoring doses were examined from the records of the institution. The sample was composed of 35 professionals, 11 males and 24 females, with a mean age of (43.0 ± 10.4) years. The average monitoring time of the individuals analyzed within the institution was (11.3 ± 9.1) years, considering the period before the study. The minimum recorded dose was 0.2 mSv and the maximum dose was 22.7 mSv. Doctors and nursing assistants were the professionals most exposed to radiation, probably because they remain closer to the examination table and X-ray tube during interventional procedures. (author)

  17. Applied Linguistics: Brazilian Perspectives

    Science.gov (United States)

    Cavalcanti, Marilda C.

    2004-01-01

    The aim of this paper is to present perspectives in Applied Linguistics (AL) against the background of a historical overview of the field in Brazil. I take the stance of looking at AL as a field of knowledge and as a professional area of research. This point of view directs my reflections towards research-based Applied Linguistics carried out from…

  18. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements that has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to perform sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  19. A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport

    Directory of Open Access Journals (Sweden)

    Gilberto Espinosa-Paredes

    2012-01-01

    Full Text Available In this paper a detailed derivation of the general transport equations for two-phase systems using a method based on nonlocal volume averaging is presented. The local volume-averaged equations are commonly applied in nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail in transitions of the flow patterns and at boundaries between two-phase flow and solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume-averaged equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume-averaged equations fail.

  20. Variational theory of average-atom and superconfigurations in quantum plasmas.

    Science.gov (United States)

    Blenski, T; Cichocki, B

    2007-05-01

    Models of screened ions in equilibrium plasmas with all quantum electrons are important in opacity and equation of state calculations. Although such models have to be derived from variational principles, up to now existing models have not been fully variational. In this paper a fully variational theory respecting the virial theorem is proposed: all variables are variational except the parameters defining the equilibrium, i.e., the temperature T, the ion density ni and the atomic number Z. The theory is applied to the quasiclassical Thomas-Fermi (TF) atom, the quantum average atom (QAA), and the superconfigurations (SC) in plasmas. Both the self-consistent-field (SCF) equations for the electronic structure and the condition for the mean ionization Z* are found from minimization of a thermodynamic potential. This potential is constructed using the cluster expansion of the plasma free energy, from which the zero- and first-order terms are retained. In the zero order, the free energy per ion is that of the quantum homogeneous plasma of an unknown free-electron density n0 = Z* ni occupying the volume 1/ni. In the first order, ions submerged in this plasma are considered and local neutrality is assumed. These ions are considered in the infinite space without imposing the neutrality of the Wigner-Seitz (WS) cell. As in the Inferno model, a central cavity of a radius R is introduced; however, the value of R is unknown a priori. The charge density due to noncentral ions is zero inside the cavity and equals en0 outside. The first-order contribution to the free energy per ion is the difference between the free energy of the system "central ion + infinite plasma" and the free energy of the system "infinite plasma." An important part of the approach is an "ionization model" (IM), which is a relation between the mean ionization charge Z* and the first-order structure variables. Both the IM and the local neutrality are respected in the minimization procedure. The correct IM in the TF case

  1. Applied tensor stereology

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel

    In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle s...

  2. New empirical formulae for calculation of average M-shell fluorescence yields

    International Nuclear Information System (INIS)

    We have compiled and reviewed in tabular form the average bulk M-shell fluorescence yields measured by different groups over the period from 1955 to 2005. We have interpolated the weighted and unweighted mean values of the experimental data by using the analytical function $(\bar{\omega}_M/(1-\bar{\omega}_M))^{1/4}$ as a function of the atomic number (Z) to deduce the empirical average M-shell fluorescence yield in the atomic range 70≤Z≤92. Also, we used the well-known formula $\bar{\omega}_M = A(Z-13)^4$ to generalize the average M-shell fluorescence yield for elements with 19≤Z≤100. The results have been compared with other theoretical, experimental and empirical values reported in the literature and a reasonable agreement has been obtained. - Highlights: • The collected average M-shell fluorescence yields cover the period from 1955 to 2005. • Three procedures are followed to deduce the average M-shell fluorescence yield. • The obtained results are compared with other works. • Reasonable agreement was typically obtained between our results and other works
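
    A minimal sketch of evaluating the generalizing formula quoted above; the fitted constant A is not reproduced in the record, so the value below is a placeholder assumption:

    ```python
    # Empirical average M-shell fluorescence yield, omega_M = A * (Z - 13)**4,
    # quoted in the abstract for 19 <= Z <= 100.
    A = 1.0e-8  # placeholder only: the fitted constant must be taken from the paper

    def average_m_shell_yield(Z, A=A):
        if not 19 <= Z <= 100:
            raise ValueError("empirical fit quoted for 19 <= Z <= 100 only")
        return A * (Z - 13) ** 4

    for Z in (40, 70, 92):
        print(Z, f"{average_m_shell_yield(Z):.4f}")
    ```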

  3. Calibration and verification of surface contamination meters --- Procedures and techniques

    International Nuclear Information System (INIS)

    A standardised measurement procedure for surface contamination meters (SCM) is presented. The procedure aims at rendering surface contamination measurements simply and safely interpretable. Essential for the approach is the introduction and common use of the radionuclide-specific quantity 'guideline value', specified in the Swiss Radiation Protection Ordinance, as the unit for the measurement of surface activity. The corresponding radionuclide-specific 'guideline value count rate' can be summarized as a verification reference value for a group of radionuclides ('basis guideline value count rate'). The concept can be generalized for SCM of the same type or for SCM of different types using the same principle of detection. An SCM multi-source calibration technique is applied for the determination of the instrument efficiency. Four different electron radiation energy regions, four different photon radiation energy regions and an alpha radiation energy region are represented by a set of calibration sources built according to ISO standard 8769-2. A guideline value count rate, representing the activity per unit area of a surface contamination of one guideline value, can be calculated for any radionuclide using the instrument efficiency, radionuclide decay data, contamination source efficiency, guideline value averaging area (100 cm2), and the radionuclide-specific guideline value. In this way, instrument responses for the evaluation of surface contaminations are obtained for radionuclides without available calibration sources as well as for short-lived radionuclides, for which the continuous replacement of certified calibration sources can lead to unreasonable costs. SCM verification is based on surface emission rates of reference sources with an active area of 100 cm2. The verification for a given list of radionuclides is based on the radionuclide-specific quantity guideline value count rate. Guideline value count rates for groups of radionuclides can be represented within the maximum
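
    The guideline value count rate described above is, in essence, a product of the surface-activity guideline value, the averaging area, and the source and instrument efficiencies. A hedged sketch of that product (all numerical values hypothetical; real ones come from calibration and the Ordinance):

    ```python
    def guideline_value_count_rate(guideline_value_bq_per_cm2,
                                   instrument_efficiency,
                                   source_efficiency,
                                   averaging_area_cm2=100.0):
        """Count rate (s^-1) an SCM should indicate for a contamination of
        exactly one guideline value, following the scheme sketched above."""
        emission_rate = (guideline_value_bq_per_cm2 * averaging_area_cm2
                         * source_efficiency)          # emissions leaving the surface
        return emission_rate * instrument_efficiency   # counts registered by the SCM

    # Hypothetical beta emitter: guideline value 3 Bq/cm2, source efficiency 0.5,
    # instrument efficiency 0.4 counts per emission.
    print(f"{guideline_value_count_rate(3.0, 0.4, 0.5):.0f} s^-1")  # -> 60 s^-1
    ```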

  4. Procedures for Sampling Vegetation

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This report outlines vegetation sampling procedures used on various refuges in Region 3. The importance of sampling the response of marsh vegetation to management...

  5. Dynamic alarm response procedures

    International Nuclear Information System (INIS)

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintenance of the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache®, IIS®, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape®, Microsoft Internet Explorer®, Mozilla Firefox®, Opera®, and others. (authors)

  6. Tests and Procedures

    Science.gov (United States)

    ... procedure is being done. How the results will influence treatment. What your child will experience during the ...

  7. Cosmetic Procedure Questions

    Science.gov (United States)

    ... Want to look younger? Start by ... fillers, neuromodulators (Botox) and hair restoration among others. ...

  8. IC Treatment: Surgical Procedures

    Science.gov (United States)

    ... surgeon fashions a tube or conduit from a short section of bowel and places the ureters (which carry urine from ... this procedure, some patients will continue to experience symptoms of ... augmented bowel segment of these newly fashioned bladders. Some patients ...

  9. Anxiety Around Medical Procedures

    Science.gov (United States)


  10. Simulation of Synthetic Jets in Quiescent Air Using Unsteady Reynolds Averaged Navier-Stokes Equations

    Science.gov (United States)

    Vatsa, Veer N.; Turkel, Eli

    2006-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver to the simulation of a synthetic jet created by a single-diaphragm piezoelectric actuator in quiescent air. This configuration was designated as Case 1 for the CFDVAL2004 workshop held at Williamsburg, Virginia, in March 2004. Time-averaged and instantaneous data for this case were obtained at NASA Langley Research Center using multiple measurement techniques. Computational results for this case using the one-equation Spalart-Allmaras and two-equation Menter turbulence models are presented along with the experimental data. The effects of grid refinement, preconditioning and time-step variation are also examined in this paper.

  11. Marginal Cost Versus Average Cost Pricing with Climatic Shocks in Senegal: A Dynamic Computable General Equilibrium Model Applied to Water

    OpenAIRE

    Briand, Anne

    2006-01-01

    The model simulates, over a 20-year horizon, a first phase of increase in water resource availability, taking into account the supply policies of the Senegalese government, and a second phase with hydrologic deficits due to demand evolution (demographic growth). The results show that marginal-cost water pricing (with a subsidy ensuring the survival of the water production sector) makes it possible in the long term to absorb the shock of the resource shortage; GDP, investment and welfare increa...

  12. Costing imaging procedures.

    Science.gov (United States)

    Bretland, P M

    1988-01-01

    The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241
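
    The closing observation, that the fixed and semi-variable share of unit cost falls as throughput rises, is simple overhead absorption. A sketch with invented figures:

    ```python
    def unit_cost(variable_per_exam, semi_variable, fixed, n_exams):
        """Cost per examination: variable cost plus absorbed overheads."""
        return variable_per_exam + (semi_variable + fixed) / n_exams

    # Hypothetical department: 5 per exam variable, 40,000 semi-variable and
    # 120,000 fixed (including capital) per year.
    for n in (2_000, 10_000, 50_000):
        print(n, f"{unit_cost(5.0, 40_000, 120_000, n):.2f}")
    # More procedures -> the overhead element per procedure shrinks, and vice versa.
    ```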

  13. A Characterization of the average tree solution for tree games

    OpenAIRE

    Debasis Mishra; Dolf Talman

    2009-01-01

    For the class of tree games, a new solution called the average tree solution has been proposed recently. We provide a characterization of this solution. This characterization underlines an important difference, in terms of symmetric treatment of the agents, between the average tree solution and the Myerson value for the class of tree games.

  14. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... appendix A. Mj=Molecular weight of organic HAP j, gram per gram-mole. n=Number of organic HAP's in the... at the rack during the month, kilopascals. M=Weighted average molecular weight of organic HAP's... rack i to calculate the weighted average rack molecular weight: ER18AU95.008 where: Mj=Molecular...
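
    The referenced equation (ER18AU95.008) is a graphic in the CFR and is not reproduced in the record. A plausible sketch of a weighted average molecular weight, assuming weighting by the quantity of each organic HAP transferred; the rule's own weighting basis must be taken from 40 CFR 63.652 itself:

    ```python
    def weighted_average_molecular_weight(M, w):
        """Weighted average molecular weight of organic HAPs.
        M: molecular weights Mj (g/gmol); w: weights, e.g. amounts transferred.
        The weighting basis here is an assumption, not the CFR's own formula."""
        assert len(M) == len(w) and sum(w) > 0
        return sum(Mj * wj for Mj, wj in zip(M, w)) / sum(w)

    # Hypothetical rack transferring benzene (78.11) and toluene (92.14):
    print(f"{weighted_average_molecular_weight([78.11, 92.14], [0.7, 0.3]):.2f} g/gmol")
    ```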

  15. 40 CFR 63.150 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ..., appendix A. Mj=Molecular weight of organic HAP j, gram per gram-mole. n=Number of organic HAP's. (A) The..., kilopascals. M = Weighted average molecular weight of organic HAP's transferred at the transfer rack during... transfer rack i to calculate the weighted average rack molecular weight: ER22AP94.267 where: Mj =......

  16. 40 CFR 63.503 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... Method 18 or Method 25A of 40 CFR part 60, appendix A. Mj=Molecular weight of organic HAP j, gram per... demonstrate compliance, the number of emission points allowed to be included in the emission average is... demonstrate compliance, the number of emission points allowed in the emissions average for those...

  17. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking at 1.25-2.25 m s^-1 and running at 1.25-4.5 m s^-1. The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in
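
    The processing chain described (rectify, time-normalize each stride to 100%, average) is straightforward. A sketch, assuming `emg` is a raw signal and `stride_starts` holds detected stride boundaries (both hypothetical inputs):

    ```python
    import numpy as np

    def average_emg_profile(emg, stride_starts, n_points=100):
        """Rectify an EMG trace, resample each stride to n_points
        (0-100% of stride), and average across strides to one profile."""
        rect = np.abs(emg)  # full-wave rectification
        profiles = []
        for s0, s1 in zip(stride_starts[:-1], stride_starts[1:]):
            stride = rect[s0:s1]
            grid = np.linspace(0, len(stride) - 1, n_points)
            profiles.append(np.interp(grid, np.arange(len(stride)), stride))
        return np.mean(profiles, axis=0)  # average profile over all strides

    # Averaging such per-subject profiles across subjects gives the group profile.
    ```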

  18. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
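
    The note's central trick can be reproduced in a few lines: regressing a (possibly transformed) variable on a constant alone returns its mean, so the arithmetic, geometric, and harmonic means all fall out of one regression framework. A sketch of the idea (the note's own examples may differ):

    ```python
    import numpy as np

    y = np.array([2.0, 4.0, 8.0])
    ones = np.ones((y.size, 1))

    def intercept_only_fit(z):
        # OLS of z on a constant: the fitted coefficient is simply mean(z).
        return np.linalg.lstsq(ones, z, rcond=None)[0][0]

    arithmetic = intercept_only_fit(y)                  # mean(y)
    geometric = np.exp(intercept_only_fit(np.log(y)))   # exp(mean(log y))
    harmonic = 1.0 / intercept_only_fit(1.0 / y)        # 1 / mean(1/y)

    print(arithmetic, geometric, harmonic)  # 4.667, 4.0, 3.429
    ```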

  19. Risk quantification in deterministic procedures: outage key safety functions evaluation using probabilistic risk assessment

    International Nuclear Information System (INIS)

    In 2007 a collaboration agreement between the nuclear industry and the Nuclear Engineering Research Group of the Technical University of Catalonia (NERG-UPC) began in the field of Risk-Informed Operation Decision Management. A manual has been created; it provides guidance on the systematic use of Probabilistic Risk Assessment for the evaluation of guides or procedures which ensure the fulfilment of Outage Key Safety Functions (OKSF) in nuclear power plants. The evaluation basis is probabilistic; the core damage frequency (CDF) values determine the acceptance of the analyzed guides and motivate suggestions for improvements in the procedures they contain. Acceptance criteria from the Plant Supervision System are used throughout the quantifications. The developed methodology aims to evaluate the risk associated with the system availability configurations during the outage. The documents used in the development were the PRA, the OKSF procedure and the Performance Technical Specifications (PTS) of a 3-loop Westinghouse PWR. As a pilot experience, the methodology has been applied to the 3rd and 13th Operational Plant States (OPS), always within operational mode 4. Some conclusions of the analysis: The analyzed procedure requires the operability of just one charging pump as boric acid supply source. The PRA gives a CDF increase (ΔCDF) of 1.19×10^-6 year^-1 for the pump in standby and, consequently, an exposure time T = 53.6 hours. Given an average duration for the OPS of 40 hours, the treatment in the procedure is concluded to be correct. However, it could be improved with the inclusion of an additional inventory replacement function, which would limit the charging pump unavailability. On the other hand, the availability of the external electrical sources is ratified. The procedure requires the operability of both supplies during the OPS. The unavailability of one of them (transformer failure) involves a ΔCDF equal to 1.64×10^-5 year^-1 and a T = 3.89 hours. The treatment is then considered appropriate
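
    Both reported (ΔCDF, T) pairs are consistent with a constant incremental risk budget: 1.19×10^-6 year^-1 × 53.6 h ≈ 1.64×10^-5 year^-1 × 3.89 h ≈ 6.4×10^-5 year^-1·h. A sketch of the implied allowed-exposure-time calculation; the numerical budget is inferred from the record's own figures, not quoted from the manual:

    ```python
    HOURS_PER_YEAR = 8760.0
    # Risk budget inferred from the two (dCDF, T) pairs reported above:
    # an incremental core damage probability of roughly 7.3e-9 per configuration.
    ICDP_LIMIT = 6.4e-5 / HOURS_PER_YEAR  # ~7.3e-9, inferred, not authoritative

    def allowed_exposure_time_hours(delta_cdf_per_year, icdp_limit=ICDP_LIMIT):
        """Time a degraded configuration may be held before the incremental
        core damage probability (dCDF * T) exhausts the risk budget."""
        return icdp_limit * HOURS_PER_YEAR / delta_cdf_per_year

    print(f"{allowed_exposure_time_hours(1.19e-6):.1f} h")  # ~53.8 h
    print(f"{allowed_exposure_time_hours(1.64e-5):.1f} h")  # ~3.9 h
    ```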

  20. Partial Averaging Near a Resonance in Planetary Dynamics

    CERN Document Server

    Haghighipour, N

    1999-01-01

    Following the general numerical analysis of Melita and Woolfson (1996), I showed in a recent paper that a restricted, planar, circular planetary system consisting of the Sun, Jupiter and Saturn would be captured in a near (2:1) resonance when frictional dissipation due to the interplanetary medium is allowed for (Haghighipour, 1998). In order to explain this resonance phenomenon analytically, the method of partial averaging near a resonance was utilized and the dynamics of the first-order partially averaged system at resonance was studied. Although in this manner the finding that resonance lock occurs for all initial relative positions of Jupiter and Saturn was confirmed, the first-order partially averaged system at resonance did not provide a complete picture of the evolutionary dynamics of the system, and the similarity between the dynamical behavior of the averaged system and the main planetary system held only for short time intervals. To overcome these limitations, the method of partial averaging near a res...