WorldWideScience

Sample records for average procedures applied

  1. An averaging procedure for applying the Revised Universal Soil Loss Equation (RUSLE) to disturbed mountain watersheds

    OpenAIRE

    González Bonorino, G.; Osterkamp, W. R.; Colombo Piñol, Ferrán

    2002-01-01

    Disturbed lands in mountain watersheds may be a significant source of sediment. A systematic rating of their potential for erosion would be useful in soil conservation planning. RUSLE is a successful erosion-prediction technique, well tested on gentle slopes of agricultural lands. In view of its success, attempts have been made to apply RUSLE to areas of complex topography by substituting upstream contributing area for the linear-flow model embodied in the RUSLE L-factor. This substitution le...
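
    For context, RUSLE estimates average annual soil loss as a product of six factors, and the substitution discussed above replaces the slope-length term with one based on upstream contributing area. A sketch of the standard forms in LaTeX (the contributing-area version follows the widely used Desmet-Govers formulation; the authors' exact variant may differ):

        A = R\,K\,L\,S\,C\,P, \qquad
        L = \left(\frac{\lambda}{22.13}\right)^{m}
        \;\longrightarrow\;
        L_{i,j} = \frac{\left(A_{i,j} + D^{2}\right)^{m+1} - A_{i,j}^{m+1}}{D^{m+2}\,x_{i,j}^{m}\,22.13^{m}}

    where A is the predicted soil loss, \lambda the slope length, A_{i,j} the upstream contributing area of a grid cell, D the cell size and x_{i,j} a flow-direction coefficient.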

  2. Averaging procedure in variable-G cosmologies

    CERN Document Server

    Cardone, Vincenzo F

    2008-01-01

    Previous work in the literature had built a formalism for spatially averaged equations for the scale factor, giving rise to an averaged Raychaudhuri equation and averaged Hamiltonian constraint, which involve a backreaction source term. The present paper extends these equations to include models with variable Newton parameter and variable cosmological term, motivated by the non-perturbative renormalization program for quantum gravity based upon the Einstein-Hilbert action. The coupling between backreaction and spatially averaged three-dimensional scalar curvature is found to survive, and all equations involving contributions of a variable Newton parameter are worked out in detail. Interestingly, under suitable assumptions, an approximate solution can be found where the universe tends to a FLRW model, while keeping track of the original inhomogeneities through two effective fluids.
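
    For orientation, the constant-G baseline of these spatially averaged equations is the standard Buchert system (the paper's extension, which promotes the Newton parameter and the cosmological term to variables, is not shown here):

        3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}} + 4\pi G\,\langle\rho\rangle_{\mathcal D} - \Lambda = \mathcal Q_{\mathcal D}, \qquad
        3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^{2} - 8\pi G\,\langle\rho\rangle_{\mathcal D} - \Lambda = -\frac{\langle\mathcal R\rangle_{\mathcal D} + \mathcal Q_{\mathcal D}}{2},

        \mathcal Q_{\mathcal D} = \frac{2}{3}\left(\langle\theta^{2}\rangle_{\mathcal D} - \langle\theta\rangle_{\mathcal D}^{2}\right) - 2\,\langle\sigma^{2}\rangle_{\mathcal D}

    where \theta is the expansion scalar, \sigma^{2} the shear scalar, and the backreaction \mathcal Q_{\mathcal D} is the term whose coupling to the averaged curvature \langle\mathcal R\rangle_{\mathcal D} the paper shows survives.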

  3. A procedure to average 3D anatomical structures.

    Science.gov (United States)

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three-dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high-curvature crestlines. By adding tile boundaries in flatter areas, the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape-preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. The result is an average that represents the source images well and may be useful clinically as a deformable model or for animation.

  4. Maximum Average SAR Measurement Procedure for Multi-Antenna Transmitters

    Science.gov (United States)

    Iyama, Takahiro; Onishi, Teruo

    This paper proposes and verifies a specific absorption rate (SAR) measurement procedure for multi-antenna transmitters that requires measuring a two-dimensional electric field distribution for each antenna, from which the three-dimensional SAR distribution for arbitrary weighting coefficients of the antennas is calculated prior to determining the average SAR. The proposed procedure is verified by Finite-Difference Time-Domain (FDTD) calculation and by measurement using electro-optic (EO) probes. For two reference dipoles, the differences in the 10 g SAR obtained with the proposed procedure, compared numerically and experimentally to that based on the originally calculated three-dimensional SAR distribution, are at most 4.8% and 3.6%, respectively, at 1950 MHz. At 3500 MHz, the difference is at most 5.2% in the numerical verification.
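
    The core of such a procedure is linear superposition: each antenna's complex field map is measured once, and the field for any set of weighting coefficients is a weighted sum. A minimal Python sketch under that assumption (array shapes, tissue parameters and weights are illustrative, and the 10 g mass-averaging step of the actual procedure is not reproduced):

        import numpy as np

        def local_sar(e_fields, weights, sigma=1.52, rho=1000.0):
            """Pointwise SAR (W/kg) for a weighted superposition of antenna fields.

            e_fields : one complex 3D field map per antenna (V/m)
            weights  : complex weighting coefficients, one per antenna
            sigma    : tissue conductivity (S/m); rho : tissue density (kg/m^3)
            """
            e_total = sum(w * e for w, e in zip(weights, e_fields))
            return sigma * np.abs(e_total) ** 2 / rho

        # toy data: two antennas on a 20x20x20 grid, arbitrary amplitude/phase weights
        rng = np.random.default_rng(0)
        shape = (20, 20, 20)
        fields = [rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
                  for _ in range(2)]
        sar = local_sar(fields, weights=[1.0, 0.5j])
        peak = sar.max()   # the 10 g mass average would be computed from here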

  5. Effects of measurement procedure and equipment on average room acoustic measurements

    DEFF Research Database (Denmark)

    Gade, Anders Christian; Bradley, J S; Siebein, G W

    1993-01-01

    In some of the halls, measurements were repeated using the procedures of the other teams to make it possible to separate the effects of different equipment and different procedures. The paper will present position-averaged results from the three teams and will discuss reasons for the differences observed...

  6. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Science.gov (United States)

    Boudaoud, S.; Rix, H.; Meste, O.; Heneghan, C.; O'Brien, C.

    2007-12-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.

  7. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    Heneghan C

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.

  8. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
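
    A minimal numerical sketch of plain integral shape averaging for nonnegative signals, the idea underlying CISA: the mean shape is the signal whose inverse normalized integral is the average of the individual inverse normalized integrals. The joint estimation of affine time parameters (the "corrected" part of CISA) is omitted, and the function names and test data are illustrative:

        import numpy as np

        def isa_mean(signals, t, n_levels=400):
            """Mean-shape signal obtained by averaging the inverses F_i^{-1}
            of the normalized running integrals F_i of each signal."""
            u = np.linspace(1e-3, 1.0 - 1e-3, n_levels)    # common integral levels
            dt = t[1] - t[0]
            t_of_u, areas = [], []
            for s in signals:
                F = np.cumsum(s) * dt                      # running integral
                areas.append(F[-1])
                t_of_u.append(np.interp(u, F / F[-1], t))  # t_i(u) = F_i^{-1}(u)
            t_bar = np.mean(t_of_u, axis=0)                # averaged inverse integral
            m = np.mean(areas) / np.gradient(t_bar, u)     # mean-shape amplitude
            return t_bar, m

        # toy usage: two Gaussian "P waves" related by an affine time change
        t = np.linspace(0.0, 1.0, 500)
        p1 = np.exp(-0.5 * ((t - 0.45) / 0.05) ** 2)
        p2 = np.exp(-0.5 * ((t - 0.55) / 0.08) ** 2)
        t_bar, m = isa_mean([p1, p2], t)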

  9. Robust numerical methods for conservation laws using a biased averaging procedure

    Science.gov (United States)

    Choi, Hwajeong

    In this thesis, we introduce a new biased averaging procedure (BAP) and use it in developing high resolution schemes for conservation laws. Systems of conservation laws arise in a variety of physical problems, such as the Euler equations of compressible flow, magnetohydrodynamics, multicomponent flows, blast waves and the flow of glaciers. Many modern shock-capturing schemes are based on solution reconstruction by high order polynomial interpolation and time evolution by the solutions of Riemann problems. Due to the existence of discontinuities in the solution, the interpolating polynomial has to be carefully constructed to avoid possible oscillations near discontinuities. Compared to limiters and essentially non-oscillatory interpolations, the BAP is a simpler and more general way to approximate higher order derivatives of given data without introducing oscillations. For the solution of a system of conservation laws, we present a finite volume method which employs flux splitting and componentwise reconstruction of the upwind fluxes. A high order piecewise polynomial constructed using the BAP approximates the components of the upwind fluxes. This scheme requires neither characteristic decomposition nor a Riemann solver, offering easy implementation and a relatively small computational cost. More importantly, the BAP extends naturally to unstructured grids, as demonstrated through a cell-centered finite volume method with adaptive mesh refinement. A number of numerical experiments from various applications demonstrate the robustness and accuracy of this approach and show its potential for other practical applications.
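
    The thesis's exact BAP formula is not quoted in the abstract. As a generic illustration of the underlying idea, an average biased toward the smaller-magnitude argument so that data across a discontinuity are never amplified, here is a van Leer-style harmonic average of two one-sided slopes (a stand-in for, not a reproduction of, the BAP):

        import numpy as np

        def biased_average(a, b):
            """Average of two one-sided slopes, biased toward the smaller
            magnitude; returns 0 where the slopes disagree in sign (extrema,
            jumps), which suppresses oscillatory reconstructions."""
            ab = np.asarray(a) * np.asarray(b)
            den = np.asarray(a) + np.asarray(b)
            safe = np.where(den != 0.0, den, 1.0)          # avoid 0/0 warnings
            return np.where(ab > 0.0, 2.0 * ab / safe, 0.0)

        print(biased_average(1.0, 1.2))    # ~1.09, close to the arithmetic mean
        print(biased_average(1.0, -50.0))  # 0.0: no overshoot across a jump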

  10. Applying computer-based procedures in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro V. de; Carvalho, Paulo V.R. de; Santos, Isaac J.A.L. dos; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana], e-mail: mvitor@ien.gov.br, e-mail: paulov@ien.gov.br, e-mail: luquetti@ien.gov.br, e-mail: grecco@ien.gov.br; Bruno, Diego S. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola Politecnica. Curso de Engenharia de Controle e Automacao], e-mail: diegosalomonebruno@gmail.com

    2009-07-01

    Plant operation procedures are used to guide operators in coping with normal, abnormal or emergency situations in a process control system. Historically, plant procedures have been paper-based (PBP); with the digitalisation trend in these complex systems, computer-based procedures (CBPs) are being developed to support procedure use. This work briefly describes the research on CBPs at the Human-System Interface Laboratory (LABIHS). The emergency operation procedure EOP-0 of the LABIHS NPP simulator was implemented in the ImPRO CBP system. The ImPRO system was chosen for the test because it is available for download on the Internet. A preliminary operation test using the implemented procedure in the CBP system was carried out and the results were compared to operation using the PBP. (author)

  11. Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks

    Directory of Open Access Journals (Sweden)

    Shen-Chun Wu

    2003-01-01

    This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior observed. The experimental results clearly reveal that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
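
    A minimal Python sketch of the phase-averaging step itself: each velocity sample is tagged with the instantaneous disk phase, binned, and averaged bin by bin (the bin count and synthetic data are illustrative):

        import numpy as np

        def phase_average(phase, v, n_bins=36):
            """Average velocity samples conditioned on rotor phase (radians)."""
            edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
            idx = np.digitize(np.mod(phase, 2.0 * np.pi), edges) - 1
            return np.array([v[idx == k].mean() for k in range(n_bins)])

        # toy usage: a coherent three-lobed mode buried in noise is recovered
        rng = np.random.default_rng(1)
        ph = rng.uniform(0.0, 2.0 * np.pi, 20000)
        v = np.sin(3.0 * ph) + 0.5 * rng.standard_normal(ph.size)
        v_avg = phase_average(ph, v)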

  12. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...
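
    A common generic way to combine benchmark dose (BMD) estimates across candidate dose-response models is information-criterion weighting; a minimal sketch of that approach (the numbers are invented, and this is not necessarily the exact estimator of the article):

        import numpy as np

        def aic_weights(aic):
            """Akaike weights: w_i proportional to exp(-0.5 * (AIC_i - AIC_min))."""
            d = np.asarray(aic, dtype=float)
            w = np.exp(-0.5 * (d - d.min()))
            return w / w.sum()

        bmd = np.array([1.8, 2.3, 2.1])         # per-model BMD estimates (toy)
        w = aic_weights([102.4, 101.1, 104.9])  # per-model AIC scores (toy)
        bmd_ma = float(w @ bmd)                 # model-averaged benchmark dose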

  13. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
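
    The nonlinear Krylov accelerator cited above is in the same family as Anderson mixing for fixed-point iterations. A minimal generic sketch of such an accelerator applied to a coupling residual (a toy stand-in, not the authors' implementation; their subspace-retention and end-of-timestep correction extensions are omitted):

        import numpy as np

        def accelerated_fixed_point(g, x0, m=3, tol=1e-10, maxit=50):
            """Anderson-type acceleration of the fixed-point iteration x = g(x)."""
            x = np.asarray(x0, dtype=float).copy()
            xs, fs = [], []                           # iterate/residual histories
            for _ in range(maxit):
                f = g(x) - x                          # fixed-point residual
                if np.linalg.norm(f) < tol:
                    break
                xs.append(x.copy()); fs.append(f.copy())
                xs, fs = xs[-(m + 1):], fs[-(m + 1):]
                if len(fs) > 1:
                    dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
                    dX = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = x + f - (dX + dF) @ gamma     # accelerated update
                else:
                    x = x + f                         # plain Picard step to start
            return x

        # toy usage: converges to the fixed point of cos(x), approx 0.739085
        print(accelerated_fixed_point(np.cos, np.array([1.0])))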

  14. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling☆

    Science.gov (United States)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2012-01-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple

  15. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  16. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  17. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple

  18. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.

    Science.gov (United States)

    Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets

  19. Average-passage simulation of counter-rotating propfan propulsion systems as applied to cruise missiles

    Science.gov (United States)

    Mulac, Richard A.; Schneider, Jon C.; Adamczyk, John J.

    1989-01-01

    Counter-rotating propfan (CRP) propulsion technologies are currently being evaluated as cruise missile propulsion systems. The aerodynamic integration concerns associated with this application are being addressed through the computational modeling of the missile body-propfan flowfield interactions. The work described in this paper consists of a detailed analysis of the aerodynamic interactions between the control surfaces and the propfan blades through the solution of the average-passage equation system. Two baseline configurations were studied, the control fins mounted forward of the counter-rotating propeller and the control fins mounted aft of the counter-rotating propeller. In both cases, control fin-propfan separation distance and control fin deflection angle were varied.

  20. State-averaged Monte Carlo configuration interaction applied to electronically excited states

    CERN Document Server

    Coe, J P

    2014-01-01

    We introduce state-averaging into the method of Monte Carlo configuration interaction (SA-MCCI) to allow the stable and efficient calculation of excited states. We show that excited potential curves for H$_{3}$, including a crossing with the ground state, can be accurately reproduced using a small fraction of the FCI space. A recently introduced error measure for potential curves [J. P. Coe and M. J. Paterson, J. Chem. Phys., 137, 204108 (2012)] is shown to also be a fair approach when considering potential curves for multiple states. We demonstrate that potential curves for LiF using SA-MCCI agree well with the FCI results and the avoided crossing occurs correctly. The seam of conical intersections for CH$_{2}$ found by Yarkony [J. Chem. Phys., 104, 2932 (1996)] is used as a test for SA-MCCI and we compare potential curves from SA-MCCI with FCI results for this system for the first three triplet states. We then demonstrate the improvement from using SA-MCCI on the dipole of the $2$ $^{1}A_{1}$ state of carbo...

  1. 34 CFR 370.43 - What requirement applies to the use of mediation procedures?

    Science.gov (United States)

    2010-07-01

    (a) Each designated agency shall implement procedures designed to ensure that, to the maximum extent possible, good faith negotiations and mediation procedures...

  2. Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process

    Science.gov (United States)

    Motley, Albert E., III

    2000-01-01

    One of the key elements of successful project management is the establishment of the "right set of requirements": requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases, many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.

  3. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration.

    Science.gov (United States)

    Collignan, Bernard; Powaga, Emilie

    2014-11-01

    Risk assessment for indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because indoor radon concentrations can be highly variable over time. This measurement protocol is fairly reliable but can be limiting for radon risk management, particularly during a real estate transaction, owing to the duration of the measurement and the restriction of the measurement period. A previous field study defined a rapid methodology to characterize radon entry into dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics and in-situ characterization of indoor pollutant emission laws. Experimental results obtained for thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally agreed well with measured values. These results are encouraging and suggest that a procedure with a short measurement time can be used to characterize the long-term radon potential of dwellings. PMID:25011073

  4. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration.

    Science.gov (United States)

    Collignan, Bernard; Powaga, Emilie

    2014-11-01

    Risk assessment for indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because indoor radon concentrations can be highly variable over time. This measurement protocol is fairly reliable but can be limiting for radon risk management, particularly during a real estate transaction, owing to the duration of the measurement and the restriction of the measurement period. A previous field study defined a rapid methodology to characterize radon entry into dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics and in-situ characterization of indoor pollutant emission laws. Experimental results obtained for thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally agreed well with measured values. These results are encouraging and suggest that a procedure with a short measurement time can be used to characterize the long-term radon potential of dwellings.

  5. 21 CFR 1315.22 - Procedure for applying for individual manufacturing quotas.

    Science.gov (United States)

    2010-04-01

    § 1315.22 Procedure for applying for individual manufacturing quotas (Food and Drugs; Drug Enforcement Administration, Department of Justice; Individual Manufacturing Quotas). Any person who ... desires to manufacture a quantity of the chemical must apply on DEA Form 189 for a manufacturing quota...

  6. A Flexible Boundary Procedure for Hyperbolic Problems: Multiple Penalty Terms Applied in a Domain

    OpenAIRE

    Nordström, Jan; Abbas, Qaisar; Erickson, Brittany A.; Frenander, Hannes

    2014-01-01

    A new weak boundary procedure for hyperbolic problems is presented. We consider high order finite difference operators of summation-by-parts form with weak boundary conditions and generalize that technique. The new boundary procedure is applied near boundaries in an extended domain where data is known. We show how to raise the order of accuracy of the scheme, how to modify the spectrum of the resulting operator and how to construct non-reflecting properties at the boundaries. The new boundary...

  7. A Flexible Far Field Boundary Procedure for Hyperbolic Problems: Multiple Penalty Terms Applied in a Domain

    OpenAIRE

    Nordström, Jan; Abbas, Qaisar; A. Erickson, Brittany; Frenander, Hannes

    2013-01-01

    A new weak boundary procedure for hyperbolic problems is presented. We consider high order finite difference operators of summation-by-parts form with weak boundary conditions and generalize that technique. The new boundary procedure is applied at far field boundaries in an extended domain where data is known. We show how to raise the order of accuracy of the scheme, how to modify the spectrum of the resulting operator and how to construct non-reflecting properties at the boundaries. The new ...

  8. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT

    International Nuclear Information System (INIS)

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described, including a description of the high-voltage and front-end electronics associated with the chambers. Every procedure is described in detail and a list of the more common problems and possible solutions is given. This document can be considered a chamber test handbook for beginners. (Author) 3 refs

  9. Influence of the surface averaging procedure of the current density in assessing compliance with the ICNIRP low-frequency basic restrictions by means of numerical techniques

    Science.gov (United States)

    Zoppetti, N.; Andreuccetti, D.

    2009-08-01

    Although the calculation of the surface average of the low-frequency current density distribution over a cross-section of 1 cm² is required by the ICNIRP guidelines, no reference averaging algorithm is indicated, either in the ICNIRP guidelines or in Directive 2004/40/EC, which is based on them. The lack of a general standard algorithm that fulfils the ICNIRP guidelines' requirements is particularly critical in the perspective of the endorsement of Directive 2004/40/EC, since compliance with normative limits must refer to well-defined procedures. In this paper, two case studies are considered, in which the calculation of the surface average is performed using both a simplified approach widely used in the literature and an original averaging procedure. This analysis, aimed at quantifying the expected differences and singling out their sources, shows that the choice of the averaging algorithm represents an important source of uncertainty in the application of the guideline requirements.
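
    One concrete averaging algorithm, for illustration: slide a disk of area 1 cm² across the gridded current-density magnitude and take the mean inside the disk by convolution. This is just one of the possible choices whose differences the paper quantifies; the grid spacing and boundary handling here are assumptions:

        import numpy as np
        from scipy.signal import convolve2d

        def disk_average(j_mag, dx_cm):
            """Mean of |J| over a sliding disk of area 1 cm^2 on a 2D section.

            j_mag : 2D array of current-density magnitudes on a regular grid
            dx_cm : grid spacing in cm
            """
            r = np.sqrt(1.0 / np.pi)                  # radius of a 1 cm^2 disk
            n = int(np.ceil(r / dx_cm))
            y, x = np.mgrid[-n:n + 1, -n:n + 1] * dx_cm
            kernel = ((x ** 2 + y ** 2) <= r ** 2).astype(float)
            kernel /= kernel.sum()
            return convolve2d(j_mag, kernel, mode="same", boundary="symm")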

  10. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    Science.gov (United States)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

    Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered as a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time, and this lack of quality has in many cases been the reason for rejecting large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be subjected to several conditions or tests; data not flagged by any of the tests are released as plausible. In this work, a bibliographical survey of quality control tests for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (horizontal global and diffuse components of the solar radiation and the beam normal component) has been performed. The different tests have been grouped according to the variable and the averaging period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: range checks, which verify that values are within a specific range and come in two types, those based on extrema and those based on rare observations; step checks, aimed at detecting unrealistic jumps or stagnation in the time series; and consistency checks, which verify the relationship between two or more time series. The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic. They have been applied to ground measurements in several geographic locations, what
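
    A toy Python rendering of the three test families for a global horizontal irradiance (GHI) series (thresholds and variable names are illustrative, not values recommended by any standard):

        import numpy as np

        def qc_flags(ghi, dif, ghi_max=1400.0, step_max=400.0):
            """Flag samples failing range, step, or consistency checks.

            ghi, dif : global and diffuse horizontal irradiance (W/m^2)
            """
            ghi = np.asarray(ghi, dtype=float)
            dif = np.asarray(dif, dtype=float)
            range_bad = (ghi < 0.0) | (ghi > ghi_max)                 # range check
            step_bad = np.r_[False, np.abs(np.diff(ghi)) > step_max]  # step check
            cons_bad = dif > ghi                                      # consistency check
            return range_bad | step_bad | cons_bad   # True marks a suspect sample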

  11. 13 CFR 124.1010 - What procedures apply to disadvantaged status protests?

    Science.gov (United States)

    2010-01-01

    § 124.1010 What procedures apply to disadvantaged status protests? (Business Credit and Assistance; Small Business Administration; Certification and Protests Relating to Federal Small Disadvantaged Business Programs.)

  12. 18 CFR 284.502 - Procedures for applying for market-based rates.

    Science.gov (United States)

    2010-04-01

    § 284.502 Procedures for applying for market-based rates (Conservation of Power and Water Resources; Federal Energy Regulatory Commission; Natural Gas Policy Act of 1978 and Related Authorities; Applications for Market-Based Rates for Storage)...

  13. 21 CFR 1303.22 - Procedure for applying for individual manufacturing quotas.

    Science.gov (United States)

    2010-04-01

    § 1303.22 Procedure for applying for individual manufacturing quotas (Food and Drugs; Drug Enforcement Administration, Department of Justice; Quotas; Individual Manufacturing Quotas). Any person who is registered to manufacture any basic class of controlled substance...

  14. Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom

    Science.gov (United States)

    Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy

    2016-01-01

    The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. In particular, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…

  15. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    Science.gov (United States)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.
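
    Schematically, the generalized method of averaging removes the fast angle by averaging the osculating rates over one orbit; to first order in the small parameter \varepsilon (standard form, not the report's full recursive development):

        \dot{\bar a} = \varepsilon\,\bar F(\bar a) + O(\varepsilon^{2}), \qquad
        \bar F(\bar a) = \frac{1}{2\pi}\int_{0}^{2\pi} F(\bar a, \ell)\,d\ell

    where \bar a denotes the slowly varying (e.g., equinoctial) elements and \ell the fast angle; the short-period contributions removed by the average are recovered separately.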

  16. Computational Comminution and Its Key Technologies Applied to Materials Processing Procedure

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new concept named computational comminution is proposed in this paper. It differs from traditional studies of materials processing procedures, which are based on theoretical or experimental models, in that it is based on information models. Some key technologies applicable to the materials processing procedure, such as artificial neural networks, fuzzy sets, genetic algorithms and visualization technology, are also presented, and a methodology for fusing these technologies is studied. Application to the cement grinding process of a Horomill shows that the results in this paper are effective.

  17. Applying the conventional moving average filter for estimation of low radiation doses using EPR spectroscopy: Benefits and drawbacks

    Energy Technology Data Exchange (ETDEWEB)

    Maghraby, Ahmed M., E-mail: maghrabism@yahoo.com [National Institute of Standards (NIS), Radiation Dosimetry Department, Ministry of Scientific Research, Tersa Street, P.O. Box 136, Giza, Haram 12211 (Egypt); Physics Department, Faculty of Science and Humanities, Salman Bin AbdulAziz University, Alkharj (Saudi Arabia)

    2014-02-11

    Alanine/EPR is the most common dosimetry system for high radiation doses because of its high stability and wide linear response; however, the use of alanine in most medical applications still requires special, sophisticated methodologies and techniques in order to extend the alanine detection limit to low radiation doses. One such technique is digital processing of the acquired alanine spectra, enhancing useful components of the spectra while suppressing useless features. The impact of a simple moving average (MA) filter on alanine EPR spectra has been studied in terms of peak-to-peak height, peak-to-peak line width, and associated uncertainty. Three variants of the filter were investigated: the upward MA, central MA, and downward MA filters, and the effect of each on the peak position was studied for different values of the filter width. It was found that the MA filter always leads to a reduction in signal intensity and an increase in the line width of the central peak of the alanine spectrum. The peak position also changes for the upward MA and downward MA filters, while no significant changes were observed for the central MA filter. Uncertainties associated with the averaging process were evaluated and plotted versus the filter width, yielding a linear relationship. The filter width should be carefully selected in order to avoid probable distortion of the processed spectra while obtaining less noisy spectra with smaller associated uncertainties.
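
    A minimal numpy rendering of the three window placements studied (mapping "upward"/"downward" to trailing/leading windows is an assumption made for illustration):

        import numpy as np

        def moving_average(y, width, mode="central"):
            """Upward (trailing), central, or downward (leading) moving average."""
            y = np.asarray(y, dtype=float)
            kernel = np.ones(width) / width
            full = np.convolve(y, kernel, mode="full")  # length len(y)+width-1
            if mode == "central":       # window centred on each point (odd width)
                offset = (width - 1) // 2
            elif mode == "upward":      # current point and its predecessors
                offset = width - 1
            else:                       # "downward": current point and successors
                offset = 0
            return full[offset:offset + y.size]

        # smoothing always lowers and broadens a narrow peak, as the abstract notes
        spectrum = np.exp(-0.5 * ((np.arange(200) - 100) / 3.0) ** 2)
        smooth = moving_average(spectrum, width=9, mode="central")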

  18. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    structure evaluation by assessing the local identifiability characteristics of the parameters. Moreover, such a procedure should be generic to make sure it can be applied independently of the structure of the model. We hereby apply a numerical identifiability approach which is based on the work of Walter and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring. In contrast, the practical identifiability analysis revealed that high values of the forward rate parameter Vf led to identifiability problems. These problems were even more pronounced at higher substrate concentrations, which illustrates the importance of a proper experimental design to avoid...

  19. Containment integrity and leak testing. Procedures applied and experiences gained in European countries

    International Nuclear Information System (INIS)

    Containment systems are the ultimate safety barrier for preventing the escape of gaseous, liquid and solid radioactive materials that are produced in normal operation and not retained in process systems, and for holding back radioactive materials released by system malfunction or equipment failure. A primary element of the containment shell is therefore its leak-tight design. The report describes the containment concepts presently most used in European countries. The leak-testing procedures applied and the experience gained in their application are also discussed. The report refers more particularly to pre-operational testing, periodic testing and methods for extrapolating leak rates measured at test conditions to expected leak rates at calculated accident conditions. Current problems in periodic containment leak-rate testing are critically reviewed. The appendix to the report summarizes the regulations and specifications applied in the different member countries.

  20. The Safety Assessment of OPR-1000 for Station Blackout Applying Combined Deterministic and Probabilistic Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dong Gu; Ahn, Seung-Hoon; Cho, Dae-Hyung [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    The loss of all offsite and onsite AC electric power in a nuclear power plant is termed station blackout (SBO). It does not generally include the loss of available AC power to safety buses fed by station batteries through inverters or by alternate AC sources. Historically, risk analysis results have indicated that SBO was a significant contributor to overall core damage frequency. In this study, the safety assessment of the OPR-1000 nuclear power plant for an SBO accident, a typical beyond design basis accident (BDBA) and an important contributor to overall plant risk, is performed by applying the combined deterministic and probabilistic procedure (CDPP). In addition, the SBO risk at OPR-1000 is reevaluated by eliminating excessive conservatism in the existing PSA. The reference analysis showed that the CDF and CCDP did not meet the acceptable risk, confirming that the SBO risk should be reevaluated. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system meets the acceptable risk against SBO. In addition, it was demonstrated that the proposed CDPP is applicable to the safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.

  1. Validation procedures of software applied in nuclear instruments. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    The IAEA has supported the availability of well-functioning nuclear instruments in Member States for more than three decades. Some older or aged instruments are still being used and are still in good working condition; however, those instruments may not in all cases meet modern software requirements of the end-user. Therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet end-user demands. New, advanced software is applied not only in new instrumentation but often also in new and improved applications of modernized and/or refurbished instruments in many Member States, for which in a few cases the IAEA also provided support. Modern software applied in nuclear instrumentation plays a key role in safe operation and in the execution of commands in a user-friendly manner. Correct data handling and transfer has to be ensured, and additional features such as data visualization and interfacing to a PC for control and data storage are often included. To finalize the task, where new instrumentation which is not commercially available is used, or aged instruments are modernized/refurbished, the applied software has to be verified and validated. A Technical Meeting on 'Validation Procedures of Software Applied in Nuclear Instruments' was organized in Vienna, 20-23 November 2006, to discuss the verification and validation process of software applied to the operation and use of nuclear instruments. The presentations at the technical meeting included valuable information, which has been compiled and summarized in this publication and should be useful for technical staff in Member States when modernizing/refurbishing nuclear instruments. 22 experts in the field of modernization/refurbishment of nuclear instruments as well as users of applied software presented their latest results, and discussion sessions followed the presentations. This publication is the outcome of the deliberations during the meeting.

  2. Robust solution procedure for the discrete energy-averaged model on the calculation of 3D hysteretic magnetization and magnetostriction of iron–gallium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Tari, H., E-mail: tari.1@osu.edu; Scheidler, J.J., E-mail: scheidler.8@osu.edu; Dapino, M.J., E-mail: dapino.1@osu.edu

    2015-06-15

    A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data. - Highlights: • The discrete energy-averaged model for Galfenol is reformulated. • An analytical solution for 3D magnetostriction and magnetization is developed from eigenvalue decomposition. • Improved robustness is achieved. • An efficient optimization routine is developed to identify parameters from averaged hysteresis curves. • The effectiveness of the model is demonstrated against experimental data.
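
    The energy-averaging step at the heart of this model family can be sketched compactly: each easy crystallographic direction contributes a locally minimizing magnetization direction and energy, and the bulk response is their exponentially weighted average. A toy Python sketch of only that weighting (the per-easy-axis minimization by eigenvalue decomposition, the paper's actual contribution, is not reproduced):

        import numpy as np

        def energy_averaged(m_k, e_k, omega):
            """Weighted average m = sum_k w_k m_k with w_k ~ exp(-E_k / omega).

            m_k   : (K, 3) locally minimizing magnetization directions
            e_k   : (K,) corresponding minimum energies
            omega : smoothing energy scale of the discrete energy-averaged model
            """
            e = np.asarray(e_k, dtype=float)
            w = np.exp(-(e - e.min()) / omega)   # shifted for numerical stability
            w /= w.sum()
            return w @ np.asarray(m_k, dtype=float)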

  3. Quality control procedures applied to nuclear instruments. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    Quality control (QC) test procedures for nuclear instrumentation are important for assuring proper and safe operation of the instruments, especially for equipment related to radiological safety, human health and national safety. Correct measurement of radiation parameters must be ensured, i.e., accurate measurement of the number of radioactive events and counting times and, in some cases, accurate measurement of the radiation energy and the occurrence time of the nuclear events. There are several kinds of testing of nuclear instruments, for example type testing done by suppliers, acceptance testing made by the end users, quality control tests after repair, and quality assurance/quality control tests made by end users. All of these tests are in many cases based on practical guidelines or on the experience of the specialist, and the available standards on this topic also need to be adapted to specific instruments. The IAEA has provided nuclear instruments and supported the operational maintenance efforts of the Member States. Although nuclear instrumentation is continuously upgraded, some older or aged instruments are still in use and in good working condition. Some of these instruments may not, however, meet modern requirements of the end-user; therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet end-user demands. As a result, new instrumentation which is not commercially available, or modernized/refurbished instruments, need to be tested or verified with QC procedures to meet national or international certification requirements. A technical meeting on QC procedures applied to nuclear instruments was organized in Vienna from 23 to 24 August 2007. Existing and required QC test procedures necessary for the verification of operation and measurement of the main characteristics of nuclear instruments were the focus of discussion at this meeting. Presentations made at the technical meeting provided

  4. Evaluation of the BCR sequential extraction procedure applied for two unpolluted Spanish soils

    International Nuclear Information System (INIS)

    The BCR sequential extraction procedure has been applied to five samples from two unpolluted soils in southern Spain. Total concentrations of the different elements were calculated as the sum of the three BCR fractions plus the residue, each measured separately. In addition, a total analysis based on INAA or total-digestion techniques was performed on the same samples. BCR and total analysis agreed closely for As, Pb and Cd. For Cu, Co, Cr and Zn the comparison of the results did not provide definitive conclusions concerning the capability of BCR to measure total concentrations. On the other hand, in these cases, a certain correlation was found between the measured concentrations and some soil characteristics, especially the clay, organic-matter and CaCO3 contents. BCR proved incapable of providing accurate measurements for Ni.

  5. Simplified procedures for applying the polymerase chain reaction to routinely fixed paraffin wax sections.

    Science.gov (United States)

    Coates, P J; d'Ardenne, A J; Khan, G; Kangro, H O; Slavin, G

    1991-02-01

    The polymerase chain reaction was applied to the analysis of DNA contained in archival paraffin wax embedded material. DNA suitable for the reaction was obtained from these tissues by simple extraction methods, without prior dewaxing of the tissue sections. Compared with unfixed material, the reaction efficiency was compromised, so that an increased number of amplification cycles was required to produce equivalent amounts of amplified product. This in turn led to an increase in amplification artefacts, which can be minimised by a simple modification of the standard reaction. Amplification of relatively large DNA fragments was not always successful, and it seems prudent to bear this in mind when designing oligonucleotide primers for the amplification of archival material. The efficiency of the procedure can be improved by dividing the amplification cycles into two parts: this reduces the amount of reagent needed, is relatively simple and inexpensive, and can be performed in one working day.

  6. 42 CFR 59.6 - What procedures apply to assure the suitability of informational and educational material?

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false What procedures apply to assure the suitability of informational and educational material? 59.6 Section 59.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GRANTS GRANTS FOR FAMILY PLANNING SERVICES Project Grants for Family Planning Services § 59.6 What procedures...

  7. A diagnostic procedure for applying the social-ecological systems framework in diverse cases

    Directory of Open Access Journals (Sweden)

    Jochen Hinkel

    2015-03-01

    Full Text Available The framework for analyzing the sustainability of social-ecological systems (the SES framework) of Elinor Ostrom is a multitier collection of concepts and variables that have proven to be relevant for understanding outcomes in diverse SES. The first tier of this framework includes the concepts resource system (RS) and resource units (RU), which are then further characterized through lower-tier variables such as clarity of system boundaries and mobility. The long-term goal of framework development is to derive conclusions about which combinations of variables explain outcomes across diverse types of SES. This will only be possible if the concepts and variables of the framework can be made operational unambiguously for the different types of SES, which, however, remains a challenge. Reasons for this are that case studies examine other types of RS than those for which the framework was developed, or consider RS for which different actors obtain different kinds of RU. We explore these difficulties and relate them to antecedent work on common-pool resources and public goods. We propose a diagnostic procedure which resolves some of these difficulties by establishing a sequence of questions that facilitate the step-wise and unambiguous application of the SES framework to a given case. The questions relate to the actors benefiting from the SES, the collective goods involved in the generation of those benefits, and the action situations in which the collective goods are provided and appropriated. We illustrate the diagnostic procedure for four case studies in the context of irrigated agriculture in New Mexico, common property meadows in the Swiss Alps, recreational fishery in Germany, and energy regions in Austria. We conclude that the current SES framework has limitations when applied to complex, multiuse SES, because it does not sufficiently capture the actor interdependencies introduced through RS and RU characteristics and dynamics.

  8. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    Science.gov (United States)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with mesh adaptive direct search and real-coded genetic algorithms. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
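
    As a hedged sketch of the brute-force enumeration baseline mentioned above (using statsmodels, with an invented ARMA(2,1) series, a small (p, q) grid and AIC-only scoring, none of which reproduce the article's experiments):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

# Simulate a stationary, invertible ARMA(2,1) series; the lag
# polynomials follow the statsmodels sign convention [1, -phi1, -phi2].
np.random.seed(1)
ar = [1.0, -0.6, 0.2]
ma = [1.0, 0.4]
y = arma_generate_sample(ar, ma, nsample=400)

# Brute-force enumeration over the integer structure (p, q): every
# candidate is fitted by Kalman-filter maximum likelihood and scored
# by AIC; the best model is the minimum over the grid.
best = None
for p in range(4):
    for q in range(4):
        try:
            res = ARIMA(y, order=(p, 0, q)).fit()
        except Exception:
            continue  # skip candidates that fail to converge
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)
print("best (AIC, p, q):", best)
```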

  9. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    Science.gov (United States)

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  10. GRUKON - A package of applied computer programs system input and operating procedures of functional modules

    International Nuclear Information System (INIS)

    This manual describes a software package for the production of multigroup neutron cross-sections from evaluated nuclear data files. It presents the information necessary for running the program's functional modules, including the operating procedures of the program, the data input, the macrocommand language, and the assignment of the system's procedures. The manual also presents the methodology used in the coding of the individual modules: the rules, the syntax, and the structure of the procedures. An example of the application of the data-processing module is also presented. (author)

  11. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  12. 41 CFR 101-6.2106 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true What procedures apply to the selection of programs and activities under these regulations? 101-6.2106 Section 101-6.2106 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY...

  13. Neutron resonance averaging

    International Nuclear Information System (INIS)

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  14. Calculation of the information content of retrieval procedures applied to mass spectral data bases

    NARCIS (Netherlands)

    Marlen, G. van; Dijkstra, Auke; Klooster, H.A. van 't

    1979-01-01

    A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity o

  15. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
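
    For readers unfamiliar with the DA-SVD reference method, the following sketch deconvolves a simulated concentration curve with a gamma-variate AIF via truncated SVD. All shapes, noise levels and the 20% singular-value threshold are illustrative assumptions and do not reproduce the study's settings.

```python
import numpy as np

dt, n = 1.0, 60
t = np.arange(n) * dt

# Gamma-variate arterial input function (assumed shape parameters).
t0, alpha, beta = 5.0, 3.0, 1.5
aif = np.where(t > t0, ((t - t0) ** alpha) * np.exp(-(t - t0) / beta), 0.0)

# Ground truth: exponential residue function R(t) = exp(-t/MTT), CBF = f.
f, mtt = 0.01, 4.0
R = np.exp(-t / mtt)
C = dt * np.convolve(aif, f * R)[:n]                 # tissue concentration
C += 0.001 * np.random.default_rng(0).standard_normal(n)  # additive noise

# Lower-triangular convolution matrix of the AIF, inverted by truncated
# SVD: singular values below a fraction of the largest are discarded to
# stabilise the deconvolution.
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                   for i in range(n)])
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.2 * s[0], 1.0 / s, 0.0)
fR = Vt.T @ (s_inv * (U.T @ C))                      # estimate of f * R(t)
print("true CBF:", f, " estimated CBF:", fR.max())   # R(0) = 1, so max ~ f
```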

  16. Quaternion Averaging

    Science.gov (United States)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
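
    The scalar-weighted case reduces to an eigenvalue problem: the optimal average is the dominant eigenvector of the weighted outer-product matrix M = sum_i w_i q_i q_i^T. A minimal numpy sketch (the weights and test quaternions are invented):

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted quaternion average as the eigenvector of the largest
    eigenvalue of M = sum_i w_i q_i q_i^T.  Because M is quadratic in
    q, the result is immune to the sign ambiguity q ~ -q."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        q = q / np.linalg.norm(q)
        M += w * np.outer(q, q)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]          # dominant eigenvector

# Two nearly aligned unit quaternions, one with flipped sign.
q1 = np.array([0.0, 0.0, 0.0, 1.0])
q2 = -np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])
print(average_quaternion([q1, q2], [0.5, 0.5]))
```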

  17. Sequential procedure for the design of checklists applied to the patient safety

    Directory of Open Access Journals (Sweden)

    Pardal-Refoyo JL

    2014-07-01

    Full Text Available Introduction: Checklists are cognitive mnemonic aids that guide the performance of complex tasks under stress or fatigue, reduce errors of omission, and serve to identify critical incidents. There is a lack of specific methodological guidance for their development. Objective: The aim of the study was to design a structured process for the development of checklists applied to patient safety (PS). Material and methods: Systematic review. Ten papers were selected: five related to the structure of checklists, three related to PS research methods (root cause analysis, RCA, and failure mode and effects analysis, FMEA), one related to the construction of indicators, and one to consensus methods. Results: A sequential 15-step process was designed to aid the development of checklists applied to PS, collecting the elements proposed in the reviewed literature. Conclusions: The development of checklists applied to a particular PS process should follow a sequential model which includes the literature review, the RCA and FMEA methods, and consensus methods.

  18. Spatial Data Quality Control Procedure applied to the Okavango Basin Information System

    Science.gov (United States)

    Butchart-Kuhlmann, Daniel

    2014-05-01

    Spatial data is a powerful form of information, capable of providing insights of great interest and tremendous use to a variety of users. However, much like other data representing the 'real world', its precision and accuracy must be high for the results of data analysis to be deemed reliable and thus applicable to real-world projects and undertakings. The spatial data quality control (QC) procedure presented here was developed as the topic of a Master's thesis, in the sphere of, and using data from, the Okavango Basin Information System (OBIS), itself a part of The Future Okavango (TFO) project. The aim of the QC procedure was to form the basis of a method for determining the quality of spatial data relevant for application to hydrological, solute, and erosion transport modelling using the Jena Adaptable Modelling System (JAMS). As such, the quality of all data present in OBIS classified under the topics of elevation, geoscientific information, or inland waters was evaluated. Now that the initial data quality has been evaluated, efforts are underway to correct the errors found, thus improving the quality of the dataset.

  19. Current LC-MS methods and procedures applied to the identification of new steroid metabolites.

    Science.gov (United States)

    Marcos, Josep; Pozo, Oscar J

    2016-09-01

    The study of the metabolism of steroids has a long history; from the first characterizations of the major metabolites of steroidal hormones in the pre-chromatographic era, to the latest discoveries of new forms of excretion. The introduction of mass spectrometers coupled to gas chromatography at the end of the 1960s represented a major breakthrough for the elucidation of new metabolites. In the last two decades, this technique has been complemented by the use of liquid chromatography-mass spectrometry (LC-MS). In addition to becoming fundamental in clinical steroid determinations due to its excellent specificity, throughput and sensitivity, LC-MS has emerged as an exceptional tool for the discovery of new steroid metabolites. The aim of the present review is to provide an overview of the current LC-MS procedures used in the quest for novel metabolic products of steroidal hormones and exogenous steroids. Several aspects regarding LC separations are first outlined, followed by a description of the key processes that take place in the mass spectrometric analysis, i.e. the ionization of the steroids in the source and the fragmentation of the selected precursor ions in the collision cell. The different analyzers and approaches employed, together with representative examples of each, are described. Special emphasis is placed on triple quadrupole analyzers (LC-MS/MS), since they are the most commonly employed. Examples of the use of precursor ion scan, neutral loss scan and theoretical selected reaction monitoring strategies are also explained. PMID:26709140

  20. HIGH QUALITY ENVIRONMENTAL PRINCIPLES APPLIED TO THE ARCHITECTONIC DESIGN SELECTION PROCEDURE: THE NUTRE LAB CASE

    Directory of Open Access Journals (Sweden)

    Claudia Barroso Krause

    2012-06-01

    Full Text Available The need to produce more sustainable buildings has been influencing design decisions all over the world. It is therefore imperative, in Brazil, to develop strategies and methods to aid decision making during the design process, focused on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design for a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of the Reference Guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria), and assistance to the members of the jury during the judging phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® certification (Haute Qualité Environnementale) and the method suggested by the ADDENDA® architecture office. The success of this experience points to the possibility of future application in similar cases.

  1. Implementation of procedures for kilovoltage evaluation applied to dental X ray system

    International Nuclear Information System (INIS)

    In this work, measurements were performed in order to evaluate the accuracy and precision of the voltage applied to an X-ray tube, as well as its variation with distance. A dental X-ray system with a nominal voltage of 70 kV was used, together with a portable digital kV meter calibrated by the IEE/USP. The kV results showed a variation of 9.7% in accuracy and 1.6% in precision. The results obtained for the distance variation showed a deviation of only 0.6% in the kVp values obtained. The results are in accordance with the minimum values recommended by Portaria Federal 453 of the Ministerio da Saude. (author)

  2. Porous chitosan scaffold cross-linked by chemical and natural procedure applied to investigate cell regeneration

    International Nuclear Information System (INIS)

    Highlights: ► Polymeric scaffolds, made from chitosan-based films fixed by a chemical (citrate) or natural (genipin) method, were developed. ► Nano-indentation with a constant harmonic frequency was applied to porous scaffolds to explore their surface mechanics. ► The relationship between surface mechanical properties and cell-surface interactions of scaffold materials was demonstrated. ► Porous scaffolds cross-linked by genipin showed adequate cell affinity, non-toxicity, and suitable mechanical properties. - Abstract: Porous chitosan scaffolds are used for tissue engineering and drug delivery, but chitosan is limited as a scaffold material by its mechanical weakness, which restrains cell adhesion on the surface. In this study, a chemical reagent (citrate) and a natural reagent (genipin) were used as cross-linkers for the formation of chitosan-based films. A nanoindentation technique with a continuous stiffness measurement system was applied to the porous scaffold surface to examine its characteristic modulus and nanohardness. The characteristic modulus of a genipin-cross-linked chitosan surface is ≈2.325 GPa, which is significantly higher than that of an uncross-linked one (≈1.292 GPa). The cell-scaffold surface interaction was assessed. The cell morphology and the results of an MTS assay of 3T3-fibroblast cells on a genipin-cross-linked chitosan surface indicate that the enhancement of mechanical properties promoted cell adhesion and proliferation on the modified porous scaffold surface. The pore size and mechanical properties of porous chitosan film can be tuned for specific applications such as tissue regeneration.

  3. Applying radiation safety standards in diagnostic radiology and interventional procedures using x rays

    International Nuclear Information System (INIS)

    The International Basic Safety Standards for Protection against Ionizing Radiation and for the Safety of Radiation Sources (BSS) cover the application of ionizing radiation for all practices and interventions and are, therefore, basic and general in nature. Users of radiation sources have to apply those basic requirements to their own particular practices. That requires a degree of 'interpretation' by the user, which can result in varying levels of regulatory compliance and inconsistencies between applications of the BSS to similar practices. In this context, the Preamble of the BSS states that: 'The [regulatory body] may need to provide guidance on how certain regulatory requirements are to be fulfilled for various practices, for example in regulatory guideline documents.' In order to guide the user to achieve a good standard of protection and to achieve a consistent national approach to licensing and inspection, some countries have developed practice specific regulatory guidance, while others have practice specific regulations. National regulatory guidance is tailored to a country's own legislation and regulations for obvious reasons. This can lead to problems if the guidance is used in other States without appropriate modification to take local requirements into account. There would appear, therefore, to be scope for producing internationally harmonized guidance, while bearing in mind that the ultimate responsibility for the regulatory documents rests with the State. Some regions have taken the initiative of preparing guidance to facilitate the regional harmonization of regulatory control of certain common practices (e.g. radiology). In particular, it is felt that States participating in the IAEA's technical cooperation Model Project on Upgrading Radiation and Waste Safety Infrastructure would benefit significantly from the availability of practice specific guidance. Member States could then more readily develop their own guidance tailored to their own

  4. Offshore wind farm siting procedures applied offshore of Block Island, Rhode Island

    Science.gov (United States)

    O'Reilly, Christopher M.

    land. The REZ area is chosen as a test site for the algorithm, and an optimal layout for the 5 turbines is found and discussed. Similarly, the FAA tool is applied to the Block Island airport, demonstrating the complexity of the FAA exclusionary area and defining the limits of the exclusionary areas. The FAA regulation model is a geometric model in which all major (FAA) regulations within RI and the RI topography are embedded. The user specifies the dimensions of the proposed turbines and an airport of interest, and a map of exclusionary zones specific to the turbine height and the rules applying to that airport is generated. The model is validated for the entire state of Rhode Island. The micro-siting model finds the optimum placement of each turbine for a given number of turbines within an area. It includes the aerodynamic constraints (loss in wind speed within the wake of a turbine) associated with the deployment of arrays of turbines and the cable interconnection cost. It is combined with the technical, ecological, and social constraints used in the RIOSAMP macro-siting tool to provide a comprehensive micro-siting tool. In the optimization algorithm, a simple wake model and a turbine-clustering algorithm are combined with the WIFSI in an objective function; the objective function is optimized with a genetic algorithm (GA). A sketch of this kind of objective follows.
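
    The record does not publish its wake model or WIFSI weighting, so the sketch below substitutes a classical Jensen top-hat wake and a crude cable-length penalty to show the shape of the layout objective a GA would optimize; every constant is an assumption.

```python
import numpy as np

D, k, Ct = 100.0, 0.05, 0.8   # rotor diameter, wake decay, thrust coeff.

def wind_speed_at(xy, u0=10.0):
    """Free-stream wind along +x, reduced by upstream turbine wakes
    (Jensen model; deficits combined multiplicatively for simplicity)."""
    u = np.full(len(xy), u0)
    for i, (xi, yi) in enumerate(xy):
        for j, (xj, yj) in enumerate(xy):
            dx = xi - xj
            if j == i or dx <= 0:
                continue                    # j is not upstream of i
            r_wake = D / 2 + k * dx         # linearly expanding wake radius
            if abs(yi - yj) < r_wake:       # i lies inside j's wake
                deficit = (1 - np.sqrt(1 - Ct)) / (1 + 2 * k * dx / D) ** 2
                u[i] *= (1 - deficit)
    return u

def objective(xy):
    """Energy proxy (u^3) minus a crude cable-length cost term."""
    power = np.sum(wind_speed_at(xy) ** 3)
    cable = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    return power - 0.1 * cable

layout = np.array([[0, 0], [400, 150], [800, -150], [1200, 0], [1600, 200]])
print("objective for 5-turbine layout:", objective(layout))
```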

  5. Statistical near-real-time accountancy procedures applied to AGNS (Allied General Nuclear Services) minirun data using PROSA

    Energy Technology Data Exchange (ETDEWEB)

    Beedgen, R.

    1988-03-01

    The computer program PROSA (PROgram for Statistical Analysis of near-real-time accountancy data) was developed as a tool for applying statistical test procedures to a sequence of materials balance results in order to detect losses of material. First applications of PROSA to model facility data and real plant data showed that PROSA is also usable as a tool for process or measurement control. To deepen experience with the application of PROSA to real data from bulk-handling facilities, we applied it to uranium data from the Allied General Nuclear Services miniruns, where accountancy data were collected on a near-real-time basis. Minirun 6 in particular was considered, and the pulsed columns were chosen as the materials balance area. The structures of the measurement models for flow-sheet data and actual operation data are compared, and methods are studied to reduce the error in inventory measurements of the columns.

  6. Radiochromic film for dosimetric measurements in radiation shielding composites synthesized for application in high-dose radiology procedures

    Energy Technology Data Exchange (ETDEWEB)

    Fontainha, C. C. P. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Baptista N, A. T.; Faria, L. O., E-mail: crissia@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    Full text: Medical radiology offers great benefits to patients. However, although specific high-dose procedures, such as fluoroscopy, interventional radiology and computed tomography (CT), make up a small percentage of imaging procedures, they contribute significantly to the population dose. Patients may suffer tissue damage. The probability of incidence of deterministic effects depends on the type of procedure performed, the exposure time, and the dose applied to the irradiated area. Calibrated radiochromic films can identify the size and distribution of the radiation fields and measure dose intensities. Radiochromic films are sensitive to doses ranging from 0.1 to 20 cGy and have the same response for X-ray effective energies ranging from 20 to 100 keV. New radiation-attenuating materials have been widely investigated, resulting in reduced entrance skin dose. In this work, Bi2O3 and ZrO2:8% Y2O3 composites were obtained by mixing them into a P(VDF-TrFE) copolymer matrix by the casting method and then characterized by FTIR. Dosimetric measurements were obtained with XR-QA2 Gafchromic radiochromic films. In this setup, one radiochromic film is directly exposed to the X-ray beam and another measures the attenuated beam; both were exposed to an absorbed dose of 10 mGy of RQR5 beam quality (70 kV X-ray beam). Under the same conditions, irradiated XR-QA2 films were stored and scanned in order to obtain a more reliable result. The attenuation factors, evaluated by the XR-QA2 radiochromic films, indicate that both composites are good candidates for use as patient radiation shielding in high-dose medical procedures. (Author)

  7. Methods and procedures to apply Probabilistic Safety Assessment (PSA) techniques to the cobalt-therapy process: Cuban experience

    International Nuclear Information System (INIS)

    This paper presents the results of a Probabilistic Safety Analysis (PSA) of the cobalt therapy process, performed as part of the International Atomic Energy Agency's Coordinated Research Project (CRP) to Investigate Appropriate Methods and Procedures to Apply Probabilistic Safety Assessment (PSA) Techniques to Large Radiation Sources. The primary methodological tools used in the analysis were Failure Modes and Effects Analysis (FMEA), event trees and fault trees. These tools were used to evaluate occupational, public and medical exposures during cobalt therapy treatment. The emphasis of the study was on the radiological protection of patients. During the course of the PSA, several findings were analysed concerning the cobalt treatment process. Concerning the probabilities of undesired events, the lowest exposure probabilities correspond to public exposures during the treatment process (Z21), around 10^-10 per year, with worker exposures (Z11) around 10^-4 per year. Regarding the patient, the Z33 probabilities (undesired dose to normal tissue) and Z34 probabilities (part of the target volume not irradiated) prevail. Patient accidental exposures are also classified in terms of the extent to which the error is likely to affect individual treatments, individual patients, or all the patients treated on a specific unit. Sensitivity analyses were performed to determine the influence of certain tasks or critical stages on the results. As a conclusion, the study establishes that PSA techniques may effectively and reasonably determine the risk associated with the cobalt therapy treatment process, though there are some weaknesses in their methodological application to this kind of study, requiring further research. These weaknesses are due to the fact that traditional PSA has mainly been applied to complex hardware systems designed to operate with a high automation level, whilst cobalt therapy treatment is a relatively simple hardware system with a
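
    As a reminder of the fault-tree arithmetic underlying such probability estimates, here is a deliberately tiny example with invented basic-event probabilities; these are not the Z11/Z21/Z33/Z34 values reported by the study.

```python
# AND/OR gate algebra for independent basic events (hypothetical values).
p_interlock_fails = 1e-3   # door interlock fails on demand
p_procedure_skip = 1e-2    # operator skips the survey procedure
p_monitor_fails = 5e-3     # area radiation monitor fails

# AND gate: exposure requires interlock failure AND a skipped procedure.
p_entry_during_beam = p_interlock_fails * p_procedure_skip

# OR gate for independent events: 1 - product of the complements.
p_top_event = 1 - (1 - p_entry_during_beam) * (1 - p_monitor_fails)

print(f"P(entry during beam) = {p_entry_during_beam:.2e}")
print(f"P(top event, with monitor branch) = {p_top_event:.2e}")
```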

  8. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    OpenAIRE

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images with voxel resampling and added amplitude deviation, with 15 repetitions. Signal-to...

  9. Modelling spatial heteroskedasticity by volatility modulated moving averages

    OpenAIRE

    Nguyen, Michele; Veraart, Almut E. D.

    2016-01-01

    Spatial heteroskedasticity refers to stochastically changing variances and covariances in space. Such features have been observed in, for example, air pollution and vegetation data. We study how volatility modulated moving averages can model this by developing theory, simulation and statistical inference methods. For illustration, we also apply our procedure to sea surface temperature anomaly data from the International Research Institute for Climate and Society.
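
    A minimal simulation sketch of a (discretized) volatility modulated moving average X(t) = sum over s of g(t-s) sigma(s) dW(s), with an exponential-OU volatility and a gamma kernel chosen purely for illustration; the paper's kernel and inference methods are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
n, dt = 1000, 0.01
t = np.arange(n) * dt

# Stochastic volatility: exponential of an Ornstein-Uhlenbeck process.
ou = np.zeros(n)
for i in range(1, n):
    ou[i] = ou[i - 1] - 0.5 * ou[i - 1] * dt \
            + 0.3 * np.sqrt(dt) * rng.standard_normal()
sigma = np.exp(ou)

# Gamma kernel g(u) = u * exp(-2u), modulated noise increments dW.
g = t * np.exp(-2.0 * t)
dW = np.sqrt(dt) * rng.standard_normal(n)
X = np.convolve(g, sigma * dW)[:n]   # X(t_i) ~ sum_j g(t_i - t_j) sigma_j dW_j

# The stochastically varying variance is visible across the sample.
print("variance, first half:", X[:n // 2].var(),
      " second half:", X[n // 2:].var())
```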

  10. A Unique Procedure to Identify Cell Surface Markers Through a Spherical Self-Organizing Map Applied to DNA Microarray Analysis.

    Science.gov (United States)

    Sugii, Yuh; Kasai, Tomonari; Ikeda, Masashi; Vaidyanath, Arun; Kumon, Kazuki; Mizutani, Akifumi; Seno, Akimasa; Tokutaka, Heizo; Kudoh, Takayuki; Seno, Masaharu

    2016-01-01

    To identify cell-specific markers, we designed a DNA microarray platform with oligonucleotide probes for human membrane-anchored proteins. Human glioma cell lines were analyzed using the microarray and compared with normal and fetal brain tissues. For the microarray analysis, we employed a spherical self-organizing map, a clustering method suitable for converting multidimensional data into two-dimensional data and displaying the relationships on a spherical surface. Based on the gene expression profile, the cell surface characteristics were successfully mirrored onto the spherical surface, thereby distinguishing normal brain tissue from the disease model based on the strength of gene expression. The clustered glioma-specific genes were further analyzed by a polymerase chain reaction procedure and immunocytochemical staining of glioma cells. Our platform and the accompanying procedure successfully categorized the genes coding for cell surface proteins that are specific to glioma cells. Our assessment demonstrates that a spherical self-organizing map is a valuable tool for identifying cell surface markers and can be employed in marker discovery studies for the treatment of cancer.

  11. Objectively-assessed outcome measures: a translation and cross-cultural adaptation procedure applied to the Chedoke McMaster Arm and Hand Activity Inventory (CAHAI)

    Directory of Open Access Journals (Sweden)

    Hahn Sabine

    2010-11-01

    Full Text Available Abstract Background: Standardised translation and cross-cultural adaptation (TCCA) procedures are vital to describe language translation and cultural adaptation, and to evaluate quality factors of transformed outcome measures. No TCCA procedure for objectively-assessed outcome (OAO) measures exists. Furthermore, no official German version of the Canadian Chedoke Arm and Hand Activity Inventory (CAHAI) is available. Methods: An eight-step TCCA procedure for OAO measures (TCCA-OAO) was developed based on the existing TCCA procedure for patient-reported outcomes. The TCCA-OAO procedure was applied to develop a German version of the CAHAI (CAHAI-G). Inter-rater reliability of the CAHAI-G was determined through video rating of the CAHAI-G. Validity of the CAHAI-G was evaluated using the Chedoke-McMaster Stroke Assessment (CMSA). All ratings were performed by trained, independent raters. In a cross-sectional study, patients were tested within 31 hours after the initial CAHAI-G scoring for their motor function level, using the arm and hand subscales of the CMSA. Inpatients and outpatients of the occupational therapy department who had experienced a cerebrovascular accident or an intracerebral haemorrhage were included. Results: Performance of 23 patients (mean age 69.4, SD 12.9; six females; mean time since stroke onset: 1.5 years, SD 2.5 years) was assessed. High inter-rater reliability was found, with ICCs for the 4 CAHAI-G versions (13, 9, 8, 7 items) ranging between r = 0.96 and r = 0.99. Conclusions: The TCCA-OAO procedure was validated regarding its feasibility and applicability for objectively-assessed outcome measures. The resulting German CAHAI can be used as a valid and reliable assessment of bilateral upper limb performance in ADL in patients after stroke.

  12. TOXICITY CHARACTERISTIC LEACHING PROCEDURE APPLIED TO RADIOACTIVE SALTSTONE CONTAINING TETRAPHENYLBORATE: DEVELOPMENT OF A MODIFIED ZERO-HEADSPACE EXTRACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Crapse, K.; Cozzi, A.; Crawford, C.; Jurgensen, A.

    2006-09-30

    In order to assess the effect of extended curing times at elevated temperatures on saltstone containing Tank 48H waste, saltstone samples prepared as part of a separate study were analyzed for benzene using a modification of the United States Environmental Protection Agency (USEPA) method 1311 Toxicity Characteristic Leaching Procedure (TCLP). To carry out TCLP for volatile organic analytes (VOA), such as benzene, in the Savannah River National Laboratory (SRNL) shielded cells (SC), a modified TCLP Zero-Headspace Extractor (ZHE) was developed. The modified method was demonstrated to be acceptable in a side-by-side comparison with an EPA-recommended ZHE using nonradioactive saltstone containing tetraphenylborate (TPB). TCLP results for all saltstone samples tested containing TPB (both simulant and actual Tank 48H waste) were below the regulatory limit for benzene (0.5 mg/L). In general, higher curing temperatures corresponded to higher concentrations of benzene in the TCLP extract. The TCLP performed on the simulant samples cured under the most extreme conditions (3000 mg/L TPB in salt, cured at 95 °C for at least 144 days) resulted in benzene values greater than half the regulatory limit. Taking into account that benzene in the TCLP extract was measured on the same order of magnitude as the regulatory limit, that these experimental conditions may not be representative of actual curing profiles found in the saltstone vault, and that there is significant uncertainty associated with the precision of the method, it is recommended that, to increase confidence in TCLP results for benzene, the maximum curing temperature of saltstone be kept below 95 °C. At this time, no further benzene TCLP testing is warranted. Additional verification would be recommended, however, should future processing strategies result in significant changes to the salt waste composition in saltstone, as factors beyond the scope of this limited study may influence the decomposition of TPB in saltstone.

  13. A new interpretive procedure for whole rock U-Pb Systems applied to the Vredefort crustal profile

    Science.gov (United States)

    Welke, H.; Nicolaysen, L. O.

    1981-11-01

    Granulite grade Precambrian gneisses have usually undergone at least one period of strong U depletion. Whole rock U-Pb isotope studies can determine the time(s) of the severe depletion, and this work attempts to place such studies on a more rigorous footing. Two-stage U-Pb systems can be described in terms of one major, episodic differentiation into rocks with varying U/Pb ratios, while three-stage systems can be described by two such distinct episodes. Most of the Precambrian granulites that have been isotopically analyzed have histories too complex to be described as two-stage systems. However, it is demonstrated here that U-Pb data on whole rock suites can yield the complete U-Pb chemical history of a three-stage system (in terms of U/Pb ratios). For a suite of granulites, present-day 207Pb/204Pb and 206Pb/204Pb ratios and element concentration data allow these ratios to be calculated at a number of specific past times and plotted as an array. The degree of scatter in each of these `past arrays' is graphed as a function of time. The point of least scatter denotes the age of the end of stage 2 in the history of the system. The array slope and the dating of the end of stage 2 also permit the beginning of stage 2 to be calculated. All other parameters in the system (U and Pb concentrations, Pb isotopic ratios) can now be determined for each individual rock throughout its history. The new interpretive method also distinguishes sensitively among various kinds of uranium fractionation which may have operated during the differentiation episodes. It is applied here to uranium-depleted granulites in the deeper part of the Vredefort crustal profile. The times of the two fractionating episodes are calculated at ˜3860 and ˜2760 m.y., respectively. The Vredefort 3070 m.y. event, when geochemical systems in the upper half of the crystalline basement became permanently closed, evidently had little significance for the lower half of the crystalline basement. Some fundamental
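
    The 'past array' test lends itself to a compact numerical sketch: back-project present-day Pb isotope ratios with the standard U decay laws and scan for the time of least scatter. The decay constants below are the standard values; the four-rock data set is invented for illustration and the scatter measure (RMS residual about a fitted line) is one simple choice among several.

```python
import numpy as np

L238, L235, U_RATIO = 1.55125e-10, 9.8485e-10, 137.88  # /yr; 238U/235U

# Hypothetical whole-rock suite: measured 238U/204Pb and present ratios.
mu = np.array([8.0, 12.0, 5.0, 15.0])
r206_now = np.array([17.5, 19.8, 15.9, 21.6])
r207_now = np.array([15.5, 15.9, 15.2, 16.2])

def scatter_at(t):
    """RMS residual of a line fitted to the suite's back-projected
    207Pb/204Pb vs 206Pb/204Pb ratios at time t (years ago)."""
    r206 = r206_now - mu * (np.exp(L238 * t) - 1)
    r207 = r207_now - (mu / U_RATIO) * (np.exp(L235 * t) - 1)
    coeff = np.polyfit(r206, r207, 1)
    return np.sqrt(np.mean((r207 - np.polyval(coeff, r206)) ** 2))

times = np.linspace(0, 3.5e9, 351)          # scan 0 to 3.5 Gyr ago
scatters = [scatter_at(t) for t in times]
t_best = times[int(np.argmin(scatters))]
print(f"tightest past array at ~{t_best / 1e9:.2f} Gyr ago")
```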

  14. Averaging anisotropic cosmologies

    International Nuclear Information System (INIS)

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of anisotropic pressure-free models. Adopting the Buchert scheme, we recast the averaged scalar equations in Bianchi-type form and close the standard system by introducing a propagation formula for the average shear magnitude. We then investigate the evolution of anisotropic average vacuum models and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. The presence of nonzero average shear in our equations also allows us to examine the constraints that a phase of backreaction-driven accelerated expansion might put on the anisotropy of the averaged domain. We close by assessing the status of these and other attempts to define and calculate 'average' spacetime behaviour in general relativity

  15. Average-energy games

    OpenAIRE

    Bouyer, Patricia; Markey, Nicolas; Randour, Mickael; Larsen, Kim G.; Laursen, Simon

    2015-01-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this ...

  16. On the way towards a generalized entropy maximization procedure

    OpenAIRE

    Bagci, G. Baris; Tirnakli, Ugur

    2008-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ [0, 1], in contrast to the stationary distribution of the inverse power law ob...

  17. Averaged extreme regression quantile

    OpenAIRE

    Jureckova, Jana

    2015-01-01

    Various events in nature, economics and other areas force us to combine the study of extremes with regression and other methods. A useful tool for reducing the role of nuisance regression, while we are interested in the shape or tails of the basic distribution, is provided by the averaged regression quantile, and namely by the averaged extreme regression quantile. Both are weighted means of regression quantile components, with weights depending on the regressors. Our primary interest is ...

  18. On the Averaging Principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and interchangeability is O(\epsilon^2) equivalent to the outcome of the corresponding homogeneous model, where \epsilon is the level of heterogeneity. We then use this averaging pr...

  19. Average Angular Velocity

    OpenAIRE

    Van Essen, H.

    2004-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to th...

  20. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...

  1. Gauge-Invariant Average of Einstein Equations for finite Volumes

    CERN Document Server

    Smirnov, Juri

    2014-01-01

    For the study of cosmological backreaction an averaging procedure is required. In this work a covariant and gauge-invariant averaging formalism for finite volumes is developed. This averaging is applied to the scalar parts of Einstein's equations. For this purpose dust, as a physical laboratory, is coupled to the gravitating system. The goal is to study the deviation from the homogeneous universe and the impact of this deviation on the dynamics of our universe. Fields of physical observers are included in the studied system and used to construct a reference frame in which the averaging is performed without a formal gauge fixing. The derived equations resolve the question of whether backreaction is gauge dependent.

  2. Averaging anisotropic cosmologies

    CERN Document Server

    Barrow, J D; Barrow, John D.; Tsagas, Christos G.

    2006-01-01

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of pressure-free Bianchi-type models. Adopting the Buchert averaging scheme, we identify the kinematic backreaction effects by focussing on spacetimes with zero or isotropic spatial curvature. This allows us to close the system of the standard scalar formulae with a propagation equation for the shear magnitude. We find no change in the already known conditions for accelerated expansion. The backreaction terms are expressed as algebraic relations between the mean-square fluctuations of the models' irreducible kinematical variables. Based on these we investigate the early evolution of averaged vacuum Bianchi type $I$ universes and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. We also discuss the possibility of accelerated expansion due to ...

  3. Applying a new procedure to assess the controls on aggregate stability - including soil parent material and soil organic carbon concentrations - at the landscape scale

    Science.gov (United States)

    Turner, Gren; Rawlins, Barry; Wragg, Joanna; Lark, Murray

    2014-05-01

    Aggregate stability is an important physical indicator of soil quality and influences the potential for erosive losses from the landscape, so methods are required to measure it rapidly and cost-effectively. Previously we demonstrated a novel method for quantifying the stability of soil aggregates using a laser granulometer (Rawlins et al., 2012). We have developed our method further to mimic field conditions more closely by incorporating a procedure for pre-wetting aggregates (for 30 minutes on a filter paper) prior to applying the test. The first measurement of particle-size distribution is made on the water stable aggregates after these have been added to circulating water (aggregate size range 1000 to 2000 µm). The second measurement is made on the disaggregated material after the circulating aggregates have been disrupted with ultrasound (sonication). We then compute the difference between the mean weight diameters (MWD) of these two size distributions; we refer to this value as the disaggregation reduction (DR; µm). Soils with more stable aggregates, which are resistant to both slaking and mechanical breakdown by the hydrodynamic forces during circulation, have larger values of DR. We made repeated analyses of DR using an aggregate reference material (RM; a paleosol with well-characterised disaggregation properties) and used this throughout our analyses to demonstrate our approach was reproducible. We applied our modified technique - and also the previous technique in which dry aggregates were used - to a set of 60 topsoil samples (depth 0-15 cm) from cultivated land across a large region (10 000 km2) of eastern England. We wished to investigate: (i) any differences in aggregate stability (DR measurements) using dry or pre-wet aggregates, and (ii) the dominant controls on the stability of aggregates in water using wet aggregates, including variations in mineralogy and soil organic carbon (SOC) content, and any interaction between them. The sixty soil

  4. Average Angular Velocity

    CERN Document Server

    Essén, H

    2003-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to three parts: center of mass, rotational, plus the remaining internal energy relative to an optimally translating and rotating frame.
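
    The moment-of-inertia weighted average has a convenient closed form: the omega minimizing sum_i m_i |v_i - omega x r_i|^2 is omega = I^(-1) L, with I the inertia tensor and L the angular momentum about the center of mass. A short numpy sketch with arbitrary particle data:

```python
import numpy as np

m = np.array([1.0, 2.0, 1.5])
r = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.5], [-0.5, -0.5, 0.0]])
v = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.2], [0.5, -0.5, 0.0]])

# Shift positions and velocities to the center-of-mass frame.
r = r - np.average(r, axis=0, weights=m)
v = v - np.average(v, axis=0, weights=m)

# Inertia tensor I = sum_i m_i (|r_i|^2 Id - r_i r_i^T) and angular
# momentum L = sum_i m_i r_i x v_i; the average angular velocity is
# the solution of I omega = L.
I = sum(mi * ((ri @ ri) * np.eye(3) - np.outer(ri, ri))
        for mi, ri in zip(m, r))
L = sum(mi * np.cross(ri, vi) for mi, ri, vi in zip(m, r, v))
omega_avg = np.linalg.solve(I, L)
print("average angular velocity:", omega_avg)
```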

  5. On sparsity averaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2013-01-01

    Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.

  6. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT; Procedimientos de Control de Calildad de las Camaras de Muones del Experimento CMS Construidas en el CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Fouz, M. C.; Puerta Pelayo, J.

    2004-07-01

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high-voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the more common problems and possible solutions is given. This document can be considered a chamber test handbook for beginners. (Author) 3 refs.

  7. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong

  8. The averaging principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of \emph{differentiability} and \emph{interchangeability}, is $O(\epsilon^2)$ equivalent to the outcome of the corresponding homogeneous model, where $\epsilon$ is the level of heterogeneity. We then us...

  9. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

    Full Text Available For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, it seems that a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  10. Basics of averaging of the Maxwell equations

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2011-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from the microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is rarely properly discussed in university courses and the respective books, and up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of the type of material studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles. In the absence of such consistency between a particular model and the basic principles, the model cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for metamaterials, which is rather close to the case of compound materials but should include the magnetic response of the inclusi...

  11. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute the two-point Green functions of free particles.

  12. Some applications of stochastic averaging method for quasi Hamiltonian systems in physics

    Institute of Scientific and Technical Information of China (English)

    DENG MaoLin; ZHU WeiQiu

    2009-01-01

    Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.

  14. 40 CFR Appendix B to Part 76 - Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1, Boilers

    Science.gov (United States)

    2010-07-01

    Appendix B to Part 76, "Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1 Boilers" (40 CFR, Protection of Environment, Environmental Protection Agency, Air Programs, Acid Rain Nitrogen Oxides Reduction Program); section 1 of the appendix covers purpose and applicability.

  15. When is the 10% penalty of article 475-J of the Civil Procedure Code applied? A note on Superior Court of Justice precedent number 517

    Directory of Open Access Journals (Sweden)

    Felipe Scalabrin

    2015-06-01

    Full Text Available This article aims to clarify doubts about article 475-J of the Civil Procedure Code and its moment of incidence, particularly in view of the approval of precedent nº 517 of the Superior Court of Justice.

  16. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  17. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  18. Office of Inspector General report on Naval Petroleum Reserve Number 1, independent accountant's report on applying agreed-upon procedures

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    On October 6, 1997, the Department of Energy (DOE) announced it had agreed to sell all of the Government's interest in Naval Petroleum Reserve Number 1 (NPR-1) to Occidental Petroleum Corporation for $3.65 billion. This report presents the results of the independent certified public accountants' agreed-upon procedures work on the Preliminary Settlement Statement of the Purchase and Sale Agreement between DOE and Occidental. To fulfill their responsibilities, the Office of Inspector General contracted with the independent public accounting firm of KPMG Peat Marwick LLP to conduct the work for them, subject to their review. The work was done in accordance with the Statements on Standards for Attestation Engagements issued by the American Institute of Certified Public Accountants. As such, the independent certified public accountants performed only work that was agreed upon by DOE and Occidental. This report is intended solely for the use of DOE and Occidental and should not be used by those who have not agreed to the procedures and taken responsibility for the sufficiency of the procedures for their purposes. However, this report is a matter of public record, and its distribution is not limited. The independent certified public accountants identified over 20 adjustments to the Preliminary Settlement Statement that would result in a $10.8 million increase in the sale price.

  19. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    Science.gov (United States)

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  20. Discrete Averaging Relations for Micro to Macro Transition

    Science.gov (United States)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.

  1. Applying 'Evidence-Based Medicine' Theory to Interventional Radiology.Part 2: A Spreadsheet for Swift Assessment of Procedural Benefit and Harm

    International Nuclear Information System (INIS)

    AIM: To design a spreadsheet program for the rapid analysis of interventional radiology (IR) data, produced in local research or reported in the literature, using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. MATERIALS AND METHODS: Microsoft Excel™ was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described. These include the assessment of the adequacy of sample size in the detection of possible procedural complications. A contingency (2 x 2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are, for benefit, relative risk reduction, absolute risk reduction, and number needed to treat (NNT); for harm, relative risk, relative odds, and number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH in their application to individual patients. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. CONCLUSION: A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures. MacEneaney, P.M. and Malone, D.E
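
    A minimal sketch of the EBM arithmetic such a worksheet implements (standard formulas from a 2 x 2 outcome table; the counts below are hypothetical, and this is not the authors' spreadsheet):

    ```python
    # Standard EBM benefit/harm indices from a 2 x 2 table (hypothetical counts).
    import math

    events_treated, n_treated = 12, 100   # adverse outcomes in treated group
    events_control, n_control = 30, 100   # adverse outcomes in control group

    eer = events_treated / n_treated      # experimental event rate
    cer = events_control / n_control      # control event rate

    arr = cer - eer                       # absolute risk reduction
    rrr = arr / cer                       # relative risk reduction
    nnt = 1.0 / arr                       # number needed to treat
    # NNH is computed analogously (1 / risk increase) when treatment adds harm.

    rr = eer / cer                        # relative risk
    odds_ratio = (events_treated / (n_treated - events_treated)) / (
        events_control / (n_control - events_control))

    # Wald-type 95% confidence interval for the ARR
    se_arr = math.sqrt(eer * (1 - eer) / n_treated + cer * (1 - cer) / n_control)
    lo, hi = arr - 1.96 * se_arr, arr + 1.96 * se_arr

    print(f"ARR={arr:.3f} (95% CI {lo:.3f} to {hi:.3f}), RRR={rrr:.2f}, NNT={nnt:.1f}")
    print(f"RR={rr:.2f}, OR={odds_ratio:.2f}")
    ```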

  2. Bayesian Averaging over Many Dynamic Model Structures with Evidence on the Great Ratios and Liquidity Trap Risk

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2008-01-01

    A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is investigated ...

  3. Basics of averaging of the Maxwell equations for bulk materials

    OpenAIRE

    Chipouline, A.; Simovski, C.; Tretyakov, S.

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is rarely discussed properly in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some b...

  4. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function vanishes there, the classical averaging theory does not provide information about the periodic solution associated with the non-simple zero. Here we provide sufficient conditions under which the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we give two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
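
    For orientation, a sketch of the classical first-order setting the abstract builds on (standard notation, assumed here rather than quoted from the paper): for a $T$-periodic system in standard form,

    ```latex
    \dot{x} = \varepsilon f(t,x) + \varepsilon^{2} g(t,x,\varepsilon),
    \qquad f(t+T,x) = f(t,x), \qquad
    f_{0}(x) = \frac{1}{T}\int_{0}^{T} f(t,x)\,\mathrm{d}t ,
    ```

    each simple zero $x^{*}$ of $f_{0}$ (that is, $f_{0}(x^{*})=0$ with $\det Df_{0}(x^{*})\neq 0$) corresponds, for small $\varepsilon$, to a $T$-periodic solution; the paper's contribution concerns precisely the zeros with $\det Df_{0}(x^{*})=0$.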

  5. Average Shape of Transport-Limited Aggregates

    Science.gov (United States)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  6. Average Convexity in Communication Situations

    NARCIS (Netherlands)

    Slikker, M.

    1998-01-01

    In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graph-restricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlying ...

  7. Multigrid solution for the compressible Euler equations by an implicit characteristic-flux-averaging

    Science.gov (United States)

    Kanarachos, A.; Vournas, I.

    A formulation of an implicit characteristic-flux-averaging method for the compressible Euler equations, combined with the multigrid method, is presented. The method is based on a correction scheme and an implicit Godunov-type finite volume scheme, and is applied to two-dimensional cases. Its principal feature is an averaging procedure based on the eigenvalue analysis of the Euler equations, by means of which the fluxes are evaluated at the finite volume faces. The performance of the method is demonstrated for different flow problems around RAE-2822 and NACA-0012 airfoils and an internal flow over a circular arc.

  8. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
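
    A quick numerical check of the stated identity (a sketch with arbitrary synthetic data, not the paper's demographic examples): for two weighting functions w1 and w2, the difference of the weighted averages of v equals the w1-weighted covariance of v with r = w2/w1, divided by the w1-weighted mean of r.

    ```python
    # Verify: avg_w2(v) - avg_w1(v) == Cov_w1(v, r) / avg_w1(r), with r = w2/w1.
    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.normal(size=1000)                   # variable being averaged
    w1 = rng.uniform(0.5, 1.5, size=1000)       # first weighting function
    w2 = rng.uniform(0.5, 1.5, size=1000)       # second weighting function

    avg1 = np.average(v, weights=w1)
    avg2 = np.average(v, weights=w2)

    r = w2 / w1
    r_bar = np.average(r, weights=w1)
    cov_vr = np.average(v * r, weights=w1) - avg1 * r_bar

    print(avg2 - avg1, cov_vr / r_bar)          # the two values coincide
    ```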

  9. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  10. Probability density function transformation using seeded localized averaging

    International Nuclear Information System (INIS)

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: 1) for symmetric distributions, SLA preserves both the mean and symmetry; 2) for uni-modal symmetric distributions, SLA reduces variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with Gaussian PDF, the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
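
    A toy numerical illustration of the variance-reduction property (only the pairwise-averaging aspect; this is not the full SLA procedure with its dynamic windows):

    ```python
    # Averaging two pulse-heights from a Gaussian peak narrows the peak (~sqrt(2)).
    import numpy as np

    rng = np.random.default_rng(1)
    pulses = rng.normal(loc=661.7, scale=5.0, size=100_000)  # hypothetical peak

    pair_averages = pulses.reshape(-1, 2).mean(axis=1)       # average pairs

    fwhm = lambda x: 2.3548 * x.std()  # Gaussian FWHM = 2*sqrt(2 ln 2) * sigma
    print(fwhm(pulses), fwhm(pair_averages))
    ```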

  11. Applying the IRT_Δb Procedure and an Adapted LR Procedure to Detect DIF in Tests with Matrix Sampling

    Institute of Scientific and Technical Information of China (English)

    张勋; 李凌艳; 刘红云; 孙研

    2013-01-01

    Matrix sampling is a useful technique widely used in large-scale educational assessments. In an assessment with a matrix sampling design, each examinee takes one of multiple booklets containing partial items. A critical problem of detecting differential item functioning (DIF) in such a scenario has gained a lot of attention in recent years: it is not appropriate to take the observed total score obtained from an individual booklet as the matching variable in detecting DIF. Therefore, the traditional detection methods, such as Mantel-Haenszel (MH), SIBTEST, as well as Logistic Regression (LR), are not suitable. IRT_Δb might be an alternative due to its ability to provide a valid matching variable. However, the DIF classification criterion of IRT_Δb was not well established yet. Thus, the purposes of this study were: 1) to investigate the efficiency and robustness of using ability parameters obtained from an Item Response Theory (IRT) model as the matching variable, compared with using traditional observed raw total scores; 2) to further identify what factors influence the abilities of the two methods to detect DIF; 3) to propose a DIF classification criterion for IRT_Δb. Simulated and empirical data were both employed in this study to explore the robustness and the efficiency of the two prevailing DIF detection methods, namely the IRT_Δb method and the adapted LR method with the estimation of group-level ability based on an IRT model as the matching variable. In the Monte Carlo study, a matrix sampling test was generated, and various experimental conditions were simulated as follows: 1) different proportions of DIF items; 2) different actual examinee ability distributions; 3) different sample sizes; 4) different sizes of DIF. The two DIF detection methods were then applied and the results were compared. In addition, power functions were established in order to derive a DIF classification rule for IRT_Δb based on current rules for LR. In the empirical study, through ...

  12. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends for solar coronal loops is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not for times less than τ. These modified boundary contributions correspond also to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits

  13. Physical Theories with Average Symmetry

    CERN Document Server

    Alamino, Roberto C

    2013-01-01

    This Letter probes the existence of physical laws that are invariant only on average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only on average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this to possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented on.

  14. "Pricing Average Options on Commodities"

    OpenAIRE

    Kenichiro Shiraya; Akihiko Takahashi

    2010-01-01

    This paper proposes a new approximation formula for pricing average options on commodities under a stochastic volatility environment. In particular, it derives an option pricing formula under the Heston and an extended lambda-SABR stochastic volatility model (the latter includes an extended SABR model as a special case). Moreover, numerical examples support the accuracy of the proposed average option pricing formula.
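
    For orientation, a bare-bones Monte Carlo sketch of pricing an arithmetic-average (Asian) call, under constant-volatility Black-Scholes dynamics rather than the paper's Heston/extended lambda-SABR models; all parameter values are hypothetical:

    ```python
    # Monte Carlo price of an arithmetic-average Asian call under GBM (sketch).
    import numpy as np

    s0, strike, r, sigma, T = 100.0, 100.0, 0.02, 0.3, 1.0
    n_steps, n_paths = 252, 20_000
    dt = T / n_steps

    rng = np.random.default_rng(42)
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    paths = s0 * np.exp(log_paths)

    avg_price = paths.mean(axis=1)                # arithmetic average along each path
    payoff = np.maximum(avg_price - strike, 0.0)  # average-option (Asian call) payoff
    price = np.exp(-r * T) * payoff.mean()
    print(f"Asian call price ~ {price:.3f}")
    ```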

  15. Quantized average consensus with delay

    NARCIS (Netherlands)

    Jafarian, Matin; De Persis, Claudio

    2012-01-01

    Average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues of the large scale networks is the cost of co
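
    As background, a minimal sketch of plain average consensus, without the quantization or delays the paper treats (the ring topology and weights are made up for illustration):

    ```python
    # Synchronous average consensus on a 5-agent ring with doubly stochastic weights.
    import numpy as np

    x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])    # initial states; their mean is 4.0
    n = len(x)
    W = np.zeros((n, n))
    for i in range(n):                          # each agent mixes with two neighbours
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    for _ in range(200):
        x = W @ x                               # consensus iteration

    print(x)                                    # all entries approach 4.0
    ```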

  16. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data ...
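
    A minimal sketch of the core BMA construction described above, a weighted mixture of member PDFs (Gaussian members, and the weights and spread below are hypothetical; in practice they are fitted by maximum likelihood over a training period):

    ```python
    # BMA predictive PDF as a weighted mixture of ensemble-member PDFs (sketch).
    import numpy as np

    def gauss_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    member_forecasts = [7.2, 8.1, 6.5]   # hypothetical member forecasts (m/s)
    weights = [0.5, 0.3, 0.2]            # posterior weights, sum to 1 (from training)
    spread = 1.1                         # common member spread (from training)

    x = np.linspace(0.0, 15.0, 301)
    bma_pdf = sum(w * gauss_pdf(x, m, spread)
                  for w, m in zip(weights, member_forecasts))
    bma_mean = sum(w * m for w, m in zip(weights, member_forecasts))
    print(f"BMA point forecast: {bma_mean:.2f} m/s")
    ```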

  17. Gaussian moving averages and semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2008-01-01

    In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results ... are constructive, meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient ...

  18. Stochastic averaging based on generalized harmonic functions for energy harvesting systems

    Science.gov (United States)

    Jiang, Wen-An; Chen, Li-Qun

    2016-09-01

    A stochastic averaging method is proposed for nonlinear vibration energy harvesters subject to Gaussian white noise excitation. The generalized harmonic transformation scheme is applied to decouple the electromechanical equations, yielding an equivalent nonlinear system that is uncoupled from the electric circuit. The frequency function is given through the equivalent potential energy, which is independent of the total energy. The stochastic averaging method is developed by using the generalized harmonic functions. The averaged Itô equations are derived via the proposed procedure, and the Fokker-Planck-Kolmogorov (FPK) equations of the decoupled system are established. The exact stationary solution of the averaged FPK equation is used to determine the probability densities of the amplitude and the power of the stationary response. The procedure is applied to three different types of Duffing vibration energy harvesters under Gaussian white noise excitations. The effects of the system parameters on the mean-square voltage and the output power are examined. It is demonstrated that quadratic nonlinearity alone, and quadratic combined with appropriately chosen cubic nonlinearities, can increase the mean-square voltage and the output power, respectively. The approximate analytical outcomes are qualitatively and quantitatively supported by the Monte Carlo simulations.

  19. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  20. Power convergence of Abel averages

    OpenAIRE

    Kozitsky, Yuri; Shoikhet, David; Zemanek, Jaroslav

    2012-01-01

    Necessary and sufficient conditions are presented for the Abel averages of discrete and strongly continuous semigroups, $T^k$ and $T_t$, to be power convergent in the operator norm in a complex Banach space. These results cover also the case where $T$ is unbounded and the corresponding Abel average is defined by means of the resolvent of $T$. They complement the classical results by Michael Lin establishing sufficient conditions for the corresponding convergence for a bounded $T$.
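
    For reference, a sketch of the discrete-case definition (standard notation, assumed rather than quoted from the paper):

    ```latex
    A_{r} \;=\; (1-r)\sum_{k=0}^{\infty} r^{k} T^{k}, \qquad 0 < r < 1 ,
    ```

    with power convergence referring to operator-norm convergence of the powers $A_{r}^{n}$ as $n \to \infty$; for unbounded $T$, as the abstract notes, the Abel average is defined by means of the resolvent of $T$ instead.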

  1. Handbook of Applied Analysis

    CERN Document Server

    Papageorgiou, Nikolaos S

    2009-01-01

    Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.

  2. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  4. Sparsity Averaging for Compressive Imaging

    CERN Document Server

    Carrillo, Rafael E; Van De Ville, Dimitri; Thiran, Jean-Philippe; Wiaux, Yves

    2012-01-01

    We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.

  5. Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration

    Science.gov (United States)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei

    2016-09-01

    In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlation in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of PM2.5 daily average concentration with uptrends or downtrends in China. In addition, shuffling and phase randomization procedures are applied to detect the sources of multifractality. The results show the existence of asymmetric correlations, and these asymmetric correlations are multifractal. Further, the multifractal scaling behavior in the Chinese PM2.5 series is caused not only by long-range correlation but also by the fat-tailed distribution, with the fat-tailed distribution being the major source of multifractality.

  6. Dependability in Aggregation by Averaging

    CERN Document Server

    Jesus, Paulo; Almeida, Paulo Sérgio

    2010-01-01

    Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...

  7. Stochastic Approximation with Averaging Innovation

    CERN Document Server

    Laruelle, Sophie

    2010-01-01

    The aim of the paper is to establish a convergence theorem for multi-dimensional stochastic approximation in a setting with innovations satisfying some averaging properties, and to study some applications. The averaging assumptions allow us to unify the framework where the innovations are generated (to solve problems from Numerical Probability) and the one with exogenous innovations (market data, output of a "device", e.g. an Euler scheme) with stationary or ergodic properties. We propose several fields of application with random innovations or quasi-random numbers. In particular we provide in both settings a rule to tune the step of the algorithm. Finally, we illustrate our results on five examples, notably in Finance.

  8. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  9. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
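
    A minimal sketch of the trajectory-averaging idea in its simplest Robbins-Monro form (Polyak-Ruppert averaging of the iterates; the target function, step sizes and noise below are hypothetical, and SAMCMC itself draws the noisy term from a Markov chain):

    ```python
    # Robbins-Monro iteration with trajectory (Polyak-Ruppert) averaging.
    import numpy as np

    rng = np.random.default_rng(7)
    theta, running_sum = 0.0, 0.0
    n_iter = 100_000

    for k in range(1, n_iter + 1):
        noisy_value = (theta - 3.0) + rng.normal()  # unbiased estimate of h(theta)
        theta -= 0.5 * k ** -0.7 * noisy_value      # slowly decaying step size
        running_sum += theta

    theta_bar = running_sum / n_iter                # trajectory-averaging estimator
    print(theta, theta_bar)  # both near the root 3.0; the average is more stable
    ```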

  10. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficient of 59 room temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain this correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

  11. A conversion formula for comparing pulse oximeter desaturation rates obtained with different averaging times.

    Directory of Open Access Journals (Sweden)

    Jan Vagedes

    Full Text Available OBJECTIVE: The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. METHODS: Oxygen saturation was measured for 170 hours in 12 preterm infants with a mean number of 65 desaturations <90% per hour of arbitrary duration by using a pulse oximeter in a 2-4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥ 1, ≥ 5, ≥ 10, ≥ 15, ≥ 20, ≥ 25, ≥ 30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using each group in turn as a training set and the other one as a test set. RESULTS: Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is: D2 = D1 * (T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. CONCLUSION: This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations.
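
    A small sketch of applying the reported conversion formula (the exponent below is hypothetical; the paper determines c separately for each desaturation threshold and minimal duration):

    ```python
    # Convert a desaturation rate between pulse-oximeter averaging times (sketch).
    def convert_desat_rate(d1: float, t1: float, t2: float, c: float) -> float:
        """D2 = D1 * (T2 / T1) ** c, the paper's conversion formula."""
        return d1 * (t2 / t1) ** c

    # Hypothetical example: 65 desaturations/h at 4 s averaging, converted to 10 s,
    # assuming c = -0.6 (longer averaging smooths out events, so D2 < D1).
    print(convert_desat_rate(65.0, t1=4.0, t2=10.0, c=-0.6))
    ```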

  12. Average stress-average strain tension-stiffening relationships based on provisions of design codes

    Institute of Scientific and Technical Information of China (English)

    Gintaris KAKLAUSKAS; Viktor GRIBNIAK; Rokas GIRDZIUS

    2011-01-01

    This research was aimed at deriving average stress-average strain tension-stiffening relationships in accordance with the provisions of design codes for reinforced concrete (RC) members. Using a proposed inverse technique, the tension-stiffening relationships were derived from moment-curvature diagrams of RC beams calculated by different code methods, namely Eurocode 2, ACI 318, and the Chinese standard GB 50010-2002. The derived tension-stiffening laws were applied in a numerical study using the nonlinear finite element software ATENA. The curvatures calculated by ATENA and the code methods were in good agreement.

  13. Michel Parameters averages and interpretation

    International Nuclear Information System (INIS)

    The new measurements of Michel parameters in τ decays are combined into world averages. From these measurements, model-independent limits on non-standard-model couplings are derived, and interpretations in the framework of specific models are given. A lower limit of 2.5 tan β GeV on the mass of a charged Higgs boson in models with two Higgs doublets can be set, as well as a 229 GeV limit on a right-handed W-boson in left-right symmetric models (95% C.L.)

  14. Scaling crossover for the average avalanche shape

    Science.gov (United States)

    Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.

    2010-03-01

    Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.

  15. Basics of averaging of the Maxwell equations for bulk materials

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is rarely discussed properly in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of the type of material studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; a model that is inconsistent with them cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for bulk MM, which is rather close to the case of compound materials but should include magnetic response of the inclusions an...

  16. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages ob...

  17. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for the elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In the more than half-century since the publication of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of the mathematical underpinnings of the original formulation; hence, this paper. We prove that, within the long-wave approximation, if the thin layers obey stability conditions then so does the equivalent medium. We examine, within the Backus-average context, the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
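
    For the isotropic-layer special case mentioned above, a sketch of the classical Backus (1962) averages (equal-thickness layers with hypothetical Lame parameters; A, C, F, L, N are the Love constants of the long-wave-equivalent transversely isotropic medium):

    ```python
    # Backus average of thin isotropic layers -> equivalent TI medium (sketch).
    import numpy as np

    lam = np.array([10.0, 14.0, 8.0])   # lambda per layer (GPa), hypothetical
    mu = np.array([6.0, 9.0, 5.0])      # mu per layer (GPa), hypothetical
    avg = lambda q: q.mean()            # equal thicknesses: plain arithmetic mean

    C = 1.0 / avg(1.0 / (lam + 2 * mu))
    F = avg(lam / (lam + 2 * mu)) * C
    L = 1.0 / avg(1.0 / mu)
    N = avg(mu)
    A = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + avg(lam / (lam + 2 * mu)) ** 2 * C

    print(f"A={A:.2f}, C={C:.2f}, F={F:.2f}, L={L:.2f}, N={N:.2f} (GPa)")
    ```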

  18. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.

  20. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to varying extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
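
    For contrast with FTDA, a minimal sketch of the conventional comb-filter baseline, synchronous averaging over whole periods (the signal, period and noise level are made up):

    ```python
    # Conventional time domain averaging: average the signal period by period.
    import numpy as np

    rng = np.random.default_rng(3)
    period, n_periods = 200, 50                 # samples per period (assumed known)
    t = np.arange(period * n_periods)
    clean = np.sin(2 * np.pi * t / period) + 0.3 * np.sin(6 * np.pi * t / period)
    signal = clean + rng.normal(scale=1.0, size=t.size)

    tda = signal.reshape(n_periods, period).mean(axis=0)   # synchronous average

    noise_before = (signal - clean).std()
    noise_after = (tda - clean[:period]).std()
    print(noise_before, noise_after)   # noise drops roughly by sqrt(n_periods)
    ```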

  1. The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field

    Energy Technology Data Exchange (ETDEWEB)

    Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)

    1992-01-01

    Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii up to 50 tower heights. In addition, the results are expressed in mathematical correlations to facilitate their use in computer applications. A procedure is outlined to use the present results to lay out a preliminary heliostat field, and to predict the rated MW_th reflected by the heliostat field during a period of a month, several months, or a year. (author)

  2. 40 CFR 600.510-86 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    40 CFR § 600.510-86, "Calculation of average fuel economy" (Protection of Environment; Fuel Economy of Motor Vehicles: Trucks and Passenger Automobiles; Procedures for Determining Manufacturer's Average Fuel Economy): (a) Average fuel economy will be calculated to the...

  3. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    Science.gov (United States)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
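
    A small sketch of the averaging choices being compared, applied to a hypothetical rake of total-pressure probes (the probe readings and weights are made up for illustration):

    ```python
    # Area-averaged vs mass-averaged total pressure from a probe rake (sketch).
    import numpy as np

    p_total = np.array([301.2, 305.8, 309.4, 307.1, 302.6])  # kPa, hypothetical
    area = np.array([0.8, 1.0, 1.2, 1.0, 0.8])               # area per probe
    rho_u = np.array([0.7, 1.0, 1.3, 1.1, 0.8])              # local mass flux

    area_avg = np.average(p_total, weights=area)
    mass_avg = np.average(p_total, weights=rho_u * area)     # mass-flow weighting

    print(f"area-averaged {area_avg:.2f} kPa, mass-averaged {mass_avg:.2f} kPa")
    ```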

  4. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    Science.gov (United States)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
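
    For reference, the classical pointwise relation between the two modes under steady one-dimensional flow, as commonly stated in this literature (a sketch; v is the pore-water velocity and D the dispersion coefficient):

    ```latex
    c_{f} \;=\; c_{r} \;-\; \frac{D}{v}\,\frac{\partial c_{r}}{\partial x} ,
    ```

    i.e. the flux-averaged concentration is the solute flux divided by the water flux.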

  5. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    Science.gov (United States)

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.]
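
    A short sketch of the simple-moving-average computation on a sequence of binary complication outcomes (synthetic data with a declining underlying risk; the study's lag of 75 consecutive patients is used):

    ```python
    # Simple moving average of complication occurrence over consecutive cases.
    import numpy as np

    rng = np.random.default_rng(5)
    risk = np.linspace(0.17, 0.08, 297)          # illustrative declining risk
    complications = rng.random(297) < risk       # True = complication occurred

    lag = 75
    moving_rate = np.convolve(complications, np.ones(lag) / lag, mode="valid")

    print(f"first window {moving_rate[0]:.1%}, last window {moving_rate[-1]:.1%}")
    ```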

  6. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that...

  7. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions, which limits the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  8. Bayes model averaging of cyclical decompositions in economic time series

    NARCIS (Netherlands)

    R.H. Kleijn (Richard); H.K. van Dijk (Herman)

    2003-01-01

    A flexible decomposition of a time series into stochastic cycles under possible non-stationarity is specified, providing both a useful data analysis tool and a very wide model class. A Bayes procedure using Markov Chain Monte Carlo (MCMC) is introduced with a model averaging approach which...

  9. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides, requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs

  10. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
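    A minimal Box-Jenkins-style fit can be sketched with the statsmodels library (an assumption of this example; the GPA series and the (1, 1, 1) order below are illustrative, not taken from the cited study):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        gpa = 3.0 + np.cumsum(rng.normal(0.0, 0.02, 60))  # synthetic term-by-term GPA

        fit = ARIMA(gpa, order=(1, 1, 1)).fit()  # identification guess + estimation
        print(fit.summary())                     # diagnosis: inspect residuals, AIC
        print(fit.forecast(steps=4))             # forecast the next four terms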

  11. Thermodynamic properties of average-atom interatomic potentials for alloys

    Science.gov (United States)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine if the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well-predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average atom and true random alloy are found in the zero-temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.

  12. Averaged Null Energy Condition from Causality

    CERN Document Server

    Hartman, Thomas; Tajdini, Amirhossein

    2016-01-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\int du\, T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\int du\, X_{uuu\cdots u} \geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...

  13. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
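    The paper's Generalized Erdös Number itself is not reproduced here; the sketch below shows only its basic ingredient as described in the abstract, a weighted harmonic average, applied to hypothetical connectivity values:

        def weighted_harmonic_average(values, weights):
            """Weighted harmonic mean: sum(w) / sum(w / v), for positive values."""
            return sum(weights) / sum(w / v for w, v in zip(weights, values))

        # Hypothetical connectivities of three paths to a node, equally weighted
        print(weighted_harmonic_average([2.0, 4.0, 8.0], [1.0, 1.0, 1.0]))  # ~3.43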

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  15. Quantization Procedures; Sistemas de cuantificacion

    Energy Technology Data Exchange (ETDEWEB)

    Cabrera, J. A.; Martin, R.

    1976-07-01

    We present in this work a review of the conventional quantization procedure, the one proposed by I.E. Segal, and a new quantization procedure similar to the latter for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures are equivalent, but for nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs.

  16. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214

  17. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (<3 μVrms 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement on the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
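    The 1/√N claim is easy to check numerically: average N independent amplifier channels that see the same (zero) signal and compare the residual noise. A small sketch with illustrative numbers (this assumes fully uncorrelated channel noise; as the abstract notes, the achievable reduction also depends on the source resistance):

        import numpy as np

        rng = np.random.default_rng(3)
        amp_noise_uvrms = 2.0  # assumed per-amplifier baseline noise (uVrms)

        for N in (1, 2, 4, 8):
            channels = rng.normal(0.0, amp_noise_uvrms, size=(N, 100_000))
            print(N, channels.mean(axis=0).std())  # ~ amp_noise_uvrms / sqrt(N)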

  18. Applied Economics

    OpenAIRE

    Nicita, Alessandro

    2008-01-01

    Price responses are usually estimated for the average household. However, different households are unlikely to respond in a similar way to movements in prices. Consequently, relying on averages may be misleading when examining the behaviour of a particular group of households such as the poor. This article uses six household surveys collected in Mexico between 1989 and 2000 to derive price responses for 10 product groups and for five household income levels. The estimated price elasticities...

  19. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS....12 On average. On average means a rolling average of production or imports during the last two...

  20. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
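    The evaporative-demand step can be sketched with the standard Hargreaves formula (the coefficients below are the usual published values, assumed rather than taken from the paper; Ra must be supplied as extraterrestrial radiation in equivalent mm/day):

        def hargreaves_et0(tmax_c, tmin_c, ra_mm_day):
            """Hargreaves reference evapotranspiration estimate, mm/day."""
            tmean = 0.5 * (tmax_c + tmin_c)
            return 0.0023 * ra_mm_day * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

        # One hypothetical monthly grid cell, then the water balance P - ET0
        et0 = hargreaves_et0(tmax_c=24.0, tmin_c=9.0, ra_mm_day=13.5)
        print(et0, 2.1 - et0)  # assuming 2.1 mm/day average precipitation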

  1. Estimating PIGLOG Demands Using Representative versus Average Expenditure

    OpenAIRE

    Hahn, William F.; Taha, Fawzi A.; Davis, Christopher G.

    2013-01-01

    Economists often use aggregate time series data to estimate consumer demand functions. Some of the popular applied demand systems have a PIGLOG form. In the most general PIGLOG cases the “average” demand for a good is a function of the representative consumer expenditure not the average consumer expenditure. We would need detailed information on each period’s expenditure distribution to calculate the representative expenditure. This information is generally unavailable, so average expenditure...

  2. Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2007-01-01

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent s...

  3. Applied Electromagnetics

    International Nuclear Information System (INIS)

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  4. A space-averaged model of branched structures

    CERN Document Server

    Lopez, Diego; Michelin, Sébastien

    2014-01-01

    Many biological systems and artificial structures are ramified, and present a high geometric complexity. In this work, we propose a space-averaged model of branched systems for conservation laws. From a one-dimensional description of the system, we show that the space-averaged problem is also one-dimensional, represented by characteristic curves, defined as streamlines of the space-averaged branch directions. The geometric complexity is then captured firstly by the characteristic curves, and secondly by an additional forcing term in the equations. This model is then applied to mass balance in a pipe network and momentum balance in a tree under wind loading.

  5. 25 CFR 700.173 - Average net earnings of business or farm.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings...

  6. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

    Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospects...

  7. Environmental procedures

    International Nuclear Information System (INIS)

    The European Bank has pledged in its Agreement to place environmental management at the forefront of its operations to promote sustainable economic development in central and eastern Europe. The Bank's environmental policy is set out in the document titled, Environmental Management: The Bank's Policy Approach. This document, Environmental Procedures, presents the procedures which the European Bank has adopted to implement this policy approach with respect to its operations. The environmental procedures aim to: ensure that throughout the project approval process, those in positions of responsibility for approving projects are aware of the environmental implications of the project, and can take these into account when making decisions; avoid potential liabilities that could undermine the success of a project for its sponsors and the Bank; ensure that environmental costs are estimated along with other costs and liabilities; and identify opportunities for environmental enhancement associated with projects. The review of environmental aspects of projects is conducted by many Bank staff members throughout the project's life. This document defines the responsibilities of the people and groups involved in implementing the environmental procedures. Annexes contain Environmental Management: The Bank's Policy Approach, examples of environmental documentation for the project file and other ancillary information

  8. Averages of Values of L-Series

    OpenAIRE

    Alkan, Emre; Ono, Ken

    2013-01-01

    We obtain an exact formula for the average of values of L-series over two independent odd characters. The average of any positive moment of values at s = 1 is then expressed in terms of finite cotangent sums subject to congruence conditions. As consequences, bounds on such cotangent sums, limit points for the average of first moment of L-series at s = 1 and the average size of positive moments of character sums related to the class number are deduced.

  9. The Animals (Scientific Procedures) (Procedure for Representations) Rules 1986

    OpenAIRE

    Her Majesty's Stationary Office

    1986-01-01

    These Rules, made under section 12 of the Animals (Scientific Procedures) Act 1986, govern the procedure by which representations may be made to a legally qualified person appointed by the Secretary of State by a person who has applied for or holds a personal or project licence, or a certificate of designation of a scientific procedure, breeding or supplying establishment, under that Act, where the Secretary of State proposes to refuse such a licence or certificate or to vary or revoke it otherwise than at the re...

  10. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
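    The unconstrained route described above reduces, in sketch form, to sampling the mean force along the selected coordinate and integrating dA/dxi = -<F>. A minimal numerical version with synthetic force averages (not the paper's systems):

        import numpy as np

        xi = np.linspace(0.0, np.pi, 50)   # grid along the chosen coordinate
        mean_force = np.sin(2.0 * xi)      # pretend sampled averages of the force

        # A(xi) = -integral of <F> d(xi), trapezoidal rule, fixed so A(0) = 0
        dA = -0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
        free_energy = np.concatenate(([0.0], np.cumsum(dA)))
        print(free_energy.min(), free_energy.max())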

  11. Bayesian Model Averaging in the Instrumental Variable Regression Model

    OpenAIRE

    Gary Koop; Robert Leon Gonzalez; Rodney Strachan

    2011-01-01

    This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible...

  12. Finding large average submatrices in high dimensional data

    OpenAIRE

    Shabalin, Andrey A.; Weigman, Victor J.; Perou, Charles M.; Nobel, Andrew B

    2009-01-01

    The search for sample-variable associations is an important problem in the exploratory analysis of high dimensional data. Biclustering methods search for sample-variable associations in the form of distinguished submatrices of the data matrix. (The rows and columns of a submatrix need not be contiguous.) In this paper we propose and evaluate a statistically motivated biclustering procedure (LAS) that finds large average submatrices within a given real-valued data matrix. ...
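    A much-simplified greedy variant of the large-average-submatrix idea (a sketch, not the LAS procedure itself): alternately keep the k rows and l columns with the highest means restricted to the current selection.

        import numpy as np

        def greedy_large_average_submatrix(X, k, l, iters=20):
            rows = np.argsort(X.mean(axis=1))[-k:]            # initial row guess
            for _ in range(iters):
                cols = np.argsort(X[rows].mean(axis=0))[-l:]  # best columns given rows
                rows = np.argsort(X[:, cols].mean(axis=1))[-k:]
            return rows, cols, X[np.ix_(rows, cols)].mean()

        X = np.random.default_rng(4).normal(size=(200, 50))
        X[:20, :10] += 1.0                                    # planted high-average block
        print(greedy_large_average_submatrix(X, 20, 10)[2])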

  13. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  14. Decision-making Procedures

    DEFF Research Database (Denmark)

    Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander Christopher

    2009-01-01

    ...define procedures as mechanisms that influence the probabilities of reaching different endnodes. We show that for such procedural games a sequential psychological equilibrium always exists. Applying this approach within a principal-agent context we show that the way less attractive jobs are allocated is crucial for the effort exerted by agents. This prediction is tested in a field experiment, where some subjects had to type in data, whereas others had to verify the data inserted by the typists. The controllers' wage was 50% higher than that of the typists. In one treatment the less attractive typists...

  15. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

    Praise for the Third Edition: "Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews. Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and natural...

  16. Developing Competency in Payroll Procedures

    Science.gov (United States)

    Jackson, Allen L.

    1975-01-01

    The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)

  17. A Favré averaged transition prediction model for hypersonic flows

    Institute of Scientific and Technical Information of China (English)

    LEE; ChunHian

    2010-01-01

    Transition prediction is crucial for aerothermodynamic and thermal protection system design of hypersonic vehicles. The compressible form of the laminar kinetic energy equation is derived in the present paper based on the Favré-averaging formalism. A closure of the equation is deduced and simplified under certain hypotheses and scaling analysis. A laminar-to-turbulent transition prediction procedure is proposed for high Mach number flows based on the modeled Favré-averaged laminar kinetic energy equation, in conjunction with the Favré-averaged Navier-Stokes equations. The proposed model, with and without associated explicit compressibility terms, is then applied to simulate flows over flared cones with a free-stream Mach number of 5.91, and the onset locations of the boundary layer transition under different wall conditions are estimated. The computed onset locations are compared with those obtained by the model based on a compressibility correction deduced from the reference-temperature concept, together with experimental data. It is revealed that the present model gives a more favorable transition prediction for hypersonic flows.

  18. Inversion of the circular averages transform using the Funk transform

    International Nuclear Information System (INIS)

    The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. The circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR), when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat, the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, the circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We use hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering.
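    Under one common convention (an assumption; the paper's normalization may differ), the circular averages transform of a function f on the half-plane integrates f over the semicircle of radius r centered at a boundary point p:

        (\mathcal{C}f)(p, r) \;=\; \int_{0}^{\pi} f\bigl(p + r\cos\theta,\; r\sin\theta\bigr)\, r\,\mathrm{d}\theta, \qquad p \in \mathbb{R},\quad r > 0.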

  19. Average-cost based robust structural control

    Science.gov (United States)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  20. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  1. Applied optics

    International Nuclear Information System (INIS)

    The 1988 progress report of the Applied Optics laboratory of the Polytechnic School (France) is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The research program includes the following domains: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are carried out. The published papers and the congress communications are listed.

  2. MEASUREMENT AND MODELLING AVERAGE PHOTOSYNTHESIS OF MAIZE

    OpenAIRE

    ZS LÕKE

    2005-01-01

    The photosynthesis of fully developed maize was investigated at the Agrometeorological Research Station Keszthely in 2000. We used LI-6400 measurement equipment to locate measurement points where the intensity of photosynthesis is closest to the average, so that average photosynthetic activity characterizing the crop could later be obtained with a single measurement. To check the average photosynthesis of maize we also used Goudriaan's simulation model (CMSM) to calculate values on cloudless sampl...

  3. WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES

    Institute of Scientific and Technical Information of China (English)

    刘永平; 许贵桥

    2003-01-01

    This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-widths and average linear σ-widths of Sobolev classes of multivariate quantities.

  4. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  5. Stochastic averaging of quasi-Hamiltonian systems

    Institute of Scientific and Technical Information of China (English)

    朱位秋

    1996-01-01

    A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.

  6. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  7. Average sampling theorems for shift invariant subspaces

    Institute of Scientific and Technical Information of China (English)

    孙文昌; 周性伟

    2000-01-01

    The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.

  8. Average excitation potentials of air and aluminium

    NARCIS (Netherlands)

    Bogaardt, M.; Koudijs, B.

    1951-01-01

    By means of a graphical method the average excitation potential I may be derived from experimental data. Average values of I for air and for Al have been obtained. It is shown that in representing range/energy relations by means of Bethe's well-known formula, I has to be taken as a continuously changing function...

  9. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    Science.gov (United States)

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  10. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    Science.gov (United States)

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178

  11. Clarifying the relationship between average excesses and average effects of allele substitutions

    Directory of Open Access Journals (Sweden)

    José M. Álvarez-Castro

    2012-03-01

    Full Text Available Fisher’s concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance.

  12. A practical CBA-based screening procedure for identification of river basins where the costs of fulfilling the WFD requirements may be disproportionate – applied to the case of Denmark

    DEFF Research Database (Denmark)

    Jensen, Carsten Lynge; Jacobsen, Brian H.; Olsen, Søren Bøye;

    2013-01-01

    The European Union’s (EU) Water Framework Directive (WFD) is implemented as an instrument to obtain good ecological status in waterbodies of Europe. The directive recognises the need to accommodate social and economic considerations to obtain cost-effective implementation of the directive. In particular, EU member states can apply for various exemptions from the objectives if costs are considered disproportionate, e.g. compared to potential benefits. This paper addresses the costs and benefits of achieving good ecological status and demonstrates a methodology designed to investigate...

  13. Applied mathematics

    International Nuclear Information System (INIS)

    The 1988 progress report of the Applied Mathematics center (Polytechnic School, France) is presented. The Center's research fields are scientific computing, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of fundamental models in physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and the development of image synthesis techniques for education and research programs. The published papers, the congress communications and the theses are listed.

  14. Applied geodesy

    International Nuclear Information System (INIS)

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc

  15. Averaged Lemaître-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.

  16. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

  17. Averaging of Backscatter Intensities in Compounds

    Science.gov (United States)

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based of the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752
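    The proposed weighting converts mass fractions into relative numbers of electrons via Z/A. A sketch contrasting the two averaging rules on a hypothetical binary mixture (element data rounded; the pure-element backscatter yields eta are illustrative, not measured values):

        def electron_fractions(mass_frac, Z, A):
            """Fraction of electrons contributed by each element: w*Z/A, normalized."""
            raw = [w * z / a for w, z, a in zip(mass_frac, Z, A)]
            return [x / sum(raw) for x in raw]

        # Hypothetical 50/50-by-mass Si-Pb mixture with pure-element yields eta
        w, Z, A, eta = [0.5, 0.5], [14, 82], [28.09, 207.2], [0.16, 0.50]
        e = electron_fractions(w, Z, A)
        print(sum(wi * ni for wi, ni in zip(w, eta)))  # mass-fraction average
        print(sum(fi * ni for fi, ni in zip(e, eta)))  # electron-fraction average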

  18. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
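    The direction of the improvement follows from the arithmetic-harmonic mean inequality, which a few lines make concrete (variance values are illustrative):

        variances = [0.5, 0.8, 1.6]  # fluctuating squeezed-quadrature variances
        arith = sum(variances) / len(variances)
        harm = len(variances) / sum(1.0 / v for v in variances)
        print(arith, harm, harm <= arith)  # the harmonic mean is never larger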

  19. Self-averaging characteristics of spectral fluctuations

    OpenAIRE

    Braun, Petr; Haake, Fritz

    2014-01-01

    The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (like the CUE average) behavior is found f...

  20. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    ...of survivorship. An alternative aggregate measure of period mortality that has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...

  1. From moving averages to anomalous diffusion: a Rényi-entropy approach

    International Nuclear Information System (INIS)

    Moving averages, also termed convolution filters, are widely applied in science and engineering at large. As moving averages transform inputs to outputs by convolution, they induce correlation. In effect, moving averages are perhaps the most fundamental and ubiquitous mechanism of transforming uncorrelated inputs to correlated outputs. In this paper we study the correlation structure of general moving averages, unveil the Rényi-entropy meaning of a moving-average's overall correlation, address the maximization of this overall correlation, and apply this overall correlation to the dispersion-measurement and to the classification of regular and anomalous diffusion transport processes. (fast track communication)
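    The basic fact, that convolving an uncorrelated input with a moving-average kernel yields a correlated output, can be seen in a few lines (the window length is arbitrary; for a flat window of length L the lag-1 autocorrelation is (L-1)/L):

        import numpy as np

        rng = np.random.default_rng(5)
        x = rng.normal(size=100_000)                      # uncorrelated input
        L = 10
        y = np.convolve(x, np.ones(L) / L, mode="valid")  # moving-average output

        lag1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]
        print(lag1(x))  # ~ 0
        print(lag1(y))  # ~ (L - 1) / L = 0.9 for L = 10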

  2. Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations

    Science.gov (United States)

    Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.

    2011-03-01

    Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060nm spectral domain UHROCT system with 5μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

  3. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  4. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  5. Average Vegetation Growth 1991 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1991 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  6. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  7. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  8. Average Vegetation Growth 1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1999 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  9. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  10. Average Vegetation Growth 2003 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2003 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  11. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using...

  12. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  13. Average Vegetation Growth 2002 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2002 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  14. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  15. Spacetime Average Density (SAD) Cosmological Measures

    CERN Document Server

    Page, Don N

    2014-01-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...

  16. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  17. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  18. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  19. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets...

  20. 40 CFR 600.510-93 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... meet the minimum driving range requirements established by the Secretary of Transportation (49 CFR part... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy §...
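    Fleet average fuel economy has the structure of a production-weighted harmonic mean. A sketch with made-up numbers (the regulation layers category splits and adjustments on top of this that are not modeled here):

        def fleet_average_mpg(units, mpg):
            """Production-weighted harmonic mean of model-type fuel economy."""
            return sum(units) / sum(n / m for n, m in zip(units, mpg))

        units = [40_000, 25_000, 10_000]  # hypothetical production volumes
        mpg = [34.0, 28.0, 22.0]          # hypothetical model-type values
        print(fleet_average_mpg(units, mpg))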

  1. Modeling and Instability of Average Current Control

    OpenAIRE

    Fang, Chung-Chieh

    2012-01-01

    Dynamics and stability of average current control of DC-DC converters are analyzed by sampled-data modeling. Orbital stability is studied and is found to be unrelated to the ripple size of the orbit. Compared with averaged modeling, sampled-data modeling is more accurate and systematic. An unstable range of the compensator pole is found by simulations, and is predicted by sampled-data modeling and harmonic balance modeling.

  2. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
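    One plausible iterative weighted-fair-share computation (a sketch under assumed semantics, not necessarily the paper's model): give each flow its weighted share of the link, cap flows at their input rate, and redistribute leftover capacity among the still-unsatisfied flows.

        def wfq_average_bandwidth(link_bps, weights, input_rates_bps):
            alloc = [0.0] * len(weights)
            active = set(range(len(weights)))
            capacity = float(link_bps)
            while active and capacity > 1e-9:
                wsum = sum(weights[i] for i in active)
                # flows whose weighted share would meet their input rate
                satisfied = {i for i in active
                             if alloc[i] + capacity * weights[i] / wsum >= input_rates_bps[i]}
                if not satisfied:
                    for i in active:
                        alloc[i] += capacity * weights[i] / wsum
                    capacity = 0.0
                else:
                    for i in satisfied:
                        capacity -= input_rates_bps[i] - alloc[i]
                        alloc[i] = input_rates_bps[i]
                    active -= satisfied
            return alloc

        # 10 Mb/s link, weights 1:2:3, one lightly loaded flow
        print(wfq_average_bandwidth(10e6, [1, 2, 3], [1e6, 9e6, 9e6]))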

  3. Disk-averaged synthetic spectra of Mars

    OpenAIRE

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a f...

  4. Yearly-averaged daily usefulness efficiency of heliostat surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Elsayed, M.M.; Habeebuallah, M.B.; Al-Rabghi, O.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia))

    1992-08-01

    An analytical expression for estimating the instantaneous usefulness efficiency of a heliostat surface is obtained. A systematic procedure is then introduced to calculate the usefulness efficiency even when overlapping of blocking and shadowing on a heliostat surface exists. For possible estimation of the reflected energy from a given field, the local yearly-averaged daily usefulness efficiency is calculated. This efficiency is found to depend on the site latitude angle, the radial distance from the tower measured in tower heights, the heliostat position azimuth angle and the radial spacing between heliostats. Charts for the local yearly-averaged daily usefulness efficiency are presented for φ = 0°, 15°, 30°, and 45° N. These charts can be used in calculating the reflected radiation from a given cell. Utilization of these charts is demonstrated.

  5. Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Distributed consensus has emerged as one of the most important and fundamental problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is one of the primary issues. In this work an analytical solution of the fastest distributed consensus averaging problem over chains of rhombus networks is provided. The solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. The characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM of the network, is also determined inductively. Moreover t...

  6. Model averaging for semiparametric additive partial linear models

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals for the parameters of interest, we explore a focused information criterion for model selection among APLMs after estimating the nonparametric functions by polynomial spline smoothing, and we introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting is avoided, which results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented as illustrations.

  7. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism by which humans estimate numerical averages. Participants were presented with 4, 8 or 16 two-digit numbers, serially and rapidly (2 numerals/second), and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases, and an RT-accuracy tradeoff in the 4-number, but not in the 16-number, condition. These results indicate that, in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec), we find that, while performance remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive averaging and working-memory-mediated symbolic procedures underlying analytical averaging, with flexible allocation between the two. PMID:26041580

  8. Light-cone averages in a swiss-cheese universe

    CERN Document Server

    Marra, Valerio; Matarrese, Sabino

    2007-01-01

    We analyze a toy swiss-cheese cosmological model to study the averaging problem. In our model, the cheese is the EdS model and the holes are constructed from an LTB solution. We study the propagation of photons in the swiss-cheese model and find a phenomenological homogeneous model to describe the observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities, a consequence of spherical symmetry. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the concordance model. Although the sole source in the swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we ...

  9. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
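
    As a concrete illustration of one common model-averaging scheme mentioned here, a minimal sketch using Akaike weights follows; this is the standard textbook construction, not necessarily the exact schemes compared in the paper, and the numbers are made up.

      import numpy as np

      # Minimal sketch: Akaike-weight model averaging of a scalar prediction.
      # AIC values and per-model predictions are illustrative numbers only.

      aic = np.array([100.2, 101.5, 104.9])     # AIC of three candidate models
      preds = np.array([2.10, 2.35, 1.90])      # each model's point prediction

      delta = aic - aic.min()
      w = np.exp(-0.5 * delta)
      w /= w.sum()                              # Akaike weights, sum to one

      print("weights:", w)
      print("model-averaged prediction:", w @ preds)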

  10. Comparison of Mouse Brain DTI Maps Using K-space Average, Image-space Average, or No Average Approach

    OpenAIRE

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-01-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data was collected from five ...

  11. Taking measures to strengthen prosecutors' court appearances in support of the prosecution in criminal cases handled under summary procedure

    Institute of Scientific and Technical Information of China (English)

    肖红

    2012-01-01

    Applying summary procedures to criminal cases in which the defendant pleads guilty is the inevitable choice for resolving the contradiction of recent years that procuratorial organs have more cases than officers; it is also an inevitable requirement of maintaining justice and protecting the legitimate rights and interests of the parties. This article analyzes the current situation of prosecutors' court appearances in criminal cases handled under summary procedure, interprets the impact and challenges that the amendment of the Code of Criminal Procedure brings to the work of grass-roots procuratorates, and explores new initiatives for implementing prosecutors' court appearances in support of the prosecution in such cases.

  12. Calibration procedure for zenith plummets

    Directory of Open Access Journals (Sweden)

    Jelena GUČEVIĆ

    2013-09-01

    Full Text Available Zenith plummets are used mainly in applied geodesy, in civil engineering surveying, for the materialization of the local vertical. The error in the vertical deflection of the instrument is transferred directly to the error of the observed construction, which is why a proper calibration procedure for the zenith plummet is required. The metrological laboratory of the Faculty of Civil Engineering in Belgrade developed such a calibration procedure. Here we present a mathematical model of the calibration and some selected results.

  13. Calibration procedure for zenith plummets

    OpenAIRE

    Jelena GUČEVIĆ; Delčev, Siniša; Vukan OGRIZOVIĆ

    2013-01-01

    Zenith plummets are used mainly in applied geodesy, in civil engineering surveying, for the materialization of the local vertical. The error in the vertical deflection of the instrument is transferred directly to the error of the observed construction, which is why a proper calibration procedure for the zenith plummet is required. The metrological laboratory of the Faculty of Civil Engineering in Belgrade developed such a calibration procedure. Here we present a mathematical model of the calibration and som...

  14. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
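
    A back-of-the-envelope illustration of the kind of lagged moving-average comparison described here follows; all series are synthetic stand-ins, not the paper's book n-gram sentiment or historical U.S. data.

      import numpy as np

      # Minimal sketch: correlate a "literary misery" series with the trailing
      # moving average of an economic misery index (inflation + unemployment).

      rng = np.random.default_rng(0)
      years = np.arange(1930, 2010)
      inflation = rng.normal(3, 2, years.size)
      unemployment = rng.normal(6, 2, years.size)
      economic_misery = inflation + unemployment

      window = 11                              # the paper's best-fit window
      trailing = np.convolve(economic_misery, np.ones(window) / window, mode="valid")
      literary = trailing + rng.normal(0, 0.5, trailing.size)   # fake "literary misery"

      r = np.corrcoef(trailing, literary)[0, 1]
      print(f"correlation at {window}-year trailing average: {r:.2f}")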

  15. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  16. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  17. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  18. Matrix averages relating to Ginibre ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Forrester, Peter J [Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia); Rains, Eric M [Department of Mathematics, California Institute of Technology, Pasadena, CA 91125 (United States)], E-mail: p.forrester@ms.unimelb.edu.au

    2009-09-25

    The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.

  19. Average Cycle Period in Asymmetrical Flashing Ratchet

    Institute of Scientific and Technical Information of China (English)

    WANG Hai-Yan; HE Hou-Sheng; BAO Jing-Dong

    2005-01-01

    The directed motion of a Brownian particle in a flashing potential with various transition probabilities and waiting times in one of two states is studied. An expression for the average cycle period is proposed, and the steady current J of the particle is calculated via Langevin simulation. The results show that the optimal cycle period rm, at which J reaches its maximum, shifts to a smaller value when the transition probability λ from the potential-on to the potential-off state decreases; the maximal current appears when the average waiting time in the potential-on state is longer than in the potential-off state; and the direction of the current depends on the ratio of the average waiting times in the two states.

  20. Average Cross Section Evaluation - Room for Improvement

    Energy Technology Data Exchange (ETDEWEB)

    Frohner, G.H. [Forschungszentrum Karlsruhe Institut fur Kern- und Energietechnik, Karlsruhe (Germany)

    2006-07-01

    Full text of publication follows: Techniques for the evaluation of average nuclear cross sections are well established. Nevertheless, there seems to be room for improvement. Heuristic expressions for average partial cross sections of the Hauser-Feshbach type with width-fluctuation corrections could be replaced by the correct GOE triple integral. Transmission coefficients derived from macroscopic models (optical, single- and double-humped fission barrier, etc.) lead to better descriptions of cross-section behaviour over wide energy ranges. At higher energies, (n,γn') reactions compete with radiative capture (Moldauer effect). In all cross-section modeling one must distinguish properly between average S- and R-matrix parameters. The exact relationship between them is given, as well as the connection to ENDF format rules. Fitting codes (e.g., FITACS) should be able to digest observed data directly, instead of only reduced data already corrected for self-shielding and multiple scattering (e.g., with SESH). (author)

  1. Model averaging and muddled multimodel inferences.

    Science.gov (United States)

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; therefore, averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  2. Books Average Previous Decade of Economic Misery

    OpenAIRE

    R Alexander Bentley; Alberto Acerbi; Paul Ormerod; Vasileios Lampos

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is signific...

  3. An improved moving average technical trading rule

    Science.gov (United States)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
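
    A minimal sketch of a cross-over rule with a trailing-stop threshold in the spirit described here follows; the stop rule (exit when price falls below a fraction of the running maximum since entry) is our own simplified assumption, not the authors' exact specification.

      import numpy as np

      # Minimal sketch: 'long only' moving-average cross-over with a dynamic
      # trailing stop (assumed simplification of the modified strategy).

      def backtest(prices, fast=10, slow=50, stop_frac=0.95):
          pos, entry_max, returns = 0, 0.0, []
          for t in range(slow, len(prices)):
              fast_ma = prices[t - fast:t].mean()
              slow_ma = prices[t - slow:t].mean()
              if pos == 0 and fast_ma > slow_ma:          # 'buy' cross-over signal
                  pos, entry_price, entry_max = 1, prices[t], prices[t]
              elif pos == 1:
                  entry_max = max(entry_max, prices[t])   # ratchet the trailing stop up
                  if prices[t] < stop_frac * entry_max:   # dynamic threshold exit
                      returns.append(prices[t] / entry_price - 1)
                      pos = 0
          return returns

      rng = np.random.default_rng(1)
      prices = np.cumprod(1 + rng.normal(0.0005, 0.01, 1000)) * 100  # synthetic prices
      print("trade returns:", backtest(prices))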

  4. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.

  5. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  6. Software Release Procedure and Tools

    OpenAIRE

    Giammatteo, Gabriele; Frosini, Luca; Laskaris, Nikolas

    2015-01-01

    Deliverable D4.1 - "Software Release Procedures and Tools" aims to provide a detailed description of the procedures applied and tools used to manage releases of the gCube System within Work Package 4. gCube System is the software at the basis of all VREs applications, data management services and portals. Given the large size of the gCube system, its high degree of modularity and the number of developers involved in the implementation, a set of procedures that formalize and simplify the integ...

  7. The Law of Aggregate Demand : Empirical Evidence From India Using Nonparametric Direct Average Derivative Estimation procedure

    OpenAIRE

    Chakrabarty, Manisha

    2001-01-01

    This paper attempts to provide empirical evidence for the positive definiteness of the mean income effect matrix, a sufficient condition for market demand to satisfy the law of demand derived by Härdle, Hildenbrand and Jerison [HHJ (1991)]. Increasing heterogeneity in the spending of populations of households leads to this sufficient condition, which is falsifiable from cross-section data. Based on this framework we use the National Sample Survey (NSS) 50th round data (1993-1994) for the ...

  8. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  9. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  10. Discontinuities and hysteresis in quantized average consensus

    NARCIS (Netherlands)

    Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo

    2011-01-01

    We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering

  11. Error estimates on averages of correlated data

    International Nuclear Information System (INIS)

    We describe how the true statistical error on an average of correlated data can be obtained with ease and efficiency by a renormalization group method. The method is illustrated with numerical and analytical examples, having finite as well as infinite range correlations. (orig.)

  12. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    textabstractThis paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen

  13. High average-power induction linacs

    Energy Technology Data Exchange (ETDEWEB)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.

    1989-03-15

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.

  14. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  15. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation...

  16. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  17. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...

  18. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    C. Chiarella; X.Z. He; C.H. Hommes

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type use

  19. Cortical evoked potentials recorded from the guinea pig without averaging.

    Science.gov (United States)

    Walloch, R A

    1975-01-01

    Potentials evoked by tonal pulses and recorded with a monopolar electrode on the pial surface over the auditory cortex of the guinea pig are presented. These potentials are compared with averaged potentials recorded in previous studies with an electrode on the dura. The potentials recorded by these two techniques have similar waveforms, peak latencies and thresholds, and appear to be generated within the same region of the cerebral cortex. As expected, the amplitude of the evoked potentials recorded from the pial surface is larger than that recorded from the dura; consequently, averaging is not needed to extract the evoked potential once the dura is removed. The thresholds for the evoked cortical potential are similar to behavioral thresholds for the guinea pig at high frequencies; however, evoked potential thresholds are elevated over behavioral thresholds at low frequencies. The removal of the dura and the direct recording of the evoked potential appear most appropriate for acute experiments. The recording of an evoked potential with dural electrodes employing averaging procedures appears most appropriate for chronic studies.

  20. Tsallis’ entropy maximization procedure revisited

    Science.gov (United States)

    Martínez, S.; Nicolás, F.; Pennini, F.; Plastino, A.

    2000-11-01

    The proper way of averaging is an important question with regard to Tsallis' thermostatistics. Three different procedures have been employed thus far in the pertinent literature. The third one, i.e., the Tsallis-Mendes-Plastino (TMP) (Physica A 261 (1998) 534) normalization procedure, exhibits clear advantages with respect to the earlier ones. In this work, we advance a distinct (from the TMP one) way of handling the Lagrange multipliers involved in the extremization process that leads to Tsallis' statistical operator. It is seen that the new approach considerably simplifies the pertinent analysis without losing the beautiful properties of the Tsallis-Mendes-Plastino formalism.
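
    For context, the TMP prescription evaluates q-expectation values with escort-normalized weights; a schematic statement in standard notation follows (our summary of the usual convention, not this paper's new multiplier treatment):

      \langle A \rangle_q = \frac{\sum_i p_i^{\,q} A_i}{\sum_j p_j^{\,q}},
      \qquad
      S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1} \quad (k_B = 1)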

  1. Tsallis' entropy maximization procedure revisited

    CERN Document Server

    Martínez, S; Pennini, F; Plastino, A

    2000-01-01

    The proper way of averaging is an important question with regard to Tsallis' thermostatistics. Three different procedures have been employed thus far in the pertinent literature. The third one, i.e., the Tsallis-Mendes-Plastino (TMP) normalization procedure, exhibits clear advantages with respect to the earlier ones. In this work, we advance a distinct (from the TMP one) way of handling the Lagrange multipliers involved in the extremization process that leads to Tsallis' statistical operator. It is seen that the new approach considerably simplifies the pertinent analysis without losing the beautiful properties of the Tsallis-Mendes-Plastino formalism.

  2. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  3. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... can be efficiently computed, we immediately gain scalability. GA is inherently more robust than PCA, but we show that they coincide for Gaussian data. We exploit that averages can be made robust to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. Robustness can be with respect......, making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie....

  4. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  5. Average Regression-Adjusted Controlled Regenerative Estimates

    OpenAIRE

    Lewis, Peter A.W.; Ressler, Richard

    1991-01-01

    Proceedings of the 1991 Winter Simulation Conference, Barry L. Nelson, W. David Kelton, Gordon M. Clark (eds.). One often uses computer simulations of queueing systems to generate estimates of system characteristics along with estimates of their precision. Obtaining precise estimates, especially for high traffic intensities, can require large amounts of computer time. Average regression-adjusted controlled regenerative estimates result from combining the two techniques ...

  6. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...

  7. Average Drift Analysis and Population Scalability

    OpenAIRE

    He, Jun; Yao, Xin

    2013-01-01

    This paper aims to study how the population size affects the computation time of evolutionary algorithms in a rigorous way. The computation time of an evolutionary algorithm can be measured by either the expected number of generations (hitting time) or the expected number of fitness evaluations (running time) to find an optimal solution. Population scalability is the ratio of the expected hitting time between a benchmark algorithm and an algorithm using a larger population size. Average drift...

  8. On Heroes and Average Moral Human Beings

    OpenAIRE

    Kirchgässner, Gebhard

    2001-01-01

    After discussing various approaches about heroic behaviour in the literature, we first give a definition and classification of moral behaviour, in distinction to intrinsically motivated and ‘prudent' behaviour. Then, we present some arguments on the function of moral behaviour according to ‘minimal' standards of the average individual in a modern democratic society, before we turn to heroic behaviour. We conclude with some remarks on methodological as well as social problems which arise or ma...

  9. Time-dependent angularly averaged inverse transport

    OpenAIRE

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator.

  10. A Visibility Graph Averaging Aggregation Operator

    OpenAIRE

    Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong

    2013-01-01

    The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator, called the visibility graph averaging (VGA) aggregation operator, is proposed. The proposed operator is based on the visibility graph, which can convert a time series into a graph. The weights are obtained according to the importance of the data in the visibility graph. Finally, the VGA operator is used in the analysis of the TAIEX database to illustrate that it is practical and compare...
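
    The natural visibility criterion underlying such graphs is simple: two samples see each other if no intermediate sample blocks the straight line between them. A minimal sketch of the standard natural visibility graph construction follows; the degree-based weighting is our own assumption, the paper derives weights from its own notion of importance in the graph.

      # Minimal sketch: build a natural visibility graph from a time series and
      # form a degree-weighted average (assumed simplification of VGA weights).

      def visibility_edges(series):
          """(a, b) are connected if every c between them lies below the line a-b."""
          n = len(series)
          edges = []
          for a in range(n):
              for b in range(a + 1, n):
                  ya, yb = series[a], series[b]
                  if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                         for c in range(a + 1, b)):
                      edges.append((a, b))
          return edges

      def vga_average(series):
          edges = visibility_edges(series)
          degree = [0] * len(series)
          for a, b in edges:
              degree[a] += 1; degree[b] += 1
          return sum(d * x for d, x in zip(degree, series)) / sum(degree)

      data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
      print(vga_average(data))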

  11. Dollar-Cost Averaging: An Investigation

    OpenAIRE

    Fang, Wei

    2007-01-01

    Dollar-cost Averaging (DCA) is a common and useful systematic investment strategy for mutual fund managers, private investors, financial analysts and retirement planners. The performance effectiveness of DCA is highly controversial among academics and professionals. As a popularly recommended investment strategy, DCA is recognized as a risk reduction strategy; however, this advantage has been claimed to come at the expense of generating higher returns. The dissertation is to intensively inves...

  12. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  13. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    International Nuclear Information System (INIS)

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, such as 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
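
    A minimal sketch of the conversion step follows, assuming a conversion-factor table indexed by nominal tube voltage and ripple; the factor values and the bilinear interpolation are illustrative assumptions, not the paper's regression coefficients.

      import numpy as np

      # Minimal sketch: convert an average-voltage kV-meter reading to PPV by
      # interpolating an assumed conversion-factor table k_PPV,Uav(kV, ripple).
      # The table values below are made up for illustration only.

      kv_grid = np.array([50.0, 100.0, 150.0])       # nominal tube voltage (kV)
      ripple_grid = np.array([0.0, 50.0, 100.0])     # ripple (%)
      k_table = np.array([[1.00, 1.03, 1.08],        # assumed conversion factors
                          [1.00, 1.02, 1.06],
                          [1.00, 1.02, 1.05]])

      def ppv_from_average(reading, kv, ripple):
          # bilinear interpolation of the conversion factor
          i = np.clip(np.searchsorted(kv_grid, kv) - 1, 0, len(kv_grid) - 2)
          j = np.clip(np.searchsorted(ripple_grid, ripple) - 1, 0, len(ripple_grid) - 2)
          tx = (kv - kv_grid[i]) / (kv_grid[i + 1] - kv_grid[i])
          ty = (ripple - ripple_grid[j]) / (ripple_grid[j + 1] - ripple_grid[j])
          k = (k_table[i, j] * (1 - tx) * (1 - ty) + k_table[i + 1, j] * tx * (1 - ty)
               + k_table[i, j + 1] * (1 - tx) * ty + k_table[i + 1, j + 1] * tx * ty)
          return k * reading

      print(ppv_from_average(reading=78.5, kv=80.0, ripple=10.0))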

  14. Geomagnetic effects on the average surface temperature

    Science.gov (United States)

    Ballatore, P.

    Several results have previously shown that solar activity can be related to cloudiness and to the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered, differentiated across separate geographic and geomagnetic planetary regions.

  15. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  16. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, Giovanna; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronograph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...

  17. STRONG APPROXIMATION FOR MOVING AVERAGE PROCESSES UNDER DEPENDENCE ASSUMPTIONS

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Let {X_t, t ≥ 1} be a moving average process defined by X_t = Σ_{k=0}^{∞} a_k ξ_{t-k}, where {a_k, k ≥ 0} is a sequence of real numbers and {ξ_t, -∞ < t < ∞} is a doubly infinite sequence of strictly stationary dependent random variables. Under conditions on {a_k, k ≥ 0} which entail that {X_t, t ≥ 1} is either a long memory process or a linear process, the strong approximation of {X_t, t ≥ 1} by a Gaussian process is studied. Finally, the results are applied to obtain the strong approximation of a long memory process by a fractional Brownian motion, and laws of the iterated logarithm for moving average processes.
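
    A minimal sketch of such a process with geometrically decaying coefficients follows; the coefficient choice and the i.i.d. innovations are illustrative assumptions, whereas the paper's results concern dependent innovations and long-memory coefficient decay.

      import numpy as np

      # Minimal sketch: simulate X_t = sum_{k>=0} a_k * xi_{t-k}, truncating the
      # infinite sum. a_k = 0.7**k is an assumed short-memory choice; long-memory
      # cases would use slowly decaying coefficients such as a_k ~ k**(d-1).

      rng = np.random.default_rng(2)
      T, K = 1000, 200                       # sample length, truncation of the sum
      a = 0.7 ** np.arange(K)                # moving-average coefficients
      xi = rng.standard_normal(T + K)        # innovations xi_{t-k}

      X = np.array([a @ xi[t + K - 1 - np.arange(K)] for t in range(T)])
      # each X[t] = sum_k a[k] * xi[(t+K-1) - k], so all needed lags exist

      print(X.mean(), X.std())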

  18. Application of Network-averaged Teleseismic P-wave Spectra to Seismic Yield Estimation of Underground Nuclear Explosions

    Science.gov (United States)

    Murphy, J. R.; Barker, B. W.

    A set of procedures is described for estimating network-averaged teleseismic P-wave spectra for underground nuclear explosions and for analytically inverting these spectra to obtain estimates of mb/yield relations and individual yields for explosions at previously uncalibrated test sites. These procedures are then applied to the analyses of explosions at the former Soviet test sites at Shagan River, Degelen Mountain, Novaya Zemlya and Azgir, as well as at the French Sahara, U.S. Amchitka and Chinese Lop Nor test sites. It is demonstrated that the resulting seismic estimates of explosion yield and mb/yield relations are remarkably consistent with a variety of other available information for a number of these test sites. These results lead us to conclude that the network-averaged teleseismic P-wave spectra provide considerably more diagnostic information regarding the explosion seismic source than do the corresponding narrowband magnitude measures such as mb, Ms and mb(Lg), and, therefore, that they are to be preferred for applications to seismic yield estimation for explosions at previously uncalibrated test sites.

  19. Hyperspectral imaging based procedures applied to bottom ash characterization

    Science.gov (United States)

    Bonifazi, Giuseppe; Serranti, Silvia

    2007-09-01

    Bottom ash from Municipal Solid Waste Incinerators (MSWIs) is mainly landfilled or used as road foundation material in European countries. Bottom ash is usually first crushed to below 40 mm and separated magnetically to recover the steel scrap. The remaining material contains predominantly sand, sinters and pieces of stone, glass and ceramics, which could be used as building material if strict technical and environmental requirements are respected. The main problem is the presence of residual organic matter in the ash and the large surface area presented by the fine fraction, which creates leaching values, for elements such as copper, that are above the accepted levels for standard building materials. The main aim of the study was to evaluate the possibility offered by hyperspectral imaging to identify organic matter inside the residues, in order to develop control/selection strategies to be implemented inside the bottom ash recycling plant. Reflectance spectra of selected bottom ash samples were acquired in the VIS-NIR range (400-1000 nm). Results showed that the organic content of the different samples influences the spectral signatures; in particular, an inverse correlation between reflectance level and organic matter content was found.

  20. Bayesian Model Averaging and Weighted Average Least Squares : Equivariance, Stability, and Numerical Issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squa

  1. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  2. Analytical network-averaging of the tube model: Rubber elasticity

    Science.gov (United States)

    Khiêm, Vu Ngoc; Itskov, Mikhail

    2016-10-01

    In this paper, a micromechanical model for rubber elasticity is proposed on the basis of analytical network-averaging of the tube model and by applying a closed-form of the Rayleigh exact distribution function for non-Gaussian chains. This closed-form is derived by considering the polymer chain as a coarse-grained model on the basis of the quantum mechanical solution for finitely extensible dumbbells (Ilg et al., 2000). The proposed model includes very few physically motivated material constants and demonstrates good agreement with experimental data on biaxial tension as well as simple shear tests.

  3. Token Systems: A Procedural Guide

    Science.gov (United States)

    Sattler, Howard E.; Swoope, Karen S.

    1970-01-01

    A primary purpose of the token system is to expand the operant model in order to provide a broader reinforcement base. The ten procedural considerations necessary for implementing a token system are presented. Beyond this, fundamental knowledge of operant principles remains essential in order to apply the token system correctly. (Author/KJ)

  4. Evaluating Prevention and Intervention Procedures.

    Science.gov (United States)

    Sullivan, Arthur P.; And Others

    1986-01-01

    States the process-outcome research and evaluation paradigm applied to alcohol and substance abuse prevention and intervention programs. Shows its application to efforts to improve students' and patients' self-esteem to be deficient in certain aspects and advocates additions to the evaluation procedures, most notably analysis of in-session change.…

  5. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  6. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  7. Time-dependent angularly averaged inverse transport

    CERN Document Server

    Bal, Guillaume

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

  8. PROFILE OF HIRED FARMWORKERS, 1998 ANNUAL AVERAGES

    OpenAIRE

    Runyan, Jack L.

    2000-01-01

    An average of 875,000 persons 15 years of age and older did hired farmwork each week as their primary job in 1998. An additional 63,000 people did hired farmwork each week as their secondary job. Hired farmworkers were more likely than the typical U.S. wage and salary worker to be male, Hispanic, younger, less educated, never married, and not U.S. citizens. The West (42 percent) and South (31.4 percent) census regions accounted for almost three-fourths of the hired farmworkers. The rate of un...

  9. Fluctuations of wavefunctions about their classical average

    Energy Technology Data Exchange (ETDEWEB)

    Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)

    2003-02-07

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  10. Fluctuations of wavefunctions about their classical average

    CERN Document Server

    Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  11. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.
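
    Schematically, promoting average sparsity over a concatenation of frames Ψ = [Ψ_1, …, Ψ_q] leads to an analysis-type problem of the following form (a hedged sketch in standard compressed-sensing notation; the exact functional and constraints vary across the authors' formulations):

      \min_{x} \; \frac{1}{q} \sum_{i=1}^{q} \Vert \Psi_i^{\dagger} x \Vert_1
      \quad \text{subject to} \quad \Vert y - \Phi x \Vert_2 \le \epsilon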

  12. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to fit a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city; with 100,000 simulations, the slow convergence of the HDD call price can be observed. The methods of this research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
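
    The two modeling steps described above, a mean-reverting Ornstein-Uhlenbeck temperature process around a seasonal mean and Monte Carlo pricing of an HDD call, can be sketched as follows. This is a minimal illustration: the seasonal mean, mean-reversion speed, volatility, strike and tick size are made-up values, not the parameters calibrated to the Zhengzhou data.

      import numpy as np

      # One common mean-reverting formulation around a seasonal mean S(t)
      # (Euler scheme):  dT = dS + kappa*(S - T) dt + sigma dW.
      # All parameter values here are illustrative.
      def simulate_temperature(days, kappa=0.25, sigma=2.0, seed=None):
          rng = np.random.default_rng(seed)
          t = np.arange(days)
          seasonal = 14.0 + 11.0 * np.sin(2 * np.pi * (t - 100) / 365.25)  # deg C
          temp = np.empty(days)
          temp[0] = seasonal[0]
          for i in range(1, days):
              drift = (seasonal[i] - seasonal[i - 1]) \
                      + kappa * (seasonal[i - 1] - temp[i - 1])
              temp[i] = temp[i - 1] + drift + sigma * rng.standard_normal()
          return temp

      def hdd_call_price(n_paths, days=90, strike=600.0, tick=1.0, r=0.03):
          # Payoff of an HDD call: tick * max(sum of daily HDDs - strike, 0).
          payoffs = np.empty(n_paths)
          for j in range(n_paths):
              temp = simulate_temperature(days, seed=j)
              hdd = np.sum(np.maximum(18.0 - temp, 0.0))  # heating degree days
              payoffs[j] = tick * max(hdd - strike, 0.0)
          return np.exp(-r * days / 365.0) * payoffs.mean()

      print(hdd_call_price(2_000))  # the estimate converges slowly in n_paths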

  13. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  14. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high-current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
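
    The mode-matrix logic can be illustrated with a small lookup table. The mode names, power values and interlock inputs below are hypothetical stand-ins for the gate-array implementation described in the abstract:

      # Hypothetical machine-mode x beam-mode matrix: each entry is the
      # maximum allowable average beam power (W) for that combination;
      # combinations absent from the matrix inhibit the beam entirely.
      BEAM_POWER_LIMITS = {
          ("straight_ahead", "tune_up"): 1.0,
          ("straight_ahead", "low_power"): 100.0,
          ("full_fel_loop", "low_power"): 100.0,
          ("full_fel_loop", "full_power"): 2.0e6,  # up to 2 MW beam power
      }

      def drive_laser_power_limit(machine_mode, beam_mode,
                                  blm_ok, rf_ok, laser_safety_ok):
          """Return the average power limit enforced via the photocathode
          drive laser; 0.0 means the beam is inhibited."""
          if not (blm_ok and rf_ok and laser_safety_ok):
              return 0.0  # any bad interlock input inhibits the beam
          return BEAM_POWER_LIMITS.get((machine_mode, beam_mode), 0.0)

      print(drive_laser_power_limit("full_fel_loop", "full_power",
                                    True, True, True))   # 2000000.0
      print(drive_laser_power_limit("full_fel_loop", "full_power",
                                    False, True, True))  # 0.0 (BLM trip)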

  15. Rademacher averages on noncommutative symmetric spaces

    CERN Document Server

    Merdy, Christian Le

    2008-01-01

    Let E be a separable (or the dual of a separable) symmetric function space, let M be a semifinite von Neumann algebra and let E(M) be the associated noncommutative function space. Let $(\epsilon_k)_k$ be a Rademacher sequence, on some probability space $\Omega$. For finite sequences $(x_k)_k$ of E(M), we consider the Rademacher averages $\sum_k \epsilon_k\otimes x_k$ as elements of the noncommutative function space $E(L^\infty(\Omega)\otimes M)$ and study estimates for their norms $\Vert \sum_k \epsilon_k \otimes x_k\Vert_E$ calculated in that space. We establish general Khintchine type inequalities in this context. Then we show that if E is 2-concave, the latter norm is equivalent to the infimum of $\Vert (\sum y_k^*y_k)^{1/2}\Vert + \Vert (\sum z_k z_k^*)^{1/2}\Vert$ over all $y_k,z_k$ in E(M) such that $x_k=y_k+z_k$ for any k. Dual estimates are given when E is 2-convex and has a non-trivial upper Boyd index. We also study Rademacher averages for doubly indexed families of E(M).

  16. Motional averaging in a superconducting qubit.

    Science.gov (United States)

    Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S

    2013-01-01

    Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011

  17. Intensity contrast of the average supergranule

    CERN Document Server

    Langfellner, J; Gizon, L

    2016-01-01

    While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\pm0.6)\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...

  18. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the most promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among molecules are compared to collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
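
    The distinction between the two averages is easy to demonstrate with a toy ensemble (purely illustrative numbers): when the particle count n varies across realizations, the phasic (plain ensemble) average of a velocity differs from the mass-weighted one.

      import numpy as np

      # Toy ensemble of 4 realizations of the same control volume:
      # n[k] grains, all moving at velocity u[k] (illustrative values).
      n = np.array([2, 5, 1, 8])            # grain counts per realization
      u = np.array([1.0, 0.6, 1.4, 0.3])    # velocities per realization

      phasic = u.mean()                     # plain ensemble (phasic) average
      mass_weighted = (n * u).sum() / n.sum()

      print(phasic, mass_weighted)          # 0.825 vs 0.55: they differ
      # With n constant over realizations the two coincide, as for gases:
      print(np.isclose((3 * u).sum() / (3 * u.size), u.mean()))  # True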

  19. GRID PRICING VERSUS AVERAGE PRICING FOR SLAUGHTER CATTLE: AN EMPIRICAL ANALYSIS

    OpenAIRE

    Scott W. Fausti; Qasmi, Bashir A.

    1999-01-01

    The paper compares weekly producer revenue under grid pricing and average dressed weight pricing methods for 2560 cattle over a period of 102 weeks. Regression analysis is applied to identify factors affecting the revenue differential.

  20. A Comparison Between Two Average Modelling Techniques of AC-AC Power Converters

    OpenAIRE

    Pawel Szczesniak

    2015-01-01

    In this paper, a comparative evaluation of two modelling tools for switching AC-AC power converters is presented. Both of them are based on average modelling techniques. The first approach is based on the circuit averaging technique and consists of topological manipulations applied to the converter's states. The second approach makes use of a state-space averaged model of the converter and is based on analytical manipulations using the different state representations of the converter. The two m...

  1. Average resonance parameters evaluation for actinides

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    New evaluated <{Gamma}{sub n}{sup 0}> and <D> values for {sup 238}U, {sup 237}Np, {sup 243}Cm, {sup 245}Cm, {sup 246}Cm and {sup 241}Am nuclei in the resolved resonance region are presented. The applied method, based on the idea that missed experimental resonances result in correlated changes of the reduced neutron width and level spacing distributions, is discussed. (author)

  2. Lidar profilers in the context of wind energy–a verification procedure for traceable measurements

    DEFF Research Database (Denmark)

    Gottschall, Julia; Courtney, Michael; Wagner, Rozenn;

    2012-01-01

    …a repeatable test. Second, a linear regression is applied to the data for each height. The third step is a bin-average analysis of the lidar error, i.e. the difference between the lidar and reference measurements, forming the basis for the ensuing uncertainty estimation. The results of the verification test are both used to correct the lidar measurements and to derive a corresponding uncertainty budget. A significant limitation of the procedure is the considerable uncertainty introduced by the reference sensors themselves. The decision as to whether to apply the derived correction as a lidar calibration…

  3. Computation of the Metric Average of 2D Sets with Piecewise Linear Boundaries

    OpenAIRE

    Kels, Shay; Dyn, Nira; Lipovetsky, Evgeny

    2010-01-01

    The metric average is a binary operation between sets in R^n which is used in the approximation of set-valued functions. We introduce an algorithm that applies tools of computational geometry to the computation of the metric average of 2D sets with piecewise linear boundaries.

  4. Explicit expressions and recurrence formulas of radial average value for N-dimensional hydrogen atom

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, two recurrence formulas for the radial average values of the N-dimensional hydrogen atom are derived. The explicit results can be applied to discuss the average value of the centrifugal potential energy and other physical quantities. The corresponding results for the usual hydrogen atom are contained in the more general conclusions of this paper as special cases.

  5. COMPLEX INNER PRODUCT AVERAGING METHOD FOR CALCULATING NORMAL FORM OF ODE

    Institute of Scientific and Technical Information of China (English)

    陈予恕; 孙洪军

    2001-01-01

    This paper puts forward a complex inner product averaging method for calculating the normal form of ODEs. Compared with the conventional averaging method, the theoretical analysis takes such a simple form that it is easy to implement as a computer program. The results can be applied to both autonomous and non-autonomous systems. Finally, an example is solved to verify the method.

  6. Average prime-pair counting formula

    Science.gov (United States)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking $r>0$, let $\pi_{2r}(x)$ denote the number of prime pairs $(p,\,p+2r)$ with $p\le x$. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that $\pi_{2r}(x)\sim 2C_{2r}\,\mathrm{li}_2(x)$ with an explicit constant $C_{2r}>0$. There seems to be no good conjecture for the remainders $\omega_{2r}(x)=\pi_{2r}(x)-2C_{2r}\,\mathrm{li}_2(x)$ that corresponds to Riemann's formula for $\pi(x)-\mathrm{li}(x)$. However, there is a heuristic approximate formula for averages of the remainders $\omega_{2r}(x)$ which is supported by numerical results.
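
    As a quick numerical illustration for the twin-prime case $r=1$, the counts $\pi_2(x)$ can be compared with the Hardy-Littlewood prediction $2C_2\,\mathrm{li}_2(x)$. A minimal sketch ($C_2 \approx 0.660162$ is the twin-prime constant):

      import math
      from scipy.integrate import quad

      C2 = 0.6601618158  # Hardy-Littlewood twin-prime constant

      def prime_sieve(n):
          is_p = bytearray([1]) * (n + 1)
          is_p[0] = is_p[1] = 0
          for i in range(2, int(n ** 0.5) + 1):
              if is_p[i]:
                  is_p[i * i :: i] = bytearray(len(is_p[i * i :: i]))
          return is_p

      def pi_2(x, is_p):
          # number of prime pairs (p, p+2) with p <= x
          return sum(1 for p in range(2, x + 1) if is_p[p] and is_p[p + 2])

      def li2(x):
          # li_2(x) = integral from 2 to x of dt / (log t)^2
          return quad(lambda t: 1.0 / math.log(t) ** 2, 2, x)[0]

      N = 10 ** 6
      sieve = prime_sieve(N + 2)
      for x in (10 ** 4, 10 ** 5, 10 ** 6):
          print(x, pi_2(x, sieve), round(2 * C2 * li2(x)))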

  7. Hedge algorithm and Dual Averaging schemes

    CERN Document Server

    Baes, Michel

    2011-01-01

    We show that the Hedge algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular instance of Dual Averaging schemes, which have recently been introduced by Nesterov for regret minimization. Based on this interpretation, we establish three alternative methods of the Hedge algorithm: one in the form of the original method, but with optimal parameters, one that requires less a priori information, and one that is better adapted to the context of the Hedge algorithm. All our modified methods have convergence results that are better or at least as good as the performance guarantees of the vanilla method. In numerical experiments, our methods significantly outperform the original scheme.
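
    For reference, the vanilla Hedge update that the paper takes as its starting point can be sketched in a few lines. The fixed learning rate below is a generic choice; the paper's optimal-parameter variants are not reproduced here.

      import numpy as np

      def hedge_regret(losses, eta=0.5):
          """Vanilla Hedge: keep a distribution over n experts and update it
          multiplicatively with the exponential of the observed losses.
          losses[t, i] is the loss of expert i in round t (in [0, 1])."""
          T, n = losses.shape
          log_w = np.zeros(n)              # log-weights, start uniform
          total = 0.0
          for t in range(T):
              p = np.exp(log_w - log_w.max())
              p /= p.sum()                 # current distribution over experts
              total += p @ losses[t]       # expected loss this round
              log_w -= eta * losses[t]     # multiplicative-weights update
          return total - losses.sum(axis=0).min()  # regret vs best expert

      rng = np.random.default_rng(0)
      print(hedge_regret(rng.uniform(size=(1000, 10))))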

  8. The Lang-Trotter Conjecture on Average

    OpenAIRE

    Baier, Stephan

    2006-01-01

    For an elliptic curve $E$ over $\mathbb{Q}$ and an integer $r$ let $\pi_E^r(x)$ be the number of primes $p\le x$ of good reduction such that the trace of the Frobenius morphism of $E/\mathbb{F}_p$ equals $r$. We consider the quantity $\pi_E^r(x)$ on average over certain sets of elliptic curves. In particular, we establish the following: If $A,B>x^{1/2+\epsilon}$ and $AB>x^{3/2+\epsilon}$, then the arithmetic mean of $\pi_E^r(x)$ over all elliptic curves $E$ : $y^2=x^3+ax+b$ with $a,b\in \mathbb{Z}$, $|a...

  9. Averaging lifetimes for B hadron species

    International Nuclear Information System (INIS)

    The measurements of the lifetimes of the individual B species are of great interest. Many of these measurements are well below the 10% level of precision. However, in order to reach the precision necessary to test the current theoretical predictions, the results from different experiments need to be averaged together. Therefore, the relevant systematic uncertainties of each measurement need to be well defined in order to understand the correlations between the results from different experiments. In this paper we discuss the dominant sources of systematic error which lead to correlations between the different measurements. We point out problems connected with the conventional approach of combining lifetime data and discuss methods which overcome these problems. (orig.)

  10. Average transverse momentum quantities approaching the lightfront

    CERN Document Server

    Boer, Daniel

    2014-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.

  11. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.
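
    The core of the bivariate procedure can be sketched as follows: anchor points are selected from one signal and windows of the simultaneously recorded second signal are averaged around them. This minimal version uses simple increase events as anchors and omits the refinements of the published method.

      import numpy as np

      def bivariate_prsa(trigger, target, L=50):
          """Anchors are increase events in `trigger`; windows of the
          simultaneously recorded `target` are averaged around them."""
          anchors = [i for i in range(L, len(trigger) - L)
                     if trigger[i] > trigger[i - 1]]
          windows = np.array([target[i - L:i + L] for i in anchors])
          return windows.mean(axis=0)  # phase-rectified average of `target`

      # Example: `target` echoes `trigger` with a 5-sample delay plus noise;
      # the averaged window peaks 5 samples to the right of the anchor.
      rng = np.random.default_rng(1)
      x = rng.standard_normal(10_000)
      y = np.roll(x, 5) + 0.5 * rng.standard_normal(10_000)
      print(bivariate_prsa(x, y, L=10).round(2))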

  12. Loss of lifetime due to radiation exposure-averaging problems.

    Science.gov (United States)

    Raicević, J J; Merkle, M; Ehrhardt, J; Ninković, M M

    1997-04-01

    A new method is presented for assessing the years of life lost (YLL) due to stochastic effects caused by exposure to ionizing radiation. The widely accepted method from the literature uses a ratio of means of two quantities, defining in fact the loss of life as a derived quantity. We start from the real stochastic nature of the quantity (YLL), which enables us to obtain its mean values in a consistent way, using standard averaging procedures based on the corresponding joint probability density functions needed in this problem. Our method is mathematically different and produces lower values of average YLL. In this paper we also found certain similarities with the concept of loss of life expectancy among exposure-induced deaths (LLE-EID), which is accepted in the recently published UNSCEAR report, where the same quantity is defined as years of life lost per radiation-induced case (YLC). Using the same data base, the YLL and the LLE-EID are calculated and compared for the simplest exposure case: a discrete exposure at age a. It is found that LLE-EID overestimates the YLL, and that the magnitude of this overestimation reaches more than 15%, depending on the effect under consideration. PMID:9119679
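
    The averaging pitfall at issue, treating a ratio of separately averaged quantities as if it were the mean of the stochastic ratio itself, is easy to demonstrate numerically (stand-in distributions, not the paper's dose data):

      import numpy as np

      rng = np.random.default_rng(2)
      # Illustrative stand-ins for the two quantities whose ratio of means
      # the conventional construction uses (independent here for simplicity):
      num = rng.uniform(5.0, 15.0, size=100_000)
      den = rng.uniform(1.0, 3.0, size=100_000)

      print(num.mean() / den.mean())  # ratio of means: ~5.0
      print((num / den).mean())       # mean of the ratio: ~5.49
      # By Jensen's inequality E[1/den] > 1/E[den], so the two estimators
      # disagree even for independent samples; treating YLL as the random
      # quantity it is corresponds to averaging the ratio directly.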

  13. Resonance averaged channel radiative neutron capture cross sections

    International Nuclear Information System (INIS)

    In order to apply Lane and Lynn's channel capture model in calculations with a realistic optical model potential, we have derived an approximate wave function for the entrance channel in the neutron-nucleus reaction, based on the intermediate interaction model. It is valid in the exterior region as well as the region near the nuclear surface, and is expressed in terms of the wave function and reactance matrix of the optical model and of the near-resonance parameters. With this formalism the averaged channel radiative neutron capture cross section in the resonance region is written as the sum of three terms. The first two terms correspond to the contributions of the optical model real and imaginary parts, respectively, and together can be regarded as the radiative capture of the shape elastic wave. The third term is a fluctuation term, corresponding to the radiative capture of the compound elastic wave in the exterior region. On applying this theory in the resonance region, we obtain an expression for the average valence radiative width similar to that of Lane and Mughabghab. We have investigated the magnitude and energy dependence of the three terms as a function of the neutron incident energy. Calculated results for 98Mo and 55Mn show that the averaged channel radiative capture cross section in the giant resonance region of the neutron strength function may account for a considerable fraction of the total (n, γ) cross section; at lower neutron energies a large part of this channel capture arises from the fluctuation term. We have also calculated the partial capture cross sections in 98Mo and 55Mn at 2.4 keV and 24 keV, respectively, and compared the 98Mo results with the experimental data. (orig.)

  14. Procedural justice and intragroup status: Knowing where we stand in a group enhances reactions to procedures

    OpenAIRE

    Prooijen, J.-W. van; Bos, K. van den; Wilke, H.A.M.

    2005-01-01

    The current research investigates the role of relative intragroup status as a moderator of people's reactions to procedural justice. Based on a review of the procedural justice literature, the authors argue that information about intragroup status influences people's reactions to variations in procedural justice. In correspondence with predictions, two experiments show that reactions of people who have been informed about their intragroup status position (either low, average, or high) are influ...

  15. Development of an Advanced Flow Meter using the Averaging Bi-directional Flow Tube

    International Nuclear Information System (INIS)

    An advanced flow meter using the concept of an averaging bi-directional flow tube was developed. To find the characteristics of the flow meter and derive the theory of measurement in single- and two-phase flow conditions, basic tests were performed using flow meters with diameters of 27, 80 and 200 mm. CFD (computational fluid dynamics) calculations were also performed to find the effects of temperature and pressure, and to optimize the design of a prototypic flow meter. Following this procedure, prototype flow meters with diameters of 200 and 500 mm were designed and manufactured. The meter is intended for use in the region in which the calibration constant is unchanged. The stress analysis showed that the proposed flow meter of H-beam shape is inherently strong against the bending force induced by the flow. A flow computer was developed to calculate the flow rate from the measured pressure difference. In this study, a performance test using this prototype flow meter was carried out. The developed flow meter can be applied over a wide range of pressures and temperatures. The basic tests showed that the linearity of the proposed flow meter is ±0.5% of full scale and the flow turndown ratio is 1:20 where the Reynolds number is larger than 10,000
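
    For a differential-pressure device of this kind, the flow-computer calculation reduces to a square-root law with a calibration constant. The sketch below is generic: the calibration constant and the absence of temperature or pressure corrections are illustrative assumptions, not the developed meter's actual calibration.

      import math

      def volumetric_flow(delta_p, rho, pipe_diameter, k=0.6):
          """Generic averaging-Pitot-style conversion Q = K*A*sqrt(2*dp/rho);
          delta_p [Pa], rho [kg/m^3], pipe_diameter [m].  K = 0.6 is an
          illustrative calibration constant."""
          area = math.pi * pipe_diameter ** 2 / 4.0
          q = k * area * math.sqrt(2.0 * abs(delta_p) / rho)
          return math.copysign(q, delta_p)  # bi-directional: sign follows dp

      # 200 mm meter, water at 20 C, 5 kPa differential pressure:
      print(volumetric_flow(5_000.0, 998.0, 0.200))  # m^3/s, forward direction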

  16. Medical decision making for patients with Parkinson disease under Average Cost Criterion.

    Science.gov (United States)

    Goulionis, John E; Vozikis, Athanassios

    2009-01-01

    Parkinson's disease (PD) is one of the most common disabling neurological disorders and results in a substantial burden for patients, their families and society as a whole, in terms of increased health resource use and poor quality of life. For all stages of PD, medication therapy is the preferred medical treatment. The failure of medical regimes to prevent disease progression and long-term side effects has led to a resurgence of interest in surgical procedures. Partially observable Markov decision process (POMDP) models are a powerful and appropriate technique for decision making. In this paper we applied the POMDP model as a supportive tool for clinical decisions in the treatment of patients with Parkinson's disease. The aim of the model was to determine the critical threshold level at which to perform surgery in order to minimize the total costs over a patient's lifetime (where the costs incorporate duration of life, quality of life, and monetary units). Under some reasonable conditions reflecting the practical meaning of the deterioration, and based on the various diagnostic observations, we find an optimal average-cost policy for patients with PD with three deterioration levels. PMID:19549341

  17. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires that the BMA weights add to one, and then use a limited-memory quasi-Newtonian algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
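
    The reparametrization idea, exposing the BMA weights through unconstrained parameters so that a quasi-Newton routine can be applied directly, can be sketched as follows. This is a schematic using a softmax transform, not necessarily the authors' exact formulation:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def neg_log_lik(params, forecasts, obs):
          """forecasts: (T, K) ensemble members; obs: (T,) observations.
          params = K unconstrained weight parameters followed by log(sigma);
          the softmax keeps the weights positive without imposing an
          explicit sum-to-one constraint on the optimizer."""
          theta, log_sigma = params[:-1], params[-1]
          w = np.exp(theta - theta.max())
          w /= w.sum()
          dens = norm.pdf(obs[:, None], loc=forecasts, scale=np.exp(log_sigma))
          return -np.log(dens @ w + 1e-300).sum()

      def fit_bma_bfgs(forecasts, obs):
          x0 = np.zeros(forecasts.shape[1] + 1)
          res = minimize(neg_log_lik, x0, args=(forecasts, obs),
                         method="L-BFGS-B")
          w = np.exp(res.x[:-1] - res.x[:-1].max())
          return w / w.sum(), np.exp(res.x[-1])

      # Synthetic check: member 0 tracks the truth, member 1 is pure noise.
      rng = np.random.default_rng(3)
      truth = rng.standard_normal(500)
      ens = np.column_stack([truth + 0.1 * rng.standard_normal(500),
                             rng.standard_normal(500)])
      print(fit_bma_bfgs(ens, truth))  # weight on member 0 should dominate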

  18. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m^2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  19. Safety analysis procedures for PHWR

    Energy Technology Data Exchange (ETDEWEB)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements such as the public dose, the integrity of fuel and fuel channel, the integrity of containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for safety analyses of CANDU reactors in Korea. In this regard, the development of safety analysis procedures for CANDU reactors is being conducted not only to establish the safety analysis system, but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures for deterministic safety analyses were developed. They cover the specification of the initiating event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general procedures are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3 and 4.

  20. 48 CFR 25.408 - Procedures.

    Science.gov (United States)

    2010-10-01

    ... FOREIGN ACQUISITION Trade Agreements 25.408 Procedures. (a) If the WTO GPA or an FTA applies (see 25.401...) Provide unsuccessful offerors from WTO GPA or FTA countries notice in accordance with 14.409-1 or...

  1. 48 CFR 3009.570-3 - Procedures.

    Science.gov (United States)

    2010-10-01

    ... system of systems, if the offeror— (i) Has no direct financial interest in such systems, the contracting... (HSAR) 48 CFR 3009.570-2(a); and (c) Apply the following procedures: (1) After assessing the...

  2. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    Science.gov (United States)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and 'splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux q (via saltation or drift) scales with the imposed aerodynamic (basal) stress raised to some exponent n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^{2n} (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with a momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign measuring the diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute the threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of the flow structures responsible for erosion 'events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of

  3. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve this problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by a full-wave method-of-moments solution of the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by a genetic algorithm was found to have similar characteristics. The element spacing was chosen to provide the required beamwidth and close to a null in the E

  4. Global Average Brightness Temperature for April 2003

    Science.gov (United States)

    2003-01-01

    This image shows average temperatures in April 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  5. An integrated approach to investigate the reach-averaged bend scale dynamics of large meandering rivers

    Science.gov (United States)

    Monegaglia, Federico; Henshaw, Alex; Zolezzi, Guido; Tubino, Marco

    2016-04-01

    Planform development of evolving meander bends is a beautiful and complex dynamic phenomenon, controlled by the interplay among hydrodynamics, sediments and floodplain characteristics. In the past decades, morphodynamic models of river meandering have provided a thorough understanding of the unit physical processes interacting at the reach scale during meander planform evolution. On the other hand, recent years have seen advances in satellite geosciences able to provide data with increasing resolution and earth coverage, which are becoming an important tool for studying and managing river systems. Analyses of the planform development of meandering rivers through Landsat satellite imagery have been provided in very recent works. Methodologies for the objective and automatic extraction of key river development metrics from multi-temporal satellite images have been proposed, though they are often limited to the extraction of channel centerlines and not always able to yield quantitative data on channel width, migration rates and bed morphology. Overcoming this gap would be a major step towards integrating morphodynamic theories, models and real-world data for an increased understanding of meandering river dynamics. To fill this gap, a novel automatic procedure for extracting and analyzing the topography and planform dynamics of meandering rivers through time from satellite images is implemented. A robust algorithm able to compute the channel centerline in complex contexts, such as the presence of channel bifurcations and anabranching structures, is used. As a case study, the procedure is applied to the Landsat database for a reach of the well-known Rio Beni, a large, suspended-load dominated, tropical meandering river flowing through the Bolivian Amazon Basin. The reach-averaged evolution of single bends along the Rio Beni over a 30-year period is analyzed, in terms of bend amplification rates computed according to the local centerline migration rate. A

  6. Performance of Velicer's Minimum Average Partial Factor Retention Method with Categorical Variables

    Science.gov (United States)

    Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente

    2011-01-01

    Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…

  7. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  8. New procedure for departure formalities

    CERN Multimedia

    HR & GS Departments

    2011-01-01

    As part of the process of simplifying procedures and rationalising administrative processes, the HR and GS Departments have introduced new personalised departure formalities on EDH. These new formalities have applied to students leaving CERN since last year and from 17 October 2011 this procedure will be extended to the following categories of CERN personnel: Staff members, Fellows and Associates. It is planned to extend this electronic procedure to the users in due course. What purpose do departure formalities serve? The departure formalities are designed to ensure that members of the personnel contact all the relevant services in order to return any necessary items (equipment, cards, keys, dosimeter, electronic equipment, books, etc.) and are aware of all the benefits to which they are entitled on termination of their contract. The new departure formalities on EDH have the advantage of tailoring the list of services that each member of the personnel must visit to suit his individual contractual and p...

  9. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  10. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    Science.gov (United States)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
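
    The flavor of such bias-corrected fitting can be sketched with the high signal-to-noise approximation of the Rician mean, E[M] ≈ sqrt(nu^2 + sigma^2), applied to a mono-exponential diffusion decay. Parameter values below are illustrative; the paper's MD, MP and ML schemes differ in the statistic of the Rician PDF being fitted.

      import numpy as np
      from scipy.optimize import curve_fit

      def hs_model(b, s0, d, sigma):
          """High-SNR approximation of the Rician mean, E[M] ~ sqrt(nu^2 +
          sigma^2), for a mono-exponential decay nu(b) = s0*exp(-b*d)."""
          nu = s0 * np.exp(-b * d)
          return np.sqrt(nu ** 2 + sigma ** 2)

      # Simulate magnitude data |nu + n1 + i*n2| with Gaussian channel noise.
      rng = np.random.default_rng(4)
      b = np.linspace(0, 3000, 12)          # b-values, s/mm^2
      true_s0, true_d, noise = 1.0, 1.0e-3, 0.05
      nu = true_s0 * np.exp(-b * true_d)
      mag = np.abs(nu + noise * rng.standard_normal(12)
                   + 1j * noise * rng.standard_normal(12))

      popt, _ = curve_fit(hs_model, b, mag, p0=[1.0, 5.0e-4, 0.1])
      print("fitted D:", popt[1])  # near 1e-3 despite the Rician noise floor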

  11. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2015-01-01

    Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  12. Spatial Games Based on Pursuing the Highest Average Payoff

    Institute of Scientific and Technical Information of China (English)

    YANG Han-Xin; WANG Bing-Hong; WANG Wen-Xu; RONG Zhi-Hai

    2008-01-01

    We propose a strategy-updating mechanism based on pursuing the highest average payoff to investigate the prisoner's dilemma game and the snowdrift game. We apply the new rule to investigate cooperative behaviours on regular, small-world and scale-free networks, and find that spatial structure can maintain cooperation in the prisoner's dilemma game. In the snowdrift game, spatial structure can inhibit or promote cooperative behaviour, depending on the payoff parameter. We further study cooperative behaviour on scale-free networks in detail. Interestingly, non-monotonic behaviour is observed on scale-free networks, where middle-degree individuals have the lowest cooperation level.
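
    The update rule can be sketched as follows for the prisoner's dilemma on a regular ring lattice: in each round every player adopts the strategy of whoever, among itself and its neighbours, earned the highest average payoff per game played. The payoff values and the lattice are illustrative choices, not the paper's exact setup.

      import numpy as np

      def ring_neighbors(n, k=2):
          # regular ring lattice: k neighbours on each side of every node
          return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
                  for i in range(n)}

      def step(neigh, strategy, b=1.5):
          """One synchronous round of the prisoner's dilemma with the
          highest-average-payoff rule (payoffs T=b, R=1, P=S=0)."""
          avg = {}
          for i, js in neigh.items():
              p = sum(1.0 if strategy[i] and strategy[j]
                      else b if (not strategy[i]) and strategy[j]
                      else 0.0
                      for j in js)
              avg[i] = p / len(js)          # average payoff per game played
          return {i: strategy[max([i] + js, key=avg.get)]  # copy best average
                  for i, js in neigh.items()}

      rng = np.random.default_rng(0)
      neigh = ring_neighbors(200)
      s = {i: int(rng.integers(2)) for i in range(200)}
      for _ in range(100):
          s = step(neigh, s)
      print("cooperator fraction:", sum(s.values()) / len(s))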

  13. Risk-sensitive reinforcement learning algorithms with generalized average criterion

    Institute of Scientific and Technical Information of China (English)

    YIN Chang-ming; WANG Han-xing; ZHAO Fei

    2007-01-01

    A new algorithm is proposed, which potentially sacrifices the optimality of control policies in order to obtain robustness of the solutions. The robustness of solutions may become a very important property for a learning system when there is a mismatch between the theoretical model and the practical physical system, when the practical system is not static, or when the availability of a control action changes over time. The main contribution is that a set of approximation algorithms and their convergence results are given. A generalized average operator, instead of the usual optimal operator max (or min), is applied to study a class of important learning algorithms, dynamic programming algorithms, and to discuss their convergence from a theoretical point of view. The purpose of this research is to improve the robustness of reinforcement learning algorithms theoretically.
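
    One concrete instance of such a generalized average operator is the log-mean-exp ("mellowmax") operator, which interpolates between the arithmetic mean and the max; substituting it for max in value iteration trades some optimality for smoothness. The sketch below is schematic and not necessarily the operator used in the paper.

      import numpy as np

      def mellow_max(x, omega=5.0):
          """Log-mean-exp generalized average: tends to max(x) as omega ->
          infinity and to the arithmetic mean as omega -> 0+."""
          x = np.asarray(x, dtype=float)
          m = x.max()
          return m + np.log(np.mean(np.exp(omega * (x - m)))) / omega

      def soft_value_iteration(P, R, gamma=0.9, omega=5.0, iters=500):
          """Value iteration with max over actions replaced by a generalized
          average.  P[a, s, t]: transition probabilities; R[s, a]: rewards."""
          n_a, n_s, _ = P.shape
          V = np.zeros(n_s)
          for _ in range(iters):
              Q = R.T + gamma * np.einsum("ast,t->as", P, V)  # Q[a, s]
              V = np.array([mellow_max(Q[:, s], omega) for s in range(n_s)])
          return V

      # Tiny two-state, two-action sanity check with made-up dynamics:
      P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                    [[0.5, 0.5], [0.5, 0.5]]])
      R = np.array([[1.0, 0.0],
                    [0.0, 2.0]])
      print(soft_value_iteration(P, R))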

  14. Procedures for analyzing the effectiveness of siren systems for alerting the public

    Energy Technology Data Exchange (ETDEWEB)

    Keast, D.N.; Towers, D.A.; Anderson, G.S.; Kenoyer, J.L.; Desrosiers, A.E.

    1982-09-01

    NUREG-0654, Revision 1 (Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants), Appendix 3, discusses requirements of the licensees to implement a prompt notification system within the 10-mile emergency planning zone (EPZ) surrounding a nuclear facility. Sirens are being installed for use as part of or as the entire notification system by many licensees. This report describes a procedure for predicting siren system effectiveness under defined conditions within the EPZs. The procedure requires a good topographical map and knowledge of the meteorology, demographics, and human activity patterns within the EPZ. The procedure is intended to be applied to systems of sirens and to obtain average results for a large number (30 or more) of listener locations.

  15. Procedures for analyzing the effectiveness of siren systems for alerting the public

    International Nuclear Information System (INIS)

    NUREG-0654, Revision 1 (Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants), Appendix 3, discusses requirements of the licensees to implement a prompt notification system within the 10-mile emergency planning zone (EPZ) surrounding a nuclear facility. Sirens are being installed for use as part of or as the entire notification system by many licensees. This report describes a procedure for predicting siren system effectiveness under defined conditions within the EPZs. The procedure requires a good topographical map and knowledge of the meteorology, demographics, and human activity patterns within the EPZ. The procedure is intended to be applied to systems of sirens and to obtain average results for a large number (30 or more) of listener locations.

  16. Hearing Office Average Processing Time Ranking Report, April 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  17. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  18. Tortuosity and the Averaging of Microvelocity Fields in Poroelasticity.

    Science.gov (United States)

    Souzanchi, M F; Cardoso, L; Cowin, S C

    2013-03-01

    The relationship between the macro- and microvelocity fields in a poroelastic representative volume element (RVE) has not been fully investigated. This relationship is considered to be a function of the tortuosity: a quantitative measure of the effect of the deviation of the pore fluid streamlines from straight (not tortuous) paths in fluid-saturated porous media. There are different expressions for tortuosity, based on the deviation from straight pores, on harmonic wave excitation, or on a kinetic energy loss analysis. The objective of the work presented is to determine the best expression for the tortuosity of a multiply interconnected open pore architecture in anisotropic porous media. The procedures for averaging the pore microvelocity over the RVE of poroelastic media by Coussy and by Biot were reviewed as part of this study, and the significant connection between these two procedures was established. Success was achieved in identifying the Coussy approach, based on kinetic energy loss in the pore fluid, as the most attractive expression for the tortuosity of porous media in terms of pore fluid viscosity, porosity, and the pore architecture. The fabric tensor, a 3D measure of the architecture of the pore structure, was introduced in the expression of the tortuosity tensor for anisotropic porous media. Practical considerations for the measurement of the key parameters in the models of Coussy and Biot are discussed. In this study, we used cancellous bone as an example of interconnected pores and as a motivator for this study, but the results achieved are much more general and have a far broader application than just to cancellous bone. PMID:24891725

  19. 應用虛擬團隊於數位媒體設計之溝通策略與合作流程 A Study of Applying Virtual Team to the Communication Strategy and Procedure for Collaboration in a Digital Media Design Project

    Directory of Open Access Journals (Sweden)

    Wei-Ru Chen

    2003-12-01

    Through the application of information and communication technology, digital media design can be carried out by virtual teams that integrate professionals from different fields and locations to accomplish design tasks jointly. This study conducted case interviews with digital media design teams in industry that currently collaborate virtually, exploring how design teams carry out design activities in a virtual setting, summarizing the communication strategies and collaboration procedures of virtual teams, and analyzing the strengths and weaknesses of virtual collaboration for digital media design teams. The researchers explored 4 cases to examine the problems faced by each team in the design process. The findings of this study showed that the concept of the virtual team applied to digital media design is valid and effective. However, successful virtual teamwork requires the following conditions: 1. a well-defined team target and healthy member structure; 2. proper communication tools and design information; and 3. a well-organized procedure for collaboration.

  20. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Different ranges of interface and eddy sizes are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interfaces (LI), which are simulated, from Small Interfaces (SI), which are modelled. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-based basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author)

  1. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    Science.gov (United States)

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  2. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  3. An adjunctive minor surgical procedure for increased rate of retraction

    Directory of Open Access Journals (Sweden)

    Prabhakar Krishnan

    2013-01-01

    Full Text Available Introduction: Orthodontic treatment is based on the principle that if prolonged pressure is applied to a tooth, tooth movement will occur as the bone around the tooth remodels. In this study, osteotomy of the buccal alveolar plate and undermining of the interseptal bone were performed at the premolar extraction site, and the rates of en-masse retraction and canine retraction were evaluated. Materials and Methods: Patients between the ages of 18 and 25 years requiring retraction of anterior teeth were selected for the study. Osteotomy with undermining of the interseptal bone at the extraction site was performed in all four quadrants. Results: The average retraction in the maxillary arch was 0.98 mm/quadrant in 3 weeks, i.e., a total retraction of 5.89 mm in a span of 9 weeks. The average retraction in the mandibular arch was 0.96 mm/quadrant in 3 weeks, i.e., a total retraction of 5.75 mm in a span of 9 weeks. Conclusion: This method of achieving faster en-masse retraction immediately after extraction definitely reduced the initial retraction time. We recommend that such a procedure be carried out with appropriate anchorage conservation methods.

  4. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    Science.gov (United States)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  5. Comparison of conventional averaged and rapid averaged, autoregressive-based extracted auditory evoked potentials for monitoring the hypnotic level during propofol induction

    DEFF Research Database (Denmark)

    Litvan, Héctor; Jensen, Erik W; Galan, Josefina;

    2002-01-01

    The extraction of the middle latency auditory evoked potentials (MLAEP) is usually done by moving time averaging (MTA) over many sweeps (often 250-1,000), which could produce a delay of more than 1 min. This problem was addressed by applying an autoregressive model with exogenous input (ARX) that...
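
    For context, a minimal sketch (not taken from the paper) of the moving time averaging step and why it is slow: each estimate is the mean of the last N sweeps, so at typical stimulation rates collecting 250-1,000 sweeps takes on the order of half a minute or more. Array names and numbers are hypothetical.

```python
import numpy as np

def moving_time_average(sweeps: np.ndarray, window: int = 250) -> np.ndarray:
    """MTA estimate of an evoked potential from stimulus-locked EEG sweeps.

    sweeps: array of shape (n_sweeps, n_samples), one row per stimulus repetition.
    Only the most recent `window` sweeps contribute, hence the extraction delay.
    """
    return sweeps[-window:].mean(axis=0)   # pointwise average cancels uncorrelated EEG noise

# Hypothetical toy data: a small evoked response buried in much larger noise.
rng = np.random.default_rng(0)
true_mlaep = np.sin(np.linspace(0, 4 * np.pi, 200))
sweeps = true_mlaep + rng.normal(0.0, 5.0, size=(250, 200))
estimate = moving_time_average(sweeps)     # noise std shrinks by sqrt(250) ~ 16x
```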

  6. Estimation of Q factors from reflection seismic data for a band-limited and stabilized inverse Q filter driven by an average-Q model

    Science.gov (United States)

    Chen, Zengbao; Chen, Xiaohong; Wang, Yanghua; Li, Jingye

    2014-02-01

    Reliable Q estimation is desirable for model-based inverse Q filtering to improve seismic resolution. On the one hand, conventional methods estimate Q from the amplitude spectra or frequency variations of individual wavelets at different depth (or time) levels, which is vulnerable to the effects of spectral interference and ambient noise. On the other hand, most inverse Q filtering algorithms must be stabilized so as not to boost noise, sometimes at the expense of a degraded compensation effect. In this paper, average-Q values are obtained from reflection seismic data based on the Gabor transform spectrum of a seismic trace. We transform the 2-D time-variant frequency spectrum into a 1-D spectrum, and then estimate the average-Q values based on the amplitude attenuation and compensation functions, respectively. Driven by the estimated average-Q model, we also develop a modified inverse Q filtering algorithm by incorporating a time-variant bandpass filter (TVBF), whose high cut-off frequency follows a hyperbola along the traveltime from a specified time. Finally, we test this modified inverse Q filtering algorithm on synthetic data and perform the Q estimation procedure on real reflection seismic data, followed by applying the modified inverse Q filtering algorithm. The synthetic data test and the real data example demonstrate that the algorithm driven by the average-Q model may enhance the seismic resolution without degrading the signal-to-noise ratio.
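
    As a toy illustration of the underlying physics (not the authors' Gabor-transform procedure), an average Q over a traveltime interval can be backed out of the standard exponential attenuation model A(f, t) = A0 exp(-pi f t / Q); the names and numbers below are hypothetical.

```python
import numpy as np

def average_q_from_attenuation(freq_hz, t1, t2, amp1, amp2):
    """Average Q between traveltimes t1 and t2 from spectral amplitudes at one frequency.

    From A(f, t) = A0 * exp(-pi * f * t / Q):
        ln(amp2 / amp1) = -pi * f * (t2 - t1) / Q.
    """
    return -np.pi * freq_hz * (t2 - t1) / np.log(amp2 / amp1)

# Hypothetical example: the 30 Hz spectral amplitude halves between 1.0 s and 1.5 s.
print(average_q_from_attenuation(30.0, 1.0, 1.5, 1.0, 0.5))   # -> Q ~ 68
```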

  7. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...

  8. 7 CFR 51.577 - Average midrib length.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  9. 7 CFR 760.640 - National average market price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  10. 40 CFR 80.67 - Compliance on average.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  11. Implementation of procedures to NPP Krsko INTRANEK

    International Nuclear Information System (INIS)

    Part of the NEK documentation has already been made available on the NEK Intranet, such as the USAR, Technical Specifications and QA Plan, as well as some frequently used series of drawings. At present, the process of presenting all procedures (hereinafter INTRANEK procedures) is in progress. The purpose of this project is the presentation of 1600 procedures with an average size of 30 pages, which amounts to more than 48,000 pages altogether. ADOBE PDF (Portable Document Format) was chosen as the most suitable format for the presentation of procedures on INTRANEK. The PDF format meets the following criteria: the appearance of a document page is always the same as the original and cannot be changed without control. In addition, full-text search is available, as well as easy jumps from procedure to procedure. Some changes to the working process for internal procedures had to be made before the project start, defining the responsibilities of individual users in the process. A workflow enabling easy daily maintenance was prepared, the rules for both procedure numbering and folder contents/names were set, and the server was selected. The project was managed and implemented with extensive use of computer-aided management, document distribution and control, databases, electronic mail and Intranet tools. The results of the practical implementation of NEK procedures and our experience with INTRANEK are presented in this paper. (author)

  12. Developing policies and procedures.

    Science.gov (United States)

    Randolph, Susan A

    2006-11-01

    The development of policies and procedures is an integral part of the occupational health nurse's role. Policies and procedures serve as the foundation for the occupational health service and are based on its vision, mission, culture, and values. The design and layout selected for the policies and procedures should be simple, consistent, and easy to use. The same format should be used for all existing and new policies and procedures. Policies and procedures should be reviewed periodically based on a specified time frame (i.e., annually). However, some policies may require a more frequent review if they involve rapidly changing external standards, ethical issues, or emerging exposures. PMID:17124968

  13. Kinetic energy equations for the average-passage equation system

    Science.gov (United States)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

    Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.
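
    The subtraction described here is the usual Reynolds-type kinetic energy decomposition; in standard notation (an illustrative restatement, not copied from the report):

```latex
% u_i = \bar{u}_i + u_i', with the overbar denoting the averaging operator
\underbrace{\tfrac{1}{2}\,\overline{u_i u_i}}_{\text{averaged total KE}}
  = \underbrace{\tfrac{1}{2}\,\bar{u}_i\,\bar{u}_i}_{\text{mean KE}}
  + \underbrace{\tfrac{1}{2}\,\overline{u_i' u_i'}}_{\text{turbulent KE}\; k}
\quad\Longrightarrow\quad
k = \tfrac{1}{2}\,\overline{u_i u_i} - \tfrac{1}{2}\,\bar{u}_i\,\bar{u}_i .
```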

  14. Readability of Special Education Procedural Safeguards

    Science.gov (United States)

    Mandic, Carmen Gomez; Rudd, Rima; Hehir, Thomas; Acevedo-Garcia, Dolores

    2012-01-01

    This study focused on literacy-related barriers to understanding the rights of students with disabilities and their parents within the special education system. SMOG readability scores were determined for procedural safeguards documents issued by all state departments of education. The average reading grade level was 16; 6% scored in the high…
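
    For reference, SMOG grades like the level of 16 reported here are computed from the count of polysyllabic words in a 30-sentence sample; a sketch using McLaughlin's published formula (the sample counts below are hypothetical):

```python
import math

def smog_grade(polysyllable_count: int, sentence_count: int) -> float:
    """SMOG readability grade (McLaughlin, 1969).

    polysyllable_count: words of 3+ syllables in the sampled sentences.
    sentence_count: number of sentences sampled (classically 30).
    """
    return 1.0430 * math.sqrt(polysyllable_count * 30.0 / sentence_count) + 3.1291

# A document with 150 polysyllabic words in 30 sentences scores ~16,
# i.e. college-senior level, matching the average reported above.
print(round(smog_grade(150, 30)))   # -> 16
```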

  15. Application of Averaged Voronoi Polyhedron in the Modelling of Crystallisation of Eutectic Nodular Graphite Cast Iron

    OpenAIRE

    A. A. Burbelko; J. Początek; M. Królikowski

    2013-01-01

    The study presents a mathematical model of the crystallisation of nodular graphite cast iron. The proposed model is based on micro- and macromodels, in which heat flow is analysed at the macro level, while the micro level is used for modelling the diffusion of elements. The use of an elementary diffusion field in the shape of an averaged Voronoi polyhedron (AVP) was proposed. To determine the geometry of the averaged Voronoi polyhedron, Kolmogorov statistical theory of crystallisation was applied....

  16. Radiation exposure to staff and patients during two endocrinological procedures

    International Nuclear Information System (INIS)

    The purpose of the present work is to obtain information about the exposure to patient and staff during percutaneous nephrolithotripsy and ureteroscopy with intracorporeal lithotripsy and to search for a correlation between these parameters. The collected data for each procedure consist of the total air kerma-area product, PKA, cumulative dose, CD, fluoroscopy time, FT, number of images acquired, as well as clinical patient data. Average, minimum, maximum and median values were calculated for 38 patients. Mean values and median in parentheses were as follows: 355 (383) cGy cm2 (PKA for PCNL); 433 (286) cGy cm2 (PKA for URS); 42 (37) mGy (CD for PCNL); 12 (7) mGy (CD for URS); 3.5 (3.0) min (FT for PCNL); 1.4 (1.3) min (FT for URS). The typical operator doses for PCNL and URS were assessed to be 66.1 μSv and 34.3 μSv, respectively, while the maximum doses for the same type of procedures were 152.6 μSv and 124.1 μSv. Good correlation was observed between the staff dose and PKA for both procedures, while the correlation of staff dose with CD and FT was found to be weak. While applying principles of radiation protection and normal load in the clinic, there is no possibility to exceed the new annual dose limit for eye lens of 20 mSv per year averaged over 5 years. The correlation of PKA with FT and CD was also explored and no significant interconnection was observed. (authors)

  17. Model for the determination of instantaneous values of the velocity, instantaneous, and average acceleration for 100-m sprinters.

    Science.gov (United States)

    JanjiĆ, NataŠa J; Kapor, Darko V; Doder, Dragan V; Doder, Radoslava Z; SaviĆ, Biljana V

    2014-12-01

    Temporal patterns of running velocity are of profound interest to coaches and researchers involved in sprint racing. In this study, we applied a nonhomogeneous differential equation for motion with a resistance force proportional to velocity to determine the instantaneous velocity and the instantaneous and average acceleration in the 100-m sprint discipline. Results obtained for the instantaneous velocity using the presented model agree well with directly measured values, which verifies the proposed procedure. To perform a comprehensive analysis of the applicability of the results, the harmonic canon of running for the 100-m sprint discipline was formed. Using split times measured over segments of the 100-m runs of the sprinters K. Lewis (1988), M. Green (2001), and U. Bolt (2009), the method described yielded results that enable comparative analysis of the kinematic parameters for each sprinter. Further treatment allowed the derivation of the ideal harmonic velocity canon of running, which can help any coach in evaluating the results achieved at particular distances in this and other disciplines. The method described can be applied to the analysis of any race.
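
    The model family named here (resistance force proportional to velocity) has the well-known closed-form solution v(t) = v_max (1 - e^(-t/tau)); below is a hedged sketch with purely illustrative parameters, not the authors' fitted values:

```python
import numpy as np

def sprint_velocity(t, v_max, tau):
    """Velocity for dv/dt = (v_max - v) / tau with v(0) = 0 (drag ~ velocity)."""
    return v_max * (1.0 - np.exp(-t / tau))

def sprint_distance(t, v_max, tau):
    """Distance covered by time t (integral of the velocity)."""
    return v_max * (t - tau * (1.0 - np.exp(-t / tau)))

def average_acceleration(t1, t2, v_max, tau):
    """Average acceleration over [t1, t2]: velocity change over elapsed time."""
    return (sprint_velocity(t2, v_max, tau) - sprint_velocity(t1, v_max, tau)) / (t2 - t1)

# Illustrative (not fitted) parameters roughly in the elite-sprinter range.
v_max, tau = 12.0, 1.25                            # m/s and s, hypothetical
print(sprint_distance(10.0, v_max, tau))           # ~105 m, so ~100 m just under 10 s
print(average_acceleration(0.0, 3.0, v_max, tau))  # ~3.6 m/s^2 over the first 3 s
```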

  18. The average crossing number of equilateral random polygons

    Science.gov (United States)

    Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.

    2003-11-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ACN of all equilateral random walks of length n is of the form \frac{3}{16} n \ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the \langle ACN({\cal K})\rangle for each knot type \cal K can be described by a function of the form \langle ACN({\cal K})\rangle = a(n-n_0)\ln(n-n_0) + b(n-n_0) + c, where a, b and c are constants depending on \cal K and n_0 is the minimal number of segments required to form \cal K. The \langle ACN({\cal K})\rangle profiles diverge from each other, with more complex knots showing higher \langle ACN({\cal K})\rangle than less complex knots. Moreover, the \langle ACN({\cal K})\rangle profiles intersect with the \langle ACN\rangle profile of all closed walks. These points of intersection define the equilibrium length of \cal K, i.e., the chain length n_e({\cal K}) at which a statistical ensemble of configurations with given knot type \cal K (upon cutting, equilibration and reclosure to a new knot type \cal K') does not show a tendency to increase or decrease \langle ACN({\cal K'})\rangle. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration \langle R_g\rangle.

  19. Averaging Tesseral Effects: Closed Form Relegation versus Expansions of Elliptic Motion

    Directory of Open Access Journals (Sweden)

    Martin Lara

    2013-01-01

    Full Text Available Longitude-dependent terms of the geopotential cause nonnegligible short-period effects in orbit propagation of artificial satellites. Hence, accurate analytical and semianalytical theories must cope with tesseral harmonics. Modern algorithms for dealing analytically with them allow for closed form relegation. Nevertheless, current procedures for the relegation of tesseral effects from subsynchronous orbits are unavoidably related to orbit eccentricity, a key fact that is not enough emphasized and constrains application of this technique to small and moderate eccentricities. Comparisons with averaging procedures based on classical expansions of elliptic motion are carried out, and the pros and cons of each approach are discussed.

  20. Application effect of a predictive nursing procedure in preventing syncope among ophthalmology outpatient surgery patients

    Institute of Scientific and Technical Information of China (English)

    岳利莹; 马冬萍; 姜培英

    2014-01-01

    Objective: To investigate the effect of applying a predictive nursing procedure to prevent syncope in ophthalmology outpatient surgery patients. Methods: 222 patients who underwent surgery at our hospital's ophthalmology outpatient clinic between November 2011 and November 2012 were randomly selected and divided into a nursing group and a control group. The nursing group received the predictive nursing procedure, while the control group received routine nursing care. The two groups were compared on mental state before surgery, blood pressure changes during surgery, changes in state of mind, and the occurrence of syncope after surgery. Results: Blood pressure values, psychological status, and the incidence of syncope were all better in the nursing group than in the control group, and the differences were statistically significant. Conclusion: Predictive nursing intervention for ophthalmology outpatient surgery patients can improve patients' state of mind, ease their anxiety about surgery, and effectively lower the incidence of syncope.

  2. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results on such indicators of visual attention pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  3. Applied large eddy simulation.

    Science.gov (United States)

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail has substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects were examined. It was to an extent concluded that for LES to make most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this further, highly industrial sector model parametrizations will be required with clear thought on the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS but with much greater fidelity. PMID:19531503

  4. Optimization of CT procedures

    International Nuclear Information System (INIS)

    Full text: In recent years computed tomography (CT) has become a powerful diagnostic method. Technological advances in CT have improved image quality, but set a number of challenges for all professionals working in the field of diagnostic imaging. CT capabilities are expanding, which increases the need for better training and qualification of the staff and engineers responsible for the optimal functioning of the CT system. Despite the variety of technical innovations for dose reduction, obtaining images with good diagnostic quality is often associated with increased dose. There is a lack of consensus in radiological practice about the use of contrast media and the image quality requirements; a common opinion is that the manufacturer's settings are optimal. All this leads to large variations of doses for the same examinations and shows the need to optimize procedures. Learning objectives: optimization is not a single act but a process involving all the experts conducting the study. The presence of a qualified medical physicist as part of the team is important. The team's responsibility is to choose which procedures have to be optimized. In most cases, the choice is between the most commonly performed procedures and those where the diagnostic value of the image is suspect. Special attention should be paid to studies of children, in order to exclude the use of protocols for adults, which leads to unnecessarily high doses. Each member of the team must be aware of the relationship between patient dose and image quality. The choice of how to conduct the study depends on the diagnostic purpose. In some examinations, such as CT urography, obtaining a high-quality image at the cost of a higher dose does not improve the diagnostic outcome. On the other hand, optimization is not necessarily associated with dose reduction. In cases where the images are not of adequate diagnostic quality, it is necessary to use a CT protocol with a higher dose. Several international documents show the

  5. 40 CFR 401.13 - Test procedures for measurement.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Test procedures for measurement. 401.13... AND STANDARDS GENERAL PROVISIONS § 401.13 Test procedures for measurement. The test procedures for measurement which are prescribed at part 136 of this chapter shall apply to expressions of pollutant...

  6. Pyroshock prediction procedures

    Science.gov (United States)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  7. Finite element procedures

    CERN Document Server

    Bathe, Klaus-Jürgen

    2015-01-01

    Finite element procedures are now an important and frequently indispensable part of engineering analyses and scientific investigations. This book focuses on finite element procedures that are very useful and are widely employed. Formulations for the linear and nonlinear analyses of solids and structures, fluids, and multiphysics problems are presented, appropriate finite elements are discussed, and solution techniques for the governing finite element equations are given. The book presents general, reliable, and effective procedures that are fundamental and can be expected to be in use for a long time. The given procedures form also the foundations of recent developments in the field.

  8. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Surface runoff Average runoff Surface waters United States

  9. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  10. Average American 15 Pounds Heavier Than 20 Years Ago

    Science.gov (United States)

    ... page: https://medlineplus.gov/news/fullstory_160233.html Average American 15 Pounds Heavier Than 20 Years Ago ... since the late 1980s and early 1990s, the average American has put on 15 or more additional ...

  11. Emotional Value of Applied Textiles

    DEFF Research Database (Denmark)

    Bang, Anne Louise

    2011-01-01

    The present PhD thesis is conducted as an Industrial PhD project in collaboration with the Danish company Gabriel A/S (Gabriel), which designs and produces furniture textiles and ‘related products’ for manufacturers of furniture. A ‘related textile product’ is e.g. processing of piece goods......, upholstery, mounting etc. This PhD project addresses the challenges of the textile industry, where the global knowledge economy increasingly forces companies to include user-participation and value innovation in their product development. My project revolves around the challenges which the textile designers...... of applied textiles. The objective is to operationalise the strategic term ‘emotional value’ as it relates to applied textiles. The procedure includes the development of user- and stakeholder-centred approaches, which are valuable for the textile designer in the design process. The research approach...

  12. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  13. The SU(N) Wilson Loop Average in 2 Dimensions

    OpenAIRE

    Karjalainen, Esa

    1993-01-01

    We solve explicitly a closed, linear loop equation for the SU(2) Wilson loop average on a two-dimensional plane and generalize the solution to the case of the SU(N) Wilson loop average with an arbitrary closed contour. Furthermore, the flat space solution is generalized to any two-dimensional manifold for the SU(2) Wilson loop average and to any two-dimensional manifold of genus 0 for the SU(N) Wilson loop average.

  14. Average of Distribution and Remarks on Box-Splines

    Institute of Scientific and Technical Information of China (English)

    LI Yue-sheng

    2001-01-01

    A class of generalized moving average operators is introduced, and the integral representations of an average function are provided. It has been shown that the average of Dirac δ-distribution is just the well known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computing of box-splines and their linear combinations follows.

  15. Investigating Averaging Effect by Using Three Dimension Spectrum

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The averaging effect of the eddy current displacement sensor has been investigated in this paper, and the frequency spectrum property of the averaging effect was also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotating error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, and measuring precision is thus improved.

  16. Analytic continuation by averaging Padé approximants

    Science.gov (United States)

    Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grânäs, Oscar; Eriksson, Olle; Di Marco, Igor

    2016-02-01

    The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
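
    A minimal numerical sketch of the averaging idea (not the authors' implementation): fit several Padé-type rational approximants of different orders to the same input data by linear least squares, then average the resulting continuations at the evaluation point. All function names and the test function are illustrative assumptions.

```python
import numpy as np

def pade_lstsq(z, f, m, n):
    """Least-squares Padé fit P_m(z)/Q_n(z) to samples f(z), normalized so Q(0)=1.

    Rearranging P(z_i) = f_i * Q(z_i) gives a linear system in the
    (m+1) + n unknown coefficients.
    """
    zp = np.vander(z, m + 1, increasing=True)          # columns z^0 .. z^m
    zq = np.vander(z, n + 1, increasing=True)[:, 1:]   # columns z^1 .. z^n
    A = np.hstack([zp, -f[:, None] * zq])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    p, q = coef[: m + 1], np.concatenate([[1.0], coef[m + 1:]])
    return lambda w: np.polyval(p[::-1], w) / np.polyval(q[::-1], w)

# Hypothetical usage: average several continuations of noisy samples of 1/(1+z).
rng = np.random.default_rng(1)
z = np.linspace(0.1, 1.0, 40)
f = 1.0 / (1.0 + z) + rng.normal(0, 1e-4, z.size)      # "input data" with noise
w = 2.5                                                # point outside the data range
continuations = [pade_lstsq(z, f, m, n)(w) for (m, n) in [(2, 2), (2, 3), (3, 3), (3, 4)]]
print(np.mean(continuations))                          # averaged continuation, ~1/(1+2.5)
```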

  17. Model characteristics of average skill boxers’ competition functioning

    Directory of Open Access Journals (Sweden)

    Martsiv V.P.

    2015-08-01

    Full Text Available Purpose: analysis of the competition functioning of average-skill boxers. Material: 28 fights of student boxers were analyzed. The following coefficients were determined: effectiveness of punches and reliability of defense. The fights were conducted by the formula: 3 rounds of 3 minutes each. Results: model characteristics of boxers at the stage of specialized basic training have been worked out. Correlations between indicators of specialized and general exercises have been determined. It has been established that boxers' skill manifests itself as an increase in punch density during a fight. It has also been found that an increase in the coefficient of punch effectiveness results in an expansion of the arsenal of technical-tactical actions. The importance of considering standard specialized loads has been confirmed. Conclusions: we have recommended means to be applied in the training process at this stage of training. On the basis of our previous research, we have made recommendations on the complex assessment of student sportsmen's skillfulness. Besides, we have shown approaches to the improvement of different aspects of sportsmen's fitness.

  18. Near-elastic vibro-impact analysis by discontinuous transformations and averaging

    OpenAIRE

    Thomsen, Jon Juel; Fidlin, Alexander

    2008-01-01

    We show how near-elastic vibro-impact problems, linear or nonlinear in-between impacts, can be conveniently analyzed by a discontinuity-reducing transformation of variables combined with an extended averaging procedure. A general technique for this is presented, and illustrated by calculating transient or stationary motions for different harmonic oscillators with stops or clearances, and self-excited friction oscillators with stops or clearances First- and second-order analytical predictions ...

  19. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families...

  20. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

    A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.

  1. Evaluation of the average ion approximation for a tokamak plasma

    International Nuclear Information System (INIS)

    The average ion approximation, sometimes used to calculate atomic processes in plasmas, is assessed by computing deviations in various rates over a set of conditions representative of tokamak edge plasmas. Conditions are identified under which the rates are primarily a function of the average ion charge and plasma parameters, as assumed in the average ion approximation. (Author) 19 refs., tab., 5 figs

  2. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...

  3. 27 CFR 19.37 - Average effective tax rate.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  4. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  5. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  6. 7 CFR 1410.44 - Average adjusted gross income.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  7. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  8. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....

  9. 34 CFR 668.196 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  10. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  11. 34 CFR 668.215 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  12. 7 CFR 51.2548 - Average moisture content determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548..., AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  13. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-ε model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
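
    The two-stage estimation procedure itself is specific to the paper; the sketch below instead uses the commonly cited grid-based PANS estimate f_k ~ C (delta/Lambda)^(2/3), stated here as an assumption, to show how a resolution-control parameter of this kind is computed per cell:

```python
import numpy as np

def pans_fk(delta, k, eps, c=3.0):
    """Grid-based estimate of the PANS unresolved-kinetic-energy fraction f_k.

    delta: local grid spacing; k: turbulent kinetic energy; eps: dissipation rate.
    Uses the Girimaji-type estimate f_k ~ c * (delta / Lambda)^(2/3) with the
    integral length scale Lambda = k^(3/2) / eps (assumed here), clipped to [0, 1].
    f_k = 1 recovers pure RANS; smaller f_k lets more of the spectrum be resolved.
    """
    lam = k ** 1.5 / eps
    return np.clip(c * (delta / lam) ** (2.0 / 3.0), 0.0, 1.0)

# Hypothetical jet-flow numbers: grid fine relative to the integral scale -> f_k < 1.
print(pans_fk(delta=0.002, k=10.0, eps=2000.0))   # Lambda ~ 0.016 m, f_k ~ 0.76
```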

  14. Actor-Network Procedures

    NARCIS (Netherlands)

    Pavlovic, Dusko; Meadows, Catherine; Ramanujam, R.; Ramaswamy, Srini

    2012-01-01

    In this paper we propose actor-networks as a formal model of computation in heterogenous networks of computers, humans and their devices, where these new procedures run; and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks, as an extension o

  15. Electron-ion collisions in the average-configuration distorted-wave approximation

    International Nuclear Information System (INIS)

    Explicit expressions for the electron-impact excitation, ionization, and resonant-recombination cross sections are derived in the average-configuration distorted-wave approximation. Calculations using these expressions are applied to several types of phenomena in electron-ion scattering where comparison with other theoretical methods and experimental measurements can be made. 24 refs., 5 figs

  16. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    International Nuclear Information System (INIS)

    Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  17. 75 FR 69591 - Medicaid Program; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug...

    Science.gov (United States)

    2010-11-15

    ...; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug Definition, and Upper Limits... ``Definitions'' was intended to apply to both AMP and best price calculations. While the Determination of AMP... Price (Sec. 447.505). Therefore, we see no need to withdraw the definition of bona fide service fees....

  18. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)

    2012-02-15

    Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  19. Surface Representation of Polycrystal Physical Properties: All Crystal Classes, Simple Average Approximation

    OpenAIRE

    Raymond, O.; Fuentes, L. (Lidia); Gómez, J. I.

    1996-01-01

    Algorithms for polycrystal physical properties estimation are presented. Bunge's spherical harmonics treatment of surface representations, under simple average approximation, is applied. Specific formulae for so-called longitudinal magnitudes are given. Physical properties associated to tensors of second-, third- and fourth-rank are considered. All crystal and sample symmetries are covered.

  20. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You;

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor-average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition

  1. Average optimization of the approximate solution of operator equations and its application

    Institute of Scientific and Technical Information of China (English)

    王兴华; 马万

    2002-01-01

    In this paper, a definition of the optimization of operator equations in the average case setting is given. And the general result (Theorem 1) about the relevant optimization problem is obtained. This result is applied to the optimization of approximate solution of some classes of integral equations.

  2. A Comparison Between Two Average Modelling Techniques of AC-AC Power Converters

    Directory of Open Access Journals (Sweden)

    Pawel Szczesniak

    2015-03-01

    Full Text Available In this paper, a comparative evaluation of two modelling tools for switching AC-AC power converters is presented. Both are based on average modelling techniques. The first approach is based on the circuit averaging technique and consists of topological manipulations applied to the converter's switching states. The second approach makes use of a state-space averaged model of the converter and is based on analytical manipulations using the different state representations of the converter. The two modelling techniques are applied to the same AC-AC converter, called a matrix-reactance frequency converter based on a buck-boost topology. The techniques are compared on the basis of their speed, the quantity of calculations and transformations involved, and their limitations.
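
    A generic sketch of the second (state-space averaging) technique, applied for concreteness to an ideal buck converter rather than the matrix-reactance converter studied in the paper; all matrices and component values are illustrative assumptions:

```python
import numpy as np

# Ideal buck converter, state x = [inductor current, capacitor voltage].
L, C, R, Vin = 1e-3, 100e-6, 10.0, 24.0

A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])   # same dynamics matrix in both switch states
B_on = np.array([1.0 / L, 0.0])             # source connected through the switch
B_off = np.array([0.0, 0.0])                # freewheeling interval

def averaged_model(d):
    """State-space averaging: weight each switch state by its duty-cycle share d."""
    A_avg = d * A + (1.0 - d) * A            # trivially A here; kept to show the recipe
    B_avg = d * B_on + (1.0 - d) * B_off
    return A_avg, B_avg

# DC operating point: solve 0 = A_avg x + B_avg Vin for duty cycle d = 0.5.
A_avg, B_avg = averaged_model(0.5)
print(np.linalg.solve(A_avg, -B_avg * Vin))  # -> [1.2 A, 12 V], i.e. Vout = d * Vin
```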

  3. Evaluation of annual average equivalent dose of workers for nuclear medicine facilities in the Northeast Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Lira, Renata F.; Silva Neto, Jose Almeida; Antonio Filho, Joao, E-mail: jaf@ufpe.br [Universidade Federal de Pernambuco (UFPE/DEN), Departamento de Energia Nuclear, Recife, PE (Brazil); Santos, Luiz A.P., E-mail: lasantos@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2011-07-01

    Nuclear Medicine (NM) is a radiation technique normally used for therapeutic treatment or diagnosis. In this technique, a small quantity of radioactive material combined with drugs is used to obtain diagnostic images. Any activity involving ionizing radiation should be justified, and its working procedures should be optimized. The purpose of this paper is to show the importance of optimizing radiation protection systems and to determine an optimal dose for occupational workers in nuclear medicine. Such an optimization aims to avoid any possible contamination or accidents and to reduce the costs of protection. The optimization of a service which manipulates ionizing radiation can be done using different techniques, among them the technique of expanded cost-benefit analysis. The data collection was divided into the annual average equivalent dose and the average equivalent dose over the period. The database for this study was a survey of the doses received by 87 occupational workers at 10 nuclear medicine facilities in northeast Brazil over a period of 13 years (1979-1991). The results show that the average equivalent dose over the period, H, was 2.39 mSv. The analysis has been in progress since 1992 and shows that the annual average equivalent dose could be reduced even further if working procedures are followed correctly. (author)

  4. 49 CFR 531.6 - Measurement and calculation procedures.

    Science.gov (United States)

    2010-10-01

    ... the Act and set forth in 40 CFR part 600. (b) A manufacturer that is eligible to elect a model year in... 49 Transportation 6 2010-10-01 2010-10-01 false Measurement and calculation procedures. 531.6... STANDARDS § 531.6 Measurement and calculation procedures. (a) The average fuel economy of all...

  5. LEVERAGE EFFECT FORECAST FOR THE YEAR 2014 THROUGH THE MOVING AVERAGE METHOD

    Directory of Open Access Journals (Sweden)

    HADA TEODOR

    2015-03-01

    Full Text Available For the proper development of various financial and economic activities, it is important that targets be achievable. Forecasting a phenomenon helps determine the range in which its value is likely to lie. This paper is structured in three parts. The first part highlights the theoretical aspects of using the moving average method to determine the prognosis of a given phenomenon. The second part presents in detail the steps to follow within the moving average method; the phenomenon analyzed in this study is the leverage effect. Each stage of the procedure is analyzed, leading in the end to a more precise forecast of the leverage effect. At the end of the paper, findings from the practical use of the moving average method for establishing the forecast, and their interpretation, are presented.
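
    A minimal sketch of the simple moving average forecast that the paper walks through (the series values below are hypothetical, not the paper's leverage-effect data):

```python
import numpy as np

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    series = np.asarray(series, dtype=float)
    return series[-window:].mean()

# Hypothetical annual leverage-effect values for 2009-2013; forecast for 2014.
leverage = [1.8, 2.1, 2.4, 2.2, 2.6]
print(moving_average_forecast(leverage, window=3))   # (2.4 + 2.2 + 2.6) / 3 = 2.4
```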

  6. Optimal Weights of Certain Branches of an Arbitrary Connected Network for Fastest Distributed Consensus Averaging Problem

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Solving fastest distributed consensus averaging problem over networks with different topologies has been an active area of research for a number of years. The main purpose of distributed consensus averaging is to compute the average of the initial values, via a distributed algorithm, in which the nodes only communicate with their neighbors. In the previous works full knowledge about the network's topology was required for finding optimal weights and convergence rate of network, but here in this work for the first time the optimal weights are determined analytically for the edges of certain types of branches, namely path branch, lollipop branch, semi-complete Branch and Ladder branch independent of the rest of network. The solution procedure consists of stratification of associated connectivity graph of branch and Semidefinite Programming (SDP), particularly solving the slackness conditions, where the optimal weights are obtained by inductive comparing of the characteristic polynomials initiated by slackness c...

  7. Applied iterative methods

    CERN Document Server

    Hageman, Louis A

    2004-01-01

    This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a "red/black" block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and comp

  8. Essays in Applied Microeconomics

    Science.gov (United States)

    Severnini, Edson Roberto

    This dissertation consists of three studies analyzing causes and consequences of location decisions by economic agents in the U.S. In Chapter 1, I address the longstanding question of the extent to which the geographic clustering of economic activity may be attributable to agglomeration spillovers as opposed to natural advantages. I present evidence on this question using data on the long-run effects of large scale hydroelectric dams built in the U.S. over the 20th century, obtained through a unique comparison between counties with or without dams but with similar hydropower potential. Until mid-century, the availability of cheap local power from hydroelectric dams conveyed an important advantage that attracted industry and population. By the 1950s, however, these advantages were attenuated by improvements in the efficiency of thermal power generation and the advent of high tension transmission lines. Using a novel combination of synthetic control methods and event-study techniques, I show that, on average, dams built before 1950 had substantial short run effects on local population and employment growth, whereas those built after 1950 had no such effects. Moreover, the impact of pre-1950 dams persisted and continued to grow after the advantages of cheap local hydroelectricity were attenuated, suggesting the presence of important agglomeration spillovers. Over a 50 year horizon, I estimate that at least one half of the long run effect of pre-1950 dams is due to spillovers. The estimated short and long run effects are highly robust to alternative procedures for selecting synthetic controls, to controls for confounding factors such as proximity to transportation networks, and to alternative sample restrictions, such as dropping dams built by the Tennessee Valley Authority or removing control counties with environmental regulations. I also find small local agglomeration effects from smaller dam projects, and small spillovers to nearby locations from large dams. Lastly

  9. Vibrational resonance: a study with high-order word-series averaging

    CERN Document Server

    Murua, Ander

    2016-01-01

    We study a model problem describing vibrational resonance by means of a high-order averaging technique based on so-called word series. With the technique applied here, the tasks of constructing the averaged system and the associated change of variables are divided into two parts. It is first necessary to build recursively a set of so-called word basis functions and, after that, all the required manipulations involve only scalar coefficients that are computed by means of simple recursions. As distinct from the situation with other approaches, with word series, high-order averaged systems may be derived without having to compute the associated change of variables. In the system considered here, the construction of high-order averaged systems makes it possible to obtain very precise approximations to the true dynamics.

  10. Level Crossing Rate and Average Fade Duration of EGC Systems with Cochannel Interference in Rayleigh Fading

    CERN Document Server

    Hadzi-Velkov, Zoran

    2009-01-01

    Both the first-order signal statistics (e.g. the outage probability) and the second-order signal statistics (e.g. the average level crossing rate, LCR, and the average fade duration, AFD) are important design criteria and performance measures for wireless communication systems, including equal gain combining (EGC) systems in the presence of cochannel interference (CCI). Although analytical expressions for the outage probability of coherent EGC systems exposed to CCI and various fading channels are already known, the respective ones for the average LCR and the AFD are not available in the literature. This paper presents such analytical expressions for the Rayleigh fading channel, which are obtained by utilizing a novel analytical approach that does not require the explicit expression for the joint PDF of the instantaneous output signal-to-interference ratio (SIR) and its time derivative. Applying the characteristic function method and the Beaulieu series, we determined the average LCR and the A...

  11. A Framework for Control System Design Subject to Average Data-Rate Constraints

    DEFF Research Database (Denmark)

    Silva, Eduardo; Derpich, Milan; Østergaard, Jan

    2011-01-01

    This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be deployed in the feedback path. For this situation, and by focusing on a class of source-coding schemes built around entropy coded dithered quantizers, we develop a framework to deal with average data-rate constraints in a tractable manner that combines ideas from both information and control theories. As an illustration of the uses of our framework, we apply it to study the interplay between stability and average data-rates in the considered architecture. It is shown that the proposed class of coding schemes can achieve mean square stability at average data-rates that are, at most, 1.254 bits per sample away from...

  12. The average concentrations of 226Ra and 210Pb in foodstuff cultivated in the Pocos de Caldas plateau

    International Nuclear Information System (INIS)

    The average concentrations of 226Ra and 210Pb in vegetables cultivated in the Pocos de Caldas plateau, mainly potatoes, carrots, beans and corn, were determined, and the average soil-to-foodstuff transfer factors for both radionuclides were estimated. The total 226Ra and 210Pb content in the soil was determined by gamma spectrometry. The exchangeable fraction was obtained by the classical radon emanation procedure, and the 210Pb was isolated by a radiochemical procedure and determined by radiometry of the beta emissions of its daughter 210Bi with a Geiger-Mueller counter. (M.A.C.)

  13. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
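
    To make the two standard average-sample types concrete, here is a small sketch (with synthetic 1-min values standing in for real observatory data, which is an assumption of the demo) that builds hourly "spot" samples and simple 1-h "boxcar" averages as described above.

        import numpy as np

        rng = np.random.default_rng(0)
        minutes = np.arange(24 * 60)  # one synthetic day of 1-min data
        field = 50.0 * np.sin(2 * np.pi * minutes / 1440) + rng.normal(0, 2, minutes.size)

        per_hour = field.reshape(24, 60)
        spot_values = per_hour[:, 0]           # instantaneous sample at the top of each hour
        boxcar_values = per_hour.mean(axis=1)  # simple 1-h average, the standard hourly value

        # Spot samples preserve the full amplitude range; boxcar averages shrink it.
        print(np.ptp(spot_values), np.ptp(boxcar_values))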

  14. Average value of correlated time series, with applications in dendroclimatology and hydrometeorology

    Energy Technology Data Exchange (ETDEWEB)

    Wigley, T.M.L.; Briffa, K.R.; Jones, P.D.

    1984-02-01

    In a number of areas of applied climatology, time series are either averaged to enhance a common underlying signal or combined to produce area averages. How well, then, does the average of a finite number (N) of time series represent the population average, and how well will a subset of series represent the N-series average? We have answered these questions by deriving formulas for 1) the correlation coefficient between the average of N time series and the average of n such series (where n is an arbitrary subset of N) and 2) the correlation between the N-series average and the population. We refer to these mean correlations as the subsample signal strength (SSS) and the expressed population signal (EPS). They may be expressed in terms of the mean interseries correlation coefficient $\bar{r}$ as $SSS = \bar{R}_{n,N}^2 \approx \frac{n(1+(N-1)\bar{r})}{N(1+(n-1)\bar{r})}$ and $EPS = \bar{R}_N^2 \approx \frac{N\bar{r}}{1+(N-1)\bar{r}}$. Similar formulas are given relating these mean correlations to the fractional common variance which arises as a parameter in analysis of variance. These results are applied to determine the increased uncertainty in a tree-ring chronology which results when the number of cores used to produce the chronology is reduced. Such uncertainty will accrue to any climate reconstruction equation that is calibrated using the most recent part of the chronology. The method presented can be used to define the useful length of tree-ring chronologies for climate reconstruction work.
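
    A direct transcription of these two formulas (a small sketch; the function and variable names are mine):

        def sss(r_bar, N, n):
            """Subsample signal strength: how well n of N series track the N-series average."""
            return n * (1 + (N - 1) * r_bar) / (N * (1 + (n - 1) * r_bar))

        def eps(r_bar, N):
            """Expressed population signal of the N-series average."""
            return N * r_bar / (1 + (N - 1) * r_bar)

        # Example: a chronology with mean interseries correlation 0.4
        print(sss(0.4, N=20, n=5))  # ~0.83: 5 cores already capture much of the signal
        print(eps(0.4, N=20))       # ~0.93 for the full 20-series average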

  15. Averaging and exact perturbations in LTB dust models

    CERN Document Server

    Sussman, Roberto A

    2012-01-01

    We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate-independent dynamical study of the models, providing as well a valuable theoretical insight on the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...

  16. Apply the Communicative Approach in Listening Class

    Institute of Scientific and Technical Information of China (English)

    Wang changxue; Su na

    2014-01-01

    Speaking and listening are two major obstacles in the process of our learning, and they are also among the most important abilities that we should possess. The communicative approach aims at developing learners' communicative competence; thus, applying the communicative approach in listening class is an effective way to proceed in English teaching.

  17. A Primer on Disseminating Applied Quantitative Research

    Science.gov (United States)

    Bell, Bethany A.; DiStefano, Christine; Morgan, Grant B.

    2010-01-01

    Transparency and replication are essential features of scientific inquiry, yet scientific communications of applied quantitative research often lack much-needed procedural information. In an effort to promote researchers' dissemination of their quantitative studies in a cohesive, detailed, and informative manner, the authors delineate…

  18. Evaluation of the occupational dose in hemodynamic procedures

    International Nuclear Information System (INIS)

    The purpose of this study was to evaluate the dose received by health professionals in a hemodynamic service. It was necessary to know the profile of these professionals, to survey the occupational external doses during the years 2000 to 2009, and to evaluate the distribution of the effective dose from the special procedures guided by fluoroscopy. A self-applied questionnaire was used to delineate the profile of the health professionals, taking into account variables such as gender, age, individual monitoring time, number of jobs and tasks performed in the sector. In addition, the external individual monitoring doses were examined from the records of the institution. The sample was composed of 35 professionals, 11 males and 24 females, with a mean age of (43.0 ± 10.4) years. The average monitoring time of the individuals analyzed within the institution was (11.3 ± 9.1) years, considering the period before the study. The minimum recorded dose was 0.2 mSv and the maximum dose was 22.7 mSv. Doctors and nursing assistants were the professionals most exposed to radiation, probably because they remain closer to the examination table and X-ray tube during interventional procedures. (author)

  19. Average-Consensus Algorithms in a Deterministic Framework

    OpenAIRE

    Topley, Kevin; Krishnamurthy, Vikram

    2011-01-01

    We consider the average-consensus problem in a multi-node network of finite size. Communication between nodes is modeled by a sequence of directed signals with arbitrary communication delays. Four distributed algorithms that achieve average-consensus are proposed. Necessary and sufficient communication conditions are given for each algorithm to achieve average-consensus. Resource costs for each algorithm are derived based on the number of scalar values that are required for communication and ...

  1. On the average crosscap number II: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    Yi-chao CHEN; Yan-pei LIU

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than $\frac{2^{\beta(G)-1}}{2^{\beta(G)}-1}\beta(G)$ and not larger than $\beta(G)$, where $\beta(G)$ denotes the cycle rank of G. Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.
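
    A small sketch of these bounds, assuming $\beta(G)$ is the cycle rank $|E|-|V|+c$ (my reading of the standard notation; the reconstruction of the garbled bound above should be checked against the paper):

        from fractions import Fraction

        def crosscap_bounds(num_vertices, num_edges, components=1):
            """Lower/upper bounds on the average crosscap number from the cycle rank."""
            beta = num_edges - num_vertices + components  # cycle rank (Betti number)
            lower = Fraction(2 ** (beta - 1), 2 ** beta - 1) * beta
            return lower, beta

        # Example: K4 has 4 vertices and 6 edges, so beta = 3.
        print(crosscap_bounds(4, 6))  # (Fraction(12, 7), 3)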

  2. Orbit-averaged Guiding-center Fokker-Planck Operator

    CERN Document Server

    Brizard, A J; Decker, J; Duthoit, F -X

    2009-01-01

    A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant $\overline{\psi}$, the minimum-B pitch-angle coordinate $\xi_{0}$, and the momentum magnitude $p$.

  3. IC Treatment: Surgical Procedures

    Science.gov (United States)

    ... surgeon fashions a tube or conduit from a short section of bowel and places the ureters (which carry urine from ... this procedure, some patients will continue to experience symptoms of ... augmented bowel segment of these newly fashioned bladders. Some patients ...

  4. Dynamic alarm response procedures

    International Nuclear Information System (INIS)

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating the time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache, IIS, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape, Microsoft Internet Explorer, Mozilla Firefox, Opera, and others. (authors)

  5. Procedures for Sampling Vegetation

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This report outlines vegetation sampling procedures used on various refuges in Region 3. The importance of sampling the response of marsh vegetation to management...

  6. Anxiety Around Medical Procedures

    Science.gov (United States)

  7. Tests and Procedures

    Science.gov (United States)

    ... procedure is being done. How the results will influence treatment. What your child will experience during the ...

  8. Cosmetic Procedure Questions

    Science.gov (United States)

    Want to look younger? Start by ... fillers, neuromodulators (Botox) and hair restoration among others. ...

  9. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth. © Springer-Verlag Berlin Heidelberg 2011.

  10. Practical definition of averages of tensors in general relativity

    CERN Document Server

    Boero, Ezequiel F

    2016-01-01

    We present a definition of tensor fields which are averages of tensors over a manifold, with a straightforward and natural definition of the derivative for the averaged fields, which in turn makes a suitable and practical construction for the study of averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation is applicable to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of the covariant derivative of the averaged tensors and the Lie derivative.

  11. Costing imaging procedures.

    Science.gov (United States)

    Bretland, P M

    1988-01-01

    The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241

  12. Variational theory of average-atom and superconfigurations in quantum plasmas.

    Science.gov (United States)

    Blenski, T; Cichocki, B

    2007-05-01

    Models of screened ions in equilibrium plasmas with all quantum electrons are important in opacity and equation of state calculations. Although such models have to be derived from variational principles, up to now existing models have not been fully variational. In this paper a fully variational theory respecting the virial theorem is proposed; all variables are variational except the parameters defining the equilibrium, i.e., the temperature $T$, the ion density $n_i$ and the atomic number $Z$. The theory is applied to the quasiclassical Thomas-Fermi (TF) atom, the quantum average atom (QAA), and the superconfigurations (SC) in plasmas. Both the self-consistent-field (SCF) equations for the electronic structure and the condition for the mean ionization $Z^*$ are found from minimization of a thermodynamic potential. This potential is constructed using the cluster expansion of the plasma free energy, from which the zero- and first-order terms are retained. In the zero order the free energy per ion is that of the quantum homogeneous plasma of an unknown free-electron density $n_0 = Z^* n_i$ occupying the volume $1/n_i$. In the first order, ions submerged in this plasma are considered and local neutrality is assumed. These ions are considered in the infinite space without imposing the neutrality of the Wigner-Seitz (WS) cell. As in the Inferno model, a central cavity of a radius $R$ is introduced; however, the value of $R$ is unknown a priori. The charge density due to noncentral ions is zero inside the cavity and equals $e n_0$ outside. The first-order contribution to the free energy per ion is the difference between the free energy of the system "central ion + infinite plasma" and the free energy of the system "infinite plasma." An important part of the approach is an "ionization model" (IM), which is a relation between the mean ionization charge $Z^*$ and the first-order structure variables. Both the IM and the local neutrality are respected in the minimization procedure. The correct IM in the TF case

  13. A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport

    Directory of Open Access Journals (Sweden)

    Gilberto Espinosa-Paredes

    2012-01-01

    Full Text Available In this paper a detailed derivation of the general transport equations for two-phase systems using a method based on nonlocal volume averaging is presented. The local volume averaging equations are commonly applied in nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail at transitions between flow patterns and at boundaries between two-phase flow and solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume averaging equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume averaging equations fail.

  14. Advances in Applied Mechanics

    OpenAIRE

    2014-01-01

    Advances in Applied Mechanics draws together recent significant advances in various topics in applied mechanics. Published since 1948, Advances in Applied Mechanics aims to provide authoritative review articles on topics in the mechanical sciences, primarily of interest to scientists and engineers working in the various branches of mechanics, but also of interest to the many who use the results of investigations in mechanics in various application areas, such as aerospace, chemical, civil, en...

  15. Perspectives on Applied Ethics

    OpenAIRE

    2007-01-01

    Applied ethics is a growing, interdisciplinary field dealing with ethical problems in different areas of society. It includes for instance social and political ethics, computer ethics, medical ethics, bioethics, environmental ethics, business ethics, and it also relates to different forms of professional ethics. From the perspective of ethics, applied ethics is a specialisation in one area of ethics. From the perspective of social practice applying ethics is to focus on ethical aspects and ...

  16. Applied Neuroscience Laboratory Complex

    Data.gov (United States)

    Federal Laboratory Consortium — Located at WPAFB, Ohio, the Applied Neuroscience lab researches and develops technologies to optimize Airmen individual and team performance across all AF domains....

  17. 7 CFR 1786.55 - Application procedure.

    Science.gov (United States)

    2010-01-01

    ... AGRICULTURE (CONTINUED) PREPAYMENT OF RUS GUARANTEED AND INSURED LOANS TO ELECTRIC AND TELEPHONE BORROWERS Special Discounted Prepayments on RUS Direct/Insured Loans § 1786.55 Application procedure. Any borrower seeking to prepay its RUS Notes under this subpart should apply to the appropriate RUS Area Director...

  18. 40 CFR 791.31 - Expedited procedures.

    Science.gov (United States)

    2010-07-01

    ... American Arbitration Association in its discretion determines otherwise, the Expedited Procedures described in this section shall be applied in any case where the total claim of any party does not exceed $5... (b) Notice by telephone. The parties shall accept all notices from the American...

  1. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking at 1.25-2.25 m s$^{-1}$ and running at 1.25-4.5 m s$^{-1}$. The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in
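
    A minimal sketch of the processing pipeline described here (rectify, time-normalize each stride to 100%, then average); the array names and stride markers are invented for the illustration:

        import numpy as np

        def average_emg_profile(emg, stride_bounds, n_points=100):
            """Rectify raw EMG, resample each stride to n_points (0-100% of stride), average."""
            rectified = np.abs(emg)
            profiles = []
            for start, end in stride_bounds:
                stride = rectified[start:end]
                grid = np.linspace(0, len(stride) - 1, n_points)
                profiles.append(np.interp(grid, np.arange(len(stride)), stride))
            return np.mean(profiles, axis=0)

        rng = np.random.default_rng(5)
        emg = rng.normal(0, 1, 5000) * np.sin(np.linspace(0, 50, 5000)) ** 2  # toy signal
        bounds = [(i, i + 1000) for i in range(0, 5000, 1000)]                # 5 "strides"
        profile = average_emg_profile(emg, bounds)
        print(profile.shape)  # (100,) -> one averaged profile over the stride cycle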

  2. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
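
    A sketch of the idea (my own minimal illustration, not the note's worked examples): an intercept-only least-squares fit recovers the arithmetic mean, and fitting on transformed data recovers the geometric and harmonic means.

        import numpy as np

        x = np.array([2.0, 4.0, 8.0])
        ones = np.ones_like(x).reshape(-1, 1)

        def intercept_only_fit(y):
            # OLS with only an intercept: the fitted coefficient is the mean of y.
            return np.linalg.lstsq(ones, y, rcond=None)[0][0]

        arithmetic = intercept_only_fit(x)                 # 4.666...
        geometric = np.exp(intercept_only_fit(np.log(x)))  # 4.0
        harmonic = 1.0 / intercept_only_fit(1.0 / x)       # ~3.43
        print(arithmetic, geometric, harmonic)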

  3. A Characterization of the average tree solution for tree games

    OpenAIRE

    Debasis Mishra; Dolf Talman

    2009-01-01

    For the class of tree games, a new solution called the average tree solution has been proposed recently. We provide a characterization of this solution. This characterization underlines an important difference, in terms of symmetric treatment of the agents, between the average tree solution and the Myerson value for the class of tree games.

  4. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper concerns the problem of the average σ-K width and the average σ-L width of some anisotropic Besov-Wiener classes $S^r_{pq\theta}b(R^d)$ and $S^r_{pq\theta}B(R^d)$ in $L_q(R^d)$ ($1\le q\le p<\infty$). The weak asymptotic behavior is established for the corresponding quantities.

  5. 7 CFR 701.17 - Average adjusted gross income limitation.

    Science.gov (United States)

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...

  6. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...

  7. (Average-) convexity of common pool and oligopoly TU-games

    NARCIS (Netherlands)

    Driessen, T.S.H.; Meinhardt, H.

    2000-01-01

    The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the rele

  8. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    蒋艳杰

    2000-01-01

    This paper concerns the problem of the average σ-K width and the average σ-L width of some anisotropic Besov-Wiener classes $S^r_{pq\theta}b(R^d)$ and $S^r_{pq\theta}B(R^d)$ in $L_q(R^d)$ ($1\le q\le p<\infty$). The weak asymptotic behavior is established for the corresponding quantities.

  9. Remarks on the Lower Bounds for the Average Genus

    Institute of Scientific and Technical Information of China (English)

    Yi-chao Chen

    2011-01-01

    Let G be a graph of maximum degree at most four. By using the overlap matrix method, which was introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound on the average genus in terms of girth is derived.

  10. Simulation of Synthetic Jets in Quiescent Air Using Unsteady Reynolds Averaged Navier-Stokes Equations

    Science.gov (United States)

    Vatsa, Veer N.; Turkel, Eli

    2006-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for the simulation of a synthetic jet created by a single diaphragm piezoelectric actuator in quiescent air. This configuration was designated as Case 1 for the CFDVAL2004 workshop held at Williamsburg, Virginia, in March 2004. Time-averaged and instantaneous data for this case were obtained at NASA Langley Research Center, using multiple measurement techniques. Computational results for this case using one-equation Spalart-Allmaras and two-equation Menter's turbulence models are presented along with the experimental data. The effect of grid refinement, preconditioning and time-step variation are also examined in this paper.

  11. Marginal Cost Versus Average Cost Pricing with Climatic Shocks in Senegal: A Dynamic Computable General Equilibrium Model Applied to Water

    OpenAIRE

    Briand, Anne

    2006-01-01

    The model simulates, on a 20-year horizon, a first phase of increase in water resource availability, taking into account the supply policies of the Senegalese government, and a second phase of hydrologic deficits due to demand evolution (demographic growth). The results show that marginal cost water pricing (with a subsidy ensuring the survival of the water production sector) makes it possible in the long term to absorb the shock of the resource shortage, GDP, investment and welfare increa...

  12. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  13. On the extremal properties of the average eccentricity

    CERN Document Server

    Ilic, Aleksandar

    2011-01-01

    The eccentricity of a vertex is the maximum distance from it to another vertex, and the average eccentricity $ecc(G)$ of a graph $G$ is the mean value of the eccentricities of all vertices of $G$. The average eccentricity is deeply connected with a topological descriptor called the eccentric connectivity index, defined as a sum of products of vertex degrees and eccentricities. In this paper we analyze extremal properties of the average eccentricity, introducing two graph transformations that increase or decrease $ecc(G)$. Furthermore, we resolve four conjectures, obtained by the system AutoGraphiX, about the average eccentricity and other graph parameters (the clique number, the Randić index and the independence number), refute one AutoGraphiX conjecture about the average eccentricity and the minimum vertex degree, and correct one AutoGraphiX conjecture about the domination number.
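
    For concreteness, a small sketch computing $ecc(G)$ by breadth-first search on an adjacency-list graph (a generic computation, not the transformations studied in the record):

        from collections import deque

        def average_eccentricity(adj):
            """Mean of vertex eccentricities in a connected graph given as {v: [neighbors]}."""
            def eccentricity(source):
                dist = {source: 0}
                queue = deque([source])
                while queue:
                    u = queue.popleft()
                    for w in adj[u]:
                        if w not in dist:
                            dist[w] = dist[u] + 1
                            queue.append(w)
                return max(dist.values())
            return sum(eccentricity(v) for v in adj) / len(adj)

        # Example: a path on 4 vertices; eccentricities are 3, 2, 2, 3.
        path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(average_eccentricity(path4))  # 2.5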

  14. Average cross-responses in correlated financial markets

    Science.gov (United States)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
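
    As a rough sketch of the underlying response function (a standard construction in this literature; the exact averaging conventions of the paper may differ, and the data here are synthetic), the cross-response of stock i to the trade signs of stock j at lag tau can be estimated as:

        import numpy as np

        def cross_response(log_price_i, trade_sign_j, lag):
            """Average log-return of stock i over 'lag' steps, conditioned on signs of stock j."""
            returns = log_price_i[lag:] - log_price_i[:-lag]  # r_i(t, tau)
            return np.mean(returns * trade_sign_j[:-lag])     # <r_i(t, tau) * eps_j(t)>

        rng = np.random.default_rng(1)
        signs = rng.choice([-1.0, 1.0], size=1000)            # synthetic trade signs of stock j
        prices = np.cumsum(0.01 * signs + rng.normal(0, 0.02, 1000))  # toy impacted price of i
        print(cross_response(prices, signs, lag=5))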

  15. Mobile Energy Laboratory Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.

    1993-09-01

    Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available by having trained, full-time MEL operators and near-continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.

  16. Applied tensor stereology

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel

    In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle...

  17. What are applied ethics?

    Science.gov (United States)

    Allhoff, Fritz

    2011-03-01

    This paper explores the relationships that various applied ethics bear to each other, both in particular disciplines and more generally. The introductory section lays out the challenge of coming up with such an account and, drawing a parallel with the philosophy of science, offers that applied ethics may either be unified or disunified. The second section develops one simple account through which applied ethics are unified, vis-à-vis ethical theory. However, this is not taken to be a satisfying answer, for reasons explained. In the third section, specific applied ethics are explored: biomedical ethics; business ethics; environmental ethics; and neuroethics. These are chosen not to be comprehensive, but rather for their traditions or other illustrative purposes. The final section draws together the results of the preceding analysis and defends a disunity conception of applied ethics.

  18. Anisotropy of the solar network magnetic field around the average supergranule

    CERN Document Server

    Langfellner, J; Birch, A C

    2015-01-01

    Supergranules in the quiet Sun are outlined by a web-like structure of enhanced magnetic field strength, the so-called magnetic network. We aim to map the magnetic network field around the average supergranule near disk center. We use observations of the line-of-sight component of the magnetic field from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). The average supergranule is constructed by coaligning and averaging over 3000 individual supergranules. We determine the positions of the supergranules with an image segmentation algorithm that we apply on maps of the horizontal flow divergence measured using time-distance helioseismology. In the center of the average supergranule the magnetic (intranetwork) field is weaker by about 2.2 Gauss than the background value (3.5 Gauss), whereas it is enhanced in the surrounding ring of horizontal inflows (by about 0.6 Gauss on average). We find that this network field is significantly stronger west (prograde) of the average sup...

  19. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    Science.gov (United States)

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

    Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
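
    A minimal sketch of the core idea as I read it (rotate each epoch by the instantaneous phase of a trigger channel at time zero, then average the complex signal; details of the published method may differ, and the data are synthetic):

        import numpy as np
        from scipy.signal import hilbert

        def phase_compensated_average(epochs_trigger, epochs_target, t0_index):
            """Complex-weighted average of target epochs, phase-locked to a trigger channel.

            epochs_*: arrays of shape (n_epochs, n_samples); t0_index: trigger sample.
            """
            analytic_trigger = hilbert(epochs_trigger, axis=1)
            analytic_target = hilbert(epochs_target, axis=1)
            # Rotate each epoch so the trigger channel has phase 0 at t0.
            phase_at_t0 = np.angle(analytic_trigger[:, t0_index])
            rotated = analytic_target * np.exp(-1j * phase_at_t0)[:, None]
            return rotated.mean(axis=0)

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 500)
        phases = rng.uniform(0, 2 * np.pi, 30)  # ongoing activity with random phase per epoch
        epochs = np.sin(2 * np.pi * 10 * t + phases[:, None]) + rng.normal(0, 0.5, (30, 500))
        avg = phase_compensated_average(epochs, epochs, t0_index=250)
        print(np.abs(avg).max())  # the oscillation survives averaging, unlike the plain mean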

  20. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier $1/\log_2 k$ for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and $k^n$ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
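
    The entropy lower bound mentioned here is easy to state in code (a sketch with names of my choosing; for the binary case k=2 the multiplier is 1):

        import math

        def entropy_lower_bound(probabilities, k=2):
            """Lower bound on the minimum average depth of a decision tree
            for a diagnostic problem over a k-valued information system."""
            entropy = -sum(p * math.log2(p) for p in probabilities if p > 0)
            return entropy / math.log2(k)

        # Example: 8 equally likely outcomes need average depth >= 3 binary tests.
        print(entropy_lower_bound([1 / 8] * 8))  # 3.0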

  1. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... must meet the minimum driving range requirements established by the Secretary of Transportation (49 CFR... § 600.510-12 Calculation of average fuel economy and average carbon-related exhaust emissions.

  2. Arianespace streamlines launch procedures

    Science.gov (United States)

    Lenorovitch, Jeffrey M.

    1992-06-01

    Ariane has entered a new operational phase in which launch procedures have been enhanced to reduce the length of launch campaigns, lower mission costs, and increase operational availability/flexibility of the three-stage vehicle. The V50 mission utilized the first vehicle from a 50-launcher production lot ordered by Arianespace, and was the initial flight with a stretched third stage that enhances Ariane's performance. New operational procedures were introduced gradually over more than a year, starting with the V42 launch in January 1991.

  3. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
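
    For orientation, one common way to form a model-averaged estimate is shown below; the AIC weighting is an illustrative choice of mine, not necessarily the weighting used in the record, and the fitted values are hypothetical.

        import math

        def aic_weights(aics):
            """Akaike weights from a list of AIC values."""
            deltas = [a - min(aics) for a in aics]
            raw = [math.exp(-d / 2) for d in deltas]
            total = sum(raw)
            return [r / total for r in raw]

        # Hypothetical fits: (AIC, estimate of a derived parameter) for three models.
        fits = [(102.3, 1.8), (100.1, 2.4), (105.9, 1.2)]
        w = aic_weights([a for a, _ in fits])
        estimate = sum(wi * est for wi, (_, est) in zip(w, fits))
        print(w, estimate)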

  4. Average-Case Analysis of Algorithms Using Kolmogorov Complexity

    Institute of Scientific and Technical Information of China (English)

    姜涛; 李明

    2000-01-01

    Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting sequential or parallel Queuesort or Stacksort.

  5. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which were calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variant A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
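
    As a sketch of one family of weighting schemes compared above, Granger-Ramanathan weights can be obtained by regressing observations on member simulations. The unconstrained least-squares version below is my simplification; the paper's variants A-C differ in how they handle intercepts and sum-to-one constraints, and the data here are synthetic.

        import numpy as np

        def granger_ramanathan_weights(simulations, observed):
            """Least-squares combination weights: solve min ||S w - y||^2.

            simulations: array (n_times, n_models); observed: array (n_times,).
            Unconstrained variant; others add an intercept or a sum-to-one constraint.
            """
            w, *_ = np.linalg.lstsq(simulations, observed, rcond=None)
            return w

        rng = np.random.default_rng(3)
        truth = rng.normal(0, 1, 365)  # synthetic "observed" streamflow
        members = np.column_stack([truth + rng.normal(0, s, 365) for s in (0.3, 0.6, 1.0)])
        w = granger_ramanathan_weights(members, truth)
        combined = members @ w
        print(w, np.mean((combined - truth) ** 2))  # better members get larger weights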

  6. Fréchet means of curves for signal averaging and application to ECG data analysis

    CERN Document Server

    Bigot, Jérémie

    2011-01-01

    Signal averaging is the process that consists in computing a mean shape from a set of noisy signals. In the presence of geometric variability in time in the data, the usual Euclidean mean of the raw data yields a mean pattern that does not reflect the typical shape of the observed signals. In this setting, it is necessary to use alignment techniques for a precise synchronization of the signals, and then to average the aligned data to obtain a consistent mean shape. In this paper, we study the numerical performances of Fréchet means of curves, which are extensions of the usual Euclidean mean to spaces endowed with non-Euclidean metrics. This yields a new algorithm for signal averaging without a reference template. We apply this approach to the estimation of a mean heart cycle from ECG records.
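
    A toy sketch of align-then-average (plain circular cross-correlation alignment to the first signal as a stand-in of my choosing; the paper's template-free Fréchet mean is more sophisticated):

        import numpy as np

        def align_and_average(signals):
            """Shift each signal to best match the first (circular cross-correlation), then average."""
            reference = signals[0]
            aligned = []
            for s in signals:
                corr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(s))).real
                shift = int(np.argmax(corr))
                aligned.append(np.roll(s, shift))
            return np.mean(aligned, axis=0)

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 400)
        template = np.exp(-((t - 0.5) ** 2) / 0.002)  # a spiky toy "heartbeat"
        signals = [np.roll(template, rng.integers(-40, 40)) + rng.normal(0, 0.05, t.size)
                   for _ in range(25)]
        mean_shape = align_and_average(signals)
        print(mean_shape.max())  # stays sharp, unlike the unaligned Euclidean mean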

  7. Frozen-density embedding theory with average solvent charge densities from explicit atomistic simulations.

    Science.gov (United States)

    Laktionov, Andrey; Chemineau-Chalaye, Emilie; Wesolowski, Tomasz A

    2016-08-21

    Besides molecular electron densities obtained within the Born-Oppenheimer approximation ($\rho_B(r)$) to represent the environment, the ensemble averaged density ($\langle\rho_B\rangle(r)$) is also admissible in frozen-density embedding theory (FDET) [Wesolowski, Phys. Rev. A, 2008, 77, 11444]. This makes it possible to introduce an approximation in the evaluation of the solvent effect on quantum mechanical observables consisting of replacing the ensemble averaged observable by the observable evaluated at the ensemble averaged $\rho_B(r)$. This approximation is shown to affect negligibly the solvatochromic shift in the absorption of hydrated acetone. The proposed model provides a continuum type of representation of the solvent, which nevertheless reflects its local structure, and it is to be applied as a post-simulation analysis tool in atomistic level simulations. PMID:26984532

  8. The quiet Sun average Doppler shift of coronal lines up to 2 MK

    CERN Document Server

    Dadashi, Neda; Solanki, Sami K

    2011-01-01

    The average Doppler shift shown by spectral lines formed from the chromosphere to the corona reveals important information on the mass and energy balance of the solar atmosphere, providing an important observational constraint to any models of the solar corona. Previous spectroscopic observations of vacuum ultra-violet (VUV) lines have revealed a persistent average wavelength shift of lines formed at temperatures up to 1 MK. At higher temperatures, the behaviour is still essentially unknown. Here we analyse combined SUMER/SoHO and EIS/Hinode observations of the quiet Sun around disk centre to determine, for the first time, the average Doppler shift of several spectral lines formed between 1 and 2 MK, where the largest part of the quiet coronal emission is formed. The measurements are based on a novel technique applied to EIS spectra to measure the difference in Doppler shift between lines formed at different temperatures. Simultaneous wavelength-calibrated SUMER spectra allow establishing the absolute value a...

  9. Reconstruction of ionization probabilities from spatially averaged data in N-dimensions

    CERN Document Server

    Strohaber, J; Schuessler, H A

    2010-01-01

    We present an analytical inversion technique which can be used to recover ionization probabilities from spatially averaged data in an N-dimensional detection scheme. The solution is given as a power series in intensity. For this reason, we call this technique a multiphoton expansion (MPE). The MPE formalism was verified with an exactly solvable inversion problem in 2D, and probabilities in the postsaturation region, where the intensity-selective scanning approach breaks down, were recovered. In 3D, ionization probabilities of Xe were successfully recovered with MPE from ion yields simulated using the ADK tunneling theory. Finally, we tested our approach with intensity-resolved benzene ion yields showing a resonant multiphoton ionization process. By applying MPE to this data (which was artificially averaged), the resonant structure was recovered, suggesting that the resonance in benzene may have been observable in spatially averaged data taken elsewhere.

  10. Equivalent Beam Averaging (EBA) of I-V Spectra for LEED Analysis

    Science.gov (United States)

    Davis, H. L.; Noonan, J. R.

    1983-01-01

    An equivalent beam averaging (EBA) procedure is described which has proved to be very useful for enhancing I-V profile data collected for LEED analyses. Specific analyses are documented where application of EBA has led to improved agreement between calculated and experimental I-V profiles. The procedure has also been substantiated by examination of representative I-V profiles calculated to correspond to the incident beam slightly misaligned from, and exactly aligned with, the surface normal. It has then been inferred from this substantiation that use of EBA in a LEED analysis reduces the effects of systematic experimental errors caused by minor misalignment of the incident beam, beam divergence, and certain surface morphologies.

  11. Applied statistics: A review

    OpenAIRE

    Cox, D R

    2007-01-01

    The main phases of applied statistical work are discussed in general terms. The account starts with the clarification of objectives and proceeds through study design, measurement and analysis to interpretation. An attempt is made to extract some general notions.

  12. Applied eye tracking research

    NARCIS (Netherlands)

    Jarodzka, Halszka

    2011-01-01

    Jarodzka, H. (2010, 12 November). Applied eye tracking research. Presentation and Labtour for Vereniging Gewone Leden in oprichting (VGL i.o.), Heerlen, The Netherlands: Open University of the Netherlands.

  13. Applied Mathematics Seminar 1982

    International Nuclear Information System (INIS)

    This report contains the abstracts of the lectures delivered at the 1982 Applied Mathematics Seminar of the DPD/LCC/CNPq and the Colloquy on Applied Mathematics of LCC/CNPq. The Seminar comprised 36 lectures. Among these, 30 were presented by researchers associated with Brazilian institutions, 9 of them with the LCC/CNPq, and the other 6 were given by visiting lecturers according to the following distribution: 4 from the USA, 1 from England and 1 from Venezuela. The 1981 Applied Mathematics Seminar was organized by Leon R. Sinay and Nelson do Valle Silva. The Colloquy on Applied Mathematics was held from October 1982 on, being organized by Ricardo S. Kubrusly and Leon R. Sinay. (Author)

  14. Mesothelioma Applied Research Foundation

    Science.gov (United States)

  15. Straightening out Legal Procedures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    China’s top legislature mulls giving the green light to class action litigation. The long-awaited amendment of China’s Civil Procedure Law has taken a crucial step. On October 28, the Standing Committee of the National People’s Congress (NPC), China’s top legislature, reviewed a draft amendment to the law for the first time.

  16. United States Average Annual Precipitation, 1990-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...

  17. United States Average Annual Precipitation, 1961-1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  18. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite and combines them into a model-averaged dose-response model, from which benchmark dose and benchmark dose lower bound estimates are generated. The software fulfills a need for risk assessors, allowing them to go beyond a single model in risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
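
    As a rough illustration of the underlying idea (not of MADr-BMD's actual model set or weighting scheme), the sketch below fits two quantal models to hypothetical dose-response data by maximum likelihood and combines their predicted curves with Akaike weights:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Hypothetical quantal data: dose, number tested, number responding.
      dose = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
      n = np.array([50, 50, 50, 50, 50])
      y = np.array([1, 3, 10, 24, 45])

      def logistic(d, theta):
          a, b = theta
          return 1.0 / (1.0 + np.exp(-(a + b * d)))

      def probit(d, theta):
          a, b = theta
          return norm.cdf(a + b * d)

      def nll(theta, model):
          # Binomial negative log-likelihood of the quantal model.
          p = np.clip(model(dose, theta), 1e-10, 1 - 1e-10)
          return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

      fits = []
      for model in (logistic, probit):
          res = minimize(nll, x0=[-2.0, 0.01], args=(model,), method="Nelder-Mead")
          fits.append((model, res.x, 2 * res.fun + 2 * len(res.x)))  # AIC

      # Akaike weights give each model's share of the averaged curve.
      aic = np.array([f[2] for f in fits])
      w = np.exp(-0.5 * (aic - aic.min()))
      w /= w.sum()

      # Model-averaged response over a dose grid.
      grid = np.linspace(0.0, 400.0, 81)
      averaged = sum(wi * m(grid, th) for wi, (m, th, _) in zip(w, fits))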

  19. On the average exponent of elliptic curves modulo $p$

    CERN Document Server

    Freiberg, Tristan

    2012-01-01

    Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of this group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p\cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.
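
    For intuition about the quantity being averaged, a brute-force sketch (feasible only for very small primes; the curve and primes below are arbitrary choices) that computes e_p as the least common multiple of the orders of the points of the reduced curve:

      from math import gcd

      def points(a, b, p):
          """All affine F_p-points of y^2 = x^3 + a*x + b."""
          sqrt_table = {}
          for y in range(p):
              sqrt_table.setdefault(y * y % p, []).append(y)
          pts = []
          for x in range(p):
              for y in sqrt_table.get((x ** 3 + a * x + b) % p, []):
                  pts.append((x, y))
          return pts

      def add(P, Q, a, p):
          """Group law on the curve; None plays the role of the point at infinity."""
          if P is None:
              return Q
          if Q is None:
              return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % p == 0:
              return None
          if P == Q:
              lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
          x3 = (lam * lam - x1 - x2) % p
          return (x3, (lam * (x1 - x3) - y1) % p)

      def order(P, a, p):
          """Order of P: the smallest k with k*P equal to the point at infinity."""
          Q, k = P, 1
          while Q is not None:
              Q, k = add(Q, P, a, p), k + 1
          return k

      a, b = 1, 1  # E: y^2 = x^3 + x + 1
      for p in (5, 7, 11, 13, 17, 19, 23):
          if (4 * a ** 3 + 27 * b ** 2) % p == 0:
              continue  # prime of bad reduction
          e_p = 1
          for P in points(a, b, p):
              o = order(P, a, p)
              e_p = e_p * o // gcd(e_p, o)  # lcm of the point orders
          print(p, e_p, round(e_p / p, 3))  # exponent and normalized exponent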

  20. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  1. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

    Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding-interval technique, which assumes stationary increments. We propose an alternative approach based on an ensemble over trading days. To determine the effects of time-averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-averaging approaches will yield new insight into the study of financial market dynamics.
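
    A toy version of the comparison, assuming a synthetic intraday model with a periodic volatility profile (all parameters are illustrative):

      import numpy as np

      rng = np.random.default_rng(42)
      n_days, n_ticks = 250, 390  # e.g. one-minute ticks per trading day

      # Periodic intraday diffusion: volatility high at the open and close.
      t = np.arange(n_ticks)
      sigma = 1.0 + 0.8 * np.cos(2 * np.pi * t / n_ticks) ** 2

      # Returns for each (day, tick): increments are non-stationary in time.
      returns = rng.standard_normal((n_days, n_ticks)) * sigma

      # Ensemble average: mean squared return at a fixed intraday time,
      # taken across days -- this recovers the sigma**2 pattern.
      ensemble_msq = (returns ** 2).mean(axis=0)

      # Sliding-interval average: pool ticks in a moving window over the
      # concatenated series, implicitly assuming stationary increments --
      # the intraday pattern is averaged away to a nearly flat line.
      flat = returns.ravel() ** 2
      window = np.ones(n_ticks) / n_ticks
      sliding_msq = np.convolve(flat, window, mode="valid")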

  2. United States Average Annual Precipitation, 1995-1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...

  3. United States Average Annual Precipitation, 2005-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...

  4. United States Average Annual Precipitation, 2000-2004 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...

  5. United States Average Annual Precipitation, 1990-1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...

  6. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.* Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods* Covers the latest developments on multiple comparisons * Includes recent advanc

  7. Homogeneous conformal averaging operators on semisimple Lie algebras

    OpenAIRE

    Kolesnikov, Pavel

    2014-01-01

    In this note we show a close relation between the following objects: the classical Yang–Baxter equation (CYBE), conformal algebras (also known as vertex Lie algebras), and averaging operators on Lie algebras. It turns out that the singular part of a solution of CYBE (in the operator form) on a Lie algebra $\mathfrak g$ determines an averaging operator on the corresponding current conformal algebra $\mathrm{Cur} \mathfrak g$. For a finite-dimensional semisimple Lie algebra $\mathfrak g$, we desc...

  8. Average resonance parameters of zirconium and molybdenum nuclei

    International Nuclear Information System (INIS)

    Full sets of the average resonance parameters S0, S1, R0', R1', S1,3/2 for zirconium and molybdenum nuclei with a natural mixture of isotopes are determined by means of a method designed by the authors. The determination is based on analysis of the average experimental differential cross sections for neutron elastic scattering in the energy range below 440 keV. Recommended parameters and some literature data are also analyzed.

  9. Average resonance parameters of ruthenium and palladium nuclei

    International Nuclear Information System (INIS)

    Full sets of the average resonance parameters S0, S1, R0', R1', S1,3/2 for ruthenium and palladium nuclei with a natural mixture of isotopes are determined by means of a method designed by the authors. The determination is based on analysis of the average experimental differential cross sections for neutron elastic scattering in the energy range below 440 keV. Recommended parameters and some literature data are also analyzed.

  10. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH with the silicon vertex detector fully operational. The measurement uses the three-dimensional impact parameter distribution of lepton tracks from semileptonic b decays and yields an average b hadron lifetime of 1.533 \pm 0.013 \pm 0.022 ps.

  11. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  12. On the convergence time of asynchronous distributed quantized averaging algorithms

    OpenAIRE

    ZHU, MINGHUI; Martinez, Sonia

    2010-01-01

    We propose a class of distributed quantized averaging algorithms on asynchronous communication networks with fixed, switching and random topologies. The implementation of these algorithms is subject to the realistic constraint that the communication rate, the memory capacities of agents and the computation precision are finite. The focus of this paper is the study of the convergence time of the proposed quantized averaging algorithms. By appealing to random walks on graphs, we derive ...
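
    The paper's subject is the convergence-time analysis itself; for context, here is a generic pairwise quantized averaging update of the kind such algorithms build on (the gossip rule, ring topology and initial values are illustrative assumptions). Each meeting conserves the integer sum, so the states converge to within one quantization level of the true average:

      import random

      def quantized_gossip(values, edges, steps=100000, seed=0):
          """Pairwise integer averaging: a meeting replaces (a, b) with
          (floor((a+b)/2), ceil((a+b)/2)), conserving the sum."""
          rng = random.Random(seed)
          v = list(values)
          for _ in range(steps):
              i, j = rng.choice(edges)
              s = v[i] + v[j]
              v[i], v[j] = s // 2, s - s // 2
              if max(v) - min(v) <= 1:  # within one quantization level
                  break
          return v

      # Ring of 8 agents with integer initial states (sum 34, mean 4.25).
      edges = [(i, (i + 1) % 8) for i in range(8)]
      print(quantized_gossip([0, 5, 9, 2, 7, 1, 4, 6], edges))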

  13. Average life of oxygen vacancies of quartz in sediments

    Institute of Scientific and Technical Information of China (English)

    刁少波; 业渝光

    2002-01-01

    The average life of oxygen vacancies of quartz in sediments is estimated by using the ESR (electron spin resonance) signals of E′ centers obtained with the thermal activation technique. The experimental results show that the second-order kinetics equation is more applicable to the life estimation than the first-order equation. The average life of oxygen vacancies of quartz from sediments at 4895 to 4908 m depth in the Tarim Basin is about 10^18 a at 27℃.
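
    A schematic of the kind of extrapolation involved: an Arrhenius rate constant fitted from the thermal activation (annealing) measurements is evaluated at the burial temperature, and the mean life then depends on the assumed kinetic order. The activation energy, frequency factor and concentration factor below are placeholders, not the paper's fitted values:

      import numpy as np

      k_B = 8.617e-5   # Boltzmann constant, eV/K
      E_a = 1.9        # activation energy, eV (placeholder)
      s = 1.0e13       # frequency factor, 1/s (placeholder)
      year = 3.156e7   # seconds per year

      def rate(T):
          """Arrhenius rate constant at absolute temperature T (K)."""
          return s * np.exp(-E_a / (k_B * T))

      T = 300.15  # 27 degrees C

      # First-order kinetics dn/dt = -k n  =>  mean life tau = 1 / k.
      tau_first = 1.0 / rate(T)

      # Second-order kinetics dn/dt = -k n^2  =>  tau = 1 / (k n0); the
      # initial defect concentration n0 enters the estimate, which is one
      # reason the choice of kinetic order matters (n0 is a placeholder).
      n0 = 0.1
      tau_second = 1.0 / (rate(T) * n0)

      print(f"first order:  {tau_first / year:.2e} a")
      print(f"second order: {tau_second / year:.2e} a")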

  14. On the relativistic mass function and averaging in cosmology

    CERN Document Server

    Ostrowski, Jan J; Roukema, Boudewijn F

    2016-01-01

    The general relativistic description of cosmological structure formation is an important challenge from both the theoretical and the numerical points of view. In this paper we present a brief prescription for a general relativistic treatment of structure formation, and a resulting mass function on galaxy cluster scales, in a highly generic scenario. To obtain this we use an exact scalar averaging scheme together with the relativistic generalization of Zel'dovich's approximation (RZA), which serves as a closure condition for the averaged equations.

  15. Journal of Applied Physics

    OpenAIRE

    Lee, T. K.; Zhang, F. C.

    1984-01-01

    The Goldstone diagrammatic technique developed by Keiter and Kimball for the single-impurity Anderson model is reformulated. Instead of having the self-energy functions defined on the real axis, as in the Brillouin-Wigner theory, we define the functions on the complex plane. This avoids the complicated and cumbersome regularization procedure required in the Keiter and Kimball formulation. Most important of all, it makes numerical calculations possible. The exact partition function may be wri...

  16. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure.

  17. 20 CFR 638.601 - Applied VST budgeting.

    Science.gov (United States)

    2010-04-01

    ... for the use or nonuse of such funds. The approval of the Job Corps national office is necessary to... Applied VST budgeting. The Job Corps Director shall establish procedures to ensure that center...

  18. Computing Depth-averaged Flows Using Boundary-fitted Coordinates and Staggered Grids

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A depth-averaged nonlinear k-ε model for turbulent flows in complex geometries has been developed in a boundary-fitted coordinate system. The SIMPLEC procedure is used to develop an economical discrete method on staggered grids to analyze flows in a 90° bend. This paper describes how to convert a program from rectangular coordinates to boundary-fitted coordinates. The results compare well with experimental data for flow in a meandering channel, showing the efficiency of the model and the discrete method.

  19. Exact Expected Average Precision of the Random Baseline for System Evaluation

    Directory of Open Access Journals (Sweden)

    Bestgen Yves

    2015-04-01

    Average precision (AP) is one of the most widely used metrics in information retrieval and natural language processing research. It is usually thought that the expected AP of a system that ranks documents randomly is equal to the proportion of relevant documents in the collection. This paper shows that this value is only approximate, and provides a procedure for efficiently computing the exact value. An analysis of the difference between the approximate and the exact value shows that the discrepancy is large when the collection contains few documents, but becomes very small when it contains at least 600 documents.
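
    For a collection small enough to enumerate, the exact expectation can be checked by brute force against the usual approximation (the paper's efficient procedure is not reproduced here; the collection size below is an arbitrary choice):

      from itertools import permutations
      from statistics import mean

      def average_precision(ranking):
          """AP of a 0/1 relevance ranking containing all relevant docs."""
          hits, precisions = 0, []
          for rank, rel in enumerate(ranking, start=1):
              if rel:
                  hits += 1
                  precisions.append(hits / rank)
          return mean(precisions) if precisions else 0.0

      def exact_expected_ap(n_docs, n_relevant):
          """Expected AP of a uniformly random ranking, by enumerating
          every distinct arrangement of the relevance labels."""
          docs = [1] * n_relevant + [0] * (n_docs - n_relevant)
          return mean(average_precision(p) for p in set(permutations(docs)))

      n_docs, n_relevant = 8, 3
      print("exact:            ", exact_expected_ap(n_docs, n_relevant))
      print("approximation R/N:", n_relevant / n_docs)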

  20. A self-organizing power system stabilizer using Fuzzy Auto-Regressive Moving Average (FARMA) model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Y.M.; Moon, U.C. [Seoul National Univ. (Korea, Republic of). Electrical Engineering Dept.; Lee, K.Y. [Pennsylvania State Univ., University Park, PA (United States). Electrical Engineering Dept.

    1996-06-01

    This paper presents a self-organizing power system stabilizer (SOPSS) which uses the Fuzzy Auto-Regressive Moving Average (FARMA) model. The control rules and the membership functions of the proposed logic controller are generated automatically without using any plant model. The generated rules are stored in the fuzzy rule space and updated on-line by a self-organizing procedure. To show the effectiveness of the proposed controller, a comparison with a conventional controller for a one-machine infinite-bus system is presented.